
A deep reinforcement learning approach for the throughput control of a flow-shop production system

Marchesano M. G.; Guizzi G.; Santillo L. C.; Vespoli S.
2021

Abstract

This paper proposes a new method for controlling a flow shop in terms of throughput and Work In Process (WIP). To achieve a throughput target, a Deep Q-Network (DQN) is used to set the constant WIP quantity in the system. The main contribution of this paper is the novel formulation of the state space, action space, and reward function. An extensive pre-experimental campaign is conducted to determine the best network structure and appropriate hyperparameter values. Finally, the system's performance is compared against the known results of an analytical model from the literature, the Practical Worst Case (PWC).
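For context, the PWC benchmark referenced in the abstract is the closed-form throughput bound from Factory Physics (Hopp and Spearman): for a line with bottleneck rate r_b, raw process time T_0, and critical WIP W_0 = r_b * T_0, the PWC throughput at constant WIP level w is TH_PWC(w) = w / (W_0 + w - 1) * r_b. The sketch below computes this curve; the function and parameter names are illustrative, not taken from the paper.

```python
def pwc_throughput(w: float, r_b: float, t_0: float) -> float:
    """Practical Worst Case throughput at constant WIP level w.

    r_b: bottleneck rate (jobs per unit time)
    t_0: raw process time of the line (same time unit)
    """
    w_0 = r_b * t_0  # critical WIP of the line
    return (w / (w_0 + w - 1.0)) * r_b
```

For example, a balanced line with r_b = 1 job/h and T_0 = 4 h has W_0 = 4; at w = 1 the PWC throughput is 0.25 jobs/h, rising toward r_b as WIP grows. A controller such as the DQN described above would pick the smallest w whose predicted throughput meets the target.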
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/895569
Citations
  • Scopus: 2