Browsing by Author "Heavey, C."
Now showing 1 - 2 of 2
Conference Object
Citation - Scopus: 2
Deep Learning Enabling Digital Twin Applications in Production Scheduling: Case of Flexible Job Shop Manufacturing Environment
(Institute of Electrical and Electronics Engineers Inc., 2023) Ghasemi, A.; Yeganeh, Y.T.; Matta, A.; Kabak, Kamil Erkan; Heavey, C.
Digital twin-based Production Scheduling (DTPS) is a process in which a digital model replicates a manufacturing system; this model is known as a "Digital Twin (DT)". A DT is essentially a virtual representation of physical equipment and processes, connected to the physical environment through an online data-sharing infrastructure within the Manufacturing Execution System (MES). In reactive scheduling, the DT is used to detect fluctuations in the scheduling plan and execute rescheduling plans. In proactive scheduling, it is used to simulate different production scenarios and optimize future states of production operations. Replicating detailed simulation models is, in most PS cases, highly computationally intensive, which conflicts with the main goal of a DT: online decision making. Thus, this research examines the use of data-driven models within the DT of a Flexible Job Shop (FJS) production environment to provide online estimations of PS metrics, enabling DT-based reactive/proactive scheduling. © 2023 IEEE.

Conference Object
Citation - Scopus: 2
A Reinforcement Learning Approach for Improved Photolithography Schedules
(Institute of Electrical and Electronics Engineers Inc., 2023) Zhang, T.; Kabak, Kamil Erkan; Heavey, C.; Rose, O.
A Reinforcement Learning (RL) model is applied to photolithography scheduling with direct consideration of reentrant visits. The photolithography process is widely regarded as a bottleneck in semiconductor manufacturing, and improving its schedules would yield better performance. Most RL-based research does not consider revisits directly or guarantee convergence. A simplified discrete event simulation model of a fabrication facility is built, and a tabular Q-learning agent is embedded into the model to learn through scheduling. The learning environment considers states and actions that incorporate information on reentrant flows. The agent dynamically chooses one rule from a pre-defined rule set to dispatch lots; the set includes earliest stage first, latest stage first, and 8 more composite rules. Finally, the proposed RL approach is compared with 7 single and 8 hybrid rules, and is validated in terms of overall average cycle times. © 2023 IEEE.
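The rule-selecting agent described in the second abstract can be sketched as a tabular Q-learning loop whose actions are dispatch rules rather than individual lots. Everything below is an illustrative assumption, not the authors' implementation: the lot encoding, the two named rules, the state summary, and the reward are placeholders for whatever the simulation model actually provides.

```python
import random
from collections import defaultdict

# Hypothetical dispatch rules (illustrative, not the paper's rule set).
# A lot is encoded here as (current_stage, remaining_reentrant_visits).
RULES = {
    "earliest_stage_first": lambda queue: min(queue, key=lambda lot: lot[0]),
    "latest_stage_first":   lambda queue: max(queue, key=lambda lot: lot[0]),
}

class QLearningDispatcher:
    """Tabular Q-learning agent: at each decision epoch it picks one
    dispatch rule from RULES, then updates Q from the observed reward."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, rule_name)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_rule(self, state):
        # epsilon-greedy exploration over the rule set
        if random.random() < self.epsilon:
            return random.choice(list(RULES))
        return max(RULES, key=lambda r: self.q[(state, r)])

    def update(self, state, rule, reward, next_state):
        # standard Q-learning temporal-difference update
        best_next = max(self.q[(next_state, r)] for r in RULES)
        td_target = reward + self.gamma * best_next
        self.q[(state, rule)] += self.alpha * (td_target - self.q[(state, rule)])

# Illustrative usage: the state could summarize reentrant-flow information,
# e.g. queue lengths per processing stage; reward could penalize cycle time.
agent = QLearningDispatcher(epsilon=0.0)      # greedy, for reproducibility
state, next_state = (3, 1), (2, 1)
rule = agent.choose_rule(state)
queue = [(0, 3), (2, 1)]
lot = RULES[rule](queue)                      # lot selected by the chosen rule
agent.update(state, rule, reward=-1.0, next_state=next_state)
```

In this framing the action space stays small (one action per rule), which is what makes a tabular Q-table feasible; a real model would drive `state`, `reward`, and the decision epochs from the discrete event simulation.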
