Browsing by Author "Prestwich S.D."
Now showing 1 - 5 of 5
1. Computing Replenishment Cycle Policy Parameters for a Perishable Item
   Conference Object (Vilnius Gediminas Technical University, 2010). Citation - Scopus: 3.
   Rossi R.; Tarim S.A.; Hnich B.; Prestwich S.D.
   In many industrial environments there is a significant class of problems for which the perishable nature of the inventory cannot be ignored when developing replenishment order plans. Food is the most salient example of a perishable inventory item. In this work we consider the periodic-review, single-location, single-product production/inventory control problem under non-stationary stochastic demand and service-level constraints. The product we consider can be held in stock for a limited time, after which it expires and must be disposed of at a cost. In addition to wastage costs, our cost structure comprises fixed and unit variable ordering costs and inventory holding costs. We propose an easy-to-implement replenishment cycle inventory control policy that yields at most 2N control parameters, where N is the number of periods in the planning horizon. Using a simple numerical example, we also show the improvement this policy brings over two simpler inventory control rules in common use. © Izmir University of Economics, Turkey, 2010.

2. A Cultural Algorithm for POMDPs from Stochastic Inventory Control
   Conference Object (Springer Verlag, 2008). Citation - WoS: 3; Citation - Scopus: 5.
   Prestwich S.D.; Tarim S.A.; Rossi R.; Hnich B.
   Reinforcement Learning algorithms such as SARSA with an eligibility trace, and Evolutionary Computation methods such as genetic algorithms, are competing approaches to solving Partially Observable Markov Decision Processes (POMDPs), which occur in many fields of Artificial Intelligence. A powerful form of evolutionary algorithm that has not previously been applied to POMDPs is the cultural algorithm, in which evolving agents share knowledge in a belief space that is used to guide their evolution. We describe a cultural algorithm for POMDPs that hybridises SARSA with a noisy genetic algorithm and inherits the latter's convergence properties. Its belief space is a common set of state-action values that are updated during genetic exploration and, conversely, used to modify chromosomes. We use it to solve problems from stochastic inventory control by finding memoryless policies for nondeterministic POMDPs. Neither SARSA nor the genetic algorithm dominates the other on these problems, but the cultural algorithm outperforms the genetic algorithm, and on highly non-Markovian instances it also outperforms SARSA. © 2008 Springer Berlin Heidelberg.

3. Event-Driven Probabilistic Constraint Programming
   Conference Object (Springer Verlag, 2006).
   Tarim S.A.; Hnich B.; Prestwich S.D.
   Real-life management decisions are usually made in uncertain environments, and decision support systems that ignore this uncertainty are unlikely to provide realistic guidance. We show that previous approaches fail to provide appropriate support for reasoning about reliability under uncertainty. We propose a new framework that addresses this issue by allowing logical dependencies between constraints. Reliability is then defined in terms of key constraints called "events", which are related to other constraints via these dependencies. We illustrate our approach on two problems, contrast it with existing frameworks, and discuss future developments. © Springer-Verlag Berlin Heidelberg 2006.

4. Neuroevolutionary Inventory Control in Multi-Echelon Systems
   Conference Object (2009). Citation - Scopus: 2.
   Prestwich S.D.; Tarim S.A.; Rossi R.; Hnich B.
   Stochastic inventory control in multi-echelon systems poses hard problems in optimisation under uncertainty. Stochastic programming can solve small instances optimally, and approximately solve large instances via scenario reduction techniques, but it cannot handle arbitrary nonlinear constraints or other non-standard features. Simulation optimisation is an alternative approach that has recently been applied to such problems, using policies that require only a few decision variables to be determined. However, to find optimal or near-optimal solutions we must consider exponentially large scenario trees with a corresponding number of decision variables. We propose a neuroevolutionary approach: using an artificial neural network to approximate the scenario tree, and training the network with a simulation-based evolutionary algorithm. We show experimentally that this method can quickly find good plans. © 2009 Springer-Verlag Berlin Heidelberg.

5. Stochastic Constraint Programming by Neuroevolution with Filtering
   Conference Object (2010). Citation - WoS: 3; Citation - Scopus: 5.
   Prestwich S.D.; Tarim S.A.; Rossi R.; Hnich B.
   Stochastic Constraint Programming is an extension of Constraint Programming for modelling and solving combinatorial problems involving uncertainty. A solution to such a problem is a policy tree that specifies decision variable assignments in each scenario. Several complete solution methods have been proposed, but the authors recently showed that an incomplete approach based on neuroevolution is more scalable. In this paper we hybridise neuroevolution with constraint filtering on hard constraints, and show both theoretically and empirically that the hybrid can learn more complex policies more quickly. © 2010 Springer-Verlag.
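The replenishment cycle policy in the first entry fixes, for each of the N periods, whether a replenishment occurs and, if so, an order-up-to level, giving at most 2N parameters. As an illustration only (the paper computes optimal parameters; here they are taken as given), a minimal Monte-Carlo simulator of such a policy for a perishable item might look like the following sketch. All names, cost figures, and the demand distribution are hypothetical, and unmet demand is simply lost with no shortage penalty:

```python
import random

def simulate_policy(review_periods, order_up_to, demand_sampler,
                    shelf_life, fixed_cost, unit_cost,
                    holding_cost, wastage_cost, runs=1000):
    """Monte-Carlo cost estimate of a replenishment cycle policy.
    review_periods[t] flags whether period t is a review period;
    order_up_to[t] is that period's order-up-to level.  Stock is
    issued oldest-first and discarded (at wastage_cost per unit)
    once it is older than shelf_life periods."""
    n = len(review_periods)
    total = 0.0
    for _ in range(runs):
        batches = []                       # [age, quantity], oldest first
        cost = 0.0
        for t in range(n):
            on_hand = sum(q for _, q in batches)
            if review_periods[t] and on_hand < order_up_to[t]:
                qty = order_up_to[t] - on_hand
                cost += fixed_cost + unit_cost * qty
                batches.append([0, qty])
            demand = demand_sampler(t)
            for b in batches:              # FIFO issuing
                used = min(b[1], demand)
                b[1] -= used
                demand -= used
            batches = [[a + 1, q] for a, q in batches if q > 0]
            expired = sum(q for a, q in batches if a > shelf_life)
            cost += wastage_cost * expired
            batches = [b for b in batches if b[0] <= shelf_life]
            cost += holding_cost * sum(q for _, q in batches)
        total += cost
    return total / runs

# hypothetical 4-period horizon: replenish in periods 0 and 2
avg_cost = simulate_policy([1, 0, 1, 0], [30, 0, 25, 0],
                           lambda t: random.randint(5, 15),
                           shelf_life=2, fixed_cost=50, unit_cost=1,
                           holding_cost=0.5, wastage_cost=2)
```

The 2N parameters appear here as the two lists `review_periods` and `order_up_to`; finding good values for them, under service-level constraints, is the problem the paper actually solves.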
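The neuroevolutionary approach in the last two entries trains a neural-network policy with a simulation-based evolutionary algorithm. A minimal single-echelon sketch, assuming a tiny hand-rolled network and a simple (1+pop) hill climber as a stand-in for the far more sophisticated evolutionary algorithms used in the papers (all parameters, demand ranges, and cost figures are hypothetical):

```python
import math
import random

def net_order(weights, state):
    """Tiny 2-input, 3-hidden-unit network mapping an inventory state
    [stock level, period] to a non-negative order quantity.  Weight
    layout: 6 input weights, 3 hidden biases, 3 output weights,
    1 output bias (13 values in total)."""
    hidden = [math.tanh(state[0] * weights[2 * i]
                        + state[1] * weights[2 * i + 1]
                        + weights[6 + i]) for i in range(3)]
    out = sum(hidden[i] * weights[9 + i] for i in range(3)) + weights[12]
    return max(0.0, out)

def sim_cost(weights, horizon=6, runs=30, seed=1):
    """Simulation-based fitness: average holding plus shortage cost
    of the network policy over random demand scenarios."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        stock = 0.0
        for t in range(horizon):
            stock += net_order(weights, [stock, float(t)])
            stock -= rng.uniform(5.0, 15.0)      # stochastic demand
            total += 2.0 * max(0.0, -stock)      # shortage penalty
            total += 0.5 * max(0.0, stock)       # holding cost
            stock = max(0.0, stock)              # unmet demand is lost
    return total / runs

def evolve(fitness, dim=13, pop=20, gens=50, sigma=0.3, seed=0):
    """(1+pop) hill climber over the weight vector: mutate the
    incumbent and keep any strictly better candidate."""
    rng = random.Random(seed)
    best = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    best_f = fitness(best)
    for _ in range(gens):
        for _ in range(pop):
            cand = [w + rng.gauss(0.0, sigma) for w in best]
            f = fitness(cand)
            if f < best_f:
                best, best_f = cand, f
    return best, best_f
```

Running `evolve(sim_cost)` returns a weight vector whose simulated cost is at least as good as its random starting point. The papers apply the same idea to multi-echelon systems and, in the 2010 paper, add constraint filtering on hard constraints to guide the search.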
