Single and Multiplayer Stochastic Dynamic Optimization
Abstract
In this thesis we investigate single and multiplayer stochastic dynamic optimization problems. We consider both discrete and continuous time processes. In the multiplayer setup we investigate zero-sum games with both complete and partial information. We study partially observable stochastic games with the average cost criterion, where the state process is a discrete-time controlled Markov chain. The key idea in studying this problem is to replace the original unobservable state variable with a suitable completely observable state variable. We establish the existence of the value of the game and also obtain optimal strategies for both players. We also study a continuous-time zero-sum stochastic game with complete observation, where the state is a pure jump Markov process. We investigate the finite horizon total cost criterion and characterise the value function via appropriate Isaacs equations; this also yields optimal Markov strategies for both players.
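For the continuous-time game, the finite horizon Isaacs equation takes, schematically, a form like the following. The notation (rate matrix λ, running cost c, terminal cost g) is assumed here for illustration and may differ from the thesis's:

```latex
% Schematic finite-horizon Isaacs equation for a zero-sum game on a
% controlled pure jump Markov process (illustrative notation only):
\[
  -\frac{\partial \psi}{\partial t}(t,x)
  = \inf_{\mu}\,\sup_{\nu}
    \Big[ \sum_{y \neq x} \lambda_{xy}(\mu,\nu)\,\big(\psi(t,y) - \psi(t,x)\big)
          + c(x,\mu,\nu) \Big],
  \qquad \psi(T,x) = g(x).
\]
```

Here μ and ν range over the two players' action sets; a solution of this equation characterises the value function, and measurable minimising/maximising selectors then give Markov strategies for the two players.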
In the single player setup we investigate risk-sensitive control of continuous-time Markov chains. We consider both finite and infinite horizon problems. For the finite horizon total cost problem and the infinite horizon discounted cost problem we characterise the value function as the unique solution of appropriate Hamilton-Jacobi-Bellman equations. We also derive optimal Markov controls in both cases. For the infinite horizon average cost case we show the existence of an optimal stationary control. We also give a value iteration scheme for computing the optimal control in the case of finite state and action spaces.
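A value iteration scheme of the kind mentioned above can be sketched for a finite state/action model. The sketch below is a generic multiplicative (risk-sensitive) value iteration for a discrete-time finite MDP, normalised each step so that the normalising constant converges to exp(θ·ρ), with ρ the optimal risk-sensitive average cost; the exact scheme in the thesis (which treats continuous-time chains) may differ, and all numerical data here are made-up examples.

```python
import numpy as np

def risk_sensitive_value_iteration(P, c, theta=1.0, tol=1e-10, max_iter=10_000):
    """Generic multiplicative value iteration (illustrative sketch).

    P[a, x, y] : transition probability from state x to y under action a.
    c[a, x]    : one-step cost of action a in state x.
    Iterates (T V)(x) = min_a exp(theta * c(a, x)) * sum_y P(a, x, y) V(y),
    normalising V each step to prevent blow-up; log of the normalising
    constant over theta converges to the optimal risk-sensitive average
    cost rho."""
    n_actions, n_states, _ = P.shape
    V = np.ones(n_states)
    rho = 0.0
    for _ in range(max_iter):
        Q = np.exp(theta * c) * (P @ V)      # shape (n_actions, n_states)
        TV = Q.min(axis=0)                   # minimising player's operator
        span = TV.max()
        V_new = TV / span                    # renormalise the iterate
        rho = np.log(span) / theta
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = Q.argmin(axis=0)                # greedy actions at convergence
    return rho, V, policy

# Tiny two-state, two-action example with arbitrary numbers.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
c = np.array([[1.0, 2.0],
              [1.5, 0.5]])
rho, V, policy = risk_sensitive_value_iteration(P, c, theta=0.5)
```

Since every transition probability in the toy example is strictly positive, the normalised iterates converge geometrically, in the spirit of power iteration for the principal eigenvalue of the optimal multiplicative operator.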
Further, we introduce a new class of stochastic processes which we call stochastic processes with "age-dependent transition rates". We give a rigorous construction of the process. We prove that under certain assumptions the process is Feller, and we also compute its limiting probabilities. We then study the controlled version of the above process, taking the risk-neutral cost criterion. We solve the infinite horizon discounted cost problem and the average cost problem for this process. The crucial step in analysing these problems is to prove that the original control problem is equivalent to an appropriate semi-Markov decision problem. The value functions and optimal controls are then characterised using this equivalence and the theory of semi-Markov decision processes (SMDPs). The analysis of the finite horizon problem differs from that of the infinite horizon problems, because in this case the idea of converting to an equivalent SMDP does not seem to work. We therefore deal with the finite horizon total cost problem by showing that our problem is equivalent to another appropriately defined discrete-time Markov decision problem. This allows us to characterise the value function and to find an optimal Markov control.
Collections
 Mathematics (MA) [153]
Related items
Showing items related by title, author, creator and subject.

Simulation Based Algorithms For Markov Decision Process And Stochastic Optimization
Abdulla, Mohammed Shahid (2010-08-06) In Chapter 2, we propose several two-timescale simulation-based actor-critic algorithms for solution of infinite horizon Markov Decision Processes (MDPs) with finite state-space under the average cost criterion. On the ...
Controlled Semi-Markov Processes With Partial Observation
Goswami, Anindya (2011-09-28)
Online Learning and Simulation Based Algorithms for Stochastic Optimization
Lakshmanan, K (2018-03-07) In many optimization problems, the relationship between the objective and parameters is not known. The objective function itself may be stochastic, such as a long-run average over some random cost samples. In such cases ...