Show simple item record

dc.contributor.advisor	Ghosh, Mrinal K
dc.contributor.author	Saha, Subhamay
dc.date.accessioned	2018-04-06T06:52:51Z
dc.date.accessioned	2018-07-31T06:09:19Z
dc.date.available	2018-04-06T06:52:51Z
dc.date.available	2018-07-31T06:09:19Z
dc.date.issued	2018-04-06
dc.date.submitted	2013
dc.identifier.uri	https://etd.iisc.ac.in/handle/2005/3357
dc.identifier.abstract	http://etd.iisc.ac.in/static/etd/abstracts/4224/G25755-Abs.pdf	en_US
dc.description.abstract	In this thesis we investigate single- and multi-player stochastic dynamic optimization problems. We consider both discrete and continuous time processes. In the multi-player setup we investigate zero-sum games with both complete and partial information. We study partially observable stochastic games with the average cost criterion, the state process being a discrete time controlled Markov chain. The idea involved in studying this problem is to replace the original unobservable state variable with a suitable completely observable state variable. We establish the existence of the value of the game and also obtain optimal strategies for both players. We also study a continuous time zero-sum stochastic game with complete observation. In this case the state is a pure jump Markov process. We investigate the finite horizon total cost criterion. We characterise the value function via appropriate Isaacs equations. This also yields optimal Markov strategies for both players. In the single player setup we investigate risk-sensitive control of continuous time Markov chains. We consider both finite and infinite horizon problems. For the finite horizon total cost problem and the infinite horizon discounted cost problem we characterise the value function as the unique solution of appropriate Hamilton-Jacobi-Bellman equations. We also derive optimal Markov controls in both cases. For the infinite horizon average cost case we show the existence of an optimal stationary control. We also give a value iteration scheme for computing the optimal control in the case of finite state and action spaces. Further, we introduce a new class of stochastic processes which we call stochastic processes with "age-dependent transition rates". We give a rigorous construction of the process. We prove that under certain assumptions the process is Feller. We also compute the limiting probabilities for our process. We then study the controlled version of the above process.
In this case we take the risk-neutral cost criterion. We solve the infinite horizon discounted cost problem and the average cost problem for this process. The crucial step in analysing these problems is to prove that the original control problem is equivalent to an appropriate semi-Markov decision problem. The value functions and optimal controls are then characterised using this equivalence and the theory of semi-Markov decision processes (SMDP). The analysis of finite horizon problems differs from that of infinite horizon problems because in this case the idea of converting to an equivalent SMDP does not seem to work. So we deal with the finite horizon total cost problem by showing that our problem is equivalent to another appropriately defined discrete time Markov decision problem. This allows us to characterise the value function and to find an optimal Markov control.	en_US
dc.language.iso	en_US	en_US
dc.relation.ispartofseries	G25755	en_US
dc.subject	Stochastic Dynamic Optimization	en_US
dc.subject	Stochastic Control Theory	en_US
dc.subject	Stochastic Processes	en_US
dc.subject	Markov Processes	en_US
dc.subject	Continuous-Time Markov Chains	en_US
dc.subject	Stochastic Games	en_US
dc.subject	Semi-Markov Decision Processes	en_US
dc.subject	Markov Processes - Optimal Control	en_US
dc.subject	Continuous Time Stochastic Processes	en_US
dc.subject	Discrete Time Stochastic Processes	en_US
dc.subject	Continuous Time Markov Chains	en_US
dc.subject	Semi-Markov Decision Processes (SMDP)	en_US
dc.subject	Optimal Markov Control	en_US
dc.subject.classification	Mathematics	en_US
dc.title	Single and Multi-player Stochastic Dynamic Optimization	en_US
dc.type	Thesis	en_US
dc.degree.name	PhD	en_US
dc.degree.level	Doctoral	en_US
dc.degree.discipline	Faculty of Science	en_US

