## Approximate Dynamic Programming and Reinforcement Learning - Algorithms, Analysis and an Application

dc.contributor.advisor | Bhatnagar, Shalabh | |

dc.contributor.author | Lakshminarayanan, Chandrashekar | |

dc.date.accessioned | 2018-08-13T14:20:35Z | |

dc.date.accessioned | 2018-08-28T09:15:03Z | |

dc.date.available | 2018-08-13T14:20:35Z | |

dc.date.available | 2018-08-28T09:15:03Z | |

dc.date.issued | 2018-08-13 | |

dc.date.submitted | 2015 | |

dc.identifier.uri | http://etd.iisc.ac.in/handle/2005/3963 | |

dc.identifier.abstract | http://etd.iisc.ac.in/static/etd/abstracts/4850/G27265-Abs.pdf | en_US |

dc.description.abstract | Problems involving optimal sequential decision making in uncertain dynamic systems arise in domains such as engineering, science and economics. Such problems can often be cast in the framework of a Markov Decision Process (MDP). Solving an MDP requires computing the optimal value function and the optimal policy. The ideas of dynamic programming (DP) and the Bellman equation (BE) are at the heart of solution methods. The three important exact DP methods are value iteration, policy iteration and linear programming. The exact DP methods compute the optimal value function and the optimal policy. However, they are inadequate in practice because the state space is often large, and one might have to resort to approximate methods that compute sub-optimal policies. Further, in certain cases, the system observations are known only in the form of noisy samples and we need to design algorithms that learn from these samples. In this thesis we study interesting theoretical questions pertaining to approximate and learning algorithms, and also present an application of MDPs in the domain of crowd sourcing. Approximate Dynamic Programming (ADP) methods handle the issue of large state space by computing an approximate value function and/or a sub-optimal policy. In this thesis, we are concerned with conditions that result in provably good policies. Motivated by the limitations of the projected Bellman equation (PBE) in the conventional linear algebra, we study the PBE in the (min, +) linear algebra. It is a well-known fact that deterministic optimal control problems with a cost/reward criterion are (min, +)/(max, +) linear, and ADP methods have been developed for such systems in the literature. However, it is straightforward to show that infinite horizon discounted reward/cost MDPs are neither (min, +) nor (max, +) linear.
We develop novel ADP schemes, namely the Approximate Q Iteration (AQI) and Variational Approximate Q Iteration (VAQI), where the approximate solution is a (min, +) linear combination of a set of basis functions whose span constitutes a subsemimodule. We show that the new ADP methods are convergent and we present a bound on the performance of the sub-optimal policy. The Approximate Linear Program (ALP) makes use of linear function approximation (LFA) and offers theoretical performance guarantees. Nevertheless, the ALP is difficult to solve due to the presence of a large number of constraints, and in practice a reduced linear program (RLP) is solved instead. The RLP has a tractable number of constraints sampled from the original constraints of the ALP. Though the RLP is known to perform well in experiments, theoretical guarantees are available only for a specific RLP obtained under idealized assumptions. In this thesis, we generalize the RLP to define a generalized reduced linear program (GRLP) which has a tractable number of constraints that are obtained as positive linear combinations of the original constraints of the ALP. The main contribution here is the novel theoretical framework developed to obtain error bounds for any given GRLP. Reinforcement Learning (RL) algorithms can be viewed as sample-trajectory-based solution methods for solving MDPs. Typically, RL algorithms that make use of stochastic approximation (SA) are iterative schemes taking small steps towards the desired value at each iteration. Actor-critic algorithms form an important sub-class of RL algorithms, wherein the critic is responsible for policy evaluation and the actor is responsible for policy improvement. The actor and critic iterations have different step-size schedules; in particular, the step-sizes used by the actor updates generally have to be much smaller than those used by the critic updates.
Such SA schemes that use different step-size schedules for different sets of iterates are known as multi-timescale stochastic approximation schemes. One of the most important conditions required to ensure the convergence of the iterates of a multi-timescale SA scheme is that the iterates need to be stable, i.e., they should be uniformly bounded almost surely. However, the conditions that imply the stability of the iterates in a multi-timescale SA scheme have not been well established. In this thesis, we provide verifiable conditions that imply stability of two-timescale stochastic approximation schemes. As an example, we also demonstrate that the stability of a widely used actor-critic RL algorithm follows from our analysis. Crowd sourcing (crowd) is a new mode of organizing work by breaking it into smaller chunks of tasks and outsourcing them to a distributed and large group of people in the form of an open call. Recently, crowd sourcing has become a major pool for human intelligence tasks (HITs) such as image labeling, form digitization, natural language processing, machine translation evaluation and user surveys. Large organizations/requesters are increasingly interested in crowd sourcing the HITs generated out of their internal requirements. Task starvation leads to huge variation in the completion times of tasks posted to the crowd. This is an issue for frequent requesters desiring predictability in the completion times of tasks, specified in terms of the percentage of tasks completed within a stipulated amount of time. An important task attribute that affects the completion time of a task is its price. However, a pricing policy that does not take the dynamics of the crowd into account might fail to achieve the desired predictability in completion times. Here, we make use of the MDP framework to compute a pricing policy that achieves predictable completion times in simulations as well as real-world experiments. | en_US |
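The abstract mentions value iteration as one of the three exact DP methods. A minimal sketch of value iteration on a toy MDP, assuming a tabular setting with hypothetical transition and reward numbers (not from the thesis):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Iterate the Bellman optimality operator to its fixed point.
    P[a] is the |S|x|S| transition matrix of action a; R[a] its reward vector."""
    n_states = P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        # Bellman backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            # return the (near-)optimal value function and a greedy policy
            return V_new, Q.argmax(axis=0)
        V = V_new

# A toy 2-state, 2-action MDP with made-up numbers, for illustration only.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # transitions under action 0
     np.array([[0.1, 0.9], [0.7, 0.3]])]   # transitions under action 1
R = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
V_star, pi_star = value_iteration(P, R)
```

Because the Bellman operator is a gamma-contraction, the loop is guaranteed to terminate; the returned `V_star` satisfies the Bellman equation up to the tolerance.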
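The (min, +) linearity of deterministic cost problems noted in the abstract can be made concrete: a DP backup over a deterministic cost matrix is exactly a (min, +) matrix-vector product. A small sketch with a hypothetical 3-node cost matrix (the AQI/VAQI schemes themselves are not reproduced here):

```python
import math

INF = math.inf

def min_plus_matvec(C, V):
    """(min,+) matrix-vector product: (C (x) V)[i] = min_j (C[i][j] + V[j]).
    For a deterministic cost problem this is one DP backup, which is why
    deterministic optimal control is (min, +) linear."""
    return [min(C[i][j] + V[j] for j in range(len(V))) for i in range(len(C))]

# Hypothetical edge-cost matrix (INF = no edge); node 2 is the goal.
C = [[INF, 1.0, 4.0],
     [INF, INF, 1.0],
     [INF, INF, 0.0]]
V = [INF, INF, 0.0]          # cost-to-go at the goal is 0
for _ in range(3):           # iterate the backup to a fixed point
    V = min_plus_matvec(C, V)
# V now holds the shortest-path cost-to-go from each node
```

Here the backup distributes over the (min, +) operations, which is the linearity that fails for stochastic discounted MDPs, motivating the thesis's subsemimodule-based approximation.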
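The GRLP construction described above replaces the ALP's many constraints by a few positive linear combinations of them. A sketch of just that aggregation step, with made-up sizes and random data (the thesis's error-bound framework is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# An ALP-style program has constraints A r >= b, roughly one per
# state-action pair, so the number of rows can be enormous.
A = rng.standard_normal((1000, 3))
r0 = rng.standard_normal(3)
b = A @ r0 - rng.random(1000)        # constructed so that r0 is feasible

# GRLP idea: keep only a small number of *positive* linear combinations
# of the rows. Nonnegative row weights preserve >= inequalities, so any
# point feasible for the original system stays feasible for the reduced one.
W = rng.random((10, 1000))           # W >= 0, far fewer rows than A
A_red, b_red = W @ A, W @ b
```

Constraint sampling (the RLP) is the special case where each row of `W` is an indicator picking out one original constraint; general nonnegative `W` is what the GRLP analysis covers.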
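The two-timescale step-size condition discussed above can be illustrated with a minimal scalar sketch, assuming hypothetical step-size exponents: the fast (critic-like) iterate uses larger steps a_n, the slow (actor-like) iterate smaller steps b_n, with b_n / a_n -> 0. This is an illustration of the timescale separation only, not the actor-critic algorithm analyzed in the thesis:

```python
import random

def two_timescale(theta_star=2.0, n_iters=200000, seed=0):
    """The fast iterate y tracks a noisy target (policy-evaluation-like);
    the slow iterate x tracks y (policy-improvement-like)."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    for n in range(1, n_iters + 1):
        a_n = n ** -0.6          # fast (critic) step size
        b_n = n ** -0.9          # slow (actor) step size, b_n / a_n -> 0
        sample = theta_star + rng.gauss(0.0, 1.0)   # noisy observation
        y += a_n * (sample - y)  # fast timescale: tracks the noisy target
        x += b_n * (y - x)       # slow timescale: sees y as quasi-static
    return x, y
```

From the slow iterate's viewpoint the fast iterate has already equilibrated, which is the standard heuristic behind two-timescale convergence arguments; the stability (almost-sure boundedness) of such coupled iterates is what the thesis establishes verifiable conditions for.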

dc.language.iso | en_US | en_US |

dc.relation.ispartofseries | G27265 | en_US |

dc.subject | Dynamic Programming (DP) | en_US |

dc.subject | Reinforcement Learning - Machine Learning | en_US |

dc.subject | Markov Decision Process (MDP) | en_US |

dc.subject | Bellman Equation (BE) | en_US |

dc.subject | Machine Learning | en_US |

dc.subject | Bellman Operator | en_US |

dc.subject | Crowdsourcing | en_US |

dc.subject | Approximate Linear Programming (ALP) | en_US |

dc.subject | Reinforcement Learning | en_US |

dc.subject | Stochastic Approximation | en_US |

dc.subject | Approximate Dynamic Programming (ADP) | en_US |

dc.subject | Approximate Linear Program | en_US |

dc.subject | Linear Function Approximation (LFA) | en_US |

dc.subject | Reduced Linear Program (RLP) | en_US |

dc.subject | Generalized Reduced Linear Program (GRLP) | en_US |

dc.subject | Crowd Sourcing | en_US |

dc.subject.classification | Computer Science and Automation | en_US |

dc.title | Approximate Dynamic Programming and Reinforcement Learning - Algorithms, Analysis and an Application | en_US |

dc.type | Thesis | en_US |

dc.degree.name | PhD | en_US |

dc.degree.level | Doctoral | en_US |

dc.degree.discipline | Faculty of Engineering | en_US |