
    Simulation based methods for optimization

    T07562.pdf (61.04Mb)
    Author
    Mishra, Vivek Kumar
    Abstract
In many engineering problems, one is interested in optimizing a parameterized performance objective. If the objective function is analytically known and its derivatives are easily computable, then methods such as gradient descent, conjugate directions and quasi-Newton can be applied directly. However, in many real-life problems from disciplines such as neural networks, supply chain management and computer networks, only noisy observations of the objective function can be obtained, and each such observation is expensive in time and resources. In many of these applications, the objective function is a long-run average performance metric obtained from simulation. In such scenarios, local search algorithms that perform incremental updates along descent directions are often used; different algorithms differ in how the descent direction is estimated. In one such approach, perturbation analysis (PA), one seeks a gradient estimate at any parameter value from only one or a small number of simulation runs. However, PA-type methods require constraining regularity assumptions on the process parameters and performance objectives.

Finite difference stochastic approximation (FDSA) is another approach to simulation optimization. FDSA does not require such constraining regularity assumptions and is more widely applicable. A classical FDSA method is the Kiefer-Wolfowitz (K-W) finite difference algorithm. In simultaneous perturbation stochastic approximation (SPSA), a random-directions variant of K-W, all parameters are perturbed together at random to obtain two function measurements from which the gradient is approximated. This yields substantial savings in the number of function evaluations, especially for higher-dimensional problems. SPSA has been widely studied and is seen to perform well in general.
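The two-measurement SPSA estimate can be sketched in a few lines. The quadratic objective, noise level, perturbation size and step sizes below are all illustrative assumptions, not choices made in this thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(theta):
    """Noisy measurement of f(theta) = ||theta||^2 (illustrative objective)."""
    return float(theta @ theta + rng.normal(scale=0.01))

def spsa_gradient(theta, delta=0.1):
    """Two-measurement SPSA gradient estimate: every coordinate is
    perturbed simultaneously by a random +/-1 (Bernoulli) direction."""
    perturb = rng.choice([-1.0, 1.0], size=theta.shape)
    y_plus = noisy_f(theta + delta * perturb)
    y_minus = noisy_f(theta - delta * perturb)
    # A single scalar difference quotient serves every coordinate.
    return (y_plus - y_minus) / (2.0 * delta) / perturb

# Descent loop: only two function evaluations per step, regardless of dimension.
theta = np.ones(10)
for n in range(1, 2001):
    theta -= (0.1 / n) * spsa_gradient(theta)
```

Note that the cost of one gradient estimate is two simulations whatever the dimension, which is the source of the savings mentioned above.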
Another algorithm based on random perturbations is the smoothed functional algorithm (SFA). Here one approximates the gradient of the objective function by convolving the objective with a multivariate normal distribution. In this work, we focus on a variety of FDSA algorithms based on random perturbations and study their applications. In particular, because of their scalability properties, we concentrate mainly on SPSA and SFA-based algorithms.

PA and FDSA-based algorithms were originally developed in settings without functional constraints, but many real-life problems require optimization subject to constraints. For example, in communication networks one is often interested in (say) maximizing the throughput of a connection subject to quality of service (QoS) constraints: one constraint may require the mean packet delay to stay below a desired threshold such as a few milliseconds, while another may require the expected fraction of packets lost in transmission to stay below a second threshold. We develop four algorithms for simulation-based optimization under multiple inequality constraints. Both the cost and the constraint functions are taken to be long-run averages of certain state-dependent single-stage functions. We pose the problem in the simulation optimization framework using the Lagrange multiplier method. Two of our algorithms estimate only the gradient of the Lagrangian, while the other two estimate both its gradient and its Hessian; in the process, we also develop several new gradient and Hessian estimators. All our algorithms use two simulations each. Two of them are based on the smoothed functional (SF) technique, while the other two are based on SPSA. We prove the convergence of our algorithms and present numerical experiments on a setting involving a network of queues.
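To make the smoothing idea concrete, a two-measurement SF gradient estimate can be sketched as follows. The quadratic objective, smoothing parameter and sample count are illustrative assumptions; the thesis's own estimators differ in detail:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_f(theta):
    """Noisy measurement of f(theta) = ||theta||^2 (illustrative objective)."""
    return float(theta @ theta + rng.normal(scale=0.01))

def sf_gradient(theta, beta=0.05, n_samples=4000):
    """Smoothed functional gradient estimate. Convolving f with a
    N(0, beta^2 I) density gives a smooth surrogate whose gradient is
    E[eta * (f(theta + beta*eta) - f(theta - beta*eta))] / (2*beta),
    with eta ~ N(0, I); two measurements per sample reduce variance."""
    total = np.zeros_like(theta)
    for _ in range(n_samples):
        eta = rng.standard_normal(theta.shape)
        diff = noisy_f(theta + beta * eta) - noisy_f(theta - beta * eta)
        total += eta * diff
    return total / (2.0 * beta * n_samples)

theta = np.array([1.0, -2.0])
g = sf_gradient(theta)  # the true gradient of ||theta||^2 is 2*theta
```

The Gaussian perturbation plays the same role as SPSA's Bernoulli direction, but the convolution additionally smooths a non-differentiable or noisy objective.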
The Newton-based SF algorithm is seen to show the best overall performance.

A typical discrete parameter optimization problem asks for a parameter value within a discrete set at which the parameterized objective function is minimized. In the setting we consider, the objective is again a long-run average performance metric. We present two efficient discrete parameter simulation optimization (DPSO) algorithms in this setting: one uses a smoothed functional approximation (SFA) procedure, while the other is based on SPSA. The use of SFA for DPSO had not been proposed previously in the literature. Further, both algorithms adopt an interesting technique of random projections that we present for the first time. We give a proof of convergence of our algorithms and present detailed numerical experiments on a problem of admission control with dependent service times. We consider two settings, involving parameter sets of moderate and of large size, respectively. In the first setting, we also show performance comparisons with the well-studied optimal computing budget allocation (OCBA) algorithm as well as with the equal allocation algorithm.

A stochastic differential equation (SDE) is a differential equation in which at least one of the terms is a stochastic process (usually a Brownian motion); the solution of an SDE is thus itself a stochastic process. SDEs are used to model diverse phenomena in areas such as portfolio optimization, aerospace systems and wireless networks. We consider the problem of estimating the optimal parameter trajectory over a finite time interval in a parameterized SDE and propose a simulation-based algorithm for this purpose. Towards this end, we discretize the SDE over finitely many time instants and reformulate the problem as one of finding an optimal parameter at each of these instants.
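Such a time discretization can be sketched with an Euler-Maruyama scheme. The mean-reverting drift, parameter values and all numerical constants below are illustrative assumptions, not the model studied in the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)

def euler_maruyama(theta, x0=0.0, mu=1.0, sigma=0.2, T=1.0, n_steps=100):
    """Euler-Maruyama discretization of the parameterized SDE
    dX_t = theta_t * (mu - X_t) dt + sigma dW_t over [0, T].
    `theta` is a vector giving the parameter value used at each discrete
    instant -- the 'parameter trajectory' to be optimized."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))  # Brownian increment over dt
        x[k + 1] = x[k] + theta[k] * (mu - x[k]) * dt + sigma * dw
    return x

# One simulated path under a constant parameter trajectory theta_t = 2.
path = euler_maruyama(theta=np.full(100, 2.0))
```

Each simulation run of the algorithm produces one such path, and the parameter at every discrete instant is updated from simulated cost measurements.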
A stochastic approximation algorithm based on the smoothed functional technique is adapted to this setting to find the optimal parameter trajectory. A proof of convergence of the algorithm is presented, and results of numerical experiments over two different settings are shown; the algorithm is seen to exhibit good performance. We also extend our framework to finding optimal parameterized feedback policies for controlled SDEs and present numerical results in this scenario as well.

Finally, we consider a parameterized SDE model of slotted Aloha with the retransmission probability as the associated parameter. We formulate the problem in finite and infinite horizon cost settings. In the first setting we obtain an optimal parameter trajectory that prescribes the parameter to use at any given instant, while in the second we obtain an optimal time-invariant (scalar) parameter. Our algorithms are seen to exhibit good performance.
    URI
    https://etd.iisc.ac.in/handle/2005/7316
    Collections
    • Computer Science and Automation (CSA) [506]

    etd@IISc is a joint service of SERC & J R D Tata Memorial (JRDTML) Library || Powered by DSpace software || DuraSpace
    Contact Us | Send Feedback | Thesis Templates
    Theme by 
    Atmire NV