Show simple item record

dc.contributor.advisor   Bhatnagar, Shalabh
dc.contributor.author    Reddy, Danda Sai Koti
dc.date.accessioned      2018-05-22T15:14:41Z
dc.date.accessioned      2018-07-31T04:39:11Z
dc.date.available        2018-05-22T15:14:41Z
dc.date.available        2018-07-31T04:39:11Z
dc.date.issued           2018-05-22
dc.date.submitted        2017
dc.identifier.uri        https://etd.iisc.ac.in/handle/2005/3582
dc.identifier.abstract   http://etd.iisc.ac.in/static/etd/abstracts/4450/G28169-Abs.pdf  en_US
dc.description.abstract  Optimization problems involving uncertainties are common in a variety of engineering disciplines such as transportation systems, manufacturing, communication networks, healthcare and finance. The large number of input variables and the lack of a system model prohibit a precise analytical solution, and a viable alternative is to employ simulation-based optimization. The idea here is to simulate the stochastic system under consideration a few times, updating the system parameters until a good enough solution is obtained. Formally, given only noise-corrupted measurements of an objective function, we wish to find a parameter that minimises the objective function. Iterative algorithms using statistical methods search the feasible region to improve upon the candidate parameter. Stochastic approximation algorithms are the best suited, most studied and most widely applied algorithms for finding solutions when the feasible region is a continuously valued set. One can use information on the gradient/Hessian of the objective to aid the search process. However, due to lack of knowledge of the noise distribution, one needs to estimate the gradient/Hessian from noisy samples of the cost function obtained from simulation. Simple gradient search schemes take many iterations to converge to a local minimum and are heavily dependent on the choice of step-sizes. Stochastic Newton methods, on the other hand, can counter the ill-conditioning of the objective function, as they incorporate second-order information into the stochastic updates, and are often more accurate than simple gradient search schemes. We propose enhancements to the Hessian estimation scheme used in two recently proposed stochastic Newton methods, based on the ideas of random directions stochastic approximation (2RDSA) [21] and simultaneous perturbation stochastic approximation (2SPSA-3) [6], respectively. The proposed scheme, inspired by [29], reduces the error in the Hessian estimate by (i) incorporating a zero-mean feedback term and (ii) optimizing the step-sizes used in the Hessian recursion. We prove that both 2RDSA and 2SPSA-3 with our Hessian improvement scheme converge asymptotically to the true Hessian. The key advantage of 2RDSA and 2SPSA-3 is that they require only 75% of the per-iteration simulation cost of 2SPSA with improved Hessian estimation (2SPSA-IH) [29]. Numerical experiments show that 2RDSA-IH outperforms both 2SPSA-IH and 2RDSA without the improved Hessian estimation scheme.  en_US
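To make the abstract's description concrete, the following is a minimal sketch (not the thesis code) of second-order SPSA with a zero-mean feedback correction to the Hessian estimate, in the spirit of the scheme described above. The quadratic test objective, dimension, gain sequences, and averaging weights are illustrative assumptions; the thesis additionally optimizes the weights, and its 2RDSA variant uses differently distributed perturbations with only three simulations per iteration instead of the four used here.

import numpy as np

# A minimal sketch, assuming a noisy quadratic objective; all constants
# below are illustrative, not taken from the thesis.
rng = np.random.default_rng(0)
p = 5                                    # problem dimension (assumed)
H_true = np.diag(np.arange(1.0, p + 1))  # ill-conditioned quadratic
b = np.ones(p)

def y(theta):
    """Noisy measurement of f(theta) = 0.5*theta'H theta + b'theta."""
    return 0.5 * theta @ H_true @ theta + b @ theta + 0.1 * rng.standard_normal()

theta = np.zeros(p)
H_bar = np.eye(p)                        # running Hessian estimate
for k in range(1, 5001):
    a_k, c_k = 0.1 / k, 0.5 / k**0.101   # gain sequences (assumed)
    delta  = rng.choice([-1.0, 1.0], p)  # Bernoulli perturbations
    deltat = rng.choice([-1.0, 1.0], p)

    # Four simulations per iteration (2SPSA); 2RDSA needs only three.
    yp,  ym  = y(theta + c_k * delta), y(theta - c_k * delta)
    ypt = y(theta + c_k * delta + c_k * deltat)
    ymt = y(theta - c_k * delta + c_k * deltat)

    g_hat = (yp - ym) / (2 * c_k) / delta           # SPSA gradient estimate

    # Per-iteration Hessian estimate (2SPSA form): symmetrized outer
    # product of a one-sided gradient difference with 1/delta.
    dG = ((ypt - yp) - (ymt - ym)) / c_k / deltat
    M = np.outer(dG, 1.0 / delta) / (2 * c_k)
    H_hat = 0.5 * (M + M.T)

    # Zero-mean feedback: subtract the error this estimator would commit
    # on a noiseless quadratic whose Hessian equals the current H_bar.
    Mf = (delta @ H_bar @ deltat) * np.outer(1.0 / deltat, 1.0 / delta)
    psi = 0.5 * (Mf + Mf.T) - H_bar
    w_k = 1.0 / (k + 1)                  # plain averaging; the thesis
                                         # optimizes these weights instead
    H_bar = (1 - w_k) * H_bar + w_k * (H_hat - psi)

    # Newton step with an eigenvalue-clipped (positive definite) H_bar.
    eigval, eigvec = np.linalg.eigh(0.5 * (H_bar + H_bar.T))
    H_pd = eigvec @ np.diag(np.maximum(eigval, 1e-2)) @ eigvec.T
    theta = theta - a_k * np.linalg.solve(H_pd, g_hat)

print("||H_bar - H||_F =", np.linalg.norm(H_bar - H_true))

The feedback term psi has zero mean because the per-sample estimate of the current H_bar, built from the same perturbations, equals H_bar in expectation; subtracting it cancels much of the per-iteration estimation error without biasing the recursion.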
dc.language.iso          en_US  en_US
dc.relation.ispartofseries  G28169  en_US
dc.subject               Stochastic Newton Methods  en_US
dc.subject               Hessian Estimation  en_US
dc.subject               Simultaneous Perturbation Stochastic Approximation (SPSA)  en_US
dc.subject               Random Directions Stochastic Approximation (RDSA)  en_US
dc.subject               Finite-difference Stochastic Approximation (FDSA)  en_US
dc.subject               Hessian Estimation Scheme  en_US
dc.subject.classification  Computer Science  en_US
dc.title                 Stochastic Newton Methods With Enhanced Hessian Estimation  en_US
dc.type                  Thesis  en_US
dc.degree.name           MSc Engg  en_US
dc.degree.level          Masters  en_US
dc.degree.discipline     Faculty of Engineering  en_US

