Show simple item record

dc.contributor.advisor: D'Souza, Deepak
dc.contributor.author: Habeeb, P
dc.date.accessioned: 2025-04-17T09:19:15Z
dc.date.available: 2025-04-17T09:19:15Z
dc.date.submitted: 2024
dc.identifier.uri: https://etd.iisc.ac.in/handle/2005/6900
dc.description.abstract: Autonomous technologies are becoming increasingly prevalent due to the numerous benefits they offer, including improved safety and security, enhanced accessibility for mobility-challenged individuals, and the ability to operate in hazardous environments. Autonomous vehicles rely on various inputs such as camera images, LiDAR readings, and infrared signals, which are processed by advanced neural network-based controllers. These controllers play a critical role in determining the vehicle’s trajectory, making it essential that the closed-loop control system and perception modules are highly reliable and accurate. However, the inherent complexity and opacity of neural networks pose significant challenges to ensuring their safety and reliability. The work in this thesis addresses these challenges through novel contributions in two critical areas: the safety verification of camera-based autonomous systems, and the approximate conformance checking of neural network controlled closed-loop systems.

In the first part of this work, we address the “scene safety” problem: verifying the safety of the trajectories of a camera-based autonomous vehicle operating in a synthetic 3D scene with specified initial and target regions. To simplify the analysis, we introduce the notion of “image-invariant” regions, which are regions in 3D space where the camera records identical images of the scene. Leveraging this concept, we propose algorithms both for verifying scene safety and for falsification, that is, for efficiently identifying unsafe trajectories. To improve scalability, we introduce an abstraction-refinement algorithm based on the notion of interval images, a novel abstract data structure representing a set of images in a 3D scene. This algorithm leverages recent neural network verification tools to interpret neural networks on interval images. Additionally, we present efficient approaches to compute interval images for a given region using computer graphics-style rendering. Experimental evaluations on complex 3D environments, including a case study of an autonomous drone, demonstrate the effectiveness, scalability, and practicality of our proposed methods for both safety analysis and falsification.

While the ability of neural networks to handle uncertainty and to adapt to dynamic environments through retraining as more data becomes available is a significant advantage, it also increases the complexity of their verification and certification. A promising way to address this evolving nature of neural network controllers is to develop mechanisms that can transfer safety proofs established for one version of a controller to a newer version. In the second part of this thesis, we consider a fundamental problem toward achieving this goal by investigating the proximity of neural networks and its effect on the reachable sets of closed-loop systems in which they are employed. First, we define the epsilon-conformance problem, which quantifies the difference in the outputs of two neural networks on identical inputs. We reduce the conformance checking problem to reachability analysis using neural network transformations. We provide an experimental comparison of our epsilon-conformance checking approach across various reachability analysis tools, as well as against alternative epsilon-conformance checking algorithms. We illustrate the benefits of our approach and identify reachability analysis tools that are particularly effective for conformance checking.
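
One natural way to picture the reduction described above is via a combined network whose output is the difference f1(x) - f2(x): bounding that difference over an input set then becomes a reachability question about a single network. The sketch below is a minimal, hypothetical PyTorch illustration, not code from the thesis; the names DifferenceNetwork and sampled_conformance_gap are invented here, and the random-sampling bound merely stands in for the reachability analysis and neural network verification tools the actual approach relies on (sampling can falsify conformance but cannot prove it).

import torch
import torch.nn as nn

class DifferenceNetwork(nn.Module):
    # Combined network whose output is f1(x) - f2(x); one natural reading of
    # epsilon-conformance over an input set is that this output stays inside
    # [-epsilon, epsilon] everywhere on that set.
    def __init__(self, f1: nn.Module, f2: nn.Module):
        super().__init__()
        self.f1 = f1
        self.f2 = f2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.f1(x) - self.f2(x)

def sampled_conformance_gap(f1: nn.Module, f2: nn.Module,
                            low: torch.Tensor, high: torch.Tensor,
                            n: int = 10_000) -> float:
    # Estimate sup_x ||f1(x) - f2(x)||_inf over the input box [low, high] by
    # random sampling; a sound check would instead hand DifferenceNetwork to a
    # reachability or neural network verification tool.
    diff = DifferenceNetwork(f1, f2)
    x = low + (high - low) * torch.rand(n, *low.shape)
    with torch.no_grad():
        return diff(x).abs().max().item()

Given two controllers f1 and f2 trained on the same task and an input box [low, high], a small value of sampled_conformance_gap(f1, f2, low, high) is evidence, though not a proof, of epsilon-conformance for that epsilon.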
Additionally, we introduce the notion of (L,epsilon)-conformance for neural networks, which allows us to semantically quantify the deviations in the behaviors of closed-loop systems that use different neural network controllers. We then consider the computational problem of checking this notion of conformance for two neural networks. We reduce this problem to reachability analysis on a combined neural network, enabling the use of existing neural network verification tools for conformance checking. Our experimental results on an autonomous rocket landing system demonstrate the feasibility of checking approximate conformance for different neural networks trained on the same dynamics, as well as the practical semantic closeness exhibited by the corresponding closed-loop systems. Overall, this work provides efficient techniques to transfer safety proofs between evolving controllers, enabling scalable verification of neural network-driven closed-loop systems while maintaining safety guarantees.
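
To make the notion of semantic closeness of closed-loop behaviors concrete, here is a minimal, hypothetical simulation-based sketch: it rolls out the same (user-supplied) plant dynamics under an old and a new controller from a common initial state and reports the largest state deviation over a bounded horizon. This only illustrates the kind of deviation being quantified; it is neither the formal (L,epsilon)-conformance definition nor the reachability-based combined-network procedure developed in the thesis, and the function names and the dynamics signature are assumptions.

import torch
import torch.nn as nn

def rollout(dynamics, controller: nn.Module, x0: torch.Tensor, horizon: int) -> torch.Tensor:
    # Simulate x_{t+1} = dynamics(x_t, controller(x_t)) for `horizon` steps and
    # return the visited states, including x0, as a stacked tensor.
    xs, x = [x0], x0
    with torch.no_grad():
        for _ in range(horizon):
            x = dynamics(x, controller(x))
            xs.append(x)
    return torch.stack(xs)

def max_closed_loop_deviation(dynamics, ctrl_old: nn.Module, ctrl_new: nn.Module,
                              x0: torch.Tensor, horizon: int) -> float:
    # Largest pointwise state deviation between the two closed-loop trajectories
    # started from the same initial state; a reachability-based check would bound
    # this over whole sets of initial states rather than a single simulation.
    traj_old = rollout(dynamics, ctrl_old, x0, horizon)
    traj_new = rollout(dynamics, ctrl_new, x0, horizon)
    return (traj_old - traj_new).abs().max().item()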
dc.language.iso: en_US
dc.relation.ispartofseries: ;ET00910
dc.rights: I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.
dc.subject: Formal Verification
dc.subject: Autonomous Systems
dc.subject: Camera-Based Systems
dc.subject: Neural Network
dc.subject: Closed-Loop Systems
dc.subject.classification: Research Subject Categories::TECHNOLOGY::Information technology::Computer science
dc.title: Reasoning about Safety of Camera-Based Neural Network Controlled Systems
dc.type: Thesis
dc.degree.name: PhD
dc.degree.level: Doctoral
dc.degree.grantor: Indian Institute of Science
dc.degree.discipline: Engineering

