dc.contributor.advisor  Talukdar, Partha Pratim
dc.contributor.author  Kumar, Ashutosh
dc.date.accessioned  2023-04-05T04:00:30Z
dc.date.available  2023-04-05T04:00:30Z
dc.date.submitted  2022
dc.identifier.uri  https://etd.iisc.ac.in/handle/2005/6055
dc.description.abstract
Deep learning models typically require a large volume of data. Manual curation of datasets is time-consuming and limited by imagination. As a result, natural language generation (NLG) has been employed to automate the process. However, in their vanilla formulation, NLG models are prone to producing degenerate, uninteresting, and often hallucinated outputs. Constrained generation aims to overcome these shortcomings by providing additional information to the generation process. Training data thus generated can help improve the robustness of deep learning models. The central research question of the thesis is therefore: “How can we constrain generation models, especially in NLP, to produce meaningful outputs and utilize them for building better classification models?”

To demonstrate how generation models can be constrained, we present two approaches for paraphrase generation, the task of generating text that conveys the same meaning as a reference text:

(1) DiPS (Diversity in Paraphrases using Submodularity): The first approach constrains paraphrase generation to ensure diversity, i.e., that the generated texts are sufficiently different from each other. We propose a decoding algorithm for obtaining diverse texts, built on a novel formulation of the problem as monotone submodular function maximization, specifically targeted at paraphrase generation. We demonstrate the effectiveness of our method for data augmentation on multiple tasks such as intent classification and paraphrase recognition.

(2) SGCP (Syntax Guided Controlled Paraphraser): The second approach constrains paraphrase generation to ensure syntactic control, i.e., that the generated text is syntactically coherent with an exemplar sentence. We propose SGCP, an end-to-end framework for syntactic paraphrase generation that does not compromise relevance (fidelity). Through a battery of automated metrics and a comprehensive human evaluation, we verify that this approach outperforms prior works that utilize only limited syntactic information from the parse tree.

The second part of the research question (meaningful outputs) pertains to ensuring that the generated output is meaningful. Towards this, we present an approach for paraphrase detection, the task of deciding whether two input natural language statements are paraphrases of each other, to ascertain that the generated output is semantically coherent with the reference text. Fine-tuning pre-trained models such as BERT and RoBERTa on paraphrastic datasets has become the go-to approach for such tasks. However, tasks like paraphrase detection are symmetric: they require the output to be invariant to the order of the inputs. In the traditional fine-tuning approach to paraphrase classification, the predicted labels or confidence scores are often inconsistent across input orders. We validate this shortcoming and apply a consistency loss function to alleviate inconsistency in symmetric classification. Our results show improved consistency in predictions on three paraphrase detection datasets without a significant drop in accuracy.
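The greedy selection idea behind DiPS, as described in the abstract above, can be illustrated with a minimal sketch. This is an illustration of monotone submodular maximization for diverse subset selection, not the thesis implementation; the `fidelity` scores and the bigram-coverage objective are assumptions made for the example:

```python
def ngrams(text, n=2):
    """Set of word n-grams in a sentence (used as a diversity proxy)."""
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def select_diverse(candidates, fidelity, k=3, lam=1.0):
    """Greedily maximize a monotone submodular objective:
        f(S) = sum of fidelity scores + lam * |union of bigrams covered by S|.
    The coverage term is monotone submodular, so greedy selection enjoys
    the classic (1 - 1/e) approximation guarantee.

    candidates: list of paraphrase strings (e.g., over-generated beams)
    fidelity:   dict mapping candidate -> relevance-to-source score
    """
    selected, covered = [], set()
    while len(selected) < k and len(selected) < len(candidates):
        # Marginal gain of adding candidate c to the current subset.
        def gain(c):
            return fidelity[c] + lam * len(ngrams(c) - covered)
        best = max((c for c in candidates if c not in selected), key=gain)
        selected.append(best)
        covered |= ngrams(best)
    return selected

# Toy usage with made-up fidelity scores:
cands = [
    "how do i reset my password",
    "how can i reset my password",
    "what is the way to change my password",
]
scores = {c: 1.0 for c in cands}
print(select_diverse(cands, scores, k=2))
```

With equal fidelity scores, the coverage term dominates the marginal gain, so the second pick is the candidate whose bigrams overlap least with the first — which is the diversity behavior the formulation is after.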
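The consistency loss for symmetric classification also admits a short sketch. One common formulation (an assumption here; the exact penalty used in the thesis may differ) adds a symmetrized KL term between the model's predictions on the two input orders to the usual cross-entropy. `model` is a hypothetical sentence-pair classifier returning logits:

```python
import torch
import torch.nn.functional as F

def symmetric_consistency_loss(model, a, b, labels, alpha=1.0):
    """Cross-entropy on both input orders plus a penalty on their
    disagreement. `model(x, y)` is assumed to return classification
    logits for the sentence pair (x, y); a and b are batched inputs.
    """
    logits_ab = model(a, b)          # order (a, b)
    logits_ba = model(b, a)          # order (b, a)

    # Standard supervised loss, applied to both orders.
    ce = F.cross_entropy(logits_ab, labels) + F.cross_entropy(logits_ba, labels)

    # Symmetrized KL between the two predictive distributions:
    # pushes the model toward order-invariant predictions.
    log_p = F.log_softmax(logits_ab, dim=-1)
    log_q = F.log_softmax(logits_ba, dim=-1)
    kl = F.kl_div(log_p, log_q, log_target=True, reduction="batchmean") \
       + F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")

    return ce + alpha * kl
```

At inference time, averaging the predicted probabilities over both input orders additionally guarantees a symmetric prediction regardless of how well the penalty was minimized.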
While these works address the research question via paraphrase generation and detection, the approaches presented here apply broadly to NLP-based deep learning models that require imposing constraints and ensuring consistency. The work on paraphrase generation can be extended to impose new kinds of constraints on generation (for example, sentiment coherence), while the work on paraphrase detection can be applied to ensure consistency in other symmetric classification tasks (for example, sarcasm interpretation) that use deep learning models.  en_US
dc.language.iso  en_US  en_US
dc.relation.ispartofseries  ;ET00070
dc.rights  I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.  en_US
dc.subject  Paraphrase Generation  en_US
dc.subject  Paraphrase Detection  en_US
dc.subject  Natural Language Processing  en_US
dc.subject  Deep learning models  en_US
dc.subject.classification  Research Subject Categories::TECHNOLOGY::Information technology::Computer science::Computer science  en_US
dc.title  Inducing Constraints in Paraphrase Generation and Consistency in Paraphrase Detection  en_US
dc.type  Thesis  en_US
dc.degree.name  PhD  en_US
dc.degree.level  Doctoral  en_US
dc.degree.grantor  Indian Institute of Science  en_US
dc.degree.discipline  Engineering  en_US

