Towards Robustness of Neural Legal Judgement System
Abstract
Legal Judgment Prediction (LJP) applies Natural Language Processing (NLP) techniques
to predict judgment outcomes from fact descriptions. It can serve as a valuable legal assistant
and benefit both legal practitioners and ordinary citizens. Recently, rapid advances in transformer-
based pre-trained language models have led to considerable improvements in this area. However,
empirical results show that existing LJP systems are not robust to adversarial and noisy inputs,
and they cannot handle long legal documents. In this work, we explore the robustness and
efficiency of LJP systems, even in a low-data regime.
In the first part, we empirically verify that existing state-of-the-art LJP systems are not robust.
We then propose a novel architecture for LJP tasks that can handle long texts
and adversarial examples. Our model outperforms state-of-the-art models, even in the
presence of legal-domain adversarial examples.
In the second part, we investigate approaches for LJP in a low-data regime. We
divide this work into two scenarios depending on the number of unseen classes in
the dataset used for the LJP system. In the first scenario, we propose a few-shot
approach with only two labels for the judgment prediction task. In the second scenario, we
propose an approach for judgment prediction with a large number of labels. For
both scenarios, we provide novel few-shot learning architectures that are also robust to
adversaries.
We conducted extensive experiments on American, European, and Indian legal datasets in the
few-shot setting. Although trained with the few-shot approach, our models perform comparably
to state-of-the-art models trained on large legal-domain datasets.