On the Role of Negative Precedent in Legal Outcome Prediction

Every legal case sets a precedent by developing the law in one of the following two ways. It either expands its scope, in which case it sets positive precedent, or it narrows it, in which case it sets negative precedent. Legal outcome prediction, the prediction of positive outcome, is an increasingly popular task in AI. In contrast, we turn our focus to negative outcomes here, and introduce a new task of negative outcome prediction. We discover an asymmetry in existing models’ ability to predict positive and negative outcomes. Where the state-of-the-art outcome prediction model we used predicts positive outcomes at 75.06 F1, it predicts negative outcomes at only 10.09 F1, worse than a random baseline. To address this performance gap, we develop two new models inspired by the dynamics of a court process. Our first model significantly improves positive outcome prediction score to 77.15 F1 and our second model more than doubles the negative outcome prediction performance to 24.01 F1. Despite this improvement, shifting focus to negative outcomes reveals that there is still much room for improvement for outcome prediction models. https://github.com/valvoda/Negative-Precedent-in-Legal-Outcome-Prediction


Introduction
The legal system is inherently adversarial. Every case pits two parties against each other: the claimant, who alleges their rights have been breached, and the defendant, who denies breaching those rights. For each claim of the claimant, their lawyer will produce an argument, for which the defendant's lawyer will produce a counterargument. In precedential legal systems (Black, 2019), the decisions of past judgements are binding on judges deciding new cases.1 Therefore, both sides of the dispute will rely on the outcomes of previous cases to support their position (Duxbury, 2008; Lamond, 2016; Black, 2019). The claimant will assert that her circumstances are like those of previous claimants whose rights have been breached. The defendant, on the other hand, will allege that the circumstances are in fact more like those of unsuccessful claimants. The judge decides who is right, and by doing so establishes a new precedent. If it is the claimant who is successful in a particular claim, the precedent expands the law by including the new facts in its scope. If it is the defendant who is successful, the law is contracted by rejecting the new facts from its scope. The expansion or contraction is encoded in the case outcome; we will refer to them as positive outcome and negative outcome, respectively.
Positive and negative outcomes are equally binding, which means that the same reasons that motivate the study of positive outcomes also apply to negative outcomes. Both are important for computational legal analysis, a fact that has been known at least since Lawlor (1963).2 However, the de facto interpretation of precedent in today's legal NLP landscape focuses only on positive outcomes. Several researchers have shown that a simple model can achieve very high performance for this formulation of the outcome prediction task (Aletras et al., 2016; Chalkidis et al., 2019; Clavié and Alphonsus, 2021; Chalkidis et al., 2021b), a finding that has been replicated for a number of jurisdictions (Zhong et al., 2018; Xu et al., 2020).
In this work, we reformulate outcome prediction as the task of predicting both the positive and the negative outcome given the facts of the case. Our results indicate that while a simple BERT-based classification model can predict positive outcomes at an F1 of 75.06, it predicts negative outcomes at an F1 of only 10.09, falling short of a random baseline, which achieves 11.12 F1. This naturally raises the question: what causes such asymmetry? In §8, we argue that this disparity is caused by the fact that most legal NLP tasks are formulated without a deep understanding of how the law works.
Searching for a way to better predict negative outcomes, we hypothesise that building a probabilistic model that is more faithful to the legal process will improve both negative and positive outcome prediction. To test this hypothesis we develop two such models. Our first model, which we call the joint model, is trained to jointly predict positive and negative outcomes. Our second model, which we call the claim-outcome model, enforces the relationship between claims and outcomes. While the joint model significantly3 outperforms state-of-the-art models on positive outcome prediction with 77.15 F1, the claim-outcome model more than doubles the F1 on negative outcome prediction, reaching 24.01 F1. We take this result as strong evidence that neural models of the legal process should incorporate domain-specific knowledge of how that process works.

The Judicial Process
In order to motivate our two models of outcome, it is necessary to first understand the process by which law is formed. Broadly speaking, the legal process can be understood as the task of narrowing down the legal space where the breach of law might have taken place. Initially, before the legal process begins, the space includes all the law there is, i.e., every legal Article.4 It is the job of the lawyer to narrow it down to only a small number of Articles, a subset of all law. Finally, the judge determines which of the claimed Articles, if any, has been violated. We can therefore observe two distinct interactions between the real world and the law: (i) when a lawyer connects the real world and law via a claim, and (ii) when a judge connects them via an outcome.
In practice, this means that judges are constrained in their decision. They cannot decide that a law has been breached unless a lawyer has claimed it has. A corollary of the above is that lawyers actively shape the outcome by forcing a judge to consider a particular subset of law. In doing so a lawyer defines the set of laws from which the judge decides on the outcome.5 The power of a lawyer is also constrained. On the one hand, lawyers want to claim as many legal Articles as possible; on the other, only so many legal Articles are relevant to their client's needs. Thus, two principles arise from the interaction of a lawyer and a judge. First, positive outcomes are a subset of claims. Second, negative outcomes consist of the unsuccessful claims, i.e., the claims the judge rejected.
There is a close relationship between claims and negative outcomes: if we knew the claims the lawyer had made, we could define the negative outcome as exactly those Articles that have been claimed, but that the judge found not to be violated. Much like outcomes are a product of judges, claims are a product of lawyers. And, unlike facts, they are not known before human legal experts interact with the case. Therefore, to study the relationship between outcomes and facts, one cannot rely on claims as an additional input to the model. The only input available and known before a case is processed by the court is the facts.
Outcome prediction task. Legal facts are the transcript of the judge's description of what has happened between the claimant and the defendant. Under the current formulation of the outcome prediction task, models are trained to predict whether case facts correspond to a violation of each Article, i.e., the models are trained to predict a vector in {0, 1}^K, where 1 indicates a positive outcome and K is the number of legal Articles under consideration.
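Concretely, this formulation reduces to K independent binary decisions over an encoded fact vector. The sketch below illustrates that setup with random stand-in weights; the names, dimensions, and linear head are our illustrative assumptions, not the paper's released code.

```python
# Minimal sketch of the standard formulation: given an encoded fact
# vector enc(f), predict a vector in {0,1}^K with one sigmoid per Article.
# All names, shapes, and weights here are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
K, d = 17, 768                       # Articles, encoder dimension

W = rng.normal(size=(K, d)) * 0.01   # stand-in for a learned linear head
enc_f = rng.normal(size=d)           # stand-in for enc(f) from a fine-tuned LM

logits = W @ enc_f
probs = 1.0 / (1.0 + np.exp(-logits))   # independent per-Article sigmoids
preds = (probs > 0.5).astype(int)       # predicted outcome vector in {0,1}^K

assert preds.shape == (K,)
```

Note that under this setup a 0 entry conflates "not claimed" with "claimed but not violated", which is exactly the ambiguity discussed next.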
What is wrong with current work? In the above formulation, 0 is ambiguous: it can indicate either that the Article was not claimed or that the judge ruled that the Article was not breached. Existing models, which do not take any information about claims into account, implicitly assume that all Articles have been claimed, which is almost never the case in practice. Under this assumption, the role of the legal claim and of negative outcomes is effectively ignored.
Reformulating the task. How, then, should negative outcomes be modelled? Given the domain-specific knowledge about the interaction of a judge and a lawyer, our position is that models that predict outcomes should model the claims and outcomes together. To this end, we first need information about which laws have been claimed. In §6, we discuss the creation of a new corpus which contains the necessary annotation for this task. In the next section we develop two models that jointly predict outcomes and claims using two basic assumptions about how the law operates. We believe that our reformulation of the task has two advantages. First, considering positive and negative outcomes together is a step towards better evaluation of legal outcome prediction models. Second, incorporating the roles of a judge and a lawyer within models of outcome is a step towards better models of law.

Law-Abiding Models
In this section we formulate our two probabilistic models of law. Our law-abiding models are built on top of the two assumptions described below.
Notation. We define a probability distribution over three random variables.
• The random variable F ranges over textual descriptions of facts, i.e., Σ* for a vocabulary Σ. Values of F are denoted as f.
• The random variable O_k ranges over the possible outcomes {+, −, ∅} of the k-th Article, where + denotes a positive outcome, − a negative outcome, and ∅ that no outcome was decided. Values of O_k are denoted as o_k.
• The random variable C_k ∈ {0, 1} indicates whether the k-th Article was claimed. Values of C_k are denoted as c_k.

Joint Model
We begin with a simple assumption that, given the facts of a case, legal Articles are independent.
Assumption 1 (Article Independence). Conditioned on the facts F = f, the random variables (O_k, C_k) for the k-th Article are jointly conditionally independent of the random variables (O_ℓ, C_ℓ) for the ℓ-th Article when ℓ ≠ k.
This assumption is grounded in the origin of each Article as an independent human right, related by the spirit of the ECHR, but otherwise orthogonal in nature. This is indeed how the law operates in general. A law, whether codified in an Article or a product of precedent, encodes a unique right or obligation. In practice this means that a breach of one law does not determine a breach of another.
For example, a breach of Article 3 of the ECHR (the prohibition of torture) does not entail a breach of Article 6 (the right to a fair trial). Even breaches of law that are closely related, for example libel and slander, do not entail each other, and an allegation of each must be considered independently. By Assumption 1, the joint distribution over outcomes and claims decomposes over Articles as

p(o, c | f) = ∏_{k=1}^{K} p(o_k, c_k | f)

In the remainder of the text we write f in lieu of F = f to save space. We also write o_k instead of O_k = o_k when it is clear from context.

Claim-Outcome Model
Our second model builds on the first assumption with a second simple assumption:

Assumption 2 (Claims and Outcomes). For an Article to be breached, i.e., for it to become an outcome, it first needs to be claimed. The judge provides an outcome if and only if a claim is made:

O_k ≠ ∅ ⟺ C_k = 1

By Assumption 2, each distribution over outcome-claim pairs simplifies into the following equation:

p(o_k, c_k | f) = p(o_k | c_k, f) · p(c_k | f)

Crucially, Assumption 2 allows us to reduce the problem to two independent binary classification problems. First, we train a claim predictor p(c_k | f) that predicts whether a lawyer would claim that the k-th Article is relevant to the facts f. Second, we train an outcome predictor that predicts whether the outcome is + or −, given that the lawyer has claimed a violation of Article k.

Neural Parameterization
We consider neural parametrisations for all the distributions discussed above. At the heart of all of our models is a high-dimensional representation enc(f) ∈ R^{d_1} of the facts f. We obtain this representation from a pre-trained language model fine-tuned for our task (see §5). All our language models rely on f as their sole input. Except where we indicate otherwise, both the language model weights and classifier weights are learned separately for every model presented below.6

Joint Model. We parametrise the per-Article joint distribution as a three-way softmax over the states {(+, 1), (−, 1), (∅, 0)}:

p(o_k, c_k | f) = softmax(U_k ρ(V_k enc(f)))

where ρ is a ReLU activation function defined as ρ(x) = max(0, x), and U_k ∈ R^{3×d_2} and V_k ∈ R^{d_2×d_1} are per-Article learnable parameters. In total, the classifier has K(3d_2 + d_2·d_1) parameters, excluding those from the encoder enc.
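A minimal numerical sketch of this per-Article three-way head follows; it assumes (our reading of the text) that the softmax ranges over the joint states (+, claimed), (−, claimed), and (∅, unclaimed), and uses random stand-in weights with the shapes given above.

```python
# Sketch of the joint model head: for each Article k, a softmax over
# three joint outcome-claim states, computed as
# softmax(U_k @ relu(V_k @ enc(f))), with U_k in R^{3 x d2} and
# V_k in R^{d2 x d1} as in the text. Weights and enc(f) are random
# stand-ins for illustration.
import numpy as np

rng = np.random.default_rng(1)
K, d1, d2 = 17, 768, 100

U = rng.normal(size=(K, 3, d2)) * 0.01   # per-Article output weights
V = rng.normal(size=(K, d2, d1)) * 0.01  # per-Article hidden weights
enc_f = rng.normal(size=d1)              # stand-in for enc(f)

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# p(o_k, c_k | f) for every Article; rows correspond to (+, -, ∅)
probs = np.stack([softmax(U[k] @ np.maximum(0.0, V[k] @ enc_f))
                  for k in range(K)])

assert probs.shape == (K, 3)
assert np.allclose(probs.sum(axis=1), 1.0)
```

Each Article thus gets a proper distribution over its three states, matching the stated parameter count K(3d_2 + d_2·d_1).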
Claim-Outcome Model. We parametrise the claim-outcome model as two binary classification tasks: one predicting the claims, the other predicting positive outcomes. For the latter binary classification task, one class corresponds to +, while the other corresponds to both − and ∅. This leads to the following pair of binary classifiers:

p(c_k = 1 | f) = σ(u_k · ρ(V_k enc(f)))
p(o_k = + | c_k = 1, f) = σ(u′_k · ρ(V′_k enc′(f)))

where σ is the sigmoid function, u_k, u′_k ∈ R^{d_2} and V_k, V′_k ∈ R^{d_2×d_1} are per-Article learnable parameters, and enc and enc′ are two separate encoders. In total, we have 2K(d_2 + d_2·d_1) parameters, excluding those from the encoders. We use primed symbols to denote separately learned parameters. Given these probabilities, we can marginalise out the claims to obtain the probability of a positive outcome:

p(o_k = + | f) = p(o_k = + | c_k = 1, f) · p(c_k = 1 | f)

since the term p(o_k = + | c_k = 0, f) is always zero (by Assumption 2 no positive outcome can be set on an unclaimed case).
We then predict the probability of a negative outcome as the complement of the probability of a positive outcome, multiplied by the probability of a claim:

p(o_k = − | f) = (1 − p(o_k = + | c_k = 1, f)) · p(c_k = 1 | f)

This step enforces that the negative outcome probability is never higher than the claim probability, and that, together with the probability of a positive outcome, it sums to the claim probability. Finally, we have that

p(o_k = ∅ | f) = 1 − p(c_k = 1 | f)

To make a decision, we compute the argmax over o ∈ {+, −, ∅} of p(o_k = o | f), which marginalises over claims.

Training and Fine-tuning. All models in §3.3 are trained by maximising the log of the joint distribution p(o, c | f) over a dataset of facts annotated with outcomes and claims. Due to the independence assumption made, this objective additively factorises over Articles.
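The composition of the two classifiers and the final decision rule can be sketched numerically; the probabilities below are invented for illustration, not model outputs.

```python
# Sketch of how the claim-outcome model composes its two binary
# classifiers into a distribution over {+, -, ∅} per Article:
#   p(+|f) = p(+|c=1,f) * p(c=1|f)
#   p(-|f) = (1 - p(+|c=1,f)) * p(c=1|f)
#   p(∅|f) = 1 - p(c=1|f)
import numpy as np

p_claim = np.array([0.9, 0.2, 0.6])            # p(c_k = 1 | f), made up
p_pos_given_claim = np.array([0.7, 0.5, 0.3])  # p(o_k = + | c_k = 1, f)

p_pos = p_pos_given_claim * p_claim            # positive outcome
p_neg = (1.0 - p_pos_given_claim) * p_claim    # negative outcome
p_null = 1.0 - p_claim                         # no claim, hence no outcome

# the three probabilities sum to one, and the negative outcome never
# exceeds the claim probability, as Assumption 2 requires
assert np.allclose(p_pos + p_neg + p_null, 1.0)
assert np.all(p_neg <= p_claim)

# decision rule: argmax over {+, -, ∅} for each Article
decision = np.argmax(np.stack([p_pos, p_neg, p_null]), axis=0)
# → array([0, 2, 1]): +, ∅, and - for the three Articles respectively
```

The structure guarantees that a negative outcome can only be predicted where a claim is probable, which is exactly what a disagreement between lawyer and judge looks like.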

Baselines
We contextualise the performance of the joint and claim-outcome models with a number of baselines. As a starting point we build a simple classification model trained to predict positive or negative outcomes separately; see Figure 1a. We further want to test whether the advantage of our joint model stems from encoding the relationship between positive and negative outcomes, or whether it is down to simply training on more data. We test this by formulating the task as a multi-task learning objective; see Figure 1b. While this model is trained on the same amount of data as our joint model, it does not explicitly encode the relationship between positive and negative outcomes.
A Simple Baseline. For our simple baseline model we formulate positive and negative outcome prediction as a multi-label classification task. Despite its conceptual simplicity, this model achieves state-of-the-art performance on the task of positive outcome prediction. Given the facts of a case f, we directly model the probability that the outcome is positive as a binary classification problem, where the first class is the positive + and the second is the union of negative and unclaimed {−, ∅}. Likewise, we separately model the probability that the outcome is negative as a binary classification problem, where the first class is the negative − and the second is the union of positive and unclaimed {+, ∅}. To this end, we define a pair of binary classifiers:

p(o_k = + | f) = σ(w_k · ρ(W_k enc_1(f)))
p(o_k = − | f) = σ(w′_k · ρ(W′_k enc_2(f)))

where w_k, w′_k ∈ R^{d_2} and W_k, W′_k ∈ R^{d_2×d_1} are per-Article learnable parameters. Thus, in total, we have 2K(d_2 + d_2·d_1) parameters, excluding those from the fine-tuned encoders enc_1 and enc_2, which represent two differently fine-tuned parametrisations of the encoder. Note that this approach does not model whether or not an Article is claimed, which stands in contrast to the main models proposed in this work.
MTL Baseline. We also consider a version of the simple baseline where we jointly fine-tune a single encoder. Symbolically, this is written as:

p(o_k = + | f) = σ(w_k · ρ(W_k enc(f)))
p(o_k = − | f) = σ(w′_k · ρ(W′_k enc(f)))

where enc is shared between the classifiers. Apart from this sharing, the MTL baseline is identical to the simple baseline.
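Structurally, the two baselines differ only in whether the encoder is shared; the schematic sketch below uses random stand-ins for the encodings and head weights (all names and shapes are ours).

```python
# Contrast between the simple baseline (two separately fine-tuned
# encoders, one per head) and the MTL baseline (one shared encoding
# feeding both heads). Purely structural sketch with made-up values.
import numpy as np

rng = np.random.default_rng(2)
K, d = 17, 768

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- simple baseline: independent parameters end to end ---
enc1 = rng.normal(size=d)   # stand-in for enc_1(f), tuned for positives
enc2 = rng.normal(size=d)   # stand-in for enc_2(f), tuned for negatives
W_pos = rng.normal(size=(K, d)) * 0.01
W_neg = rng.normal(size=(K, d)) * 0.01
p_pos = sigmoid(W_pos @ enc1)   # p(o_k = + | f), vs. the class {-, ∅}
p_neg = sigmoid(W_neg @ enc2)   # p(o_k = - | f), vs. the class {+, ∅}

# --- MTL baseline: the same two heads on one shared encoding ---
enc_shared = rng.normal(size=d)
p_pos_mtl = sigmoid(W_pos @ enc_shared)
p_neg_mtl = sigmoid(W_neg @ enc_shared)

# note: nothing ties p_pos and p_neg together; unlike the
# claim-outcome model, both can be high for the same Article at once
assert p_pos.shape == p_neg.shape == (K,)
```

The absence of any coupling between the two heads is the key structural difference from the law-abiding models above.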
Random Baseline. Finally, we provide a simple random baseline by sampling the outcome vectors from a discrete uniform distribution. The reported random baseline is the average performance over 100 instantiations of this baseline.
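Such a baseline can be sketched as follows, with synthetic gold labels standing in for the corpus annotation and a plain micro-F1 implementation (the paper's exact evaluation code is not shown here).

```python
# Sketch of the random baseline: sample uniform {0,1}^K outcome
# vectors and average micro-F1 over 100 runs. Gold labels are
# synthetic; the sparsity rate 0.1 is an arbitrary illustration.
import numpy as np

rng = np.random.default_rng(3)
K, n_cases = 17, 200
gold = (rng.random((n_cases, K)) < 0.1).astype(int)  # sparse labels

def micro_f1(gold, pred):
    tp = np.sum((gold == 1) & (pred == 1))
    fp = np.sum((gold == 0) & (pred == 1))
    fn = np.sum((gold == 1) & (pred == 0))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

scores = [micro_f1(gold, rng.integers(0, 2, size=(n_cases, K)))
          for _ in range(100)]
avg_f1 = float(np.mean(scores))

assert 0.0 <= avg_f1 <= 1.0
```

Because uniform guessing predicts a 1 half the time while gold labels are sparse, precision stays low, which is why a random micro-F1 of around 11 on negative outcomes is a meaningful bar for the models to clear.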
Pre-trained Language Models. We obtain the high-dimensional representations enc(f) by fine-tuning one of the following pre-trained language models with f as an input:
• We first consider BERT, because it is a widely used model in legal AI (Chalkidis et al., 2021b).
• Second, we consider LEGAL-BERT, because it is trained on legal text, which should give it an advantage in our setting.
• Finally, we use the Longformer model.
Longformer is built on the same Transformer (Vaswani et al., 2017) architecture as BERT (Devlin et al., 2019) and LEGAL-BERT (Chalkidis et al., 2020), but it can process up to 4,096 tokens. We select this architecture because the facts of legal documents often exceed 512 tokens; a model that can process longer documents could therefore be better suited to our needs.
Training Details. All our models are trained with a batch size of 16. We conduct hyperparameter optimisation across learning rates {3e−4, 3e−5, 3e−6}, dropout {0.2, 0.3, 0.4} and hidden sizes {50, 100, 200, 300}. We truncate individual case facts to 512 tokens for BERT and LEGAL-BERT, or 4,096 tokens for the Longformer. Our models are implemented using the PyTorch (Paszke et al., 2019) and Hugging Face (Wolf et al., 2020) libraries. We use Adam for optimisation (Kingma and Ba, 2015) and train all our models on a single Tesla V100 32GiB GPU for a maximum of 1 hour. We train for a maximum of 10 epochs.7 The models are trained on the training set, see Table 1. We report results on the test set for the models that achieved the lowest loss on the validation set.
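The hyperparameter sweep above amounts to an exhaustive grid search; a skeleton of it follows, where `train_and_validate` is a hypothetical placeholder for a full training run returning validation loss (its body here is a dummy formula, not the paper's training code).

```python
# Grid search over the hyperparameter ranges stated in the text.
# train_and_validate is a placeholder standing in for training a
# model for up to 10 epochs and returning its validation loss.
from itertools import product

learning_rates = [3e-4, 3e-5, 3e-6]
dropouts = [0.2, 0.3, 0.4]
hidden_sizes = [50, 100, 200, 300]

def train_and_validate(lr, dropout, hidden):
    # dummy stand-in: monotone in every argument, so the smallest
    # configuration wins; a real run would train and evaluate a model
    return lr * 10 + dropout + hidden * 1e-4

best = min(product(learning_rates, dropouts, hidden_sizes),
           key=lambda cfg: train_and_validate(*cfg))
# with the dummy objective, best == (3e-06, 0.2, 50)
```

With 3 × 3 × 4 = 36 configurations per model, the one-hour-per-run budget keeps the full sweep tractable.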

Legal Corpora
We work with the ECtHR corpus,8 which contains thousands of instances of case law pertaining to the European Convention on Human Rights (ECHR). ECtHR cases contain a written description of case facts, which is our f, and information about the Articles that were claimed and those that were found to be violated. We conduct all our experiments on both corpora. However, not all of the Articles of the ECHR are interesting from the perspective of a legal outcome, since not all of them can be claimed by a lawyer. Out of the 51 Articles of the convention, only Articles 2 to 18 contain the rights and freedoms, whereas the remaining Articles pertain to the court and its operation. The rights and freedoms are what a lawyer can claim, and they are the focus of our work. We therefore restrict our study to predicting the outcome of these core rights. Furthermore, we remove any Articles that do not appear in the validation and test sets. This leaves us with K = 17 and K = 14 for the Chalkidis et al. Corpus and the Outcome Corpus, respectively.

7 All our code is available on GitHub.
8 While the ECtHR interacts with civil law jurisdictions, its judges rely on precedent (Valvoda et al., 2021).
Table 1 shows the number of cases containing negative outcome vs positive outcome across the training/validation/test splits.The full distribution of Articles over cases in both corpora can be found in Appendix C.

Results
Following Chalkidis et al. (2019), we report all results as micro-averaged F1. We report significance using two-tailed paired permutation tests with p < 0.05. The bulk of our results is contained in Table 2. We report individual conclusions in the following paragraphs.

Negative outcome prediction is challenging. First, we compare positive and negative outcome prediction performance on our Outcome Corpus and find that while the best simple baseline model achieves 75.06 F1 on positive outcomes, the same model achieves only 10.09 F1 on negative outcomes. In fact, the model fails to beat our random baseline of 11.12 F1 on negative outcomes. The same trend holds over all our model architectures, all the underlying language models and both datasets under consideration. Every time, the negative outcome performance is significantly lower than that of positive outcomes. Therefore, our first conclusion is that negative outcomes are simply harder to predict than their positive counterparts.
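The significance testing above can be sketched as a sign-flipping paired permutation test over per-case score differences; the scores below are synthetic and the exact test statistic in the paper may differ, so this is only a schematic of the method.

```python
# Two-tailed paired permutation test: randomly swap the two systems'
# per-case scores (equivalent to flipping the sign of the difference)
# and count how often the absolute mean difference is at least as
# large as the observed one. Scores are synthetic for illustration.
import numpy as np

rng = np.random.default_rng(4)
scores_a = rng.random(50)                              # system A, per case
scores_b = scores_a + rng.normal(0.1, 0.05, size=50)   # B, clearly better

diffs = scores_a - scores_b
observed = abs(diffs.mean())

n_perm, hits = 10_000, 0
for _ in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=diffs.shape)  # swap = sign flip
    if abs((signs * diffs).mean()) >= observed:
        hits += 1
p_value = hits / n_perm   # with this synthetic gap, p should be tiny

assert 0.0 <= p_value <= 1.0
```

A difference is then reported as significant when the resulting p-value falls below 0.05.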
Claim-outcome model improves negative outcome prediction. We observe large and significant improvements using our claim-outcome model on the task of negative outcome prediction; see Figure 2a and Figure 2b. Our claim-outcome model is better than every baseline model under consideration, a finding that holds over three underlying language models and both datasets. A single exception to this rule is the joint model, which insignificantly beats our claim-outcome model (by 0.1) on the Outcome Corpus using LEGAL-BERT. Overall, where the best claim-outcome model achieves 24.01 F1 on the Outcome Corpus and 14.84 F1 on the Chalkidis et al. corpus, the best simple baseline model achieves only 10.09 and 1.81 F1, respectively. Therefore, our second conclusion is that enforcing the relationship between claims and outcomes improves negative outcome prediction. We expand our discussion of this in Appendix A.
Joint model improves positive outcome prediction. Turning to the positive outcome prediction task, we see that the simple baseline and our claim-outcome model have comparable F1. The joint model, on the other hand, improves over either baseline and achieves the best F1 on both the Outcome Corpus and the Chalkidis et al. corpus (77.15 and 67.08 F1, respectively). Since the simple baseline model using pre-trained BERT is the state-of-the-art model for positive outcome prediction (Chalkidis et al., 2021b), our third conclusion is that jointly training on positive and negative outcomes is a better way of learning to predict the positive outcome of a case.
Impact of pre-training. In line with the standard results on LexGLUE task A (Chalkidis et al., 2021b), we find that LEGAL-BERT and Longformer fail to consistently outperform BERT. Nonetheless, LEGAL-BERT has a significant positive effect on negative outcome prediction for the joint model. It improves over the BERT-based (17.43 F1) and Longformer-based (16.24 F1) models and achieves 21.93 F1. It is also the underlying language model of the highest-performing model on the Outcome Corpus (achieving 64.87 F1 overall). Meanwhile, Longformer sets the highest positive outcome performance on the same corpus (77.15 F1). We therefore find both longer document encoding and legal language pre-training useful in certain narrow settings, although it seems that the choice of model architecture has a larger effect on performance than the choice of language model size or pre-training material.
Which model is the best? Finally, we turn to the question of which is the best model of outcome prediction: the joint model or the claim-outcome model? Towards answering this question we take an average F1 over all three random variables under consideration; the best model of outcome should do well at distinguishing between positive, negative and null outcomes. We find that while the joint model has an insignificant edge over the claim-outcome model on the Outcome Corpus (by 0.1), on the Chalkidis et al. corpus the claim-outcome model significantly improves over the joint model (by 3.48 F1). This leads us to believe that the claim-outcome model is overall the better model for legal outcome prediction. However, both models are valuable in their own right. Where the joint model improves over state-of-the-art positive outcome prediction models, the claim-outcome model doubles their performance on the negative outcome task.

Discussion
The results reported above raise the question of why models severely underperform on negative outcome prediction. The simplest answer could be the amount of training data available for each task. We test this hypothesis by comparing the performance on Articles 8 (796 negative vs. 654 positive examples) and 13 (1197 negative vs. 1031 positive examples) of the ECHR in our Outcome Corpus, where there is more training data available for the negative outcome than for the positive outcome. The results, given in Figure 3, show that even when the model has more training data for negative outcomes than positive outcomes, predicting negative outcomes is still harder. In particular, for Article 13 the amount of training data is higher than for Article 8, yet the drop in performance between positive and negative outcome prediction is still dramatic. In fact, while the scores achieved by the claim-outcome model are still low, the other models (except our joint model) fail to predict a single negative outcome correctly for Article 13. We therefore believe that the performance drop is more likely related to the complexity of the underlying task than to the imbalance of the underlying datasets.
To find a better explanation of the performance asymmetry, we now turn in our discussion to the legal perspective. In precedential jurisdictions, of which the ECtHR is one (Zupancic, 2016; Lupu and Voeten, 2010; Valvoda et al., 2021), the decisions of a case are binding on future decisions of the court. Two cases with the same facts should therefore arrive at the same outcome. Of course, in reality, the facts are never the same. Rather, cases with similar circumstances will, broadly speaking, lead to similar outcomes. This is achieved by applying the precedent. In such cases, the judge will in effect say that the new case is not substantially different from an already existing case and therefore the same outcome will be propagated.
On the other hand, if the previous precedent is not to be followed, the judge needs to distinguish the case at hand from the precedent. Distinguishing the case from the precedent is a more involved task than applying the precedent. It requires identifying what exactly about the new facts sets the new case apart from the previous one. This can of course be done for cases with both positive and negative outcomes: both can be applied or distinguished.10 Since judges deal with claims, each of which comes with an argument built around the precedent that favours the claimant's viewpoint, we believe that negative outcomes overwhelmingly rely on distinguishing the case from the precedent. This is evidenced in the yearly reports of the ECtHR (2020), which list cases where the judges decided to distinguish the facts of the case at hand. Distinguishing almost always leads to a negative outcome. We observe the same trend in our ECtHR corpus.
It might therefore be the case that while there is such a thing as a prototypical positive precedent, there is no prototypical negative precedent. This could explain why the simpler architectures struggle to learn to predict it. While a simpler model is ill-suited for the task, since it is trained to find a similarity between the negative outcome cases, our claim-outcome model does not assume that negative outcome cases are similar in the first place. Instead, our model assumes similarity between claims. Since claim prediction can be modelled with high accuracy (Chalkidis et al., 2021a), we can reveal the negative outcome as a disagreement between a judge and a lawyer (i.e., between claims and outcomes).
By investigating individual cases in the Chalkidis et al. Corpus, we can identify a further possible explanation for the baseline model performance. For instance, the case of Wetjen and Others v. Germany (Wetjen) is concerned with Article 8: the right to respect for private and family life. In this case, religious parents used caning (among other methods) as a punishment for their children. The German State intervened and placed the children in foster care. The parents claimed interference with their right to family life. On a superficial level, the case is similar to two Article 8 cases both cited in Wetjen: Shuruk v. Switzerland (Shuruk) and Suss v. Germany (Suss).
In Shuruk, religious parents fight over the extradition of a child. The mother of the child argues that it would be an interference with her right to a family life if the child were to be extradited to the husband, who has joined an ultra-orthodox Jewish movement. A component of the case is an allegation of domestic violence the husband was supposed to have perpetrated against his wife. In Suss, the German State denied a divorced father access to his daughter due to the frequent quarrels between the parents during the visits. The father alleges a breach of Article 8. In Wetjen and Suss, the judges decided that a violation of Article 8 had not occurred; they ruled the opposite in Shuruk.
On the surface, the facts are alike, especially between Shuruk and Wetjen: both cases contain elements of abuse, religion and state intervention. However, to a human lawyer, the distinction between the cases is fairly trivial. In Wetjen, the State is allowed to intervene to protect a child from abusive ultra-religious parents, which is a very similar situation to Suss, where the State is allowed to intervene to protect the child from quarrels between divorced parents.
All our models are exposed to both Shuruk and Suss in the training set. However, for the positive outcome baseline, the information about Suss being related to Article 8 is lost. Conversely, for the negative outcome baseline, the information about Shuruk being related to Article 8 is lost. It is therefore not surprising that the best performing negative11 and positive12 outcome baseline models both get the Wetjen outcome prediction wrong. On the other hand, the best claim-outcome model, which is trained to learn that both Shuruk and Suss are related to Article 8 via the claim prediction objective, makes the correct outcome prediction in the Wetjen case.
In conclusion, our claim-outcome model is indeed a better way of modelling negative outcomes, but its superiority is not due to it learning anything about the law itself. It simply leverages the fact that positive outcomes and claims are easy to predict and enforces the relationship between them. Identifying negative outcomes with high F1 will require a deeper understanding of law than our models are currently capable of.

Related Work
Juris-informatics can trace its origins all the way to the late 1950s (Kort, 1957; Nagel, 1963). The pioneers used rule-based systems to successfully capture aspects of legal reasoning, using thousands of hand-crafted rules (Ashley, 1988). Yet due to the ever-changing rules of law, these systems were too brittle to be employed in practice. Particularly in common law countries, the majority of law is contained in case law, where cases are transcripts of the judicial decisions. This allows the law to constantly change in reaction to each new decision. The advances of natural language processing (NLP) in the past two decades have rejuvenated interest in developing applications for the legal domain. Areas explored include question answering (Monroy et al., 2009), legal entity recognition (Cardellino et al., 2017), text summarisation (Hachey and Grover, 2006), judgement prediction (Xu et al., 2020), majority opinion prediction (Valvoda et al., 2018) and ratio decidendi extraction (Valvoda and Ray, 2018).
Our work is similar to the recent studies of Chinese law judgement prediction by Zhong et al. (2018) and Xu et al. (2020), who break down court judgements into the applicable law, charges and terms of penalty. Operating in the civil law system (which outside of China is also used in Germany and France, inter alia), they argue that predicting the applicable law is a fundamental subtask, which will guide the prediction for other subtasks. In the context of ECHR law, we argue that legal claims are one such guiding element for outcome prediction. While similar, applicable law and claims are different. In the work above, the judge selects the applicable law from the facts as part of reaching the outcome. This is not the case for ECHR law, or any other precedential legal system known to the authors, where the breach of law is claimed by a lawyer, not a judge.
Finally, the ECtHR dataset was collected by Chalkidis et al. (2019), who predicted outcomes of ECHR law and the corresponding Articles using neural architectures. Our work builds on their research by reinstantiating the outcome prediction task on this dataset to include negative precedent. Similar datasets, to which one could apply our method, include the Caselaw Access Project and US Supreme Court caselaw.13

Conclusion
While positive and negative outcomes are equally important from the legal perspective, current legal AI research has neglected the latter. Our findings suggest that negative outcomes are much harder to predict than positive outcomes, at least for current deep learning models. This has severe implications for how well current legal models can capture judicial outcomes. The same models that predict positive outcomes at 75.06 F1 achieve only 10.09 F1 on negative outcome prediction, falling short of even a random baseline (11.12 F1).
We discuss possible reasons why negative outcome prediction is so much harder to learn. Specifically, we suspect that negative outcomes are mostly caused by a judge distinguishing the case from its precedent. This leads us to believe that learning to predict negative outcomes requires more legal understanding than current models are capable of. We therefore believe that negative outcome prediction is a particularly attractive task for evaluating progress in legal AI.
Our work improves over the existing models by inducing the relationship between the judge and the lawyer in our claim-outcome model architecture. However, the best negative outcome prediction model achieves only about a third of the performance of the positive outcome one. In future work we hope to study the phenomenon of precedent more closely, with the aim of building models capable of narrowing this performance gap. One possible avenue would be to relax our Assumption 1 in order to study the potential relationships between the individual legal Articles. We leave this direction for future work.

Ethical Considerations
Legal models similar to the ones we study above have been deployed in the real world. Known applications include risk assessment of detainees in the US (Kehl and Kessler, 2017) and sentencing of criminals in China (Xiao et al., 2018). We believe these are not ethical use cases for legal AI. One must not be tempted to think of outcome prediction as equivalent to some medical task, such as cancer detection, with a breach of law seen as a 'tumour' that is either there or not. This naive viewpoint ignores the fact that the legal system is a human construct in which the decision makers, the judges, play a role that can shift the truth, something that is impossible when it comes to natural laws.
Herein lies the ethical dubiousness of any attempt at modelling judges using AI. Unlike in the domain of medicine, where identifying the underlying truth is essential for treatment, and where a successful machine diagnostician is thus in theory a competitor to the human one, in the domain of law the validity of the decision rests solely on the best intentions of the judge. For some judges this pursuit of the 'right' outcome can go as far as defiance of legal precedent. We therefore argue that a judge should not be replaced by a machine, and we caution against the use of our model, or any other legal AI model currently available, towards automating the judicial system.

A Note on the Baselines
Comparing the MTL baseline and the joint model, one might conclude that there is no substantial difference between the models when it comes to predicting positive outcomes. While the joint model outperforms the MTL baseline on eleven out of the twelve experiments we test our models on, the improvement in positive outcome prediction over the outcome corpus is very narrow. However, there is an important difference between them. The MTL baseline, much like the simple baseline, can predict a positive and a negative outcome simultaneously for the same Article. This means that in our evaluation, the baseline models can cheat by predicting an Article to be simultaneously violated and not violated. This is another reason why the outcome prediction task needs to consider the legal relationship between positive and negative outcomes. Ignoring the relationship between claims and outcomes makes both of our baselines fundamentally ill-suited for the task of outcome prediction. Hence, they are only useful for comparison in our study.
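The incoherence described above can be illustrated with a minimal sketch. The label matrices below are hypothetical examples of our own, not the paper's evaluation code; they show how two independent binary heads, as in the MTL baseline, can emit contradictory predictions that a joint classifier structurally cannot.

```python
# Hypothetical per-Article predictions from two independent binary heads,
# as in the MTL baseline: one head for positive outcomes, one for negative.
positive = [[1, 0, 1], [0, 1, 0]]  # cases x Articles
negative = [[1, 1, 0], [0, 1, 0]]

# An Article predicted simultaneously violated and not violated is
# legally incoherent; a joint classifier over mutually exclusive
# configurations cannot produce such a prediction.
contradictions = sum(
    p == n == 1
    for p_row, n_row in zip(positive, negative)
    for p, n in zip(p_row, n_row)
)
print(contradictions)  # → 2 contradictory predictions in this toy example
```

A model that predicts a single label per Article from a set of mutually exclusive classes avoids this failure mode by construction.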

B Glossary & Dataset Examples:
Legal Terms

Claim
The allegation of a breach of law, usually put forth on behalf of the claimant by their legal counsel.

Positive Outcome
Claims are assessed by judges in courts of law. If the judges think a claim is valid, they rule it successful. The outcome of the case is a victory for the claimant, which we call the positive outcome in this paper.

Negative Outcome
On the other hand, the claimant can be unsuccessful in court. The judge has decided against them, in favour of the defendant; we call this the negative outcome in this paper.

Facts
The description of what happened to the claimant. This includes more general descriptions of who they are, the circumstances of the perceived violation of their rights, and the proceedings in domestic courts before their appeal to the ECtHR.

Precedent
Cases that have been cited by the judges as part of their arguments.

Binding
Judges are expected to adhere to the binding rules of law and decide future cases accordingly.

Stare Decisis
New cases with the same facts as an already decided case should lead to the same outcome. This is the doctrine of precedent, by which judges can create law.

Caselaw
Transcripts of the court proceedings.

ECHR
European Convention on Human Rights; comprises the Convention and the Protocols to the Convention. The Protocols are additions and amendments to the Convention introduced after the signing of the original Convention.

ECtHR
European Court of Human Rights, adjudicates ECHR cases.

Apply
A judge applies the precedent when she decides on the outcome of a case via an analogy to an already existing case.

Distinguish
Conversely, a judge distinguishes the case from the already existing cases when she believes they are not analogous.

Figure 1: The models under consideration. Green and red boxes represent positive and negative outcomes, respectively. Blue boxes represent claims.
First we parameterise the joint model, which gives us a joint distribution over all configurations of o_k and c_k for a specific Article k. In principle, there are six such configurations {+, −, ∅} × {Y, N}. However, after we enforce Assumption 2, we are left with only three configurations: ⟨+, Y⟩, ⟨−, Y⟩, ⟨∅, N⟩. This reduces the problem to a 3-way classification, which we parameterise as follows:

(a) Negative outcome results on the outcome corpus. (b) Negative outcome results on the Chalkidis et al. corpus.
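The reduction from six configurations to a 3-way classification can be sketched as follows. The function name and integer label encoding are our own illustrative assumptions, not the authors' implementation; the point is that Assumption 2 rules out any outcome for an Article that was never claimed.

```python
def reduce_label(outcome: str, claimed: bool) -> int:
    """Map an (outcome, claim) pair for one Article to a 3-way class.

    Hypothetical encoding: 0 = (+, Y) positive outcome on a claimed
    Article, 1 = (-, Y) negative outcome on a claimed Article,
    2 = (none, N) no claim. The other three of the six configurations
    in {+, -, none} x {Y, N} violate Assumption 2, since an outcome
    can only exist where a claim was made.
    """
    if claimed and outcome == "+":
        return 0
    if claimed and outcome == "-":
        return 1
    if not claimed and outcome == "none":
        return 2
    raise ValueError(f"configuration ({outcome!r}, claimed={claimed}) violates Assumption 2")
```

Under this reduction, a standard 3-way softmax classifier per Article suffices, and contradictory positive/negative predictions are excluded by construction.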

Figure 2: Results for Simple, MTL, Joint and Claim-Outcome models. The dashed line is our random baseline.

Figure 3: Article 8 and 13 results for Simple, MTL, Joint and Precedent models.

Figure 4: Distribution of Articles over training data in our outcome corpus.

Figure 5: Distribution of Articles over training data in the Chalkidis et al. corpus.
C_k = c_k, to refer to the random variable associated with the k-th Article. Bolded C is a random variable ranging over C^K. The values of C are denoted as c ∈ C^K.

Table 1: Number of cases with at least one positive or negative outcome label in the dataset.

...about claims and outcomes. Since positive outcomes are a subset of all claims, the set difference of claims and positive outcomes constitutes the set of negative outcomes.

Chalkidis et al. Corpus. To obtain the golden labels for outcomes and claims, we first rely on the Chalkidis et al. (2021a) scrape of the ECHR corpus, which contains alleged-violation and violation labels. The violations are case outcomes, while the alleged violations are the main claims of the case.

Table 2: F1 micro-averaged scores for all the models considered over the two datasets.
Ms Ivana Dvořáčková was born in 1981 with Down Syndrome (trisomy 21) and a damaged heart and lungs. She was in the care of a specialised health institution in Bratislava. In 1986 she was examined in the Centre of Paediatric Cardiology in Prague-Motole, where it was established that, due to post-natal pathological developments, her heart chamber defect could no longer be remedied..." for more see Case of Dvoracek and Dvorackova v.