Design Choices for Crowdsourcing Implicit Discourse Relations: Revealing the Biases Introduced by Task Design

Abstract Disagreement in natural language annotation has mostly been studied from the perspective of biases introduced by the annotators and the annotation frameworks. Here, we propose to analyze another source of bias—task design bias—which has a particularly strong impact on crowdsourced linguistic annotations where natural language is used to elicit the interpretations of lay annotators. For this purpose we look at implicit discourse relation annotation, a task that has repeatedly been shown to be difficult due to the relations’ ambiguity. We compare the annotations of 1,200 discourse relations obtained using two distinct annotation tasks and quantify the biases of both methods across four different domains. Both methods are natural language annotation tasks designed for crowdsourcing. We show that the task design can push annotators towards certain relations and that some discourse relation senses can be better elicited with one or the other annotation approach. We also conclude that this type of bias should be taken into account when training and testing models.


Introduction
Crowdsourcing has become a popular method for data collection. It not only allows researchers to collect large amounts of annotated data in a short amount of time, but also captures human inference in natural language, which should be the goal of benchmark NLP tasks (Manning, 2006). In order to obtain reliable annotations, the crowdsourced labels are traditionally aggregated into a single label per item, using simple majority voting or annotation models that reduce noise in the data based on the disagreement among the annotators (Hovy et al., 2013; Passonneau and Carpenter, 2014). However, there is increasing consensus that disagreement in annotation cannot generally be discarded as noise in a range of NLP tasks, such as natural language inference (De Marneffe et al., 2012; Pavlick and Kwiatkowski, 2019; Chen et al., 2020; Nie et al., 2020), word sense disambiguation (Jurgens, 2013), question answering (Min et al., 2020; Ferracane et al., 2021), anaphora resolution (Poesio and Artstein, 2005; Poesio et al., 2006), sentiment analysis (Díaz et al., 2018; Cowen et al., 2019) and stance classification (Waseem, 2016; Luo et al., 2020). Label distributions have been proposed to replace categorical labels in order to represent label ambiguity (Aroyo and Welty, 2013; Pavlick and Kwiatkowski, 2019; Uma et al., 2021; Dumitrache et al., 2021).
There are various reasons behind the ambiguity of linguistic annotations (Dumitrache, 2015; Jiang and de Marneffe, 2022). Aroyo and Welty (2013) summarize the sources of ambiguity into three categories: the text, the annotators, and the annotation scheme. In downstream NLP tasks, it would be helpful if models could detect possible alternative interpretations of ambiguous texts, or predict a distribution of interpretations by a population. In addition to the existing work on disagreement due to annotator bias, the effect of annotation frameworks has also been studied, such as the discussion on whether entailment should include pragmatic inferences (Pavlick and Kwiatkowski, 2019), the effect of the granularity of the collected labels (Chung et al., 2019), or the system of labels that categorizes the linguistic phenomenon (Demberg et al., 2019). In this work, we examine the effect of task design bias, which is independent of the annotation framework, on the quality of crowdsourced annotations. Specifically, we look at inter-sentential implicit discourse relation (DR) annotation, i.e., semantic or pragmatic relations between two adjacent sentences without a discourse connective to which the sense of the relation can be attributed. Implicit DR annotation is arguably the hardest task in discourse parsing. Discourse coherence is a feature of the mental representation that readers form of a text, rather than of the linguistic material itself (Sanders et al., 1992). Discourse annotation thus relies on annotators' interpretation of a text. Further, relations can often be interpreted in various ways (Rohde et al., 2016), with multiple valid readings holding at the same time. These factors make discourse relation annotation, especially for implicit relations, a particularly difficult task. We collect 10 different annotations per DR, thereby focusing on distributional representations, which are more informative than categorical labels.
Since DR annotation labels are often abstract terms that are not easily understood by laymen, we focus on "natural language" task designs. Decomposing and simplifying an annotation task, such that the DR labels can be obtained indirectly from the natural language annotations, has been shown to work well for crowdsourcing (Chang et al., 2016; Scholman and Demberg, 2017; Pyatkin et al., 2020). Crowdsourcing with natural language has become increasingly popular. This includes tasks such as NLI (Bowman et al., 2015), SRL (Fitzgerald et al., 2018) and QA (Rajpurkar et al., 2018). This trend is further visible in modeling approaches which cast traditional structured prediction tasks into NL tasks, such as for coreference (Aralikatte et al., 2021), discourse comprehension (Ko et al., 2021) or bridging anaphora (Hou, 2020; Elazar et al., 2022). It is therefore of interest to the broader research community to see how task design biases can arise, even when the tasks are more accessible to laymen.
We examine two distinct natural language crowdsourcing discourse relation annotation tasks (Fig. 1): Yung et al. (2019) derive relation labels from discourse connectives (DC) that crowd workers insert; Pyatkin et al. (2020) derive labels from question-answer (QA) pairs that crowd workers write. Both task designs employ natural language annotations instead of labels from a taxonomy. The two task designs, DC and QA, are used to annotate 1,200 implicit discourse relations in 4 different domains. This allows us to explore how the task design impacts the obtained annotations, as well as the biases that are inherent to each method. To do so, we showcase the differences between various inter-annotator agreement metrics on annotations with distributional and aggregated labels. We find that both methods have strengths and weaknesses in identifying certain types of relations. We further see that these biases are also affected by the domain. In a series of discourse relation classification experiments, we demonstrate the benefits of collecting annotations with mixed methodologies, show that training with a soft loss with distributions as targets improves model performance, and find that cross-task generalization is harder than cross-domain generalization.

[Fig. 1: Example item with both annotation tasks. Arg1: "On the Ides of March [...] Caesar was assassinated by a group of rebellious senators led by Brutus and Cassius, who stabbed him to death." Arg2: "A new series of civil wars broke out and the constitutional government of the Republic was never fully restored." DC annotation: Consequently. QA annotation: What is the result of Caesar having been assassinated by a group of rebellious senators?]
The outline of the paper is as follows. We introduce the notion of task design bias and analyze its effect on crowdsourcing implicit DRs using two different task designs. Next, we quantify the strengths and weaknesses of each method using the obtained annotations and suggest ways to reduce task bias (Sec. 5). Then we look at genre-specific task bias (Sec. 6). Lastly, we demonstrate the effect of task bias on DR classification performance (Sec. 7).

Annotation Biases
Annotation tends to be an inherently ambiguous task, often with multiple possible interpretations and without a single ground truth (Aroyo and Welty, 2013). An increasing amount of research has studied annotation disagreements and biases.
Prior studies have focused on how crowdworkers can be biased. Worker biases are subject to various factors, such as their educational or cultural background, or other demographic characteristics. Prabhakaran et al. (2021) point out that for more subjective annotation tasks, the socio-demographic background of annotators contributes to multiple annotation perspectives, and argue that label aggregation obfuscates such perspectives. Instead, soft labels are proposed, such as the ones provided by the CrowdTruth method (Dumitrache et al., 2018), which require multiple judgements to be collected per instance (Uma et al., 2021). Bowman and Dahl (2021) suggest that annotations that are subject to bias from methodological artifacts should not be included in benchmark datasets. In contrast, Basile et al. (2021) argue that all kinds of human disagreements should be predicted by NLU models and thus included in evaluation datasets.
In contrast to annotator bias, a limited amount of research is available on bias related to the formulation of the task. Jakobsen et al. (2022) show that argument annotations exhibit widely different levels of social group disparity depending on which guidelines the annotators followed. Similarly, Buechel and Hahn (2017a,b) study different design choices for crowdsourcing emotion annotations and show that the perspective that annotators are asked to take in the guidelines affects annotation quality and distribution. Jiang et al. (2017) study the effect of workflow for paraphrase collection and find that examples based on previous contributions prompt workers to produce more diverging paraphrases. Hube et al. (2019) show that biased subjective judgment annotations can be mitigated by asking workers to think about responses other workers might give and by making workers aware of their possible biases. Hence, the available research suggests that task design can affect the annotation output in various ways. Further research has studied the collection of multiple labels: Jurgens (2013) compares selection and scale rating and finds that workers do choose additional labels in a word sense labelling task. In contrast, Scholman and Demberg (2017) find that workers usually opt not to provide an additional DR label even when allowed. Chung et al. (2019) compare various label collection methods, including single/multiple labelling, ranking and probability assignment. We focus on the biases in DR annotation approaches that use the same set of labels, but translated into different "natural language" for crowdsourcing.

DR annotation
Various frameworks exist that can be used to annotate discourse relations, such as RST (Mann and Thompson, 1988) and SDRT (Asher, 1993).
In this work, we focus on the annotation of implicit discourse relations, following the framework used to annotate the Penn Discourse Treebank 3.0 (PDTB, Webber et al., 2019). PDTB's sense classification is structured as a three-level hierarchy, with four coarse-grained sense groups in the first level and more fine-grained senses at each of the next levels. The process is a combination of manual and automated annotation: an automated process identifies potential explicit connectives, and annotators then decide on whether the potential connective is indeed a true connective. If so, they specify one or more senses that hold between its arguments. If no connective or alternative lexicalization is present (i.e., for implicit relations), each annotator provides one or more connectives that together express the sense(s) they infer.

Crowdsourcing DRs with the DC method
Yung et al. (2019) developed a crowdsourcing discourse relation annotation method using discourse connectives, referred to as the DC method. For every instance, participants first provide a connective which, in their view, best expresses the relation between the two arguments. Note that the connective chosen by the participant might be ambiguous. Therefore, participants disambiguate the relation in a second step, by selecting a connective from a list that is generated dynamically based on the connective provided in the first step. When the first-step insertion does not match any entry in the connective bank (from which the list of disambiguating connectives is generated), participants are presented with a default list of twelve connectives expressing a variety of relations. Based on the connectives chosen in the two steps, the inferred relation sense can be extracted. For example, the CONJUNCTION reading in Fig. 1 can be expressed by in addition, and the RESULT reading can be expressed by consequently.
The DC method was used to create a crowdsourced corpus of 6,505 discourse-annotated implicit relations, named DiscoGeM (Scholman et al., 2022a).A subset of DiscoGeM is used in the current study (see Section 3).
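The two-step derivation above can be sketched as a pair of lookups. The connective bank and the connective-to-sense mapping below are small illustrative stand-ins (the connectives and senses are taken from the examples in this paper), not the actual resource used by the DC method.

```python
# Minimal sketch of the DC method's two-step label derivation.
# The bank and sense mapping are illustrative assumptions, not the real resource.

CONNECTIVE_BANK = {
    # free-text insertion (step 1) -> disambiguation list shown in step 2
    "however": ["on the contrary", "despite", "despite this"],
    "and": ["in addition", "consequently"],
}

SENSE_OF = {
    # unambiguous connective (step 2) -> PDTB-style sense label
    "on the contrary": "CONTRAST",
    "despite": "ARG1-AS-DENIER",
    "despite this": "ARG2-AS-DENIER",
    "in addition": "CONJUNCTION",
    "consequently": "RESULT",
}

# fallback list when the step-1 insertion has no entry in the bank
DEFAULT_LIST = list(SENSE_OF)

def step2_options(inserted: str) -> list[str]:
    """Return the disambiguation list shown to the worker in step 2."""
    return CONNECTIVE_BANK.get(inserted.lower(), DEFAULT_LIST)

def derive_sense(chosen: str) -> str:
    """Map the step-2 choice to the inferred relation sense."""
    return SENSE_OF[chosen]

# A worker inserts "and" in step 1, then picks "consequently" in step 2:
print(derive_sense(step2_options("and")[1]))  # -> RESULT
```

In the real task the default list covers twelve connectives; here it simply falls back to all connectives in the sketch's mapping.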

Crowdsourcing DRs with the QA method

Pyatkin et al. (2020) proposed to crowdsource discourse relations using QA pairs. They collected a dataset of intra-sentential QA annotations which represent discourse relations by including one of the propositions in the question and the other in the respective answer, with the question prefix (What is similar to..?, What is an example of..?) mapping to a relation sense. Their method was later extended to also work inter-sententially (Scholman et al., 2022b). In this work we make use of the extended approach, which relates two distinct sentences through a question and answer. The following QA pair, for example, connects the two sentences in Fig. 1 with a RESULT relation:

(1) What is the result of Caesar being assassinated by a group of rebellious senators? (S1) - A new series of civil wars broke out [...] (S2)

The annotation process consists of the following steps: from two consecutive sentences, annotators are asked to choose a sentence that will be used to formulate a question. The other sentence functions as an answer to that question. Next, they build a question by choosing a question prefix and completing the question with content from the chosen sentence.
Since it is possible to choose either of the two sentences as question/answer for a specific set of symmetric relations (e.g., What is the reason a new series of civil wars broke out?), we consider both possible formulations as equivalent.
The set of possible question prefixes covers all PDTB 3.0 senses (excluding belief and speech-act relations). The direction of the relation sense, e.g. arg1-as-denier vs. arg2-as-denier, is determined by which of the two sentences is chosen for the question/answer. While Pyatkin et al. (2020) allowed crowdworkers to form multiple QA pairs per instance, i.e. annotate more than one discourse sense per relation, we decided to limit the task to one sense per relation per worker. We took this decision in order for the QA method to be more comparable to the DC method, which also only allows the insertion of a single connective.
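The mapping from a QA annotation to a directed sense label can be sketched as follows. The prefix inventory is a small assumed subset, and the convention that the answer sentence carries the directed content (so a question built from S1 yields an arg2-oriented label) is an assumption for illustration, not the documented behaviour of the QA tool.

```python
# Sketch: deriving a directed sense label from a QA annotation.
# Prefix inventory and direction convention are illustrative assumptions.

PREFIX_TO_SENSE = {
    "What is an example of": "INSTANCE",
    "Despite what": "DENIER",
    "What is the result of": "RESULT",
}

# senses whose final label encodes which argument carries the content
DIRECTED = {"INSTANCE", "DENIER"}

def qa_to_label(prefix: str, question_uses_s1: bool) -> str:
    """Which sentence was used for the question fixes the direction
    of asymmetric senses (arg1-as-denier vs. arg2-as-denier)."""
    sense = PREFIX_TO_SENSE[prefix]
    if sense not in DIRECTED:
        return sense
    # assumed convention: the *answer* sentence carries the directed content
    return ("ARG2-AS-" if question_uses_s1 else "ARG1-AS-") + sense

print(qa_to_label("Despite what", question_uses_s1=True))   # ARG2-AS-DENIER
print(qa_to_label("What is the result of", True))           # RESULT
```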

Data
We annotated 1,200 inter-sentential discourse relations using both the DC and the QA task design. Of these 1,200 relations, 900 were taken from the DiscoGeM corpus and 300 from the PDTB 3.0.

DiscoGeM relations
The 900 DiscoGeM instances included in the current study represent different domains: 296 instances were taken from the subset of DiscoGeM relations drawn from Europarl proceedings (written proceedings of prepared political speech, taken from the Europarl corpus; Koehn, 2005), 304 instances from the literature subset (narrative text from five English books), and 300 instances from the Wikipedia subset of DiscoGeM (informative text, taken from the summaries of 30 Wikipedia articles). These different genres enable a cross-genre comparison. This is necessary, given that the prevalence of certain relation types can differ across genres (Rehbein et al., 2016; Scholman et al., 2022a; Webber, 2009).
These 900 relations were already labeled using the DC method in DiscoGeM; we additionally collected labels using the QA method for the current study. In addition to the crowdsourced labels obtained with the DC and QA methods, the Wikipedia subset was also annotated by three trained annotators. 47% of these Wikipedia instances were labeled with multiple senses by the expert annotators (i.e., were considered to be ambiguous or to express multiple readings).

PDTB relations
The PDTB relations were included for the purpose of comparing our annotations with traditional PDTB gold standard annotations. These instances (all inter-sentential) were selected to represent all relational classes, randomly sampling at most 15 instances per class (for classes with fewer than 15 relation instances, we sampled all existing relations, with a minimum of 2). The reference labels for the PDTB instances consist of the original PDTB labels annotated as part of the PDTB3 corpus. Only 8% of these consisted of multiple senses.

Crowdworkers
Crowdworkers were recruited via Prolific using a selection approach (Scholman et al., 2022b), which has been shown to result in a good trade-off between quality and time/monetary effort for DR annotation. Crowdworkers had to meet the following requirements: be native English speakers; reside in the UK, Ireland, USA, or Canada; and have obtained at least an undergraduate degree.
Workers who fulfilled these conditions could participate in an initial recruitment task, in which they were asked to annotate a text with either the DC or the QA method and were shown immediate feedback on their performance. Workers with an accuracy ≥ 0.5 on this task qualified to participate in further tasks. We hence created a unique set of crowdworkers for each method. The DC annotations (collected as part of DiscoGeM) were provided by a final set of 199 selected crowdworkers; QA had a final set of 43 selected crowdworkers. Quality was monitored throughout the production data collection and qualifications were adjusted according to performance.
Every instance was annotated by 10 workers per method. This number was chosen based on parity with previous research. For example, Snow et al. (2008) show that a sample of 10 crowdsourced annotations per instance yields satisfactory accuracy for various linguistic annotation tasks. Scholman and Demberg (2017) found that assigning a new group of 10 annotators to annotate the same instances resulted in a near-perfect replication of the connective insertions in an earlier DC study.
Instances were annotated in batches of 20.For QA, one batch took about 20 minutes to complete, and for DC 7 minutes.Workers were reimbursed about £2.50 and £1.88 per batch respectively.

Inter-annotator agreement
We evaluate the two DR annotation methods by the inter-annotator agreement (IAA) between the annotations collected with the two methods, and by the IAA with reference annotations collected from trained annotators.
Cohen's kappa (Cohen, 1960) is a metric frequently used to measure IAA. For DR annotations, a Cohen's kappa of .7 is considered to reflect good IAA (Spooren and Degand, 2010). However, prior research has shown that agreement on implicit relations is more difficult to reach than on explicit relations: Kishimoto et al. (2018) report an F1 of .51 on crowdsourced annotations of implicits using a tagset with 7 level-2 labels; Zikánová et al. (2019) report κ=.47 (58%) on expert annotations of implicits using a tagset with 23 level-2 labels; and Demberg et al. (2019) find that PDTB and RST-DT annotators agree on the relation sense of 37% of implicit relations. Cohen's kappa is primarily used for comparison between single labels, and the IAAs reported in these works are also based on single aggregated labels.
However, we also want to compare the 10 annotations obtained per instance with our reference labels, which can likewise contain multiple labels. The comparison becomes less straightforward when there are multiple labels, because the chance of agreement is inflated and partial agreement should be treated differently. We thus measure the IAA between multiple labels in terms of both full and partial agreement rates, as well as the multi-label kappa metric proposed by Marchal et al. (2022). This metric adjusts the multi-label agreements with a bootstrapped expected agreement. We consider all the labels annotated by the crowdworkers for each instance, excluding minority labels with only one vote.
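The full and partial agreement rates can be sketched as set comparisons over each item's surviving sub-labels (those with more than one vote). This is a minimal reading of the procedure described above, not the authors' exact implementation.

```python
# Sketch of full / partial agreement between two multi-label annotations.
# Each item's annotation = the set of labels that received more than one vote.
from collections import Counter

def label_set(votes, min_votes=2):
    """Keep sub-labels with at least min_votes votes (minority labels dropped)."""
    return {lab for lab, c in Counter(votes).items() if c >= min_votes}

def agreement(items_a, items_b):
    """items_a / items_b: parallel lists of vote lists, one per item."""
    full = partial = 0
    for va, vb in zip(items_a, items_b):
        sa, sb = label_set(va), label_set(vb)
        full += sa == sb          # all sub-labels match
        partial += bool(sa & sb)  # at least one shared sub-label
    n = len(items_a)
    return full / n, partial / n

f, p = agreement(
    [["RESULT"] * 6 + ["CONJUNCTION"] * 4],   # method A: {RESULT, CONJUNCTION}
    [["RESULT"] * 8 + ["PRECEDENCE"] * 2],    # method B: {RESULT, PRECEDENCE}
)
print(f, p)  # full = 0.0 (sets differ), partial = 1.0 (RESULT is shared)
```

The multi-label kappa additionally corrects these raw rates with a bootstrapped expected agreement, which is not shown here.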
In addition, we compare the distributions of the crowdsourced labels using the Jensen-Shannon divergence (JSD), following prior work (Erk and McCarthy, 2009; Nie et al., 2020; Zhang et al., 2021). Again, minority labels with only one vote are excluded. Since distributions are not available for the reference labels, when comparing with them we evaluate the JSD on flattened distributions: we replace the original distribution of the votes with an even distribution over the labels that received more than one vote. We call this version JSD_flat.
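The JSD comparison, including the flattened variant, can be sketched as follows; this is an illustrative implementation of the standard base-2 JSD, with the flattening step as described above.

```python
# Sketch of JSD between crowdsourced label distributions, with the
# "flattened" variant used when the reference has no vote counts.
import math
from collections import Counter

def dist(votes, min_votes=2, flatten=False):
    """Vote list -> probability distribution over surviving labels."""
    counts = {l: c for l, c in Counter(votes).items() if c >= min_votes}
    if flatten:
        counts = {l: 1 for l in counts}  # even mass over surviving labels
    total = sum(counts.values())
    return {l: c / total for l, c in counts.items()}

def jsd(p, q):
    """Base-2 Jensen-Shannon divergence between two label distributions."""
    support = set(p) | set(q)
    m = {l: 0.5 * (p.get(l, 0) + q.get(l, 0)) for l in support}
    def kl(a):
        return sum(a[l] * math.log2(a[l] / m[l]) for l in support if a.get(l, 0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

p = dist(["RESULT"] * 6 + ["CONJUNCTION"] * 4)   # {RESULT: .6, CONJUNCTION: .4}
q = dist(["RESULT"] * 8 + ["CONJUNCTION"] * 2)   # {RESULT: .8, CONJUNCTION: .2}
print(round(jsd(p, q), 3))
```

`dist(..., flatten=True)` produces the even distribution used for JSD_flat.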
As a third perspective on IAA, we report agreement among annotators on an item annotated with QA/DC. Following previous work (Nie et al., 2020), we use the entropy of the soft labels to quantify the uncertainty of the crowd annotation. Here, labels with only one vote are also included, as they contribute to the annotation uncertainty. When calculating the entropy, we use a logarithmic base of n = 29, where n is the number of possible labels. A lower entropy value indicates that the annotators agree with each other more and the annotated label is more certain. As discussed in Sec. 1, disagreement in annotations can stem from the items, the annotators, and the methodology. High entropy across multiple annotations of a specific item within the same annotation task suggests that the item is ambiguous.

Results
We first compare the IAA between the two crowdsourced annotations, then we discuss the IAA between DC/QA and the reference annotations, and lastly we perform an analysis based on annotation uncertainty. Here, the "sub-labels" of an instance are all relation senses that received more than one vote, and the "label distribution" is the distribution of the votes over the sub-labels.

IAA between the methods
Tab. 1 shows that both methods yield more than two sub-labels per instance after excluding minority labels with only one vote. This supports the idea that multi-sense annotations better capture the fact that often more than one sense can hold implicitly between two discourse arguments.
Tab. 1 also presents the IAA between the labels crowdsourced with QA and DC per domain. The agreement between the two methods is good: the labels assigned by the two methods (or at least one of the sub-labels, in the case of a multi-label annotation) match for about 88% of the items. This speaks for the validity of both methods, as similar sets of labels are produced.
The full agreement scores, however, are very low. This is expected, as the chance to match on all sub-labels is much lower than in a single-label setting. The multi-label kappa (which takes chance agreement of multiple labels into account) and the JSD (which compares the distributions of the multiple labels) are hence more suitable. We note that the PDTB gold annotation that we use for evaluation does not assign multiple relations systematically and has a low rate of double labels. This explains why the PDTB subsets have a high partial agreement while the JSD ends up being worst.

IAA between crowdsourced and reference labels
Table 2 compares the labels crowdsourced by each method with the reference labels, which are available for the Wikipedia and PDTB subsets. It can be observed that both methods achieve higher full agreement with the reference labels than with each other on both domains. This indicates that the two methods are complementary, with each method better capturing different sense types. In particular, the QA method tends to show higher agreement with the reference for Wikipedia items, while the DC annotations show higher agreement with the reference for PDTB items. This can possibly be attributed to how the methodologies were developed: the DC method was originally developed by testing on data from the PDTB in Yung et al. (2019), whereas the QA method was developed by testing on data from Wikipedia and Wikinews in Pyatkin et al. (2020).

Annotation uncertainty
Table 3 compares the average entropy of the soft labels collected by both methods. It can be observed that the uncertainty among the labels chosen by the crowdworkers is similar across domains, but always slightly lower for DC. We further look at the correlation between annotation uncertainty and cross-method agreement, and find that agreement between methods is substantially higher for those instances where within-method entropy was low. Similarly, we find that agreement between crowdsourced annotations and gold labels is highest for those relations where little entropy was found in crowdsourcing. A further analysis compares, for each method, the per-item agreement with the reference on the Wikipedia / PDTB subsets. It illustrates that the annotations of both methods diverge more from the reference as the uncertainty of the annotation increases. While the effect of uncertainty is similar across methods on the Wikipedia subset, the quality of the QA annotations depends more on the uncertainty than that of the DC annotations on the PDTB subset. This means that method bias also exists at the level of annotation uncertainty and should be taken into account when, for example, entropy is used as a criterion to select reliable annotations.

Sources of the method bias
In this section, we analyze method bias in terms of the sense labels collected by each method. We also examine the potential limitations of the methods which could have contributed to the bias, and demonstrate how we can use information on method bias to crowdsource more reliable labels. Lastly, we provide a cross-domain analysis. Table 5 presents the confusion matrix of the labels collected by both methods for the most frequent level-2 relations. Figure 3 and Table 4 in the Appendix show the distribution of the true and false positives of the sub-labels. These results show that both methods are biased towards certain DRs. The sources of these biases can be categorized into two types, which we detail in the following subsections.

Limitation of natural language for annotation
There are limitations to representing DRs in natural language with both QA and DC. For example, the QA method confuses workers when the question phrase contains a connective:7

(2) "Little tyke," chortled Mr. Dursley as he left the house. He got into his car and backed out of number four's [...]

[Footnote 7: The examples are presented in the following format: italics = argument 1; bolded = argument 2; plain = context.]

In the above example, the majority of the workers formed the question "After what he left the house?", which was likely a confusion with "What did he do after he left the house?". This could explain the frequent confusion between PRECEDENCE and SUCCESSION by QA, resulting in the frequent FPs of SUCCESSION (Fig. 3).

For DC, rare relations which lack a frequently used connective are harder to annotate, for example:

(3) He had made an arrangement with one of the cockerels to call him in the mornings half an hour earlier than anyone else, and would put in some volunteer labour at whatever seemed to be most needed, before the regular day's work began. His answer to every problem, every setback, was "I will work harder!", which he had adopted as his personal motto.
[QA: ARG1-AS-INSTANCE; DC: RESULT]

It is difficult to use the DC method to annotate the ARG1-AS-INSTANCE relation due to a lack of typical, specific and context-independent connective phrases that mark such rare relations, e.g. "this is an example of ...". By contrast, the QA method allows workers to form a question-answer pair in the reverse direction, with S1 being the answer to S2, using the same question words, e.g. What is an example of the fact that his answer to every problem [...] was "I will work harder!"?. This allows workers to label rarer relation types that were not even uncovered by trained annotators. Many common DCs, such as but and and, are ambiguous and can be hard to disambiguate. To address this, the DC method provides workers with unambiguous connectives in the second step. However, these unambiguous connectives are often relatively uncommon and come with different syntactic constraints, depending on whether they are coordinating or subordinating conjunctions or discourse adverbials. Hence, they do not fit in all contexts. Additionally, some of the unambiguous connectives sound very "heavy" and would not be used naturally in a given sentence. For example, however is often inserted in the first step, but it can mark multiple relations and is disambiguated in the second step by the choice among on the contrary for CONTRAST, despite for ARG1-AS-DENIER, and despite this for ARG2-AS-DENIER. Despite this was chosen frequently, since it can be applied to most contexts. This explains the DC method's bias towards ARG2-AS-DENIER against CONTRAST (Figure 3: most FPs of ARG2-AS-DENIER and most FNs of CONTRAST come from DC).
While the QA method also requires workers to select from a set of question starts, which likewise contains infrequent expressions (such as Unless what..?), workers are allowed to edit the text to improve the wording of the questions. This helps reduce the bias towards more frequent question prefixes and makes crowdworkers doing the QA task more likely to choose infrequent relation senses than those doing the DC task.

Guideline underspecification
Jiang and de Marneffe (2022) report that some disagreements in NLI tasks come from the loose definition of certain aspects of the task. We found that both QA and DC also do not give clear enough instructions in terms of argument spans. The DRs are annotated at the boundary of two consecutive sentences, but neither method limits workers to annotating DRs that span exactly the two sentences.
More specifically, the QA method allows the crowdworkers to form questions by copying spans from one of the sentences. While this makes sure that the relation lies locally between two consecutive sentences, it also sometimes happens that workers highlight partial spans and annotate relations that hold between only parts of the sentences. For example:

(4) I agree with Mr Pirker, and it is probably the only thing I will agree with him on if we do vote on the Ludford report. It is going to be an interesting vote.
[QA: ARG2-AS-DETAIL, REASON; DC: CONJUNCTION, RESULT]

In Ex. (4), workers constructed the question "What provides more details on the vote on the Ludford report?". This is similar to the instructions in the PDTB 2.0 and 3.0 annotation manuals, which specify that annotators should take minimal spans that do not have to cover the entire sentence. Other relations could be inferred if the argument span were expanded to the whole sentence, for example a RESULT relation reflecting that there is little agreement, which will make the vote interesting.
Often, a sentence can be interpreted as the elaboration of certain entities in the previous sentence. This could explain why ARG1/2-AS-DETAIL tends to be overlabelled by QA. Fig. 3 shows that QA has more than twice as many FP counts for ARG2-AS-DETAIL compared to DC; the contrast is even bigger for ARG1-AS-DETAIL. Yet it is not trivial to filter out such questions that only refer to a part of the sentence, because in some cases the highlighted entity does represent the whole argument span.9 Clearer instructions in the guidelines are desirable.
Similarly, DC does not limit workers to annotating relations between the two sentences. Consider:

(5) When two differently-doped regions exist in the same crystal, a semiconductor junction is created. The behavior of charge carriers, which include electrons, ions and electron holes, at these junctions is the basis of diodes, transistors and all modern electronics.
[Ref: ARG2-AS-DETAIL; QA: ARG2-AS-DETAIL, CONJUNCTION; DC: CONJUNCTION, RESULT]

In this example, many people inserted as a result, which naturally marks the intra-sentential relation (...is created as a result). Many relations between larger chunks of text are thus potentially spuriously labelled as RESULT. Tab. 5 shows that the most frequent confusion is between DC's CAUSE and QA's CONJUNCTION.10 Within the level-2 CAUSE relation sense, it is the level-3 RESULT relation that turns out to be the main contributor to the observed bias. Fig. 3 also shows that most FPs of RESULT come from the DC method.

Aggregating DR annotations based on method bias
The qualitative analysis above provides insights into certain method biases observed in the label distributions, such as QA's bias towards ARG1/2-AS-DETAIL and SUCCESSION and DC's bias towards CONCESSION and RESULT. Being aware of these biases would allow the methods to be combined: after first labelling all instances with the more cost-effective DC method, RESULT relations, which we know tend to be overlabelled by the DC method, could be re-annotated using the QA method. We simulate this for our data and find that this would increase the partial agreement from 0.853 to 0.913 for Wikipedia and from 0.569 to 0.596 for PDTB.

9 Such as "a few final comments" in this example: Ladies and gentlemen, I would like to make a few final comments. This is not about the implementation of the habitats directive.

10 A chi-squared test confirms that the observed distribution is significantly different from what could be expected based on chance disagreement.
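The bias-aware two-pass aggregation described in this section can be sketched as follows. This is a minimal illustration, not the authors' code; the data structures and the `qa_annotate` callable are assumptions.

```python
def aggregate_with_bias_correction(dc_labels, qa_annotate):
    """Keep first-pass DC labels, except for RESULT, which DC tends to
    overlabel; items labelled RESULT are re-annotated with the QA method.

    dc_labels:   dict mapping item ids to their majority DC label
    qa_annotate: callable returning the QA majority label for an item id
    """
    final = {}
    for item_id, dc_label in dc_labels.items():
        if dc_label == "RESULT":
            final[item_id] = qa_annotate(item_id)  # second-pass QA annotation
        else:
            final[item_id] = dc_label  # keep the cheaper DC label
    return final

# Hypothetical usage: only item "r2" is sent to the QA second pass.
dc_labels = {"r1": "CONJUNCTION", "r2": "RESULT", "r3": "CONCESSION"}
merged = aggregate_with_bias_correction(dc_labels, lambda _id: "CONJUNCTION")
```

In practice the second pass only touches the (typically small) subset of RESULT-labelled items, which keeps most of the cost advantage of the DC method.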

Analysis by Genre
For each of the four genres (novel, wikipedia, europarl and wsj) we have ~300 implicit DRs annotated by both DC and QA. Scholman et al. (2022a) showed, based on the DC method, that in DiscoGeM, CONJUNCTION is prevalent in the Wikipedia domain, PRECEDENCE in Literature, and RESULT in Europarl. The QA annotations replicate this finding, as displayed in Fig. 4.
It appears more difficult to obtain agreement with the majority labels in Europarl than in the other genres, which is reflected in the average entropy of the label distributions for each genre (see Table 3): DC has the highest entropy in the Europarl domain and QA the second highest (after PDTB). Table 1 confirms these findings, showing that the agreement between the two methods is highest for Wikipedia and lowest for Europarl.
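The entropy referred to here is the Shannon entropy of each relation's empirical label distribution, averaged per genre. A minimal sketch (with made-up annotation sets, not data from the study):

```python
import math
from collections import Counter

def label_entropy(annotations):
    """Shannon entropy (in bits) of the empirical label distribution
    over the annotations collected for one relation."""
    counts = Counter(annotations)
    n = len(annotations)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Made-up examples: a near-unanimous item vs. a highly ambiguous one.
low = label_entropy(["CONJUNCTION"] * 9 + ["RESULT"])
high = label_entropy(["CONJUNCTION"] * 4 + ["RESULT"] * 3 + ["CONCESSION"] * 3)
# Averaging label_entropy over all items in a genre gives per-genre
# values of the kind reported in Table 3.
```

Higher average entropy means the 10 annotations per item are spread over more senses, i.e. annotators agree less on a single reading.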
In the latter domain, the DC method results in more CAUSAL relations: 36% of the CONJUNCTIONS labelled by QA are labelled as RESULT in DC.11 Manual inspection of these DC annotations reveals that workers chose "considering this" frequently, and only in the Europarl subset. This connective phrase is typically used to mark a pragmatic result relation, where the result reading comes from a belief of the speaker (Ex. (4)). This type of relation is expected to be more frequent in speech and argumentative contexts and is labelled as RESULT-BELIEF in PDTB3. QA does not have a question prefix available that could capture RESULT-BELIEF senses. The RESULT labels obtained by DC are therefore a better fit with the PDTB3 framework than QA's CONJUNCTIONS. CONCESSION is generally more prevalent with the DC method, especially in Europarl, with 9% compared to 3% for QA. CONTRAST, on the other hand, seems to be favored by the QA method, with most (6%) CONTRAST relations found in Wikipedia, compared to 3% for DC. Figure 4 also highlights that with the QA approach, annotators tend to choose a wider variety of senses which are rarely annotated by DC, such as PURPOSE, CONDITION and MANNER.
We conclude that encyclopedic and literary texts are the most suitable to be annotated using either DC or QA, as they show higher inter-method agreement (and, for Wikipedia, also higher agreement with gold). Spoken-language and argumentative domains, on the other hand, are trickier to annotate as they contain more pragmatic readings of the relations.

Case Studies: Effect of task design on DR classification models
Analysis of the crowdsourced annotations reveals that the two methods have different biases and different correlations with the domains and with the style (and possibly function) of the language used in those domains. We now investigate the effect of task design bias on the automatic prediction of implicit discourse relations. Specifically, we carry out two case studies to demonstrate the effect that task design and the resulting label distributions have on discourse parsing models.
Task and setup We formulate the task of predicting implicit discourse relations as follows. The input to the model are two sequences S1 and S2, which represent the arguments of a discourse relation. The targets are PDTB 3.0 sense types (including level-3). This model architecture is similar to the model for implicit DR prediction by Shi and Demberg (2019). We experiment with two different losses and targets: a cross-entropy loss where the target is a single majority label, and a soft cross-entropy loss where the target is a probability distribution over the annotated labels. Using the 10 annotations per instance, we obtain label distributions for each relation, which we use as soft targets.
Training with a soft loss has been shown to improve generalization in vision and NLP tasks (Peterson et al., 2019; Uma et al., 2020). As suggested in Uma et al. (2020), we normalize the sense distribution over the 30 possible labels12 with a softmax.
Assume a relation with the following annotations: 4 RESULT, 3 CONJUNCTION, 2 SUCCESSION, 1 ARG1-AS-DETAIL. For the hard loss, the target would be the majority label: RESULT. For the soft loss, we normalize the counts (every label with no annotation has a count of 0) using a softmax, yielding a smoother distribution without zeros.
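The normalization of this worked example can be sketched as follows. The label inventory below is a hypothetical five-sense subset of the 30 possible labels, kept small for illustration:

```python
import math

# Hypothetical subset of the 30 PDTB 3.0 sense labels, for brevity.
LABELS = ["RESULT", "CONJUNCTION", "SUCCESSION", "ARG1-AS-DETAIL", "CONCESSION"]

def soft_target(counts):
    """Turn raw annotation counts into a soft target distribution via a
    softmax over the full label inventory. Labels with no annotations get
    a count of 0, so they keep a small non-zero probability."""
    exps = [math.exp(counts.get(label, 0)) for label in LABELS]
    total = sum(exps)
    return {label: e / total for label, e in zip(LABELS, exps)}

counts = {"RESULT": 4, "CONJUNCTION": 3, "SUCCESSION": 2, "ARG1-AS-DETAIL": 1}
dist = soft_target(counts)

# The hard target is the majority label, i.e. the mode of the distribution.
hard = max(dist, key=dist.get)
```

Note that, unlike dividing each count by 10, the softmax assigns non-zero mass even to unannotated senses (here CONCESSION), which smooths the target distribution for the soft cross-entropy loss.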
We fine-tune DeBERTa (deberta-base) (He et al., 2020) in a sequence classification setup using the huggingface checkpoint (Wolf et al., 2020). The model trains for 30 epochs with early stopping and a batch size of 8. We conclude that training on data that comes from different task designs does not hurt performance, and even slightly improves performance when using majority-vote labels. When training with a distribution, the union setup (∪) seems to work best.

Case 2: cross-domain vs cross-method
The purpose of this study is to investigate how cross-domain generalization is affected by method bias. In other words, we want to compare a cross-domain, cross-method setup with a cross-domain, same-method setup. We test on the domain-specific data from the 1,200 instances annotated by QA and DC respectively, and train on various domain configurations from DiscoGeM (excluding dev and test), together with the extra 300 PDTB instances annotated by DC.
Table 7 shows the different combinations of data sets we use in this study (columns) as well as the results of in- and cross-domain and in- and cross-method predictions (rows). Both a change in domain and a change in annotation task lead to lower performance. Interestingly, the results show that the task factor has a stronger effect on performance than the domain: when training on DC distributions, the QA test results are worse than the DC test results in all cases. This indicates that task bias is an important factor to consider when training models. Generally, except in the out-of-domain novel test case, training with a soft loss leads to the same or considerably better generalization accuracy than training with a hard loss. We thus confirm the findings of Peterson et al. (2019) and Uma et al. (2020) also for DR classification.

Discussion and Conclusion
DR annotation is a notoriously difficult task with low IAA. Annotations are not only subject to the interpretation of the coder (Spooren and Degand, 2010), but also to the framework (Demberg et al., 2019). The current study extends these findings by showing that the task design also crucially affects the output. We investigated the effect of two distinct crowdsourced DR annotation tasks on the obtained relation distributions. These two tasks are unique in that they use natural language to annotate. Even though these designs are more intuitive to laymen, we show that such natural language-based annotation designs also suffer from bias and leave room for varying interpretations (as do traditional annotation tasks).
The results show that both methods have unique biases, but also that both methods are valid, as similar sets of labels are produced. Further, the methods seem to be complementary: both methods show higher agreement with the reference label than with each other. This indicates that the methods capture different sense types. The results further show that the textual domain can push each method towards different label distributions. Lastly, we simulated how aggregating annotations based on method bias improves agreement.
We suggest several modifications to both methods for future work. For QA, we recommend replacing question prefix options that start with a connective, such as "After what". The revised options should ideally start with a wh-question word, e.g., "What happens after...". This would make the questions sound more natural and help prevent confusion with respect to level-3 sense distinctions. For DC, an improved interface that allows workers to highlight argument spans could serve as a screening step confirming that the relation holds between the two consecutive sentences. Syntactic constraints that make it difficult to insert certain rare connectives could also be mitigated by allowing workers to make minor edits to the texts.
Considering that both methods show benefits and possible downsides, it could be interesting to combine them for future crowdsourcing efforts.
Given that obtaining DC annotations is cheaper and quicker, it could make sense to collect DC annotations on a larger scale and then use the QA method for a specific subset that shows high label entropy. Another option would be to merge both methods: first letting the crowdworkers insert a connective, and then using QA for a second, connective-disambiguation step. Lastly, since we showed that often more than one relation sense can hold, it would make sense to allow annotators to write multiple QA pairs or insert multiple possible connectives for a given relation.
The DR classification experiments revealed that generalization across data from different task designs is hard; in the DC and QA case, even harder than cross-domain generalization. Additionally, we found that merging data distributions coming from different task designs can help boost performance on data coming from a third source (traditional annotations). Lastly, we confirmed that soft modeling approaches using label distributions can improve discourse classification performance.
Task design bias has been identified as one source of annotation bias and acknowledged as an artifact of the dataset in other linguistic tasks as well (Pavlick and Kwiatkowski, 2019; Jiang and de Marneffe, 2022). Our findings show that the effect of this type of bias can be reduced by training with data collected by multiple methods. This could hold for other NLP tasks as well, especially those cast in natural language, and comparing their task designs could be an interesting future research direction. We therefore encourage researchers to be more conscious of the biases that crowdsourcing task design introduces.
Fig. 1 shows an example of an implicit relation that can be annotated as CONJUNCTION or RESULT.

arXiv:2304.00815v1 [cs.CL] 3 Apr 2023

Figure 1 :
Figure 1: Example of two relational arguments (S1 and S2) and the DC and QA annotation in the middle.

Figure 3 :
Figure 3: Distribution of the annotation errors by method. Labels annotated by at least 2 workers are compared against the reference labels of the Wikipedia and PDTB items. The relation types are arranged in descending order of the "ref. sub-label counts".

Figure 4 :
Figure 4: Level-2 sublabel counts of all the annotated labels of both methods, split by domain.

Table 1 :
Figure 2 in the Appendix shows the correlation between the annotation entropy and the agreement.

Table 1: Comparison between the labels obtained by DC vs. QA. Full (or +partial) agreement means all (or at least one sub-label) match(es). Multi-label kappa is adapted from Marchal et al. (2022). JSD is calculated based on the actual distributions of the crowdsourced sub-labels, excluding labels with only one vote (smaller values are better).

Table 2 :
Comparison against gold labels for the QA or DC methods. Since the distribution of the reference sub-labels is not available, JSD_flat is calculated between uniform distributions of the sub-labels.
Table 3: Average entropy of the label distributions (10 annotations per relation) for QA/DC, split by domain.