As the pandemic forces public and private institutions to move online, many court and business leaders are looking to the field of online dispute resolution (ODR) for best practices and lessons learned. Developed over the last twenty years, largely in response to the growth of e‐commerce, the ODR field has generated a deep well of theory and practice while also identifying potential ethical dilemmas and risks. Technology, applied as the “fourth party,” plays an increasingly integral role in how we negotiate resolutions to our disputes, with or without a third party. A brief overview of the history of ODR’s development sets the context for exploring the range of tools and techniques that online dispute resolution encompasses. Consideration of the ethical challenges raised by ODR practice will illuminate key questions and choices that need to be made in designing ODR systems and in governing their use.

The Internet was invented in 1969, but online dispute resolution did not arrive until twenty‐five years later. During the first half of the Internet’s life, dispute resolution processes were not needed because disputes were rare. There were no viruses, identity theft, spam, phishing, music downloading, cyberattacks, ransomware, or consumer disputes. Until the 1990s, the population of the Internet was relatively small and homogeneous, with use limited to circumscribed populations within the military and academia. In this environment, the few disputes that occurred were settled informally. Since there were no Internet service providers enabling the general population to interact online, there were no multiplayer games, social networks, search engines, e‐commerce, or other engines of disputes.

In 1981, Roger Fisher and William Ury wrote that “conflict is a growth industry” (xvii). That may have been true of the physical world at that time, but it was not true of the virtual. The small population of Internet users, however, was only one reason there were few disputes. The other main reason was that certain online activities, most notably e‐commerce, were prohibited until 1992. With no consumers, an online world with a limited population and a difficult‐to‐navigate environment generated few disputes.

This all changed when the ban on online commercial activities was lifted in 1992 (Kesan and Shah 2001). In addition, and about the same time, large numbers of college students acquired access through universities and citizens began to obtain access through Internet service providers. The appearance of the World Wide Web during the same period also made engaging in online activities easier and more attractive. All of this growth in the use and population of the Internet provided the ingredients for large numbers of disputes, and created a need for new dispute resolution processes.

The ADR (alternative or appropriate dispute resolution) field was slow to recognize this. It did, however, see the need for ODR in resolving cross‐border and low‐value, high‐volume consumer disputes, such as those occurring between buyers and sellers in online auctions (Katsh, Rifkin, and Gaitenby 2000; Katsh and Wing 2006). With parties located at a distance, such disputes could not be handled by courts or any face‐to‐face process. ODR was premised on a belief that software could substitute for many offline redress mechanisms. This belief was summed up in the metaphor of ODR as the “fourth party” (Katsh and Rifkin 2001), which could manage the communication and processing of information that is at the heart of every ADR process. In this view, face‐to‐face interaction might be highly useful but is not inherently necessary, and with improvements in software and advances in artificial intelligence, the capabilities of ODR and its ability to resolve increasingly complex cases would expand.

Twenty‐five years after the first ODR experiments in the mid‐1990s, the use of technology in dispute resolution is no longer resisted. Indeed, it is present in just about every third‐party tool kit, used in offline as well as online disputes, and growing steadily with the passage of time. “ODR is no more ‘Online ADR’ than the online versions of banking, education, or gaming are simply the offline versions of those systems moved online. Once a process moves online, its very nature begins to change” (Katsh and Rule 2016: 330). As Marshall McLuhan once wrote, “[W]hen a new technology comes into a social milieu it cannot cease to permeate that milieu until every institution is saturated” (1964: 161). That is what has been occurring with ODR and ADR over the last two decades.

The concept of the fourth party has evolved alongside ODR’s expansion. When the metaphor was first coined, we did not have iPhones or Facebook or Alexa. But as new technologies have arisen the fourth party has expanded in capability, reach, and roles. The fourth party now regularly takes a seat at the table along with party one and party two (the disputants) and the third party (the human neutral, such as a mediator or arbitrator). Fourth parties are foundational to the practice of ODR, and the concept undergirds our understanding of how algorithmic and machine learning tools fit appropriately into dispute resolution processes. In ODR trainings, third parties are encouraged to regard fourth parties as partners in the resolution process. The same can be said for disputants utilizing technology for negotiation. Fourth parties are already leveraging rule‐based systems to generate settlement offers, diagnose problems, and issue decisions, especially in low‐value, high‐volume caseloads.1 These tools are currently lightening the administrative load on parties and neutrals, saving time and money, and enhancing the performance and credibility of the ODR process, but they represent only the beginning of what the fourth party can offer.
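
To make the rule‐based behavior described above concrete, here is a minimal sketch, in Python, of a fourth party applying simple rules to a low‐value e‐commerce complaint. The rules, thresholds, and field names are invented for illustration and do not reflect any deployed platform’s actual logic.

```python
# Minimal sketch of a rule-based "fourth party" generating a settlement
# offer for a low-value e-commerce dispute. All rules and thresholds are
# hypothetical illustrations, not any real platform's logic.

def suggest_offer(claim_amount: float, item_received: bool,
                  item_as_described: bool) -> dict:
    """Apply simple if/then rules to propose a resolution."""
    if not item_received:
        # Non-delivery: straightforward full refund.
        return {"remedy": "full_refund", "amount": claim_amount}
    if not item_as_described:
        # Item arrived but was misdescribed: offer a partial refund,
        # escalating to a human neutral above a value threshold.
        if claim_amount <= 100:
            return {"remedy": "partial_refund",
                    "amount": round(claim_amount * 0.5, 2)}
        return {"remedy": "escalate_to_mediator", "amount": None}
    # No rule matched: diagnose further before proposing anything.
    return {"remedy": "request_more_information", "amount": None}

print(suggest_offer(80.0, item_received=True, item_as_described=False))
# -> {'remedy': 'partial_refund', 'amount': 40.0}
```

Even logic this simple can dispose of a large share of a high‐volume caseload before any human neutral is involved, which is precisely the administrative relief the fourth‐party metaphor captures.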

The real question is not where the fourth party is today but where it is going. The fourth party is becoming more capable all the time. As computer processors become more powerful and user experience designs more intuitive, the fourth party expands what it is able to provide. Because the fourth party can operate as a service, it can be available on the phones in the parties’ pockets all day every day, increasing accessibility and responsiveness. The fourth party can also do things that a third party cannot (or should not) do out of concern for being perceived as partial (algorithms cannot be influenced by compliments or charisma). Parties may also react differently to suggestions from a third party as opposed to a fourth party, perhaps because the fourth party has no feelings that will be hurt if its suggestion is rejected.

Some of the things we ask fourth parties to do today include case intake, problem diagnosis, payment processing, document management, notifications and reminders, calendar management and scheduling, and overall case management. Many of these tasks are quite time‐consuming for human negotiators and third parties, so it makes sense to ask fourth parties to handle them. And because most people rely on technology to do these things in other contexts (e.g., when making a purchase, scheduling a dentist appointment, or filing for an insurance reimbursement), it is not at all jarring for parties and dispute resolution providers to rely upon it for similar tasks within an ODR process.
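
As an illustration of this administrative role, the following sketch models case intake and deadline reminders. The Case fields and the seven‐day response window are assumptions made for the example, not features of any particular ODR platform.

```python
# Illustrative sketch of administrative tasks delegated to a fourth party:
# case intake, deadline tracking, and reminder notifications.

from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Case:
    case_id: str
    parties: list
    filed_on: date
    response_due: date = field(init=False)

    def __post_init__(self):
        # Hypothetical rule: the respondent has 7 days to reply.
        self.response_due = self.filed_on + timedelta(days=7)

    def reminders(self, today: date) -> list:
        """Generate reminder notices as the response deadline approaches."""
        days_left = (self.response_due - today).days
        if days_left < 0:
            return [f"Case {self.case_id}: response overdue, notify neutral."]
        if days_left <= 2:
            return [f"Case {self.case_id}: response due in {days_left} day(s)."]
        return []

case = Case("2024-0042", ["buyer", "seller"], filed_on=date(2024, 3, 1))
print(case.reminders(today=date(2024, 3, 7)))
# -> ['Case 2024-0042: response due in 1 day(s).']
```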

With rapid technological innovation and machine learning, fourth parties are constantly expanding their skill set, so the real question becomes, what will we rely on them to do over the next five or ten years? These future tasks may include case research and evaluation (perhaps helping us to envision our zone of potential agreement, or ZOPA), conflict coaching, communication reframing, evaluation of alternatives to a proposed settlement, enforcement of outcomes, document drafting and submission to legal bodies, or even automated negotiation or binding algorithmic evaluations. Many dispute resolvers may pause when they think about giving technology control over these more fundamental aspects of the dispute resolution process, but as technology becomes more demonstrably competent and parties become more welcoming of algorithmic assistance, the things we ask our fourth parties to do will evolve in kind.
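
For the ZOPA task in particular, a fourth party holding each side’s confidential reservation values could report whether an overlap exists without disclosing either value to the other party. A minimal sketch, with invented figures:

```python
# Worked sketch: a fourth party surfacing a zone of potential agreement
# (ZOPA) from each side's confidential reservation values.
# The dollar figures are invented for illustration.

def zopa(buyer_max: float, seller_min: float):
    """Return the (low, high) overlap if one exists, else None."""
    if buyer_max >= seller_min:
        return (seller_min, buyer_max)
    return None  # No overlap: settlement requires changing the terms.

# Buyer will pay at most $900; seller will accept no less than $650.
print(zopa(buyer_max=900, seller_min=650))   # -> (650, 900)
print(zopa(buyer_max=500, seller_min=650))   # -> None
```

The arithmetic is trivial, but the design point is not: an algorithm can be trusted with both reservation values simultaneously in a way no human neutral confidant can perfectly replicate.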

Technological advances just around the corner enable us to envision an increasingly sophisticated fourth party that not only performs additional tasks, but is also able to play multiple roles: managing information and communication as a partner, or standing in for a party, in a negotiation; intervening as a third‐party neutral to facilitate agreement between disputing parties; or engaging as a third‐party neutral to render a binding outcome to a dispute. When fourth parties take on these more substantive roles in a negotiation, the identity of the decision maker falls on a human‐algorithm continuum. Negotiating entities may range from unassisted humans, to avatars (digital models driven by real‐time humans), to agents (digital models driven by computer algorithms), with hybrids in the interstices (Russell 2019). Current research is probing the boundaries and relative effectiveness of these diverse actors in negotiation environments. The more human an actor is perceived to be, the more responsively the other party behaves in social interactions (Bailenson et al. 2006). The algorithmic actor has greater analytic and data analysis capacity (descriptive, diagnostic, predictive, and prescriptive) than humans, at a lower cost and with greater controllability (Fox et al. 2015; Sela 2018; Engstrom and Ho 2020). Some hybrid processes may capture the best of both ends of the continuum. Process factors such as efficiency, transparency, case flow, settlement rates, outcome patterns, transaction costs, accessibility, and outcome consistency may all be enhanced through machine learning, while autonomy, satisfaction, and due process may be more challenging to track and measure. Communication and data collection increase the speed, range, and impact of resolution processes, such that the actor in the dispute resolution process with access to the largest pool of data will have the upper hand in the negotiation (Ashbrook and Zalba 2021).

Software agents are growing in prominence and are perfectly aligned with the concept of the fourth party. Agents are computer programs that automatically perform actions on behalf of individuals or organizations. We rely on agents to do things such as sort incoming e‐mail messages (forwarding ones that meet certain criteria), automatically bid on an item in an online auction, or sell a certain number of shares when they hit a certain price. Agents have performed these automated tasks for years. Increasingly, agents are now being combined with expert systems and artificial intelligence (AI) to handle more sophisticated tasks than simple “if … then” actions. Agents are learning how to collaborate and interact with other agents, and how to collect large amounts of information and distill it into more easily understood components. While beyond the scope of this article, it is worth noting the significant efforts of federal administrative agencies to use AI for improved governance relating to data management, adjudication, enforcement, and accountability functions (see Engstrom and Ho 2020; Engstrom et al. 2020).
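
The stock‐sale example above is exactly the kind of “if … then” rule an agent encodes. A minimal sketch, with a hypothetical trigger price and share counts:

```python
# Sketch of the simple "if ... then" agent behavior described in the text,
# using the stock-sale example. Prices and share counts are hypothetical.

def limit_sell_agent(current_price: float, trigger_price: float,
                     shares_held: int, shares_to_sell: int) -> dict:
    """Sell a fixed number of shares once the price reaches the trigger."""
    if current_price >= trigger_price and shares_held >= shares_to_sell:
        return {"action": "sell", "shares": shares_to_sell,
                "price": current_price}
    return {"action": "hold", "shares": 0, "price": current_price}

print(limit_sell_agent(current_price=105.2, trigger_price=100.0,
                       shares_held=300, shares_to_sell=100))
# -> {'action': 'sell', 'shares': 100, 'price': 105.2}
```

What distinguishes newer agents is that the conditional logic above is replaced or augmented by learned models, allowing the agent to act on patterns it was never explicitly given.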

It is not difficult to imagine a future where disputants will enlist the assistance of sophisticated software agents to attempt to resolve their disagreements. In fact, individual citizens are already using the services of websites such as donotpay.com and rentervention.com to respond to issues such as parking tickets, unemployment benefits, and eviction proceedings. The agent collects the necessary and relevant information from the individual, and then pursues the matter on their behalf, keeping them updated on progress. Businesses are similarly using agents through sites such as eConciliador.com to negotiate repayment of debts and to resolve consumer complaints. In these cases, a business creates a negotiation “model” that maps out the kinds of concessions it is willing to make in certain circumstances, and then the agent uses that model to automatically reach out to individual customers to engage in a direct negotiation through e‐mail or text message to find a solution.
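
A business’s negotiation “model” of this kind can be as simple as a ladder of concessions the agent is authorized to make round by round. The following sketch assumes an invented discount ladder and round limit; it is not the logic of any named platform.

```python
# Hedged sketch of a business-side negotiation "model": a ladder of
# concessions the business authorizes, which an agent applies round by
# round. The discount ladder and round limit are invented illustrations.

CONCESSION_LADDER = [0.00, 0.05, 0.10, 0.15]  # cumulative discount per round

def agent_counteroffer(debt: float, consumer_offer: float,
                       round_num: int) -> dict:
    """Accept if the consumer meets this round's floor, else counter."""
    if round_num >= len(CONCESSION_LADDER):
        return {"status": "escalate_to_human"}
    floor = debt * (1 - CONCESSION_LADDER[round_num])
    if consumer_offer >= floor:
        return {"status": "accept", "amount": consumer_offer}
    return {"status": "counter", "amount": round(floor, 2)}

# A consumer owing $1,000 offers $880 in round 2 (that round's floor: $900).
print(agent_counteroffer(debt=1000.0, consumer_offer=880.0, round_num=2))
# -> {'status': 'counter', 'amount': 900.0}
```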

In the future, mediators may have their own software agents capable of creating a fair and transparent process and ensuring both sides agree to any outcome achieved. Regulators also may have agents that monitor resolution processes in real time to ensure they stay within the guardrails of the law. Each party to a dispute (along with their lawyers) would then be able to create an agent, populate it with their preferences and relevant information, and empower it to negotiate on their behalf. The agents from each side would engage in a direct negotiation, under the supervision of the agents of mediators and regulators (agents supervising other agents), and once a proposed settlement is achieved, it could be shared with the humans for final approval.
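
The supervision pattern just described (party agents exchanging proposals, a regulator agent checking guardrails, and humans granting final approval) can be sketched as follows. The legal floor, offer sequences, and acceptance rule are all hypothetical stand‐ins for whatever a real design would specify.

```python
# Speculative sketch of agents-supervising-agents: party agents exchange
# proposals, a regulator agent screens each candidate settlement against a
# legal guardrail, and any agreement awaits human approval.
# All checks and figures are hypothetical.

LEGAL_MINIMUM = 200.0  # e.g., a statutory floor a settlement may not breach

def regulator_ok(proposal: float) -> bool:
    return proposal >= LEGAL_MINIMUM

def negotiate(claimant_demands: list, respondent_offers: list) -> dict:
    """Walk paired proposals round by round until they cross."""
    for demand, offer in zip(claimant_demands, respondent_offers):
        midpoint = (demand + offer) / 2
        # The offers cross when the respondent meets the claimant's demand.
        if offer >= demand and regulator_ok(midpoint):
            return {"proposed_settlement": midpoint,
                    "status": "awaiting_human_approval"}
    return {"proposed_settlement": None, "status": "no_agreement"}

# Claimant's agent concedes downward; respondent's agent concedes upward.
print(negotiate(claimant_demands=[500, 450, 400],
                respondent_offers=[250, 350, 420]))
# -> {'proposed_settlement': 410.0, 'status': 'awaiting_human_approval'}
```

Note that the human role survives at the end of the loop: the agents only ever produce a proposal with a status flag, never a binding outcome.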

Central to the consideration of technology’s place in dispute resolution—both at present and in the future—is its role within systems such as courts, e‐marketplaces, and mediation programs. Dispute resolution in the legal realm has long been equated with courts, yet litigation can be expensive in terms of time, money, relationships, and uncertainty of outcome. During recent decades there has been a swing toward forms of ADR that shift the locus of decision making over process and outcome from a judge to the parties, either directly or facilitated by a third‐party neutral. As the use of ADR has expanded, so has the need to design systems for managing these plurilateral processes. The task of applying one or more dispute resolution processes to a particular stream of disputes has come to be known as dispute system design (DSD).

In an ODR world, dispute system design is increasingly important because new technologies add both flexibility and complexity to information and communications processes that are at the heart of DSD. A system designer (or design team) will weigh an array of elements that shape a stream of disputes, taking into consideration the goals, stakeholders, context and culture, process options, resources, and metrics for success. Systems range across public and community justice (courts and claims facilities); organizations (commercial firms with their vendors, customers, and employees); and international boundaries (treaties, transitional justice, and commerce). Systems can be coded into an ODR process and made available to users (Amsler, Martinez, and Smith 2020).
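
To suggest what coding a system into an ODR process might look like, here is a sketch that encodes the six DSD framework elements as configuration. The element names follow the framework cited above (Amsler, Martinez, and Smith 2020); the concrete values and the routing rule are invented for illustration.

```python
# Minimal sketch: DSD framework elements expressed as configuration that
# an ODR platform could load. Values and the routing rule are hypothetical.

dispute_system = {
    "goals": ["efficiency", "access_to_justice", "fair_outcomes"],
    "stakeholders": ["consumers", "sellers", "platform", "regulators"],
    "context_and_culture": {"venue": "online", "volume": "high",
                            "value": "low", "cross_border": True},
    "process_options": ["negotiation", "mediation", "arbitration"],
    "resources": {"software_budget_usd": 250_000, "staff_neutrals": 4},
    "success_metrics": ["settlement_rate", "time_to_resolution",
                        "user_satisfaction"],
}

def first_process(system: dict) -> str:
    """Route a new case to the least formal process the system offers."""
    return system["process_options"][0]

print(first_process(dispute_system))  # -> 'negotiation'
```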

While DSD frameworks have been around for years, many of them have not adequately accounted for technology’s expanding influence. Below we describe key elements of DSD analysis in which the disputing parties, third parties, and fourth parties play a central role. Our focus here is on goals, stakeholders, and process. The other three framework elements (context, resources, and accountability) are discussed more briefly.

Goals form the most critical diagnostic. What kinds of disputes are anticipated and what metrics are most important for measuring effectiveness? Goals can include efficiency, compliance with law, access to justice, innovation, dispute prevention, reputation of parties, and more, all of which are likely desirable. But someone or some decision‐making body will need to decide which goals are valued most. Articulating the priority goals will focus the choice of processes that will form the system, a user’s choice of specific process within the system, and the respective measures for success (Amsler, Martinez, and Smith 2020).

Stakeholders, the second framework element, are the parties and others with interests, relationships, and relative power. Stakeholders comprise the system’s users as well as other parties and organizations affected by the dispute. Ideally, representative stakeholders will be involved in the design, implementation, and assessment of the system. Stakeholder analysis becomes more complex with the addition of nonhuman actors, whose interests will be hard to identify and give voice to.

Process options range from the formality of court adjudication, which determines legal rights and obligations, to more informal and flexible processes such as those offered by ADR. The system may have one or more processes to prevent, manage, and resolve disputes. Technological advances make it possible for decision making to rest with the disputing parties in negotiation or mediation, or with a third party in evaluation, arbitration, or court adjudication, and, if the system is designed to streamline the transition between processes, a case can progress seamlessly from one stage to the next with reduced transaction costs.
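
One way to realize this staged progression is a pipeline in which a case escalates only when the prior process fails to resolve it. The stage names and outcomes below are hypothetical.

```python
# Sketch of a staged ODR pipeline: a case moves from party-driven to
# third-party processes only if the prior stage fails to resolve it.
# Stage names and outcomes are hypothetical illustrations.

STAGES = ["negotiation", "mediation", "arbitration"]

def run_pipeline(case_id: str, stage_outcomes: dict) -> dict:
    """Escalate through the stages, carrying the record forward."""
    history = []
    for stage in STAGES:
        resolved = stage_outcomes.get(stage, False)
        history.append((stage, "resolved" if resolved else "unresolved"))
        if resolved:
            return {"case": case_id, "resolved_at": stage,
                    "history": history}
    return {"case": case_id, "resolved_at": None, "history": history}

# Negotiation fails, mediation succeeds; arbitration is never reached.
print(run_pipeline("C-17", {"negotiation": False, "mediation": True}))
```

Carrying the case record forward between stages is what lowers the transaction costs the paragraph above identifies: the parties do not restate their dispute at each escalation.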

Context comprises the circumstances in which the dispute arises, including physical, social, economic, and political factors. Context also includes culture—the implicit assumptions and values held by the surrounding community that help define the dispute. These are all crucial features to consider in general, and specifically in relation to technology as described below.

Resources—financial, technological, and human—will enhance or constrain the capacity to use and operate the system. Leadership from the top, combined with an understanding of users’ perspectives, is critical to understanding motives and building capacity at scale. Metrics of accountability and success are important for assessing whether the system achieves its specified goals over time and for ensuring that system managers are fulfilling them.

Table One (Martinez 2020: 145) applies these framework elements to four well‐known dispute resolution systems: courts and tribunals, and three ADR platforms (eBay, Nextdoor, and Kleros). Courts and tribunals are often the main state‐sponsored judicial dispute resolution models; Nextdoor (http://nextdoor.com) is a social media platform focused on neighborhoods; Kleros (http://kleros.io) is a crowdsourced jury resolution platform focused on virtual currencies; and eBay is a global e‐commerce auction marketplace.

Table One: Online Dispute System Comparison

| DSD Element | eBay | Courts and Tribunals | Nextdoor | Kleros |
| --- | --- | --- | --- | --- |
| Goals | Fast and fair resolutions for transaction problems. | Justice, efficiency, and streamlined user experience. | Intervene on fake news and bullying; promote civility, politeness, and neighborliness. | Fair, transparent, scalable, and self‐administering. |
| Stakeholders | eBay, consumers, sellers, and regulators. | Courts, court staff, judges, the public, counsel, and litigants. | Citizens/neighbors, journalists, and regulators. | Commercial disputants in employment or insurance smart contracts, and coders. |
| Context and Culture | High volume, low value; international/cross‐border; transactional relationship. | Public; diverse; formal; various levels of literacy, education, and comfort with technology. | Geographic proximity; personal relationships; different races, ages, and income levels. | Online first; international diversity; informality; high comfort with technology. |
| Processes | Diagnosis, negotiation, facilitation, and evaluation. | Settlement, mediation, and trial. | Discussion forums, technology‐based coaching and advice, and facilitative process. | Online evaluation, crowdsourced jurors, and incentivized participation. |
| Resources | eBay investments in the software and case management staff. | Public funds, parties, public employees, supporting nonprofits. | Nextdoor investments in the software and case management staff. | Kleros overall management, but designed to be self‐sustaining. |
| Evaluation | eBay teams using surveys, user experience research, and data capture and monitoring. | Internal and external evaluation programs and court satisfaction data. | Surveys and user experience monitoring. | Overall usage and growth of the Kleros caseload and user base. |
| Designer of System | eBay | Court with external vendors and partners. | Nextdoor | Kleros and the worldwide developer community. |
| Process Selection for Individual Case | Specified in user agreement; initiated by consumer. | Opt in by filer/plaintiff. | Required by Nextdoor software. | Can be initiated by either complainant or respondent. |

Each of these online and offline systems generates and resolves disputes, and each leverages the power of the fourth party in different ways. In addition to the DSD framework elements, the table notes the identity of the designer as well as who chooses the process(es) for a specific case. Applying the DSD framework analysis helps to triangulate among the different stakeholders, their respective interests and goals, and the process choices within each system. If ODR is selected for one or more processes, such analysis helps ensure that the fourth party is constructed in a way that promotes the goals of the designer and stakeholders. New agents emerging within ADR processes—whether acting for the disputing first and second parties or for a third‐party neutral—will increasingly need to be assessed for where they function along the human‐algorithmic continuum, and considered as part of the overall system design.

As we place increasing responsibilities on the fourth party, we need to be mindful of the pitfalls as well as the possibilities that technology brings to dispute resolution. The use of technology‐infused dispute resolution systems and processes recently exploded within courts and ADR in response to the COVID‐19 pandemic—following in the footsteps of the small but growing number of ADR practitioners and courts that had already employed technology. ODR is proving critical for access to justice under social distancing restrictions and is demonstrating its relevance and usefulness to many who had formerly been reluctant to use it. Although advocates have long argued for the use of technology by courts and for ODR usage by ADR practitioners and dispute resolution systems, necessity has clearly been the mother of adoption for tens of thousands of new adherents worldwide. However, while video conferencing platforms2 and other forms of technology can enhance access to justice, they also raise risks when they are employed in ODR. Data security, privacy, and power imbalances based on familiarity with technology are just a few of the areas in which risks can enter or be exacerbated in technology‐infused dispute resolution. Have our ADR trainings sufficiently upskilled practitioners to prevent and address such risks when they begin using technology in their practice? Do our ADR standards include the necessary guidance and rules to ensure ethical ODR practice? Are dispute handling systems that are suddenly incorporating technology including requirements for data security and privacy protection? Are related legal liabilities covered by insurance and well known to ODR platform providers? These are just a few of the many ethical considerations raised by the application of technology to dispute resolution that the field needs to address. Without addressing them, dispute resolution programs and practitioners, as well as disputing parties, will often unknowingly carry the burden of these risks.

In 2020, we are poised to have approximately one billion e‐disputes worldwide (Rule 2017), many of them separated by jurisdiction, culture, and language, and inaccessible to court redress. These disputes often involve the collection and usage of big data with little transparency. The explosion in e‐disputes is fertile ground for expanding ODR—including artificial intelligence—for process management and the generation of outcome options and decisions. Simultaneously, the lack of transparency demands increased consideration of how ODR should be governed. There is already a robust discussion of this in the ODR literature (Liyanage 2012; Raymond and Shackelford 2014; Ebner and Zeleznikow 2016; Wing 2016; Schmitz and Wing 2021), and more legal scholars and court personnel must now attend to regulating and governing ODR practice as well. From both an ethical and a practical standpoint, employing technology makes sense: it can increase efficiency, improve dispute prevention and detection, and expand the capability to manage complex data. This not only can increase access to justice but also offers the potential for more creative outcomes. These benefits, as noted, also come with substantial increases in risk. The lack of transparency of AI usage can, at a minimum, raise concerns and erode trust, or further entrench a lack of trust, in courts and other forms of dispute resolution. With new input points and methods, and integration with mega systems (courts, private enterprise, social media, etc.), come enhanced possibilities for power imbalances, confidentiality breaches, privacy violations, the compounding of bad data, and foul play.

Until recently, there has been a dearth of ODR‐related standards, legislation, and regulation. Instead, practitioners have continued to rely predominantly upon ADR standards and guidance that, on the whole, have neither addressed the application of technology nor reflected the reality of the increasingly cross‐border nature of many disputes. Given the growing awareness of the challenges that ODR raises, the cross‐jurisdictional nature of so many disputes that ODR platforms and practitioners manage, and the virtual absence of technology‐related ADR standards, there has been growing interest in filling this gap (Wing 2016).

The Ethical Principles for Online Dispute Resolution (Wing 2016; National Center for Technology and Dispute Resolution 2016) articulate a set of values (not rules) that can serve as guidance for creating accountability mechanisms for the ethical design and function of ODR.3 They build upon shared values in the ADR field and overtly integrate technology. For example, the principle on Competence states: “ODR systems, processes, and practitioners will be competent in or provide access to relevant technological or human competency required for the effective implementation of the dispute resolution process that they undertake to assist with. This includes but is not limited to relevant dispute resolution, legal, and technical knowledge; languages; and culture” (Wing 2016; National Center for Technology and Dispute Resolution 2016). Like the Competence principle, each of the other principles is framed at a high level to offer flexibility in interpretation across technologies, sectors, jurisdictions, and cultures.

Dispute resolution membership organizations, courts, and governmental bodies can use these ODR principles as a guide for creating regulations and standards for training competencies, ODR system design requirements, and expectations for ODR practitioner and platform performance. ODR practitioners can use them when selecting an ODR platform and when contemplating new ethical dilemmas they face in their practice. For example, a mediator may use the Competence principle in considering how to ensure that the parties will have access to, and understand how to use, the ODR tools they will employ during the process. Will the mediator hire a technical expert to offer training to disputants? Or will the mediator teach the parties how to use the software? What training and teaching materials will the mediator need to deliver these services? What protocols and strategies must the mediator have in place if one of the parties has technical difficulty during a joint session with the mediator and the other party? These questions raise not only ethical issues but also practical challenges that provide opportunities for either increasing or reducing legal liabilities and access to justice. Without ethical guidance for the development of new ODR guidelines and standards, we hand the perimeters of access to justice over to software developers and increase risks for parties, practitioners, and ODR platform providers, leaving AI unchecked as it expands into dispute resolution processes.

However, while ethical principles for ODR will remain useful as technology changes more rapidly than ODR legislation and standards, such principles alone are arguably insufficient. Standards with measures of accountability and mechanisms for enforcing them are also important to ensure quality and access. There are a growing number of efforts to create ODR guidance and standards by membership organizations, private enterprises, and government agencies, some of which have formally addressed AI.4 Examples include a set of ODR standards promulgated by the International Council for Online Dispute Resolution (2017) (based on the Ethical Principles for Online Dispute Resolution) and a collaboration underway between the American Bar Association Dispute Resolution Section, the International Council for Online Dispute Resolution, and the National Center for Technology and Dispute Resolution to develop robust standards and guidance on the application of technology (including AI) to dispute resolution systems and software design, platform management, and practitioner behavior.

Efforts to advance ODR guidance and accountability through governance standards and system design should engage a wide variety of stakeholders, including end users and potential collaborators from other disciplines such as data science and engineering. Together, we can best prepare to ethically harness the wonders of artificial intelligence and other forms of technology to address some of the greatest challenges with which the field of dispute resolution has long struggled: challenges such as power imbalances, insufficient access to justice, process control, and repeat player bias, which such technology can either magnify or reduce.

Twenty years ago, the idea of using technology to resolve a dispute was considered futuristic and somewhat dehumanizing. Now, the use of technology to resolve disputes is commonplace; parties even complain when it is not available. But if the first twenty years of ODR’s growth have been impressive, the next twenty years are poised to be truly revolutionary. The range of technological options for preventing and resolving our disputes will continue to grow and expand alongside the increasing power of computer processors and the wider reach of global networks. We will become comfortable with the notion of algorithmic agents and artificial intelligence acting as our proxies in negotiations, evaluating our BATNAs (best alternatives to a negotiated agreement), and even providing decisions in cases we cannot resolve ourselves through direct negotiation. We will all become ODR savvy, selecting the appropriate tools and communication types for each stage of dispute handling.

This new reality must spark deeper research into the core tenets of dispute system design in online dispute resolution, as well as a comprehensive reevaluation of ethical principles and standards of practice in the dispute resolution field more generally. There will soon be a worldwide flowering of diverse platforms and services for online dispute resolution in the public, private, and nonprofit sectors, so we must act now to establish rules and guidelines to ensure that the core tenets and objectives of ADR practice (and indeed the wider judicial system) are not lost in the shuffle and are effectively adapted to reflect the impact of technology’s integration. The introduction of machine learning into ODR processes will magnify the challenges of ensuring confidentiality, fairness, accountability, and transparency, especially when outcomes are algorithmically generated inside the “black box” of artificial intelligence. If we do not act now, monitoring ODR systems for abuse may become impossibly complicated; but if we do it right, we have a chance to expand access to justice and fair redress on a scale that would have been unimaginable to the founders of the ADR field.

1. Ethan Katsh (co‐conceiver of the “fourth party”) and Orna Rabinovich‐Einy trace the shift of the fourth party from a tool for facilitating communication between parties; to an aid for instilling convenience, expertise, and trust into the process; to a tool aiding third parties and capable of forming decision‐generating algorithms (2017). See also a description of the case administration and expert solution generation tool of the British Columbia Civil Resolution Tribunal (Susskind 2020).

2. It is worth noting that the vast majority of ODR technology newly deployed since the start of the pandemic has concentrated only on video conferencing; while video conferencing is a valuable addition to many processes, it alone does not reflect the depth and breadth of what technology can provide. It remains unclear whether the majority of practitioners and court personnel who are now using video conferencing are aware of the panoply of what ODR can offer, as well as the increased risks that can accompany its usage.

3. The Ethical Principles for Online Dispute Resolution are: accessibility, accountability, competence, confidentiality, empowerment, equality, fairness, honesty, impartiality, informed participation, innovation, integration, legal obligation, neutrality, protection from harm, security, and transparency (see http://odr.info/ethics‐and‐odr/). Overall, they are in sync with the values articulated in the Ethical Principles & Standards for Artificial Intelligence and Autonomous Systems (see IEEE Standards Association 2016).

4. See the ODR standards, principles, and guidelines archive on the website of the National Center for Technology and Dispute Resolution (http://odr.info/standards/). Like ODR, artificial intelligence advances the human capacity to integrate information with data to improve decision making; the reasons for regulating AI and the mechanisms for doing so (Cuellar and Mashaw 2016; Scherer 2016) are as important to ODR as they are to AI.

References

Amsler, L. B., J. K. Martinez, and S. E. Smith. 2020. Dispute system design: Preventing, managing, and resolving conflict. Stanford, CA: Stanford University Press.
Ashbrook, C. C., and A. R. Zalba. 2021. Social media influence on diplomatic negotiation: Shifting the shape of the table. Negotiation Journal 37(1): 83–96.
Bailenson, J. N., N. Yee, D. Merget, and R. Schroeder. 2006. The effect of behavioral realism and form realism of real‐time avatar faces on verbal disclosure, nonverbal disclosure, emotion recognition, and copresence in dyadic interaction. Presence 15(4): 359–372.
Cuellar, M., and J. L. Mashaw. 2016. Regulatory decision‐making and economic analysis. Working Paper Series, Paper No. 525, John Olin Program in Law and Economics, Stanford Law School.
Ebner, N., and J. Zeleznikow. 2016. No sheriff in town: Governance for the ODR field. Negotiation Journal 32(4): 297–323.
Engstrom, D. F., and D. E. Ho. 2020. Algorithmic accountability in the administrative state. Yale Journal on Regulation 37: 800–854.
Engstrom, D. F., D. E. Ho, C. M. Sharkey, and M.‐F. Cuellar. 2020. Government by algorithm: Artificial intelligence in federal administrative agencies. NYU School of Law Research Paper No. 20‐54.
Fisher, R., and W. Ury. 1981. Getting to yes: Negotiating agreement without giving in. Boston, MA: Houghton Mifflin.
Fox, J., S. J. Ahn, J. H. Janssen, L. Yeykelis, K. Y. Segovia, and J. N. Bailenson. 2015. Avatars versus agents: A meta‐analysis quantifying the effect of agency on social influence. Human‐Computer Interaction 30(5): 401–432.
IEEE Standards Association. 2016. Ethically aligned design: A vision for prioritizing human well‐being with autonomous and intelligent systems, version 1. Available from https://standards.ieee.org/industry‐connections/ec/ead1e‐infographic.html.
International Council for Online Dispute Resolution. 2017. ICODR standards. Available from https://icodr.org/standards/.
Katsh, E., and O. Rabinovich‐Einy. 2017. Digital justice: Technology and the internet of disputes. New York: Oxford University Press.
Katsh, E., and J. Rifkin. 2001. Online dispute resolution: Resolving conflicts in cyberspace. San Francisco, CA: Jossey‐Bass.
Katsh, E., J. Rifkin, and A. Gaitenby. 2000. E‐commerce, e‐disputes, and e‐dispute resolution: In the shadow of “eBay law.” Ohio State Journal on Dispute Resolution 15(3): 705–734.
Katsh, E., and C. Rule. 2016. What we know and need to know about online dispute resolution. South Carolina Law Review 67(2): 329–344.
Katsh, E., and L. Wing. 2006. Ten years of online dispute resolution (ODR): Looking at the past and constructing the future. University of Toledo Law Review 38: 101–126.
Kesan, J. P., and R. C. Shah. 2001. Fool us once shame on you—fool us twice shame on us: What we can learn from the privatizations of the internet backbone network and the domain name system. Washington University Law Review 79(1): 89–220.
Liyanage, K. C. 2012. The regulation of online dispute resolution: Effectiveness of online consumer protection guidelines. Deakin Law Review 17(2): 251–282.
Martinez, J. K. 2020. Designing online dispute resolution. Journal of Dispute Resolution 1: 135–149.
McLuhan, M. 1964. Understanding media. New York: McGraw‐Hill.
National Center for Technology and Dispute Resolution. 2016. Ethical principles for online dispute resolution. Available from http://odr.info/ethics‐and‐odr/.
Raymond, A. H., and S. J. Shackelford. 2014. Technology, ethics, and access to justice: Should an algorithm be deciding your case? Michigan Journal of International Law 35(3): 485–524.
Rule, C. 2017. Keynote, Workshop on Private International Online Dispute Resolution. Stanford, CA: Stanford University.
Russell, S. 2019. Human compatible: Artificial intelligence and the problem of control. New York: Viking Books.
Scherer, M. U. 2016. Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology 29(2): 353–400.
Schmitz, A., and L. Wing. 2021. Beneficial and ethical ODR for family issues. Family Court Review.
Sela, A. 2018. Can computers be fair? How automated and human‐powered online dispute resolution affect procedural justice in mediation and arbitration. Ohio State Journal on Dispute Resolution 33(1): 91–148.
Susskind, R. 2020. Online courts and the future of justice. Oxford: Oxford University Press.
Wing, L. 2016. Ethical principles for online dispute resolution: A GPS device for the field. International Journal of Online Dispute Resolution 3(1): 12–29.