Abstract
The randomized control trial and pre/post research designs are widely used in applied research and provide shared standards for mediation evaluation research. These approaches have many benefits, particularly for evaluating whether mediation, as an experimental intervention, works.
Scholars and practitioners, however, want to know not only whether mediation can work as expected but also how it works in a range of real‐world contexts over time. In these contexts, ideal experimental conditions are less likely to occur. Challenges include such circumstances as the following: the number of cases suitable for statistical comparison is insufficient; researchers lack control over how mediation is implemented; researchers lack clear, objective variables to measure; and the variability of mediation outcomes when studied over time makes it difficult to draw conclusions about them.
My research has involved each of these challenges, and I have used ethnographic research as a way to evaluate mediation in these contexts. In this article, I explain ethnographic methods and present two studies as examples of mediation evaluation research that began with a standard program evaluation design, and then incorporated ethnography to allow more complete data collection and analysis. My purpose here is not to argue that experimental methods in mediation evaluation research should be displaced but rather to demonstrate how ethnographic methods can be used when the conditions necessary for standard evaluation cannot be met. The two studies used as examples are from an elder mediation study in Ghana and a family court mediation study in the United States.
Introduction
As mediation has become a professionalized practice, mediation researchers have focused on establishing and evaluating the state of that practice. For example, in their recent literature review, James Wall and Timothy Dunne (2012) assessed how well mediation research has helped identify best practices. They argued that mediation researchers should do more testing of and less testifying about the benefits of mediation. They also called for more empirical research to test best practices with control groups and for direct observation of actual cases.
At the same time, mediation practice has grown to extend beyond the boundaries of what has been explored by existing empirical research. My own mediation research in Ghana, for example, involved court reform projects funded by the United States Agency for International Development (USAID) that inspired an elder advocacy organization to implement mediation as a component of its legal rights program. In U.S. family courts, mediation was originally tested as an alternative to litigation to resolve child custody disputes. Pilot efforts were so successful that mediation is now commonly mandated through family courts as a court process run by court staff; and caseloads have expanded from first‐time divorces to post‐divorce and nonmarital cases.
Practitioners and researchers have increasingly questioned the effectiveness of mediation in these expanded circumstances, but few have undertaken any empirical studies, perhaps because such studies are so logistically difficult. Research on existing mediation programs, for example, requires receiving multiple permissions from agencies that run programs, mediation referral sources, mediators, and disputants.
Additional challenges for researchers, which I explore in this essay, are that larger social contexts may influence mediation processes and outcomes, and programs may use mediation in ways researchers have not anticipated. While a standard evaluation research design can assess how well the mediation program fits best practice standards, it may not fully capture how else the program works in practice given local, cultural contexts.
In this essay, I describe how the flexible, exploratory approaches of ethnography can enrich our understanding of mediation practice. Using ethnographic methods can facilitate research access because ethnographers spend extended time in the field, which enables them to develop rapport and trust before seeking further data collection. Immersion in those contexts as participant‐observers also enables researchers to interpret how social contexts influence mediation processes and outcomes.
A key difference between standard evaluation research methods and those of ethnographic field research involves what H. Russell Bernard (2002) called “individual attribute data” versus “cultural data,” or what might be called operationalized versus locally experienced data. Individual attribute data are collected on defined units of analysis using variables that researchers operationalize before data collection begins. For example, mediation is typically operationalized according to established professional models and practice standards. Mediation evaluation research then uses mediation cases or case sessions as the unit of analysis and operationalizes key variables under study by defining such variables as process steps and success measures with reference to the professional literature.
Ethnographic field researchers, however, are often more interested in cultural than individual data. Cultural data refer to general patterns of social norms and behaviors, such as how culture or local context may influence how participants (including mediation program staff, mediators, and disputants) use and interpret mediation. Cultural data can help explain why programs, when they are applied in various settings, may seem different or produce different outcomes than existing theory and/or best practice models would suggest. This analysis, in turn, can be used to challenge and enrich existing mediation models and to develop new training and program changes that can ensure best practice standards are met.
In this article, I first explain how standard mediation evaluation researchers collect individual data for prescriptive analysis, and then explain how ethnographic field research collects cultural data for descriptive analysis. The former type of research has been instrumental in establishing the field and in codifying and upholding standards of best practice. The latter, I argue, can be used to determine something different: not what ought to happen so much as what is actually happening in the field as more people become involved in the process as both practitioners and disputants, and as programs are adopted by and adapt to organizational and institutional contexts.
In the article, I illustrate these ideas by describing two research projects in which I was involved. The first was a study of mediation and elder advocacy in Ghana; the second drew on my experience in a U.S. family court program. I argue that broadening our methodology for undertaking evaluative research beyond the “gold standard” of the randomized control trial (RCT) to include the use of ethnography can help us answer basic questions about how mediation is put into actual practice in the field and how that practice affects both mediators and the disputants they seek to help.
The Randomized Control Trial and the Value of Experimental Control
The “gold standard” for research evaluating the effectiveness of practices (i.e., “evaluative research”) in fields such as alternative dispute resolution (ADR) is the RCT study (Reynolds 2008). In general, an RCT randomly assigns a large, homogeneous sample of subjects to two groups: an experimental group, which receives the intervention, and a control group, which does not. Researchers apply the intervention to the experimental group and compare the outcomes between the two groups. Statistically significant differences — that is, those that statistical analysis indicates cannot be explained by chance — indicate what impact the intervention had on the experimental group and whether intended outcomes were achieved. For example, in establishing the field of divorce mediation, Robert Emery, David Sbarra, and Tara Grover (2005) conducted an RCT study in which couples filing for divorce were randomly assigned to participate in mediation or not. The researchers compared differences in outcomes, cost, the time it took to reach a settlement, rates of compliance with the settlement, and the disputants’ satisfaction. They found positive results for mediation, which were used to establish the efficacy of mediation in divorce settlement. A follow‐up study conducted twelve years later indicated that the positive results had been sustained (Emery 2012).
The RCT research design requires a great deal of preparation and control (Bickman and Rog 2009). For this reason, a research lab is often the ideal research site. Researchers can then control the intervention models and applications, as well as subject recruitment and sampling. Accurate statistical analysis requires a sufficient number of cases to compare using objective variables. Some variables may already be objective, such as dollar amounts; others—such as “satisfaction” or “agreement”—are more indeterminate and subjective until researchers find a way to operationalize them. For example, the variable “mediation agreement” could be operationalized in several different ways: parties sign a full agreement and still honor its conditions six months later, or parties reach verbal agreement by the end of the mediation session, or parties achieve resolution on any one issue in the dispute — as well as a range of options in between. Objective, measurable experimental research requires that researchers make some judgment calls in defining variables, often based on measures established through previous published research.
A lab setting is ideal when the research seeks to test a particular theory or the efficacy of a model using predetermined measures of success, such as settlement of the dispute. In the lab, researchers can shape interactions through structured simulations and then record, observe, and perhaps interview subjects to evaluate results. The research roles are clear, and researchers maintain boundaries by following ethical protocols approved by human subjects review boards. Objective distance helps reduce researcher bias. Deductive reasoning and operationalization of variables yield individual attribute data that can be measured and analyzed using statistics. If samples are sufficiently large, researchers can assess whether results are statistically significant or merely due to chance.
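To make this logic concrete, the sketch below shows how operationalized individual attribute data might be compared across experimental and control groups. It is a minimal illustration only: the group sizes, agreement rates, and settlement times are invented for the example and are not drawn from any study cited in this article.

```python
# Hypothetical sketch: comparing an operationalized outcome ("agreement reached")
# between an experimental (mediation) group and a control group.
# All numbers are invented for illustration only.
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

rng = np.random.default_rng(seed=42)

# Randomly assign 200 hypothetical cases to mediation or control.
n_cases = 200
assignment = rng.permutation(np.repeat(["mediation", "control"], n_cases // 2))

# Simulate operationalized outcomes under assumed (invented) effect sizes:
# "agreement" is a binary variable; "days_to_settle" is a continuous one.
agreement = np.where(
    assignment == "mediation",
    rng.random(n_cases) < 0.60,   # assumed 60% agreement rate with mediation
    rng.random(n_cases) < 0.45,   # assumed 45% agreement rate without it
)
days_to_settle = np.where(
    assignment == "mediation",
    rng.normal(90, 30, n_cases),  # assumed faster settlement with mediation
    rng.normal(120, 30, n_cases),
)

# Chi-square test on the binary outcome: is the difference in agreement
# rates larger than chance alone would explain?
table = np.array([
    [agreement[assignment == "mediation"].sum(),
     (~agreement[assignment == "mediation"]).sum()],
    [agreement[assignment == "control"].sum(),
     (~agreement[assignment == "control"]).sum()],
])
chi2, p_agreement, _, _ = chi2_contingency(table)

# t-test on the continuous outcome (time to settlement).
t_stat, p_days = ttest_ind(
    days_to_settle[assignment == "mediation"],
    days_to_settle[assignment == "control"],
)

print(f"Agreement-rate difference: p = {p_agreement:.3f}")
print(f"Time-to-settlement difference: p = {p_days:.3f}")
```

The point of the sketch is not the particular tests but the workflow an RCT presumes: random assignment, variables defined in advance, and a comparison whose interpretation rests on sample size and statistical significance.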
Benefits and Limitations of Standardization in Nonstandardized Research Locations
The RCT and related research designs have many benefits. Researchers (Emery, Sbarra, and Grover 2005; Emery 2012) frequently choose this method to investigate whether mediation can work in a new field of practice. Using standard measures makes studies easier to replicate and compare. Such studies can then establish a track record of success (one hopes) and generate ongoing questions for further research. Limiting variables and the scope of research to comparing the circumstances of subjects before and after the intervention also helps determine linear causation — that is, did this intervention cause the desired results? Laboratory research enables researchers to more easily follow best practice models for research design and achieve research fidelity because it gives them more control over sampling and experimental treatment. Standardizing research practices makes it easier to replicate them and compare results over time and across studies. In this way, the evidence base that demonstrates the benefits of mediation can be established (Greiner and Matthews 2016).
At the same time, now that professional mediation is practiced so widely around the world, we want to know not only whether interventions can work but also how they work in a range of real contexts over time. This includes the wide range of mediation practices that may deviate from professional models that have been tested through lab research. Scholars in disciplines that use both lab‐based and more natural settings distinguish between what is learned in the lab versus what is learned “in the wild.”1 As the latter phrase implies, researchers give up a lot of control when research moves into more natural settings. The less control the researcher has over the experimental conditions, the more difficult statistical analysis and verification of results become.
It is messy and challenging to allow real‐world contexts and contingencies to affect research results. When researchers analyze mediation in contexts in which random assignment is not possible, caseloads may not be sufficient to allow statistical comparison, researchers cannot control how the intervention is implemented, and larger personal, social, and legal contexts affect mediation processes and outcomes over time. But such efforts can yield benefits. My response to managing such messy research contexts as a cultural anthropologist has been to apply the “gold standard” of my own discipline, which is ethnography.
Ethnographic Research Design and Methods
In comparison to RCT research design, ethnography is both more flexible and vaguer. Rather than encompassing a clear set of steps, ethnography can refer to the overall research design, data collection methods, analysis, and/or the final written product. Flexibility in approach can mean revising hypotheses, research site boundaries, research roles, and data collection to best answer research questions; and the research questions themselves may change as new insights and hypotheses emerge from the data.
Finding order in the chaos and ensuring the quality of data collection and analysis can require extended periods of time in the field. The overall process is iterative rather than linear; that is, research steps are repeated in response to data collection and emergent analysis while in the field. For example, the researcher asks the same question in different ways to the same and new people over time through formal interviews, informal conversation, and direct observation. The endpoint is reached not through a particular number, such as sample size or number of observations, but rather through data saturation based on whether new data no longer challenges ongoing and overall data analysis (Padgett 1998).
The main research method is called “participant observation” because the researcher learns through direct participation and observation in local contexts, explicitly and intentionally engaging in conversation about related research topics and recording daily experiences and emergent insights. The researcher identifies and analyzes data by reviewing these notes periodically, engaging those in the field as well as experts on general research topics, and consulting the literature. Hypotheses may emerge through this process, and those hypotheses can help determine the direction of further data collection and analysis.
Rather than rely on standard protocols or statistics to ensure research quality, ethnographers use reflexivity. Michael Burawoy (1998: 5) described reflexivity as comprising “three dialogues.” The first dialogue is between observers and interlocutors; the second is “between local processes and extralocal forces,” experienced through field immersion; and the third is with theory, comparing results to academic theory and previous research. Asking the same or similar questions over time helps minimize researcher bias because the researcher continually invites multiple perspectives and critical responses (Burawoy 1998). Researchers may later test published results by sharing them with research participants in “valedictory revisits” (Burawoy 2003: 672).
One danger of field research is that, over time, research roles can become blurred. The researcher begins as an often naïve observer who depends upon culture experts in the field for better understanding. Ongoing and often informal contact between researchers and research participants raises ethical and practical dilemmas that the objective research methods of RCT‐type studies avoid. Much has been written in the literature about how to achieve the trust and rapport that make accurate (and informed and insightful) data collection possible without overstepping ethical boundaries or “going native” and abandoning outsider understanding for insider presumptions (Crampton 2015). I discuss these risks and limitations below.
The process and its aspects (research role, data collection, and reflexive analysis) have developed in part as responses to the critiques of research respondents and others living in research sites. Historically, anthropologists have conducted ethnographic studies by traveling long distances to live for an extended period of time in remote villages. The goal was to understand local meanings and practices as they circumscribed what outsiders presumed to be a self‐contained culture or social system (Joseph 2002). Learning another culture from an insider or “emic” perspective required immersion in the daily lives of the people through participant observation (Schensul and LeCompte 2013). At the same time, research roles were objectively and clearly defined because the anthropologist would leave the field to return to his or her outsider perspective as an expert with “etic” analytic understanding of that culture even as insiders were considered experts of the emic perspective. Because culture in that village, for example, was objectified as data that could be recorded, analyzed, and compared with the self‐contained culture found in other villages, data collection methods were the same as found in standard evaluation studies, such as interviews, direct observation, and surveys.
These methods are still used today but research roles are often more blurred, our presumptions regarding objectivity have changed, and our research boundaries have extended to accommodate cross‐cultural understanding. These changes arose through decolonization and globalization, which reduced the relative isolation of villages and allowed people who lived in them (and who sometimes became anthropologists) to react to published research. Although researchers had presumed themselves to be objective observers, neutrally recording the lived experiences of others, critics argued that data collection had been filtered through the biases and presumptions of Western‐trained scholars (Owusu 1978).
These biases extended to the boundaries of basic categories of research, such as culture, law, and kinship (Schneider 1984). For example, studies of law, once limited to comparative studies of formal legal systems, were broadened to include what we interpret as informal dispute resolution (Moore 2001). Today, the anthropology of law includes the study of disputes as they develop and are addressed within both formal and informal processes. Anthropologists thus seek to study culture — and other social phenomena, including mediation — not as a comprehensive description of a thing that is static and self‐contained but rather as always “on the make” (Strathern 2000: 1). This phrase has a double meaning: it suggests that culture is always “in the making” even as it “makes” (e.g., structures) the ways in which people interact, interpret, and reach their own conclusions about problems and solutions.
The ethnographer's job, therefore, is not so much to record what can be objectively collected as the components of culture (or cultural practice such as mediation) but rather to continually sift through ongoing observations, conversations, and experiences in dialogue with local people to sort through complexities and develop an informed understanding that is both authentic to local knowledge and also makes a contribution to broader academic scholarship. Rather than achieving statistical validity following individual data collection, the goal is to achieve the interpretive validity that a researcher develops over time.
Ethnography is thus a highly inductive method in which the researcher invites research participants to engage directly in the research process. Interviews may be recorded and transcribed if this is culturally appropriate. The quality of data collection depends upon developing trust and rapport between researchers and their subjects to ensure authentic answers and candid responses to initially naïve questions based on outsider expectations. This is in contrast to the more rigid roles and objective distance between researchers and research subjects that control the quality of RCT studies. Over time, the researcher may learn that what seemed important based on available literature is different or interpreted differently in this local setting. In fact, ethnographers typically describe the surprises they encountered during data collection to show how they had to adapt their understanding to gain insider perspectives.
How to plan for and create opportunities for those surprises? Extended participant observation should allow for candid and unexpected moments. The rather vague term often used — even in textbooks — is “hanging out” (Bernard 2000: 333). Determining how to “hang out” in a research site can seem awkward to those used to more explicit methods. By definition, there is no specific task to complete or way of knowing if one is gaining the experience and insight that will help address research questions. Ideally, researchers find ways to embed themselves such that insiders eventually engage with them as a normal course of action within that setting. Anthropologists in the field keep field note journals to simply record daily experiences that may later prove useful as they develop insider knowledge. They may also write brief analytic memos to explore their initial responses to research questions in the light of their emerging understandings about the local culture (Corbin and Strauss 2014).
A key challenge for the ethnographer is to become aware of what linguistic anthropologists refer to as the “unmarked” and “marked” categories we bring to the field (Avruch 2003). That is, each of us has developed an internal representation of external reality as we are socialized within different professional and personal contexts. Our internal representations are “unmarked” to us because they are normative, so we may not be aware of them. What seems “marked” to us are those aspects of social reality (human behaviors, norms, attitudes, expectations, understandings) that significantly differ from our expectations, most noticeably those that directly challenge our presumed constructions of what constitutes universal or natural behaviors and attitudes. This contrast between our “unmarked presumptions” and what we encounter in local contexts can generate those surprises I noted above.
Things that seem to be cultural data at first, therefore, are not necessarily culturally significant to insiders in the society we seek to understand, but they are “marked” in comparison to our own expectations. For example, my research question in Ghana centered on how well the American, interest‐based, problem‐solving mediation model worked in local contexts generally and in the context of elder advocacy services in particular. Within U.S. mediation models, age is generally not a marked category except within the specialty of elder mediation. To directly transplant elder mediation as practiced within the United States to Ghana requires using age as a trait the way it is used in the United States, where those regarded as “old” are considered to need assistance to resolve conflicts or to gain social supports. In Ghana, however, age is marked and unmarked differently. For example, Ghanaians may be more likely to consider age as a factor in evaluating the potential expertise of the mediator, who may be chosen for his or her demonstrated life experience in resolving conflicts. And in Ghana age may stratify disputants differently than in the United States because younger generations are socially obligated to publicly defer to older generations. I learned this in part through explicit data collection, that is, by conducting interviews and observing mediation. I also learned this through “hanging out.” Extended time in the field challenges the researcher's internal representations of reality often enough to help him or her distinguish which cultural presumptions are marked and which are unmarked from local perspectives.
Overall, ethnographic research standards — what creates validity and controls quality — differ significantly from those of RCT research. While RCT depends on control and structure in experimental research processes, ethnography requires the researcher to be flexible and even vulnerable as a guest or “interlocutor” in local communities and contexts (Behar 1996; Davies 2008). Research roles can be complex because the researcher is ultimately responsible for data collection and analysis but does not want to direct the process so tightly that he or she loses true insider perspectives.
To make an informed analysis, the researcher must have flexible expectations regarding data collection and analysis and allow local understanding to redirect the process when appropriate. The researcher could arrive in the field with one project in mind only to realize that this project was based on scholarly presumptions that did not hold in this particular context. This contrast was made clear to me in graduate school when I successfully defended a standard mediation evaluation project in order to secure a research grant. At the defense, however, an anthropologist on my committee commented that while this was a nice proposal, it would be rendered useless once I was in the field and immersed in data collection. In reality both that study and the child custody mediation study retained elements of standard program evaluation design while also incorporating ethnographic methods. Ultimately, my choice of overall research design and data collection methods was based on how to best identify the data needed to answer the research questions that endured and those that emerged through initial data collection.
Study Project Examples
Both study projects began as standard program evaluation projects. That is, I began both with the goal of evaluating a specific example of mediation practice and measuring any changes caused by that mediation by comparing cases before and after the use of mediation. Like other similar studies, a central research question was whether mediation reduces and/or resolves conflict to settle cases and/or improve relationships among parties. I also proposed common success measures, such as agreement and party satisfaction. Among the multiple models of mediation available, I focused on an interest‐based, problem‐solving model because that was the best description of the dominant model used in both research settings.
Although I began each research project with this standard approach, the realities of the research contexts in each case complicated data collection and analysis. In the study of elder mediation in Ghana, my research adapted to a lack of cases and an emergent question of whether the mediation model in the study would be innovative or merely a professionalized version of local practice. In the second study, of family law mediation in the United States, mediation evaluation was complicated by my need to better understand the complex personal concerns of disputants that I had not found in my literature review of previous research.
In both studies, I retained program evaluation standards as much as possible and then also used ethnographic methods both to supplement and to complicate findings. For example, I began investigating issues beyond mediation cases, case sessions, and mediation programs. In the first study, the overall research question changed from determining whether a U.S. mediation model worked in Ghanaian contexts to determining how the U.S. model was both culturally similar to and distinct from mediation as locally practiced. In the second, the research question changed from evaluating how well mediation worked in resolving child custody disputes according to mediation best practice standards to an effort to understand the meanings and practice of mediation from the perspectives not only of mediators but also of court staff, legal professionals, and disputants.
Study One: Elder Mediation in Ghana
Background
Elder mediation is a relatively new specialization within alternative dispute resolution (ADR) practices. In the 1980s, elder law attorneys and advocates in the United States piloted mediation as a way to better resolve conflicts that had resulted in adult guardianship cases (Wood and Kestner 1989; Parsons and Cox 1997). In particular, they were concerned that such case filings often arise from family conflict over “what to do with Dad” (or Mom) rather than because of an older adult's actual incapacity. Over time, elder mediation has expanded to include mediation cases involving older adults with an emphasis on elder law cases and family caregiving conflicts in which older adults can be caregivers (such as grandparents raising grandkids) as well as care recipients (Crampton 2007; Barry 2015). Studies of elder mediation have reported positive results, demonstrated by, for example, satisfaction and settlement rates (Butterwick and Hommel 2001).
In addition to the United States, elder mediation projects have been piloted in England, Australia, and Canada (Craig 1997). In 2003, I was asked by the director of an elder advocacy organization in Ghana to conduct an elder mediation training. He wanted to add mediation to a “legal challenge” program funded to uphold the rights of older adults in Ghana. As he explained, however, most rights violations happened within families, and the use of court processes and lawsuits would be “quite alien to the family system.” As a “parachute consultant” (Yarn 2002: 303), I used a common, interest‐based, problem‐solving mediation model for training the “team,” which comprised volunteers and staff chosen by the elder advocacy organization. Because mediation is already part of everyday cultural practice in Ghana, I spent time during the training comparing the model I brought with the lived practices of trainees through observing their mediation of mock cases and debriefing. Together we compared the unmarked and marked practices of professional and local mediation. I returned a year later to conduct research assessing whether elder mediation could be usefully applied there in the everyday contexts of ADR programs.
The Study
I conducted ADR evaluation research in Ghana over nine months during 2004 and 2005. I was unable to complete the project I had initially planned, to assess mediation cases, because neither the mediation team nor the program to which it was supposed to have been added was able to generate cases. I took this as an opportunity to refocus my research question from how well the U.S. model worked in a particular program to a broader question of cultural compatibility. I approached this question from two directions. First, how does a U.S. model work in a Ghanaian context? And second, what is the cultural context in Ghana in which mediation is already part of daily practice, of what social scientists call “lived experience”? And given this cultural context, what was marked and unmarked about the U.S. mediation model?
To answer the first question, I collected proxy data. That is, I interviewed experts familiar with both local mediation and professional mediation practices. I also found additional research sites in which Ghanaian professionals had been trained in the same U.S. model I had taught and were applying it in different organizations. I observed cases and interviewed mediators afterwards. I also continued working with my mediation trainees by creating a case log of mediation cases they had completed as part of local, cultural practice. The process of recording cases helped us debrief them so that we could identify what was unmarked and marked to the trainees about the U.S. model. In particular, creating paperwork, such as forms, schedules, and logs, was new to them. As one culture expert explained, formal mediation training provided a “scientific coloring” to mediation as generally practiced in Ghana.
To answer the second question (about cultural context), I needed to hang out. I spent time at the elder advocacy organization, participating in other agency programs and engaging staff in conversations about their work. Over time, I realized that forms of mediation were practiced in this setting that were unrecognized because they were outside of a formally funded program. For example, I was told about an incident at one of the adult day centers that generated conflict among staff and older adults who attended the center. In response, they convened a “panel” with representation from both groups. This panel listened to the parties, counseled them about the case, and helped them reach a resolution. In addition, I learned that the mediation training itself was used by the organization's director to help address conflict by including staff, volunteers, and older adults who were known as good mediators as well as those who were sometimes a source of conflict. These incidents and insights became the data that I recorded as field notes, asking follow‐up questions as they occurred, inviting challenges to my emerging insights through “member checking” (Padgett 1998: 190), and writing up results in analytic memos.
I also hung out outside of the agency. For example, I traveled to a village just outside the capital that was part of the legal challenge program. I stayed in the home of a respected leader in the village, who also referred me to a research assistant. I conducted a modest survey research project about the villagers’ experiences growing older. Over time, I learned how my research assistant had developed her own form of elder advocacy intervention, which included mediating family conflicts. In following her around as she counseled older adults and families, I realized that such counseling served not only to improve caregiving relationships but also to sustain my assistant's status as a valued member of the local community. She was an example of what the cross‐cultural mediation literature refers to as a respected elder. And her work provided interesting data: I learned that the ability to mediate and resolve conflicts is one way in which people demonstrate maturity and earn a place of respect in local communities.
While these data were useful, they were based on observation without much direct participation. My understanding of local mediation as part of lived experience was enriched when I needed mediation intervention myself. This was definitely a surprise. The conflict arose through my difficulty in determining how much to pay the assistant and compensate research subjects for the small survey. I received conflicting advice from professors in different disciplines about whether to give cash or gifts, and how much. Negotiating directly with my assistant was difficult, and I sought advice from my host, who had introduced us. Another complication was that I wanted to compensate my assistant not only for her work on the survey itself but for all of the cultural data she had helped me to learn as I hung out with her. I wanted to be respectful of her time, effort, and position in the village. As I tried to sort this out, relations became strained until my host complained that both my assistant and I had been seeking him out when we should meet “to thrash things out.”
I then realized this would be my first mediation as a disputant. As a mediator in the United States, I had been trained to mediate only for disputants I did not already know, to avoid any conflict of interest. In this situation, however, my host was a good choice of mediator because he was already known and trusted by both parties. And my concern about the outcome of the dispute was minute compared to my concern about any damage caused to the relationship.
The mediation itself was short and was the only time I saw my assistant demonstrate anger toward me. She demanded a sum of money that was ironically the amount I had been planning to pay. In other words, there was little material dispute but there had been a breakdown in understanding and trust, which was mended with the help of a third party who was already familiar with both parties as well as with the case. After the mediation, I perceived no residual tension. In my reflexive analysis, I concluded that the mediation had allowed the conflict to be expressed in a socially acceptable way in order to then settle the case while allowing both sides to save face. As Victor Turner described these events in anthropological research, it was a “social drama” (Turner 1975: 15) in which relationships were renegotiated and underlying cultural values affirmed. I turned this experience into data by recording it in my field note journal, by engaging in ongoing reflection about the experience, and by consulting with culture experts and reviewing academic literature about cultural forms of mediation in Ghana.
Study Two: Child Custody Mediation in the United States
Background
Mediation was first pilot‐tested in family court as a means to reconcile couples and then to help them separate more amicably. The focus was on nuclear families in which this was a first divorce for both parties. As I noted previously, one of the foundational studies of family court mediation was an RCT study, which helped establish both the efficacy of mediation in these contexts and what have become the standard measures of success: satisfaction, cost, and settlement.
Family court evaluation literature continues to use a comparative frame. That is, a central question is whether mediation is better for parents than the court process — as if these two processes can and should be separated. This question presumes that “court” equates to adversarial and attorney‐based processes while mediation allows parties to negotiate openly and therefore more freely and amicably.
In addition, mediation scholars have begun to consider complications that disputants bring to mediation, such as mental health and substance abuse problems, that were not factored into earlier, experimental studies (Emery 2012). Continuing contextual considerations in terms of relationship dynamics include domestic violence, “high conflict” patterns of interaction, and the intensely personal emotional and psychological reasons why people choose to marry and raise children in today's social contexts (Salem 2009; Crampton 2015).
Because of the contextual complexities that can potentially affect mediation processes and outcomes, I chose to expand the original research design of the family court mediation case study by adding ethnographic data collection and analysis, and by extending the case study beyond observation and recording of individual mediation sessions within individual cases. This helped me analyze and interpret the experiences and insights of family court professionals and disputants as mediators and court staff sought to bring rational decision making into an often emotional and highly subjective enterprise.
Research Setting
My study, which I began in 2012, focused on the Milwaukee County Child Custody Mediation Program. In Wisconsin, mediation is mandatory for parents who file disputes over child custody or placement. Custody is legal decision making regarding such areas as health and education, while placement refers to child residence as measured by how many nights per week each child stays with a parent. In the Milwaukee program, parents referred to mediation must first attend an evaluation session, which serves to screen for domestic violence or substance abuse issues that would make mediation inappropriate, before they agree to continue to mediation.
Parents who choose mediation pay a one‐time fee of $200, which can be waived for low‐income litigants. If mediation fails or is otherwise terminated, a guardian ad litem (GAL) is appointed to investigate and make a recommendation to the court on behalf of the children. The majority of case filings are “paternity cases” (PA), in which parents have not married and the first step is determination of paternity. “Family cases” (FA) include cases involving never‐married parents with no question about paternity, as well as divorce and postjudgment divorce cases.
Each year, approximately eight hundred cases are referred to mediation by court commissioners, judges, attorneys, or by parent request. The PA cases outnumber FA cases. Roughly half the cases in which parents agree to continue to mediation end in agreements. For example, according to mediation program statistics, in 2012 (the bulk of cases in my sample), 874 cases were referred to the mediation program. Of those cases, 386 were mediated, and of those mediated cases, 191 reached agreement (that number includes forty‐two cases in which parents reported that they reached agreement after court referral and/or evaluation session and before mediation sessions). This results in a 55 percent agreement rate for mediated cases, which is consistent with previous studies of family court mediation (Kelly 2004; Emery, Sbarra, and Grover 2005; Ballard et al. 2011).
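The program statistics do not spell out the calculation behind the 55 percent figure; one plausible reading, offered here only as my reconstruction, is that the forty‐two pre‐session settlements are excluded from the denominator of mediated cases:

\[
\frac{191}{386 - 42} = \frac{191}{344} \approx 0.555,
\]

or roughly 55 percent. If those forty‐two cases are instead retained in the denominator, the rate is closer to 49 percent (191/386).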
Study Design, Study Sample, and Data Collection
I began my research using presumptions found in the mediation evaluation literature. I thought my biggest problem would be gaining research access and consent, which, interestingly, turned out to be the case with mediators more often than with disputants. For logistical reasons, my research sample was a “convenience sample”; that is, it comprised subjects who were available and not necessarily representative. This was necessary in part because mandatory mediation programs do not allow for random sampling. Another complication was that only about half of the cases referred to mediation resulted in a full mediation session, and it was not uncommon for disputants to postpone or cancel. Thus, being available for mediation sessions and attending them to observe and record was time‐consuming. Consequently, the best way to collect mediation cases was through mediators who were willing to contact me after they had scheduled a session.
I collected data between 2012 and 2015. The study sample for mediation case evaluation comprises forty‐two mediation cases with sixty mediation case sessions and ten different mediators. (Each set of disputants met for one to four mediation sessions.)
The sample comprises thirty‐one FA cases and eleven PA cases. (As noted above, PA cases typically outnumber FA cases, so this proportion is clearly unrepresentative.) It includes fourteen first‐time divorces (33 percent), twelve postjudgment divorce cases (29 percent), and sixteen nonmarital cases (38 percent). I conducted interviews with each of the five mediators whom I observed during mediation sessions, and six interviews with mediators whom I observed during evaluation sessions. I also conducted thirty‐six interviews with parents. Of those, thirty were conducted with pairs of parents in mediation; one could only be conducted with one parent, and five were conducted with parents who never continued beyond the initial evaluation session.
I paid parents $30 per semi‐structured interview, but otherwise research participants were not paid. I audio‐recorded most of the mediation sessions, twenty‐seven parent interviews, and eleven mediator interviews. The agreement rate for the study sample was about the same as in the court program as a whole: that is, it included twenty‐three mediated agreements and nineteen mediation terminations, which is an agreement rate of 54.7 percent.
Because I observed cases directly and got to know mediators over time, I became skeptical of some of the basic unmarked presumptions I had been using to evaluate the program. My presumptions thus became my research questions. Two of these questions were: Is mediation truly voluntary? And, is domestic violence an unambiguous variable?
The data I used to address each of these questions came from my immersion in the court context as well as my exposure to the personal concerns that disputants brought to their mediation sessions but did not express there. Instead, I learned these through informal conversation and post‐mediation interviews with them. I hung out as an observer of court hearings that might lead to mediation referral, mediation evaluation sessions, mediation sessions, and follow‐up court hearings. The latter gave me an excuse for maintaining ongoing contact with disputants and enabled an extension of case study analysis as part of a court process that continued after mediation case closure.
I also immersed myself within larger mediation program contexts by attending a training session that mediators often take to qualify for the court roster, by attending roundtable discussions in which mediators discuss cases with a court commissioner and family court judge, and by observing mediation evaluation sessions. In all cases, I recorded field data through field notes and periodically summarized emergent insights through analytic memos (Corbin and Strauss 2014). I also collected program brochures, copies of program paperwork forms, court forms, mediation training materials, the program manual created for the court mediators, and study sample mediation case court files.
What I learned through additional hanging out and subsequent ethnographic data collection is that typical mediation evaluation research designs reflect the perspectives of mediation professionals. That is, for those whose only contact with family court is through the mediation program, the entire case consists of the court case referral, the process and outcomes of the orientation sessions, and then the process and final outcomes of any mediated sessions. By following cases over time, I gained additional relevant perspectives on mediation from court staff, legal professionals, and parents. The more frequent contact I had with parents by following their cases also helped me develop rapport and trust with them, which in turn helped me better interpret observed interactions during mediation as well as parents' interview responses following mediation sessions. In two cases, parents who had previously declined to be interviewed following mediation sessions changed their minds. As one said, “Now I will talk to you.” In general, parents were often frustrated and felt unheard and so were more than willing to talk to a sympathetic and uninvolved third party. One parent stated that the postmediation interview was more helpful than the mediation.
Postmediation interviews were especially helpful; I used them to check the interpretive validity of my mediation observations and to uncover information that parents had withheld from the session but considered important.
In general, parents clearly understood the intent of mediation. At the same time, it became clear that many experienced the process as involuntary. Despite programmatic efforts to fulfill the state law mandating mediation through the evaluation session and then make continuation to mediation sessions voluntary, most parents interpreted court referral as a requirement. And it was indeed true that commissioners and judges were free to assign greater responsibility for paying the GAL fee to a party who had refused mediation, and sometimes did. Although mediation is technically voluntary in terms of program structure and policy, participants often experience it as involuntary because mediation referral is embedded within a court process that remains prominent in the minds of parents even as they are told that mediation is kept separate and confidential.
Because the legal process as a whole was experienced as involuntary, mediation was experienced as involuntary by extension. That is, petitioners who filed (i.e., the parent bringing the other parent to court) often perceived that they had been forced by circumstance, particularly by the unreasonable behavior or attitudes of their ex‐partner. In response, the respondents brought to court often perceived that they had been forced by the petitioner. To then be referred to a process in which parties can voluntarily engage in direct negotiations was sometimes experienced as an inconvenience at best and a cost of time and resources that exacerbated conflict at worst. This was particularly true for parents who expected the court process to provide answers and resolution. For example, one parent in divorce proceedings argued that mediation was not really possible until financial negotiations were completed because the placement (the term for what was previously known as custody and visitation) schedule was tied to child support, which in turn would determine whether the other parent could afford to stay in the marital home. For these parties, mediation was only useful if it became part of court decision making, such as an enforceable court order. Voluntary yet unenforceable mediation did not address these interests.
Following cases over time also revealed the difficulty of operationalizing such variables as domestic violence. Two examples illustrate how domestic violence is a relationship dynamic that can change and be reinterpreted over time. In the first example, my observations began with the first mediation session. The parents were so amicable during mediation that they teased the mediator for having posted a list of ground rules. “Is that really necessary?” they asked. The follow‐up mediation session never took place, however, because of a domestic violence incident reported to the police. The domestic violence case was later dropped, and the mother complained that consequently she received less help with parenting and incurred additional legal costs. After the domestic violence case was withdrawn, the divorce was suspended so that the parents could attempt reconciliation. One attorney requested a copy of a draft agreement that had come from the first session in case the reconciliation failed, which it eventually did.
The mediated agreement was then used in negotiations for the final marital settlement agreement. The day of the divorce, the mother explained that there would always be disagreement about the domestic violence incident, and both parents suggested that the court should offer counseling to help save marriages. Assessing both domestic violence and agreement — their occurrence, their impacts, their meanings — in a case like this is complicated.
The second example is a post‐divorce case. During the evaluation session, the mediator screened for domestic violence and determined that the case was appropriate for mediation. After three amicable sessions, the parties signed an agreement. In post‐mediation interviews, both parents spoke positively of their mediation experience. They praised the mediator, comparing their experience favorably to previous mediation experiences. Two years later, however, they were back in court, and a GAL and then a social worker were appointed to assess the case. Both professionals later testified at trial that the case should not have been mediated because of one disputant's coercive controlling behavior. Neither party had reported differently when asked about domestic violence, but the professional interpretation of the dynamic had changed. Conclusions about whether the case was “mediatable” given the domestic violence, as well as whether the mediated agreement was successful, changed over time.
My results suggest that success may be a more elusive goal in the less‐than‐ideal parenting contexts created by domestic violence and/or substance abuse. How best to conduct case evaluation then becomes an ongoing question for research analysis. In this way, the ideal of mediation and the need for intervention persist, even as assessment becomes a “wicked problem”2 that may not have clear answers or solutions. While such cases can be frustrating to evaluate—and to mediate—ethnographic methods can be used to clarify underlying complexities and provide data that can be used to evaluate case outcomes over time.
Study Limitations in Ethnographic Research
Like all forms of research, ethnographic research has its limitations. In the case of mediation research, one limitation is that expanding the scope of data collection beyond mediation sessions requires much more time, and requests for participants' consent must be ongoing — participants can drop out at any time. It is difficult to reach “data saturation” because multiple and complex layers of personal and legal contexts become part of both the data collection and the analysis. In addition, the strengths of ethnographic methods can also be their weaknesses. The intimate engagement of ethnographers with their research subjects can help them build rapport and gain trust, which in turn generates greater participation and encourages subjects to respond candidly. But anthropologists can use personal connection to manipulate or coerce responses and can themselves be manipulated or co‐opted by their subjects, and this dynamic is a much greater risk in ethnographic research than in lab‐ or survey‐based research. Because the goal is to remove artificial boundaries and avoid canned responses, exchange is far less buffered by formal research protocols (Crampton 2015). This raises challenges involving both research ethics and internal validity (Schensul and LeCompte 2013). Although I used standard institutional review board consent protocols in each study, I believed that the intimacies of ethnographic research methods warranted additional reflection, member checking, and periodic breaks from field research to ensure ethical and accurate data collection and analysis (Bernard 2002).
Finally, ethnographic research typically examines a small study sample in depth or eschews the individual attribute data necessary for statistical analysis entirely. Without a sufficiently large sample size and clearly operationalized variables, statistical verification of results is not possible. This is a strength of RCT studies that ethnographic research typically lacks — unless one uses a mixed‐methods approach.
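To illustrate the sample size problem, the following sketch (a hypothetical calculation of my own, not an analysis of either study reported here) estimates how many cases per group a two‐group comparison of agreement rates would need before a modest difference could be reliably detected. The agreement rates and conventional thresholds are assumptions chosen only for illustration.

```python
# Hypothetical illustration of why small samples limit statistical verification:
# estimate the per-group sample size needed to detect a modest difference in
# agreement rates (60% vs. 45%, invented figures) at conventional settings.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.60, 0.45)  # Cohen's h for the assumed rates
analysis = NormalIndPower()
n_per_group = analysis.solve_power(
    effect_size=effect,
    alpha=0.05,        # conventional significance threshold
    power=0.80,        # conventional target power
    alternative="two-sided",
)
print(f"Approximate cases needed per group: {n_per_group:.0f}")
# With only a few dozen cases in total, a true difference of this size
# could easily fail to reach statistical significance.
```

The exact numbers matter less than the order of magnitude: samples drawn from a single program over a few years rarely approach what a well‐powered comparison would require, which is why mixed‐methods designs pair qualitative depth with whatever quantitative comparison the caseload allows.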
Conclusion
Mediation practitioners and those who use mediation services understandably prefer clear options, and mediation as a profession has grown, in part, because mediators offer it as an intervention with clear benefits. As James Wall and Timothy Dunne wrote, mediation has been proposed as a solution to so many problems that “Many scholars and practitioners consider mediation to be an industrial grade Swiss army knife — capable of accomplishing any task” (Wall and Dunne 2012: 10). Experimental methods can be an ideal way of testing whether mediation works. They require operationalizing variables in objective terms, clearly stating the problem to solve, isolating the intervention from other intervening factors, and defining clear evaluation measures. Just as entrepreneurs “incubate” innovations before fully launching in local markets, researchers can make sure that mediation works through experiments using a range of variables before implementation.
My interest as a field‐based researcher, however, lies in analyses of how interventions work over time as they are implemented across ever‐broadening fields of practice. As the study results presented in this article illustrate, contexts shape interventions such that mediation is not the same thing when experienced by different parties in different conflicts.
This variation becomes even more apparent when mediation is studied beyond individual mediation sessions and when outcomes beyond immediate case resolution are investigated. As the lens through which mediation is evaluated widens, the controls used to strengthen experimental research become less useful in developing a full picture of mediation process, meanings, and outcome. Evaluating the use of mediation in cross‐cultural, complex, and highly variable contexts requires expanding research questions from, “Does it work?” to “What else is going on that affects process, meanings, and outcomes?” Both individual and cultural data may help answer these questions. While RCT will remain the gold standard to evaluate whether a particular intervention can work to achieve hypothesized outcomes, ethnography can be used to venture where RCT cannot, by, for example, answering questions in complex, dynamic, and potentially chaotic intervention contexts. In other words, the more “wild” or “wicked” factors that might affect mediation process or outcomes, the more ethnographic research can be useful to help identify and interpret how complicating factors become part of mediation failure and success.
NOTES
Funding for the elder mediation research and writeup was provided by NIA #AG00017, the Hartford Foundation, University of Michigan's International Institute, Anthropology Department and Joint Program in Social Work and Social Science, and the Program on Negotiation at Harvard Law School. Funding for the family court research has been provided by the Office of Research and Sponsored Programs and the Center for Peacemaking at Marquette University.
1. In a 2008 conference on negotiation and communication sponsored by the Program on Negotiation at Harvard Law School, communication studies scholars distinguished between university‐based lab studies and those conducted where communication naturally took place “in the wild.” In cultural anthropology, researchers refer to “lived experience” in the field (that is, research sites outside of university labs).
2. “Wicked problems” are contrasted with those that are relatively simple, easily defined, and therefore easier to assess and address using shared understandings of problems, problem solving, and how to know when solutions have been achieved (see Honeyman, Coben, and De Palo 2010 as well as commentary by Bordone and Viscomi 2015).