Measuring student progress toward the achievement of learning outcomes in negotiation skills courses is a difficult task. Measuring the effectiveness of the delivery of course instruction can be equally challenging. This article proposes some answers to these questions: How can student performance in skills such as negotiation, leadership, and teamwork (sometimes referred to as “soft skills”) be effectively measured and accurately evaluated? What standards can be used to determine whether student performance is superior, adequate, or inferior? How can teaching effectiveness be evaluated to determine whether students are receiving the instruction necessary to achieve the course learning objectives?

This article describes how the authors collaborated on an adaptation of the assessment processes used in the U.S. Army Reserve Officer Training Corps (ROTC) cadet Leadership Development Program for use in an MBA course on negotiation skills. We report on a pilot effort that has demonstrated that the ROTC‐style leadership assessment process can be successfully adapted for use in a graduate course on negotiation and that it provides useful means for evaluating both individual student performance and overall course effectiveness. While our work involved a negotiation course, we suggest that the process could be adapted for use in other skills‐oriented courses such as leadership.

Negotiation is recognized as an important skill for managerial success. Most courses on negotiation are largely built upon an experiential approach to teaching. They typically include numerous simulations in which students play assigned roles and engage in negotiations with their classmates. Students learn by participating in the exercises, by participating in guided discussion in debriefing sessions, and by writing reflective papers (Weiss 2003; Tyler and Cukier 2005). Instructors who must evaluate such activities and assignments often struggle to accurately, consistently, and fairly measure progress toward the stated learning objectives with regard to both improvement in student performance and the effectiveness of the delivery of instruction (Page and Mukherjee 2009).

The problems faced by negotiation instructors in developing and implementing an effective grading process have been described by Mary‐Lynne Fisher and Arnold Siegel (1987):

For professors in skills or clinical courses, grading student work is a troubling and time consuming process. For example, when a professor grades students in negotiation courses on criteria other than the result of the negotiation exercises, she must design comprehensible grading standards, observe the student in real or simulated exercise, critique the student's performance, and grade based on those standards (395).

Instructors must contend with certain logistical issues in evaluating student performance in negotiations courses. A single instructor cannot possibly closely observe up to twenty simultaneous negotiations taking place during a single class session. Videotaping the negotiations as they take place and reviewing the recordings later can be effective (Fisher and Siegel 1987), but it is also time consuming and requires expensive video cameras and playback equipment. Significant advances in digital recording technology and a dramatic reduction in the cost of cameras, especially web cams, have resulted in some innovative methods for providing negotiation students with useful feedback on their in‐class negotiation performance (Peppet 2002; Williams, Farmer, and Manwaring 2008).

Instructors often require students to submit written reflection papers about what they experienced in the simulated negotiation exercises. In evaluating these papers, the instructor must try to determine whether the student learned the intended lessons of the exercise and how one student's experiences compare with those of the others. One problem with this approach is that a poorly written paper by Student A may obscure the fact that some profound learning occurred, while a well-written paper by Student B may convincingly present a description of meaningful learning that did not actually take place. It is difficult to grade the quality of the experience without being influenced by the quality of the writing, which limits the instructor's ability to evaluate the student's actual negotiation performance, at least in part, rather than merely the post hoc analysis of it.

In addition, instructors can use many different approaches to factor simulations into their grading schemes. Some negotiation instructors grade student performance without regard to the outcomes of the negotiation simulation exercises, while others do grade on the comparative results of such outcomes. Others use subjective standards, a pass–fail approach, or a combination of such methods (Page and Mukherjee 2009). Douglas Eder and Kathryn Martell (2005) explain the difficulty of grading students holistically:

Each teaching professor has a view of what (s)he wants students to accomplish. The view, even if it is an unconscious one, pictures ideal student achievements at the end of a particular class, a unit of instruction, or an entire curriculum. At the end of an assignment or a course, students who achieve the goals and “look like” the ideal tend to get As; those who look a bit less like the ideal get Bs, and so on. Because students (and professors) aren't perfect, achievement of goals is usually uneven. Students may excel in one area and be merely adequate in another. Nevertheless, professors record a single, holistic grade that tends to sum the student's performance and provide an overall judgment of merit (Section 3, 30).

As these observations illustrate, assessing student performance in learning "soft skills" is difficult. The assessment process described in this article provides a practical and effective method for evaluating student performance for the purpose of grading, as well as for assessing the effectiveness of instruction in achieving course learning objectives.

The word “assessment” has many different meanings and applications. The processes of both course assessment and program assessment are sometimes referred to by the term “assurance of learning” (Association to Advance Collegiate Schools of Business 2007).

We offer the following brief overview of assessment terminology and define our use of the terms for purposes of this article. As a starting point, New Horizons for Learning (1995), an education research network based at Johns Hopkins University, has provided the following general definition:

The Latin root assidere means “to sit beside.” In an educational context, it concerns the process of observing learning; describing, collecting, recording, scoring, and interpreting information about a student's or one's own learning. It is most useful as an episode in the learning process, part of reflection and autobiographical understanding of progress (2).

Assessment is often discussed in terms of “direct” and “indirect” modes. Direct assessment methods include evaluating student performance on measures such as tests, course‐embedded assignments, capstone projects, and portfolios (Walvoord 2004). Indirect assessment methods, which do not directly evaluate student performance but rather gather inferences about such performance, include student surveys, student interviews, alumni and employer surveys, and curriculum and syllabus analysis (Walvoord 2004).

Direct assessment tools and processes used at the individual performance level provide a mechanism for delivering feedback to learners. Instructors and teaching assistants, of course, assess their students, but students can assess each other and also conduct meaningful self‐assessment (McAdoo and Manwaring 2009). Effective assessment serves to enhance learner self‐awareness by providing benchmarks against which learners and instructors can measure progress (McAdoo and Manwaring 2009, citing Garrison and Anderson 2003). As summarized by Bobbi McAdoo and Melissa Manwaring (2009), effective assessment should be ongoing and public, connected to learner goals, incorporate feedback and suggestions for improvement, and include some self‐ and peer‐assessment (citing Mason 2002).

As discussed in this article, "assessment" may refer to evaluation of specific student performance in attempting to master a learning task or, depending on the context, to evaluation of the effectiveness of the delivery of instructional content within a course of instruction. For simplicity, the former will be referred to as "performance assessment" and the latter as "course assessment." A third, higher level of assessment, "program assessment," that is, the assessment of an entire degree program such as a BA, JD, or MBA program, is not covered in this discussion.

In considering the question of how to effectively grade student performance in skill‐based courses such as negotiation skills, the authors postulated that the basic framework and tools employed in the assessment system known as the U.S. Army Reserve Officer Training Corps (ROTC) cadet Leadership Development Program (LDP) might be adaptable to a business school setting. The LDP system appeared to us to be a promising model for several reasons:

1. it has been successfully used, updated, and refined over a period of many years;

2. it was designed and built specifically to assess leadership skills, which, like negotiation skills, are famously difficult to quantify and measure;

3. it can be applied in "real time" (during training exercises) through the efforts of cadets/students themselves;

4. it does not require the use of special technology to collect or analyze assessment data; and

5. finally, the LDP is based upon a method of assessment, well known in academic circles, called "primary trait analysis" (PTA) (Walvoord and McCarthy 1990).1

The PTA assessment process can be summarized as follows: an "embedded assignment" that is linked to one or more learning outcomes is graded, preferably by a group of stakeholder faculty, or else by the course instructor alone (Walvoord 2004). Student performance is measured against predetermined standards expressed as a scale (a rubric). Those who conduct the assessment are trained in a "norming session" in order to establish an acceptable level of inter-rater reliability, that is, to calibrate the application of the rubric's scale to the observed performance and ensure that the various assessors are applying the rubric consistently (Bean, Carrithers, and Earenfight 2005).
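To make the idea of calibrating inter-rater reliability concrete, the following minimal sketch (in Python, purely illustrative and not part of the PTA or LDP materials) computes a simple agreement statistic, Cohen's kappa, for two hypothetical assessors who rated the same set of observations on a three-level rubric. Values near 1.0 indicate strong agreement; low values would suggest that further norming is needed.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters applying the same categorical rubric."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired ratings"
    n = len(rater_a)
    # Observed agreement: fraction of items rated identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings ("E" exceeds the standard, "S" satisfactory, "N" needs improvement)
assessor_1 = ["S", "E", "S", "N", "S", "E", "S", "S"]
assessor_2 = ["S", "E", "S", "S", "S", "E", "N", "S"]
print(round(cohens_kappa(assessor_1, assessor_2), 2))  # ~0.53; values near 1.0 mean strong agreement
```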

The resulting scores not only provide a rational basis for grading student performance (performance assessment) but also generate data that can reveal patterns of strengths and weaknesses in instruction delivery (course assessment). By examining these data, the faculty can plan ways to address any identified weaknesses in student performance and craft any needed improvements in instructional methods or curricular design. Examination of the data offers an additional benefit: the opportunity to develop improvements to the assessment process itself.

Operational Overview of the Reserve Officer Training Corps Cadet Leadership Development Program

The U.S. Army ROTC cadet LDP has been used in the training of military leaders for more than a decade. The LDP is based on the view that “[l]eader development is a continuous process of training, assessment, and feedback. . . .” (Department of the Army 2009: 1).

In the LDP, leader development is a process of instilling desirable attributes and competencies into future leaders. The program is designed around the development of the individual cadet. In the ROTC program, students are referred to as “cadets” and the faculty are known as “cadre.” The cadre are military personnel working in the ROTC program on the college campus who are responsible for delivering military instruction. Upon successful completion of the program, a student enrolled in ROTC will receive a commission as a second lieutenant in the U.S. Army.

As part of the ROTC training process, individual training needs are identified and a plan of development is created for each cadet. The cadet's performance is assessed at various points in the process, and he or she receives feedback on this performance.

Each assessment is built around a structured leadership opportunity that provides the cadet with a specific role along with specified and implied tasks that must be accomplished (Department of the Army 2009). A specific time frame is provided for the cadet to plan, prepare, and execute a mission while being assessed.

Timely performance feedback provides cadets with the tools they need to improve their performances. These assessments include a summary of strengths and weaknesses and establish a plan for the cadets' improvement. Each opportunity builds on the previous assessment and raises the required level of performance for each subsequent leadership opportunity.

Components of the Leadership Development Program

The principles of the LDP are contained in the army doctrinal manual (Department of the Army 2006). This model is standardized and used on all ROTC campuses in the Leadership Development and Assessment Course. The LDP comprises five components: standardized assessment tools, individual focus, developmental feedback, structured leadership opportunities, and assessor qualifications (Department of the Army 2009).

The standardized assessment tool is the "leadership assessment report form," often referred to as the "blue card," which is printed on 5.5 × 8.5 inch blue-tinted cardstock. Each card includes a list of leader attributes and competencies known as "standardized leadership performance indicators" (LPIs). The cards include check boxes for all LPIs so that the assessor can record observations and rate cadet performance during structured leadership opportunities (various training exercises) according to a three-level rubric: "E" for exceeding the standard, "S" for satisfactory, or "N" for needs improvement. The LPIs provide the framework for every assessment of cadet performance. Not every LPI may be observed during a particular assessment, but no other indicators are used in their place.

Each of the leader attributes and core competencies comprises several dimensions. Attributes include a leader's character, presence, and intellectual capacity. Character is assessed by examining the leader's values, ability to empathize, and adherence to a "warrior's ethos." Presence is assessed by the leader's military bearing, physical fitness, composure, confidence, and resiliency. Finally, intellectual capacity is measured by evaluating mental agility, judgment, ability to innovate, interpersonal tact, and domain knowledge. Attributes, however, are only part of assessing leadership; competencies, what a leader actually does, are more readily measured.

Core leader competencies are measured by how effectively a cadet leads others — how effectively she or he extends influence beyond the chain of command, leads by example, and communicates. Also crucial to leadership is the ability to create a positive environment for the development of others and, of course, to successfully achieve the intended results of the mission.

Using the Leadership Development Program System

As set forth in the field manual (Department of the Army 2006), the LDP system is designed to be used during structured leadership opportunities, many of which are military exercises that take place outdoors in difficult environments and under challenging conditions. The person responsible for assessing a cadet's actions (a higher-ranking senior cadet or a cadre member) focuses on examining "critical behavior" observed in real time as the exercise takes place. Critical behavior is defined as behavior that significantly affects the leader's performance in current or future situations.

The assessor records critical behavior as he or she observes it. Accurate note taking is important in order to chronologically record the events and behaviors necessary to make an assessment. The next step is to classify the recorded behavior: which LPI (attribute or competency) applies to a particular behavior? For example, the assessor would note the behavior displayed by the cadet during a briefing of an operations order for an upcoming mission. This briefing demonstrates critical behavior because the success of the upcoming mission depends on how well the order is briefed. The assessor might note that the cadet demonstrated the attribute of "domain knowledge" by briefing the order using the proper format and covering the necessary information. The cadet might also demonstrate the core leader competencies of "communicates" and "creates a positive environment" by briefing in a clear voice and by making his or her subordinates feel encouraged and confident.

Once the assessor classifies the behavior, he or she then rates it, answering the following questions: to what standard did the cadet perform, and was it a performance to be expected of a cadet of his or her experience level? The three‐level rubric (excellent, satisfactory, and needs improvement) is used to rate every LPI that can be linked to a behavior. At the end of the assessment, the assessor will give the cadet an overall rating of an E, S, or N.
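The record-classify-rate cycle can also be pictured as a simple data structure. The sketch below is a hypothetical illustration only (the class names and example entries are ours, not part of the LDP materials); it models one "blue card" as a list of observations, each tied to an LPI and rated on the E/S/N rubric, plus the assessor's overall rating.

```python
from dataclasses import dataclass, field
from typing import List, Optional

RUBRIC = ("E", "S", "N")  # exceeds the standard, satisfactory, needs improvement

@dataclass
class Observation:
    behavior: str  # note describing the critical behavior observed
    lpi: str       # the leadership performance indicator it is classified under
    rating: str    # one of RUBRIC

@dataclass
class BlueCard:
    cadet: str
    observations: List[Observation] = field(default_factory=list)
    overall: Optional[str] = None  # overall E/S/N rating given at the end

    def record(self, behavior: str, lpi: str, rating: str) -> None:
        if rating not in RUBRIC:
            raise ValueError(f"rating must be one of {RUBRIC}")
        self.observations.append(Observation(behavior, lpi, rating))

# Hypothetical entries based on the operations-order example in the text
card = BlueCard(cadet="Cadet A")
card.record("Briefed the operations order in the proper format", "domain knowledge", "E")
card.record("Spoke clearly; subordinates appeared encouraged and confident", "communicates", "S")
card.overall = "S"
```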

The process for assessing behavior is cyclical. First, behavior is recognized, recorded, classified, and rated. The assessor later counsels the cadet, and then the cadet is trained and prepared by the cadre or a senior cadet for the next leadership opportunity.

If the cadet performed in the exercise as one would expect of a cadet with the same level of experience, she or he would receive a satisfactory (S) rating for the mission. If the cadet performed in the dimensions of domain knowledge, communication, and creating a positive environment in an excellent manner, the cadet would be counseled to sustain those areas. If the cadet displayed a lack of sound judgment or interpersonal tact during the mission, a needs improvement (N) would be recorded and the cadet would be counseled on how to improve those areas before the next assessment.

To increase the interrater reliability of the assessments, all assessors (ROTC cadre or senior cadets) are trained in the use of the assessment tools and operational details of the LDP. The goal of such training is this: if several assessors were evaluating the same cadet, they should all see the same thing and be able to filter out any observational errors, biases, or prejudices. Assessors receive classroom instruction to familiarize themselves with the LDP system. They then view “mock” exercises and practice assessing the observed behavior. Movie clips are often used for this exercise. The goal is to have each assessor recognize, classify, and rate the displayed behavior in the same way. This type of training may be revisited if assessments appear to lose reliability.

Compiling and Analyzing the Data

The assessment data entered in the leadership assessment report forms for all cadets in a class or unit are later transferred to another form, called the “job performance summary card” (or “yellow card”), which is printed on 5.5 × 8.5 inch yellow‐tinted cardstock.

The card is arranged in rows and columns. Each row provides a space for the name of the cadet, and the scores generated for each LPI via the assessment process are entered from left to right horizontally across the card. The scores in each row are summed to produce an overall total for each cadet.

Across the top of each column on the card are headings that correspond to the LPIs, clustered in groups with the following headings: values, attributes, skills, influencing, operating, and improving. A separate job performance summary card is completed for each structured leadership opportunity.

The data on the job performance summary card provide two kinds of information. First, the total row score for each cadet provides useful information for grading individual performance and for counseling individual cadets on how to maintain areas of strength and improve those areas in which he or she received relatively low scores (individual performance assessment). Second, the data in each column are summed and an average score computed for each attribute or skill. This aggregated score can be used to highlight those LPIs in which the assessment scores show that the cadets as a group scored below average (course‐level assessment) — perhaps signaling a need for different or more intensive training in those skills.
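As a worked illustration of these two uses of the summary card data, the following sketch computes row totals (one per cadet) and column averages (one per LPI cluster). The cadet names, the scores, and the numeric conversion of the N/S/E ratings (1/2/3) are assumptions made for this example; the article introduces that numeric mapping only later, for the negotiation form.

```python
# Hypothetical job performance summary data: one row per cadet, one column per
# LPI cluster. Ratings are assumed to be converted as N = 1, S = 2, E = 3.
scores = {
    "Cadet A": {"values": 2, "attributes": 3, "skills": 2, "influencing": 2, "operating": 3, "improving": 2},
    "Cadet B": {"values": 2, "attributes": 2, "skills": 1, "influencing": 2, "operating": 2, "improving": 2},
    "Cadet C": {"values": 3, "attributes": 2, "skills": 2, "influencing": 1, "operating": 2, "improving": 2},
}

# Row totals: one overall number per cadet (individual performance assessment).
row_totals = {cadet: sum(row.values()) for cadet, row in scores.items()}

# Column averages: one number per LPI cluster (course-level assessment).
clusters = next(iter(scores.values())).keys()
column_averages = {c: sum(row[c] for row in scores.values()) / len(scores) for c in clusters}

print(row_totals)        # e.g. {'Cadet A': 14, 'Cadet B': 11, 'Cadet C': 12}
print(column_averages)   # clusters with low averages may signal a need for more training
```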

The desired learning outcomes for a civilian class on negotiation skills would, of course, be different from those of the ROTC's LDP. Our first step in adapting the LDP process to a negotiation course was to make revisions to the LDP leadership assessment report forms, which we retitled as the “good negotiator attributes/skills/actions” form. While some of the LDP attributes/skills/actions were easily transferable to the context of negotiation, other assessed areas were not relevant and were therefore omitted entirely.

The authors revised the LDP assessment form to realign with negotiation skills. New attributes, skills, and actions were included, arrayed, and printed on a form similar to the ROTC version. A three‐point rubric was established for assessing the performance of each skill: a needs improvement was given a rating of 1, satisfactory was rated as 2, and excellent was rated as 3. We linked all the negotiation skills selected for assessment purposes to one or more of the negotiation course learning objectives.

These objectives are:

1. to develop a self-awareness that will help increase the effectiveness of your negotiation skills;

2. to understand your personal negotiation and conflict management style tendencies;

3. to develop an ability to create value and exploit opportunities that others might overlook;

4. to avoid common mistakes made by negotiators;

5. to learn how to generate strategies for successful negotiation;

6. to build a toolbox of effective negotiation skills;

7. to understand your own perceptions of culture and ethics, and those of others;

8. to work successfully with people with different backgrounds, expectations, and values and deal effectively with tensions and conflicts;

9. to develop a capacity to reflect and learn from experience; and

10. to sharpen your ability to be insightful and analytical.

Taken together, these learning outcomes describe the attributes, skills, and behaviors that the instructor wants students to possess and to be able to use effectively upon successful completion of the course.

The negotiation skills listed in the attributes section include five fundamental qualities and characteristics of good negotiators: mental self‐discipline, emotional self‐control, personal integrity, respect for others, and overall self‐awareness. The skills section includes broadly stated conceptual, interpersonal, and technical skills and key abilities that should be exhibited by a successful negotiator.

The actions section has three subparts: the preparation stage, the negotiation stage, and the postnegotiation stage. The negotiation stage is further segmented into a cluster of skills designated “communicating” and another subset designated “tactics.” There are twenty‐nine different actions that can be assessed in this section. Some of the actions are linked to the use of specific concepts as described in the course textbooks, for example, using the situational matrix (Shell 2006) to identify the context of the negotiation or developing a best alternative to a negotiated agreement (Fisher, Ury, and Patton 1991) as part of the prenegotiation planning.

The final section of the form is an overall assessment scoring area with spaces for the assessor to record general comments and to make notations about those skills that were particularly well performed or those that are in need of substantial improvement.

The authors recognize that the assessable attributes, skills, and actions included in our study are not the only ways to measure student learning progress. Different skills could be substituted for a number of those included in our pilot program. In addition, other instructors may have different course learning objectives and seek to develop different attributes, skills, and actions that would better serve their teaching and/or assessment purposes.

Pilot Testing the Process

Unlike the LDP's military setting, business school negotiation classrooms tend to have no senior cadets or officers to handle the process of conducting the assessment. In the authors' case, the only available personnel were students and instructors. Therefore, as more fully described in the succeeding discussions, in our pilot program, MBA students served as both assessors and assessment subjects. In order to effectively implement the system, students were assigned the tasks of:

1. familiarizing themselves with the assessment instruments and process;

2. participating in a norming session to learn how to rate performance using the three-level rubric;

3. closely observing and assessing their classmates (peer assessment) as they participated in negotiation exercises; and

4. conducting a self-assessment.

Training the Student Assessors

To familiarize the students with the forms and procedures, we introduced the assessment tool and the scoring rubric in a norming session built around a video replay of a negotiation exercise. The purpose of this activity was to calibrate, or norm, the use of the rubric among student evaluators and to arrive at agreed-upon standards for which behaviors should be assessed as excellent, satisfactory, or needs improvement.

The norming process consisted of several steps and was similar in some key ways to the process used by Stephen Weiss (2003) in teaching about cultural issues in negotiation. First, all students participated as usual in a team negotiation exercise followed by a debriefing session. Next, following an introduction of the assessment process by the instructor, students were shown a prerecorded video of another team of students conducting the same negotiation exercise. Having already completed the exercise themselves, the students were familiar with its facts and issues. During the replay, students were directed to focus attention on the behavior and language demonstrated by the participants in the video, to make notes, and to enter scores on their assessment forms. After completing the assessment forms, each student was paired with a partner to compare scoring. Where the scores diverged, students were directed to discuss the differences and the reasons for their respective scores, with the goal of reaching agreement on a single score. Through this process, a commonly held view of the "correct" application of the rubric to the observed behaviors was established, and an agreed-upon score was recorded.

Peer Assessment

In a class session held two weeks later, the assessment tool was used again in a peer assessment in which students assessed the performance of fellow classmates. In back-to-back negotiation exercises, we divided the students into two groups. While one group participated in a team negotiation exercise as usual, the second group served as observers/assessors. We instructed each observer to focus on one selected team member and to record performance scores on the same type of assessment form previously introduced in the norming session. This time, the observers were able to see, hear, and rate all parts of the negotiation process from the beginning, including the prenegotiation preparation deliberations as they took place. When the first negotiation exercise and follow-up debrief were completed, the teams switched places and the previously observed group became the observers, repeating the process using a factually different but similarly structured team negotiation exercise.

Following each peer assessment, student negotiators met with their peer assessors to review their performance and to discuss any areas that needed improvement. A general classroom discussion followed in which the instructor invited students to describe their best performances as well as to comment on skills they planned to work on in the future.

Self‐Assessment

On the last day of class, following the administration of the final exam, a third assessment was conducted: this time, each student completed a self‐assessment, again using a slightly modified version of the same assessment form. For this self‐assessment, students were directed to apply the rubric and the assessment criteria to their own overall performance for the entire quarter (see Figure One).

Figure One: Self-Assessment Report

Compiling and Examining the Results

Both the peer assessment and the self‐assessment processes generated a set of data that reflected the perceived effectiveness of each individual student's performance of selected negotiation skills. These data can be used in two ways: for grading student performance and for course assessment, providing a source of feedback on the effectiveness of the delivery of instruction related to each of the selected negotiation skills.

To make full use of the data, we took several steps. First, we compiled the data from the completed assessment (evaluation) forms and entered them on the negotiation performance summary form, one row for each name on the course roster, and summed the scores across each row. The total row score produced a quantified measure of each individual student's performance, which could be used for grading but would be even more valuable when used as an individual feedback and counseling tool.

While the individual performance scores are used for grading in the ROTC program, this practice has not been fully implemented for the MBA negotiation course: the classroom cultures of ROTC cadets and of MBA students involve substantially different circumstances, norms, and expectations.2

Using the same scores entered on the form, we summed the values in each column and computed the averages at the bottom of the performance summary form. The resulting values were examined for the purpose of assessing the course: comparing the relative effectiveness of the delivery of instruction among the various attributes, skills, and actions listed across the columns. We also reviewed the column totals to identify possible anomalies in the scoring and to determine possible improvements to the assessment form itself.

In our early-phase pilot test, the column scores averaged 2.27 overall. Columns with average values less than 2.00 (a score of 2 equals "satisfactory" performance) therefore indicated a relatively low performance level. Such low average scores in a particular skill area may signal a need for additional instruction on the concepts related to that skill, additional practice opportunities to help the student become more familiar with the skill, or a combination of both.

For example, one consistently low-scoring skill was communicating item 4 ("arrange the sequence of negotiation points for maximum impact"). In both the peer assessment and self-assessment results, this particular skill consistently scored less than 2.00. In response to this finding, the instructor provided the next class with additional training on this concept via a video guest lecture that explains how proposals made in a negotiation can be more persuasive when they are sequenced in a certain order (Cialdini 2001).3 The scores for this skill will be reviewed again to determine whether the newly collected data show that the additional instruction was sufficient to bring performance in this area up to an acceptable level.
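The course-assessment check described above can be sketched in a few lines: any skill whose class-wide average falls below the "satisfactory" value of 2.00 is flagged as a candidate for additional instruction or practice. Only the 2.00 threshold and the "arrange the sequence of negotiation points" item come from the article; the other skill labels and all numeric values below are invented for illustration.

```python
# Hypothetical column averages from the negotiation performance summary form
# (rubric: 1 = needs improvement, 2 = satisfactory, 3 = excellent).
column_averages = {
    "identifies the interests of both parties": 2.45,
    "develops a best alternative to a negotiated agreement": 2.10,
    "arranges the sequence of negotiation points for maximum impact": 1.78,
    "summarizes agreements at the close": 2.31,
}

SATISFACTORY = 2.00

# Skills whose class-wide average falls below "satisfactory" are flagged as
# candidates for additional instruction or additional practice opportunities.
needs_attention = {skill: avg for skill, avg in column_averages.items() if avg < SATISFACTORY}

print(needs_attention)
# {'arranges the sequence of negotiation points for maximum impact': 1.78}
```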

The LDP assessment process as adapted for use in negotiation skills training courses provides some pedagogically valid learning opportunities to students and a number of useful course management tools to instructors.

Peer Assessment Provides a Form of Observational Learning

A survey of negotiation training methods across four disciplines (management, law, education, and policy) has noted that negotiation instructors rely on experiential learning and simulations as teaching methods (Nadler, Thompson, and Van Boven 2003). Further, at least four methods for delivering experiential learning have been delineated (Nadler, Thompson, and Van Boven 2003: 529–530):

1. didactic learning;

2. learning via information revelation;

3. analogical learning; and

4. observational learning.

Of these four types, researchers have found that students trained using observational methods achieved the highest measurable outcomes in negotiation simulations and concluded that “watching a reenactment of a negotiation conducted by skilled negotiators was remarkably effective” as a training tool (Nadler, Thompson, and Van Boven 2003: 529–530).

Using the tools and processes adapted from the LDP model in a peer assessment of a negotiation exercise is an application of the observational learning method. As a student observes a classmate performing in negotiation simulations and in the subsequent debriefing session, both good and bad models of negotiation behavior are displayed and analyzed. In the debriefing sessions, "bad" or ineffective behaviors can be identified by the instructor and held up as examples of what not to do. The use of "good" or effective behaviors, on the other hand, can be reinforced by instructor-guided classroom discussion in a way that highlights their connection to established negotiation theory and encourages emulation by students.

In addition, the use of the assessment instrument in and of itself adds value to the observer's learning experience. Students find that just having the checklist of good negotiating behaviors serves as a reference to help them see and better understand not only the skill level of those students under observation but also their own (Nadler, Thompson, and Van Boven 2003).

For example, one student commented, “Observing and recording the negotiation allowed me to compare other strategies and negotiation skills. It highlighted areas and skills that I hadn't thought of in our negotiation with particular emphasis on the planning process.” Another student said, “It was a unique experience to be able to view the negotiations as an ‘outsider’ enabling me to focus on a single person while also observing both sides of the discourse. Being an outsider and also using the ‘observation sheet’ helped ‘prep’ me for our negotiations.” A third example: “Having the evaluation criteria prompted me to think about the skills of a successful negotiator and think about how I would have personally handled the situation.”

What if the reenactment of the negotiation is performed not by "skilled negotiators" but by other students instead? While not all student negotiators are "skilled," there is value in observing peers who are actively working to improve their negotiation skills. As observers, students sharpen their ability to discern and identify negotiation strategies and tactics and to assess the effectiveness of those tactics. When delivering feedback to their subjects, observers reinforce their own understanding of negotiation principles through the process of explaining the reasoning behind their suggestions for improvement.

As Fisher and Siegel (1987: 413) have noted,

A good negotiator develops only over time, and it is difficult to articulate one's strengths and weaknesses in a particular situation unless one has an organized framework for thinking about the experience. In a negotiation course, a student may participate in several negotiations, not all of which the professor can observe and criticize. Transferring some of this responsibility for criticism to students through a self‐evaluation mechanism will enhance the student's development by forcing the student to observe her performance more critically.

Even if the student does not do a perfect critique, the process of watching and analyzing a negotiation would be instructive. Not only will the student learn more about her own skills, she will also learn to recognize different behaviors and approaches used by other negotiators. For example, the student would be able to label an opponent's tactic that was used effectively against her, understand why it worked, and recognize it the next time it is used. Thus, the student will enhance her own skills and her understanding of the process by using this self‐evaluation tool.

Facilitating Metacognition

The self‐assessment process as used in our pilot program may provide students with another developmental benefit recognized by McAdoo and Manwaring (2009): a means to the acquisition of the skill of “metacognition,” that is, learning about how the student's learning process itself works. They wrote:

In addition to learning to reflect on and evaluate one's own negotiation performance, learning to reflect on one's own learning processes (or “metacognition”) can increase the likelihood that such learning will be sufficiently robust to apply outside the classroom. Every negotiation student learns somewhat differently, depending on his or her own experiences, preferences, level of epistemological development (Manwaring 2006) and numerous other factors. A metacognitive orientation helps students understand not just what they do and do not know but also the idiosyncratic ways in which their own learning processes work. Just as a good negotiator is aware enough of the negotiation process that she can proactively influence it, a good negotiation student is aware enough of her learning process to proactively manage it (210).

Improving the Reliability of Grading

With the LDP method, student performance in a skills-based course can be evaluated against objective standards. Standardized rubrics and clearly delineated learning performance indicators should generate data that can be used for grading purposes in a way that is both objective and fair. (See Note 2 regarding questions about using student observations as part of the grading process.)

Improving Instructional Delivery Methods

It is to be expected that the assessment data generated by the LDP method may indicate gaps between the stated learning outcomes and the coverage, focus, intensity, method, and/or direction of the instruction delivered. When this occurs, as it has in the authors' experience, learning outcomes can be realigned more closely with teaching efforts in order to increase the effectiveness of student learning.

Improving the Assessment Process Itself

When examined over time, some of the data generated by the assessment process may not prove to be useful or particularly informative. When this occurs, the form of the assessment instruments and/or the methods of deploying them may need to be rethought, restructured, and evaluated as part of the assessment itself. Reiterations of this process will help lead to more refined and focused assessment efforts.

Our pilot experience has been positive and promising. Adapting the data collection forms and importing the LDP data collection process for use in a skills-based MBA course was relatively simple and straightforward. This was especially true because the target course already had well-established learning outcomes, classroom training exercises that were roughly equivalent to the LDP's structured leadership opportunities, and a classroom culture that validated the practice of students actively participating in the learning process of their classmates.

Students had no difficulty in using the system. Even though the assessment forms and processes were new to the MBA students, they were readily accepted as a routine part of the course. Students reported that simply having a printed list of negotiation skills, arranged in a logical order, was in and of itself an aid to learning. Students quickly learned to apply the rubric through the norming process, by observing classmates in the peer assessment, and through self-assessment.

The data generated via this early pilot process were considered reasonably reliable because the norming session helped to establish an acceptable level of interrater reliability. As more data are collected over time, further analysis will reveal whether more training of observers is necessary. In the peer‐assessment process, students seemed to strive to be accurate, and there were checks and balances on the process because each student served both as subject and as assessor in turn. The data generated by the self‐assessment process have proven useful as a vehicle for student development via feedback and reflection, as well as for course assessment.

The process can be adapted to other courses. Perhaps the most obvious use of the LDP assessment system would be in the leadership training courses in the MBA curriculum. After all, the LDP system originated in the field of military leadership training. Other possibilities for application of a similar system of peer‐ and self‐assessment in business school courses might include any courses involving group projects and/or presentation skills. Beyond schools of business, such assessment methods could be adapted for use in assessing skill development in other university departments such as law (trial practice skills, client counseling), education (teaching skills, course design), engineering (design and construction skills), and nursing (patient care).

The authors have identified three additional developmental steps that could improve both the course and the assessment process itself. First, our early experiences with this pilot process have led us to revisit the course learning outcomes for the negotiation course and to consider how closely they connect to the skills selected for assessment. The linkage between the learning outcomes and the assessed skills as described on the data collection forms could be made stronger and more explicit, as it presently is in the U.S. Army ROTC program.

Second, the data collection forms could be further refined and streamlined to make them easier for students to use. As more experience is gathered, we anticipate making modifications to the data collection forms, to the way in which they are introduced, and perhaps to the frequency with which they are used. For example, the forms could be introduced much earlier in the course so that students would have a sense of the number of skills they will be expected to learn and, if possible, master. We plan to experiment with the forms and the data collection process to determine the possible benefits and drawbacks.

Third, more research is needed, over time, to determine the potential utility of this process as a program-wide assessment tool in addition to its use as a tool for individual and course-level assessment. For example, at the program level, it might be useful to use a version of this process to assess the effectiveness of teaching high-level skills such as leadership. Courses in leadership, or portions of courses devoted to leadership development, are often scattered across the curriculum in many business schools. Using a common assessment tool might provide data that could assist students, faculty, and program directors in their efforts toward improvement.

The authors thank John C. Bean, professor of English and consulting professor for academic writing and assessment at Seattle University, for his support of our efforts on this project. Professor Bean is a former army lieutenant (ROTC, Stanford) who served in the Ninth Infantry Division in Vietnam, 1966–1967.

1.

In PTA, the "foundational assessment act" is an instructor's grading of a student performance, an approach that validates an instructor's expertise and allows grades to be used also for assessments so long as the grades are justified by reference to shareable criteria (Bean, Carrithers, and Earenfight 2005). An early example of a PTA-based process used to assess performance in a negotiation course featured a lengthy list of over two hundred separately delineated negotiation behaviors, together with a two-level rubric ("effective"–"ineffective") for each observation (Fisher and Siegel 1987). This approach is structurally similar to, but operationally quite different from, the modified LDP assessment process used in our pilot project. One major difference is that the Fisher and Siegel system requires the instructor to review videotapes of negotiation simulations performed by students.

2.

Without a substantial amount of groundwork to change the current MBA classroom culture, there would likely be some degree of student pushback on the idea of using peer assessment scores as part of their grade. There are other possible concerns regarding this approach. Students might be unfairly biased toward friends and against others. If a course is graded on a curve, students might be incentivized to assess classmates too harshly.

3.

Our courses are offered on the quarter system, and the negotiation skills course is offered once each quarter (including summer quarter), or four times per year. The pilot testing of the new assessment system was completed in academic year 2008–2009. Beginning with the summer quarter of 2010, assessment data are now routinely collected in each course, each quarter.

Association to Advance Collegiate Schools of Business. 2007. Assurance of learning standards: An interpretation. AACSB White Paper issued by AACSB International Accreditation Coordinating Committee. Available from http://www.aacsb.edu/publications/whitepapers/AACSB_Assurance_of_Learning.pdf. Retrieved November 21, 2010.

Bean, J., D. Carrithers, and T. Earenfight. 2005. Transforming WAC through a discourse-based approach to university outcomes assessment. WAC Journal: Writing Across the Curriculum 16: 5–21.

Cialdini, R. 2001. The power of persuasion (digital video recording). Palo Alto, CA: Stanford Executive Briefing Series.

Department of the Army. 2006. Army leadership: Competent, confident, agile (Field Manual 6-22). Washington, DC. Available from http://www.fas.org/irp/doddir/army/fm6-22.pdf. Retrieved October 27, 2010.

Department of the Army. 2009. Cadet command, leadership development program: Army ROTC Military Science IV leader's notebook. Washington, DC. Available from http://www.usm.edu/armyrotc/LDP/LDP%20HANDBOOK%2031%20July%202009.pdf. Retrieved November 17, 2010.

Eder, D. and K. Martell. 2005. Putting assessment in its place: Reviving, surviving, and even thriving through assessment. AACSB Assessment/Assurance of Learning Seminar. Section 3, p. 30.

Fisher, M. and A. Siegel. 1987. Evaluating negotiation behavior and results: Can we identify what we say we know? Catholic University Law Review 36: 395–448.

Fisher, R., W. Ury, and B. Patton. 1991. Getting to yes: Negotiating agreement without giving in, 2nd edn. New York: Penguin.

Garrison, D. R. and T. Anderson. 2003. E-learning in the 21st century: A framework for research and practice. London: RoutledgeFalmer.

Mason, R. 2002. Rethinking assessment for the online environment. In Distance education and distributed learning, edited by C. Vrasidas and G. Glass. Greenwich, CT: Information Age Publishing.

McAdoo, B. and M. Manwaring. 2009. Teaching for implementation: Designing negotiation curricula to maximize long-term learning. Negotiation Journal 25(2): 195–215.

Nadler, J., L. Thompson, and L. Van Boven. 2003. Learning negotiation skills: Four models of knowledge creation and transfer. Management Science 49(4): 529–540.

New Horizons for Learning. 1995. Assessment terminology: A glossary of useful terms prepared for Assessing learning: Should the tail wag the dog? Assessing Learning Conference. Baltimore: New Horizons for Learning. Available from http://www.newhorizons.org/strategies/assess/terminology.htm. Retrieved November 22, 2010.

Page, D. and A. Mukherjee. 2009. Effective technique for consistent evaluation of negotiation skills. Education 129(3): 521–533.

Peppet, S. 2002. Teaching negotiation using web-based streaming video. Negotiation Journal 18(3): 271–283.

Shell, G. R. 2006. Bargaining for advantage. New York: Penguin Books.

Tyler, M. and N. Cukier. 2005. Nine lessons for teaching negotiation skills. Legal Education Review 15(1): 61–86.

Walvoord, B. 2004. Assessment clear and simple: A practical guide for institutions, departments and general education, 1st edn. San Francisco: Jossey-Bass.

Walvoord, B. and L. P. McCarthy. 1990. Thinking and writing in college. Urbana, IL: National Council of Teachers of English. Cited in Eder, D. and K. Martell. 2005. Putting assessment in its place: Reviving, surviving, and even thriving through assessment. AACSB Assessment/Assurance of Learning Seminar, Section 3, p. 30.

Weiss, S. 2003. Teaching the cultural aspects of negotiation: A range of experiential techniques. Journal of Management Education 27(1): 96–121.

Williams, G., L. Farmer, and M. Manwaring. 2008. New technology meets an old teaching challenge: Using digital video recordings, annotation software, and deliberative practice techniques to improve student negotiation skills. Negotiation Journal 24(1): 71–87.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.