Abstract
In this article, we consider the intersection of cognitive motor dissociation (CMD) and artificial intelligence (AI), hence when CMD meets AI. In covert consciousness, there is a discordance between the observed behavior, the traditional bedside mode of assessment, and the response to volitional commands as depicted by neuroimaging or EEG studies. This alphabet soup of acronyms represents both the promise and peril of nascent technology in covert consciousness. On the diagnostic side, there is the complexity and uncertainty of identifying the discordance between cognitive activity and overt behavior. On the therapeutic side, when AI is used to generate speech, there is the possibility of misrepresenting the thoughts and intentions of those who are otherwise voiceless. This concordance of factors makes the application of AI to CMD worthy of deeper consideration. We offer this analysis in the spirit of anticipatory governance, a prudential process by which one plans to prevent or mitigate unintended consequences of novel technology. We first consider the normative challenges posed by CMD for clinical practice, neuroethics, and the law. We then explore the history of covert consciousness and the relationship of severe brain injury to the right-to-die movement, before introducing three biographies of brain injury that highlight the potential impact of disability bias or ableism in clinical practice, assistive technology, and translational research. Subsequently, we explore how AI might give voice to conscious individuals who are unable to communicate and the ethical challenges that this technology must overcome to promote human flourishing drawing upon what Nussbaum and Sen have described as a “capabilities approach” to promote normative reasoning.
INTRODUCTION
In this article, we consider the intersection of cognitive motor dissociation (CMD) and artificial intelligence (AI), hence when CMD meets AI. In covert consciousness, there is a discordance between the observed behavior, the traditional mode of assessment, and the response to volitional commands as depicted by neuroimaging or EEG studies (Schiff, 2015). This alphabet soup of acronyms represents both the promise and peril of nascent technology in the wake of the identification of covert consciousness.
On the diagnostic side, there is the complexity and uncertainty of identifying the discordance between cognitive activity and overt behavior. To further complicate matters, individuals in CMD may present with a range of functional abilities, from those who can follow volitional commands to those who may benefit from a communication device, such as a brain–computer interface.
On the therapeutic side, when AI is used to generate speech, there is the possibility of misrepresenting the thoughts and intentions of those who are otherwise voiceless. This concordance of factors makes the application of AI to CMD particularly fraught and worthy of deeper consideration. Although it might be said that this is an article in search of a problem, we offer this analysis in the spirit of anticipatory governance, a prudential process by which one plans in advance to prevent or mitigate unintended consequences of novel technology (Guston, 2014).
We first consider the normative challenges posed by covert consciousness or CMD for clinical practice, neuroethics, and the law (Fins, 2019a). We then explore the history of covert consciousness and the relationship of severe brain injury to the right-to-die movement, before introducing three biographies of brain injury that highlight the impact of disability bias or ableism in clinical practice, assistive technology, and translational research (Iezzoni et al., 2021). Subsequently, we explore how AI might give voice to conscious individuals who are unable to communicate and the ethical challenges that this technology will need to overcome to promote human flourishing.
Covert consciousness is a biological and sociological phenomenon (Claassen et al., 2024). Biologically, covert consciousness does not manifest itself behaviorally at the bedside and can only be discerned through functional neuroimaging or EEG (Claassen et al., 2019; Edlow & Fins, 2018). Sociologically, covert consciousness goes undetected by practitioners because of disinterest, neglect, or clinical error (Fins, 2020a). Over 80% of physicians believe that people with disabilities have a worse quality of life than people without disabilities, indicating the prevalence of disability bias among physicians (Bowen et al., 2024; Iezzoni et al., 2021). Covert consciousness may therefore go undetected for two reasons. The first is biological: the absence of overt behavioral manifestations of consciousness. The second is sociological: covert consciousness could be discerned were it not for the disability bias embedded in medical care and society more broadly.
Individuals in both the biological and sociological classes of covert consciousness are denied voice, albeit for different reasons, and each could be assisted by speech-generating AI (Fins, 2016). Although AI could facilitate communication, there is a risk that generative speech might be inconsistent with the intentions of individuals with CMD and reflect a disability bias deeply embedded in society and its representations on the web. The risk of misrepresentation and miscommunication stems from the complex relationship of clinical practice, brain injury, and the history of the right-to-die movement, exemplified by landmark cases, notably Quinlan, Cruzan, and Schiavo (see Fins, 2015; Fins, 2006; Schiavo, 2004; Cruzan, 1990; In re Quinlan, 1976).
Ideally, AI could be the ultimate assistive technology, facilitating communication through generative speech. To maximize the benefits of AI, and minimize the accompanying risks, we must apprehend the conceptual origins of CMD and how present-day attitudes toward individuals with disorders of consciousness (DoC) may prompt the errant application of generative AI. Although it is laudable to give voice to the voiceless, it is more important not to put words in their mouths. Machines can assist us, but we must be the first authors of our lives (Fins, 2023b).
HISTORICAL ORIGINS OF COVERT CONSCIOUSNESS
The conceptual origins of CMD date back to the landmark article, “Persistent Vegetative State After Brain Damage,” published by Jennett and Plum in The Lancet in 1972 (Jennett & Plum, 1972). They described a “syndrome in search of a name,” which they called the “persistent vegetative state,” echoing Aristotle's hierarchical conception of the soul in De Anima (Adams & Fins, 2017; Aristotle, 1957). As is well appreciated, the vegetative state results from the isolated recovery of the brainstem following coma. In this state of unconsciousness, the patient's eyes are open, but there is no awareness of self, others, or the environment. Presciently, Jennett and Plum noted that in the vegetative state there “seems wakefulness without awareness” (Jennett & Plum, 1972, p. 734) [italics added].
Jennett and Plum were hesitant to assert definitively that the eyes-open state is invariably without underlying awareness. In 1972, they did not have neuroimaging to peer inside the injured brain in search of covert consciousness, although their cautious tone suggests that they anticipated this possibility. Lest the reader think that this phrasing was a gratuitous aside, we note that Plum was a careful writer and founding editor of the Annals of Neurology (Posner, 2010). Therefore, we surmise that the addition of “seems” was deliberate. This caveat implies that what the physician observes at the bedside may not correlate with the patient's actual brain activity.
Parenthetically, the more recent designation of the vegetative state as “unresponsive wakefulness syndrome” is problematic, if not misleading. In response to this term, proffered by prominent neurologists in Europe (Laureys et al., 2010) and cited in the 2018 guidelines on the diagnosis and treatment of people with DoC (Giacino et al., 2018), Fins and Bernat raised ethical and legal concerns in an accompanying commentary, noting:
Given the importance placed upon the detection of covert consciousness, we were puzzled by the Guideline's adoption of the behavioral term unresponsive wakefulness syndrome. This term, accepted in Europe to replace vegetative state, is a bedside description that obscures nonobserved biological differences underwriting consciousness … the endorsement of this descriptive category seems regressive because it fails to connote the underlying pathophysiology. … Functional neuroimaging demonstrating covert consciousness in some patients showed that the behavioral “phenotype” of unresponsive wakefulness may not always correlate with the underlying “genotype.” (Fins & Bernat, 2018, p. 473)
This nosology expanded in 2002 when a consensus panel of the American Academy of Neurology codified the minimally conscious state (MCS), defining a new class of patients (Giacino et al., 2002). Patients in MCS demonstrate intention, attention, and memory. They may look up when someone enters the room, reach for a cup, and even say their name. However, because these behaviors are episodic and intermittent, patients in MCS are often mistaken at the bedside as being in the vegetative state when, in fact, they may experience a liminal state of consciousness (Schnakers et al., 2009).
Misdiagnosis also occurs in covert consciousness, when there is a discordance between behaviors observed at the bedside and underlying neural activity. This incongruity was demonstrated empirically using fMRI to evaluate a 23-year-old woman who was behaviorally in the vegetative state (Owen et al., 2006). While in the scanner, she was instructed to imagine walking around her house or playing tennis. In response, she demonstrated volitional neural activation in the motor strip and parietal lobe, revealing covert consciousness despite a behavioral examination consistent with the vegetative state. On the basis of this volitional response, the participant was not in the vegetative state but rather in a nonbehavioral MCS (Fins & Schiff, 2006). Notably, she did show visual fixation, which at the time of the study was not yet recognized in the United Kingdom as evidence of consciousness.
Investigators from the Universities of Cambridge and Liège reported in the New England Journal of Medicine that the detection of covert consciousness could also serve as a communication tool for individuals thought to be in the vegetative state. Building upon the paradigm introduced by Owen and colleagues, this study demonstrated that responses to volitional commands could be toggled to signal simple yes/no answers. In this way, the investigators showed how fMRI could be used to facilitate communication in individuals who behaviorally appear unconscious and in the vegetative state (Monti et al., 2010).
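To make the logic of this communication paradigm concrete, consider the following minimal sketch. It is our illustrative simplification, not the study's actual analysis pipeline: the function, the use of two scalar activation values as stand-ins for regions of interest, and the threshold are all assumptions introduced here for exposition.

```python
# A minimal sketch of imagery-based yes/no decoding, simplified from the
# paradigm described above. Participants imagine playing tennis (motor
# imagery) to signal "yes" and imagine navigating their home (spatial
# imagery) to signal "no". The scalar inputs and threshold are
# illustrative stand-ins for a real fMRI analysis.

def decode_answer(motor_imagery: float, spatial_imagery: float,
                  threshold: float = 2.0) -> str:
    """Map imagery-related activation (e.g., z-scores in regions of
    interest) to a yes/no answer, or report an indeterminate result."""
    if motor_imagery >= threshold and motor_imagery > spatial_imagery:
        return "yes"    # reliable tennis-imagery activation
    if spatial_imagery >= threshold and spatial_imagery > motor_imagery:
        return "no"     # reliable navigation-imagery activation
    return "indeterminate"  # no clear volitional response detected

# Example: strong motor-imagery activation yields "yes".
print(decode_answer(motor_imagery=3.2, spatial_imagery=0.5))
```

The “indeterminate” branch matters ethically: the absence of a decodable response is not evidence of unconsciousness.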
The term CMD was subsequently coined by Nicholas D. Schiff to describe the discordance between “measured bedside behavior and laboratory investigation” in patients who appear to be vegetative but are, in fact, conscious; it encompasses a range of functional capabilities (Schiff, 2015). These patients demonstrate command-following on fMRI and/or EEG without observable behavioral evidence of command-following at the bedside. The term would also include coma and MCS (−), that is, MCS without language production. Covert consciousness is further understood to include higher-order disorders such as the locked-in state (Claassen et al., 2024; Schnakers et al., 2022). AI-generated speech would be relevant to those CMD patients who retain the capacity for language.
Although the detection of covert consciousness originated in patients with chronic DoC, the phenomenon has also been described in acute brain injury in the intensive care unit using fMRI (Edlow & Fins, 2018; Edlow et al., 2017) and EEG (Claassen et al., 2019). Although both fMRI and EEG remain investigational, emerging data support their use in clinical practice to identify covert consciousness. EEG is especially promising because it may provide a more accessible means of detecting covert consciousness in low-resource settings (Claassen et al., 2024).
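For the EEG approach, a schematic sketch may help convey the general shape of such an analysis. The pipeline below is loosely patterned on published command-following paradigms; the placeholder random data, the feature dimensions, and the classifier settings are our assumptions, not the published method.

```python
# Schematic sketch of EEG-based detection of covert command-following:
# epochs recorded during "move" vs. "rest" instructions are summarized
# as spectral features, and a classifier is tested for above-chance
# discrimination. Random placeholder data stands in for patient EEG.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(seed=0)

# 40 epochs x 16 band-power features (placeholder values).
X = rng.normal(size=(40, 16))
y = np.repeat([0, 1], 20)  # 0 = "rest" epochs, 1 = "move" epochs

# On real recordings, reliably above-chance cross-validated accuracy
# would be taken as evidence of covert command-following; on this
# random data, accuracy should hover near chance (~0.5).
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```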
BRAIN INJURY AND THE RIGHT TO DIE
The relationship between covert consciousness and disability bias is deeply rooted in the history of bioethics and the evolution of the right-to-die movement. Originating in the 1960s, the field of bioethics asserted the values of self-determination and autonomy. In the 1970s, the Karen Ann Quinlan case brought the vegetative state to the forefront of the nation's conscience, with the emergence of the right-to-die movement predicated on the negative right to be left alone. In the Quinlan case, the New Jersey Supreme Court decided in favor of removing life-sustaining support, invoking the loss of a cognitive sapient state as a moral and juridical predicate (Fins, 2003, 2005, 2006, 2015). In his decision, Chief Justice Richard J. Hughes cited Plum, the court-appointed neurologist, and noted his distinction between a conscious and vegetative state. He wrote:
It was indicated by Dr. Plum that the brain works in essentially two ways, the vegetative and the sapient. … We have no hesitancy in deciding … that no external compelling interest of the State should compel Karen to endure the unendurable, only to vegetate a few more measurable months with no realistic possibility of returning to any semblance of cognitive or sapient life. (In re Quinlan, 1976)
Since Quinlan, physicians have become acculturated to the right to die, as the vegetative state was perceived as immutable and became the ultimate in medical futility. The belief that nothing can or should be done for this population was typified by the Quinlan autopsy results, which were published in the New England Journal of Medicine in 1994. Quinlan's brain weighed 835 g, about two thirds the mass of a normal brain (1300 g), with hydrocephalus ex vacuo and thin cortices (Kinney et al., 1994). This pathology was inconsistent with a brain that could sustain consciousness and furthered notions of futility for patients with severe brain injury, given the centrality of “cognitive or sapient life” in the Quinlan decision (In re Quinlan, 1976).
Although this sociological and biological predicate was upheld in the Cruzan and Schiavo decisions (Fins, 2020b; Fins, 2006), the codification of MCS made these sweeping generalizations of futility highly problematic and illustrated the prognostic variance of patients with DoC.
Despite this emerging reality, the legacy of the right-to-die movement and Quinlan continued to perpetuate therapeutic nihilism and bias against patients with severe brain injury. As ethnographic research has demonstrated (Fins, 2015), patients and their families continue to suffer from a presumption of neglect and futility, epitomized by an article by neurointensivists from the Mayo Clinic:
The attending physician of a patient with a devastating neurologic illness will have to come to terms with the futility of care. … Those families who are unconvinced should be explicitly told they should have markedly diminished expectations for what intensive care can accomplish and that withdrawal of life support or abstaining from performing complex interventions is more commensurate with the neurologic status. (Wijdicks & Rabinstein, 2007)
BIOGRAPHIES OF BRAIN INJURY
We next turn to three biographies of brain injury that speak to the potential biases that undermine the use of emerging technologies.1
Our first narrative is the remarkable story of Terry Wallis. In 1984, Wallis was in a car accident, resulting in a severe traumatic brain injury (TBI). Although he was presumed permanently unconscious, he re-emerged 19 years later, in 2003. As reported to one of us (J. J. F.) by Terry Wallis' mother, Wallis was at lunch with a caregiver when his mother entered the cafeteria. As Mrs. Wallis recalled:
they were cleaning tables and this girl, Pam, was cleaning a table and I walked in and she said, “Terry who's that old woman coming through there” … she all the time asked him stuff like that just because, mostly teasing me. And Terry said “mom” I mean just like that “Mom …” (p. 63)
The event garnered international media attention, and 1 year later, Wallis and his family visited Weill Cornell to participate in clinical studies of MCS (Fins, 2015, 2023a). During this evaluation, it became clear that Wallis had not been permanently unconscious but rather in MCS for an extended period (Schiff & Fins, 2003). Subsequent neuroimaging revealed dynamic changes in the white matter fibers that likely undergirded his recovery (Voss et al., 2006). This process, reminiscent of axonal sprouting and pruning, was a reharnessing of a developmental process in the service of repair (Wright & Fins, 2016; Fins, 2015). These data again brought Terry Wallis to the attention of national media. A later study conducted over a 54-month period demonstrated the longitudinal process of axonal sprouting and pruning that could only be inferred in the 2006 study of Terry Wallis (Thengone et al., 2016).
Despite the prominence of his recovery, even Wallis was vulnerable to the promise and peril of brain injury and the vagaries of the health care system. Although he lived an extraordinary life, he suffered an all too ordinary death, subjected to disability bias and the lingering right-to-die legacy of Quinlan.
In February of 2022, Wallis' sister reached out to Schiff and Fins (Fins, 2023a). Wallis had contracted a treatable pneumonia and was on a ventilator. However, his doctors wanted to remove life-sustaining support. As per his sister's account, they could not imagine that his life was worth living. Wallis was emotionally withdrawn, and although his brain function was unchanged, he had a heavy heart. He was still grieving his mother, Angilee, who had died in 2018. When one of his doctors asked if he wanted to be with his mother, Wallis indicated affirmatively. His sister became alarmed. She was concerned that the doctors misinterpreted this as a wish to die, when, in fact, he simply missed his mother. The doctors brought a disability bias to the situation. Although appearing to respect Wallis' wishes, they distorted them.
As reported previously in greater narrative detail (Fins, 2023a), Wallis died for lack of available medical care to address his needs. Pulmonary rehabilitation was not readily available in rural Arkansas, and Wallis was too frail to travel out of state to receive the care he required. Ultimately, he died of pulmonary complications. Despite the advocacy and concern of his surrogates, they were unable to change the focus of care or the course of events. As Tammy Baze, Wallis' sister and surrogate, wrote to one of us (J. J. F.), “… his death was a needless tragedy. One that no matter what I did I couldn’t stop.” (Baze, 2023).
Terry Wallis' story highlights the compounding vulnerabilities and the intersectionality of the right-to-die movement, poverty, and access to care in rural America. The inciting catalyst of the cascade of neglect was a disability bias that prompted his physicians to question the value of his life given his disorder of consciousness.
Wallis' story was, in many ways, a sequela of brain injury that is too often repeated. These individuals are prone to systemic neglect, as illustrated by premature decisions to withdraw life-sustaining therapy and the decreasing availability of rehabilitation, despite scientific and medical progress in the care of patients with DoC (Giacino et al., 2018a, 2018b; Turgeon et al., 2011). Wallis' final chapter is emblematic of these challenges and underscores the need for greater civil rights protections to ensure that people with disabilities receive the care they need and have their voices preserved (Fins, Wright, & Bagenstos, 2020).
When patient and family preferences are distorted, care deteriorates. Indeed, it evaporates. Given the widespread disability bias embedded in the history of brain injury and the right to die, generative AI could amplify the erosion of patient self-determination when coupled with assistive technology. If generative AI is constricted by prevailing views about what constitutes a life worth living, then individuals who depend on such technology will be unable to justify their choices, whether positive or negative (Johnson, 2011).
Next consider Maggie Worthen, the protagonist in Rights Come to Mind: Brain Injury, Ethics and the Struggle for Consciousness (Fins, 2015). Worthen was thought to be in the vegetative state when she first presented to Weill Cornell but was found to have covert consciousness. Invoking today's nomenclature, we would classify her as being in CMD because, in addition to her lack of motor output consistent with a functional locked-in state, she had damage to her cortical–thalamic system, with bilateral thalamic injury that compromised her higher integrative function (Thengone et al., 2016).
Worthen used an eye-tracking device to communicate. When queried, she directed her gaze at a letter, and the device prompted her with choices to complete the word. For example, a “ca” sequence might be followed by suggested words like “car” or “cat” (My Tobii Dynavox, 2023; Fins, 2015). This device facilitated the speed and ease of communication, enabling Worthen to vocalize and realize what we have previously described as “the right to be heard” (Lawrence et al., 2019; Fins, 2016).
However, what would have happened if her intended word choice was not among the options or, worse, if there were bias in the choices presented? Suppose that when spelling “ca…,” Worthen wanted to say “capable,” but that word was not offered as an option. Her intention may have been to counter the perception that she was incapable, yet the technology did not provide her with the necessary vocabulary. Such engineering of a verbal outcome would be at cross purposes with the goal of assistive technology: to facilitate communication for people with disabilities, not hinder it.
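The failure mode can be stated in computational terms: a prefix-based predictor can only offer what its vocabulary and ranking permit. The following minimal sketch illustrates this; the function, vocabulary, and frequency counts are hypothetical, invented here for illustration rather than drawn from any actual device.

```python
# Minimal sketch of prefix-based word prediction of the kind used by
# eye-tracking spellers. The vocabulary and counts are hypothetical.

def suggest(prefix: str, vocabulary: dict[str, int], k: int = 3) -> list[str]:
    """Return the k highest-frequency words beginning with prefix."""
    matches = [w for w in vocabulary if w.startswith(prefix)]
    return sorted(matches, key=lambda w: vocabulary[w], reverse=True)[:k]

# If corpus-derived frequencies rank "capable" far below common words,
# it will never surface among a small set of suggestions.
vocab = {"cat": 900, "car": 850, "can": 800, "care": 400, "capable": 2}

print(suggest("ca", vocab))        # ['cat', 'car', 'can'] -- "capable" never offered
print(suggest("ca", vocab, k=5))   # only a larger k (or reweighting) surfaces it
```

Whether “capable” ever appears is thus a design choice about vocabulary and ranking, not a fact about the user's intentions.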
This is more than a hypothetical worry, as disability bias is prevalent in society (Garland-Thomson, 2005) and in the databases that inform AI programs. Meredith Whittaker, formerly of the AI Now Institute, observed:
Disabled people have been subject to historical and present-day marginalization, much of which has systematically and structurally excluded them from access to power, resources, and opportunity. Such patterns of marginalization are imprinted in the data that shapes AI systems, and embed these histories in the logics of AI. Recent research demonstrates this, showing that social attitudes casting disability as bad and even violent are encoded in AI systems meant to “detect” hate speech and identify negative/positive sentiment in written text. Researchers found that “a machine-learned model to moderate conversations classifies texts which mention disability as more ‘toxic’” while “a machine-learned sentiment analysis model rates texts which mention disability as more negative.” (Whittaker et al., 2019)
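This claim is empirically testable with a paired-sentence probe. The sketch below shows one way to operationalize it, assuming access to an off-the-shelf sentiment classifier (the Hugging Face pipeline shown is a real API, but the sentences are ours, and the scores any particular model returns are an empirical question rather than a result we assert):

```python
# Paired-sentence probe for disability bias in an off-the-shelf
# sentiment model: each pair differs only in mentioning disability.
# Requires: pip install transformers torch

from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model

pairs = [
    ("My neighbor is a great cook.",
     "My neighbor, who uses a wheelchair, is a great cook."),
    ("She is returning to work next month.",
     "She has a brain injury and is returning to work next month."),
]

for baseline, disability_mention in pairs:
    for text in (baseline, disability_mention):
        result = classifier(text)[0]  # e.g., {'label': 'POSITIVE', 'score': 0.99}
        print(f"{result['label']:>8}  {result['score']:.3f}  {text}")
    print()

# If a model mirrors the bias described above, the disability-mentioning
# sentence in each pair will skew more negative than its near-identical
# baseline despite conveying the same neutral-to-positive content.
```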
People with undetected consciousness are dependent upon assistive devices to foster functional communication. Language is relational and key to the reintegration of social networks ruptured by brain injury. Therefore, the restoration of communication should be a goal of neuro-palliative care, as it reconstitutes relationships and community (Fins, 2008, 2019a). Nonetheless, we must acknowledge the risk of perpetuating these societal biases. Although this vulnerable population could benefit from generative speech prompted by AI, they are also susceptible to harm despite our good intentions.
A final illustration centers upon our group's development of deep-brain stimulation (DBS) in individuals with severe brain injury and how broader societal forces inform academe's reception of this novel technology. In 2007, Schiff and colleagues reported in Nature the first use of thalamic DBS in patients in MCS. During a 6-month, double-blind, crossover study of bilateral thalamic DBS, the participant demonstrated increased cognitively mediated behaviors and language, improved limb control, and regained the ability to eat by mouth. This study provided the first evidence that DBS can promote late recovery from severe TBI (Schiff et al., 2007).
Despite the study's success, this work was met with a curious critique that reflected a disability bias. In an article entitled “Minimally Conscious States, Deep Brain Stimulation, and What is Worse than Futility,” neurosurgeon and bioethicist Grant Gillett alleged that the use of DBS in patients in MCS could result in the individual gaining awareness of their disability, what he dubbed the “risk of unbearable badness (RUB)” (Gillett, 2011).
Gillett's criticism initially prompted concern: it was feared that, instead of having his voice restored as intended, the participant had been awakened to a dystopian reality. Our worries seemed to be confirmed when the participant's mother shared that her son was crying. She reported:
“He was crying one day I was visiting him and I said, … why are you crying? … He says, ‘I'm crying for Cory.’” Innocently, I [J. J. F.] asked, “Who's Cory?” “That's his brother. That's his brother that does not come to see him, that's in denial. … And even I started crying because here he is, he's aware, he knows what's going on.” (Fins, 2015)
This narrative speaks eloquently to the guiding ethos of the disability rights movement that proclaims, “Nothing about us without us.” (Catapano & Garland-Thomson, 2019; Charlton, 1998). Rather than making assumptions about what people with disabilities are thinking, it is important to listen to them. Only then can they express themselves (Fleischer & Zames, 2011; Scotch, 2001).
These three narratives reveal potential biases in end-of-life care, assistive technology, and translational research that inevitably inform the databases upon which AI draws. Unless fully apprehended, the biases embedded in our cultural ecosystem could distort the use of generative AI as a means of communication for people with disabilities, inadvertently undermining their self-expression.
AI AND THE PROMOTION OF CAPABILITIES
As we have written elsewhere, neuroethics is an ethics of technology (Fins, 2011). Although there are other definitions of neuroethics, this pragmatic definition seems particularly apt when considering a new technology such as AI (Fins, 2017). Technology has the potential to both reveal and remedy problems that heretofore were unappreciated. Without neuroimaging, one could only speculate about the prospect of CMD or covert consciousness. The advent of new technology has revealed the phenomenon and obliged a normative response to individuals now identified as conscious, albeit without behavioral manifestations. Identification, however, is just the first step. The proper application of AI could help ameliorate the isolation of this population by restoring their voice and granting them access to a fuller vocabulary.
John Dewey, in Common Sense and Scientific Inquiry, wrote that “Inventions of new agencies and instruments create new ends; they create new consequences which stir men [and women] to form new purposes.” (Dewey, 2005). The use of AI to augment assistive technology for people with DoC should prompt moral reflection, given its potential to promote human flourishing. To achieve this laudable end, scientists and clinicians must be cognizant of the correlative risk of harm originating in prevailing disability bias. Rather than perpetuating that bias, AI should be used to foster capabilities (Nussbaum, 2011; Sen, 1985) in a manner that acknowledges and encourages the unique contributions of people with disabilities (Fins, Shulman, Wright, & Shapiro, 2024; Shapiro et al., 2022).
An important article in the newly launched New England Journal of Medicine AI notes that the risk of bias will be exacerbated if populations that have historically been distrustful of the medical establishment opt out of inclusion in AI databases (Goldberg et al., 2024). To prevent this marginalization, there must be a concerted effort to earn the trust of the disability community. In this way, their perspectives can help inform the databases upon which the legitimacy of generative speech will depend.
Will AI enhance the ability of people with DoC to communicate, develop relationships, work, and be reintegrated into the nexus of family and community? Or will it further estrange individuals from the goods to which we are all entitled? When AI meets CMD, it has the potential to aid or distort the voice of people with DoC, integrating or marginalizing an already vulnerable population. In a future where AI contributes to the care of individuals with DoC, we would hope that the aforementioned concerns be part of the discussion. They must not be an afterthought to the march of progress but instead viewed as integral to the assessment of generative speech. Developing safeguards to protect the authenticity of the individual's voice will require input from a wide range of stakeholders including scientists, practitioners, patients, and their families. The process must also be inclusive and sensitive to bias, privacy, and the social acceptability of all end users. As Meredith Ringel Morris importantly notes, this analysis must be “… particularly nuanced and salient when considering the large potential benefits and large potential risks of AI systems for people with disabilities.” (Morris, 2020). This is not the purview of science alone but rather an important point of civic deliberation.
Ultimately, whether AI perpetuates discrimination or facilitates communication is a choice we must collectively make now.
Acknowledgments
Dr. Fins acknowledges the support of: “Cognitive Restoration: Neuroethics and Disability Rights” [1RF1MH12378-01]; “Post-trial Access, Clinical Care, Psychosocial Support, and Scientific Progress in Experimental Deep Brain Stimulation Research” [R01MH133657]; and Martin A. Fischer.
Corresponding author: Joseph J. Fins, Division of Medical Ethics, Weill Cornell Medical College, New York, United States, or via e-mail: [email protected].
Author Contributions
Joseph J. Fins: Conceptualization; Funding acquisition; Investigation; Methodology; Project administration; Supervision; Writing—Original draft; Writing—Review & editing. Kaiulani S. Shulman: Writing—Review & editing.
Funding Information
Funding was provided to Joseph J. Fins by “Post-trial Access, Clinical Care, Psychosocial Support, and Scientific Progress in Experimental Deep Brain Stimulation Research,” grant number R01MH133657, and “Cognitive Restoration: Neuroethics and Disability Rights,” grant number 1RF1MH12378-01.
Diversity in Citation Practices
Retrospective analysis of the citations in every article published in this journal from 2010 to 2021 reveals a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .407, W(oman)/M = .32, M/W = .115, and W/W = .159, the comparable proportions for the articles that these authorship teams cited were M/M = .549, W/M = .257, M/W = .109, and W/W = .085 (Postle and Fulvio, JoCN, 34:1, pp. 1–3). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance.
Note
Permission to use family names and other identifiable information was obtained through a formal consent process for narrative interviews approved by the Weill Cornell Medical College Institutional Review Board for the writing of Rights Come to Mind. The authors also acknowledge the subsequent permission of Tammy Baze and her family to write about the end-of-life care provided to her brother Terry Wallis.