On the one hand, complexity science and enactive and embodied cognitive science approaches emphasize that people, as complex adaptive systems, are ambiguous, indeterminable, and inherently unpredictable. On the other, Machine Learning (ML) systems that claim to predict human behaviour are becoming ubiquitous in all spheres of social life. I contend that ubiquitous Artificial Intelligence (AI) and ML systems are close descendants of the Cartesian and Newtonian worldview insofar as they are tools that fundamentally sort, categorize, and classify the world, and forecast the future. Through the practice of clustering, sorting, and predicting human behaviour and action, these systems impose order, equilibrium, and stability on the active, fluid, messy, and unpredictable nature of human behaviour and the social world at large. Grounded in complexity science and enactive and embodied cognitive science approaches, this article emphasizes why people, embedded in social systems, are indeterminable and unpredictable. When ML systems “pick up” patterns and clusters, this often amounts to identifying historically and socially held norms, conventions, and stereotypes. Machine prediction of social behaviour, I argue, is not only erroneous but also presents real harm to those at the margins of society.
Post-Cartesian frameworks, including developments within the embodied and enactive cognitive sciences, complex systems science, and dialogical approaches to cognition, strongly emphasize the inherently indeterminable nature of the person and the inextricably entangled relationship between person, other, and technology. These traditions have challenged Cartesian ambitions that neatly delineate human behaviour and actions into dichotomies, instead emphasizing ambiguities, continuity, and fluidity. The person exists in a reciprocal relationship with others in a social, cultural, and increasingly digitized and automated milieu. People, far from being static Cartesian selves, are active, dynamic, and continually moving. The interactive turn (e.g., De Jaegher et al., 2010), for instance, has played a crucial role in shifting emphasis from the view of the individual as a relatively stable and fully autonomous entity that can be fully understood, to the view of the person as active and dynamic, pregnant with a myriad of open-ended possibilities. On a similar note, distributed cognition and extended mind (Clark & Chalmers, 1998) frameworks have challenged the idea that cognition ends at the skull and that the skin marks the contours of the self, blurring the traditionally held neat understanding of cognition and self. There is no clear line demarcating where the mind ends and the world begins. These nuanced approaches recognize that uncertainty, ambiguity, and fluidity, not static dichotomies, exemplify human beings and their interactions. We are fully embedded and enmeshed with our designed surroundings and we critically depend on this embeddedness to sustain ourselves. Furthermore, our historical paths and the moral and political values that we are embedded in constitute crucial components that contribute to who we are.
The idea of defining the person once and for all, drawing static classifications, and making accurate predictions thus appears a seemingly futile endeavour. In complexity science terms, human beings and their behaviour are complex adaptive phenomena whose precise pathway is simply unpredictable (Juarrero, 2000).
Automation, on the one hand, is achieved once a given process is complete: fully understood and discrete, such that it can be implemented reliably from a set beginning to a set finish. People and social systems, on the other hand, are partially open, always becoming, and inherently unfinalizable (Bakhtin, 1984). Automation as complete understanding, therefore, stands at odds with human behaviour, which is inherently incomplete, making machine classification and prediction futile. Given the open and incomplete nature of human beings and social systems, automating sensible (as opposed to nonsensical and random) ambiguity and indeterminability is ill-conceived. A machine capable of grasping humanity is, by definition, capable of grasping open-endedness, incompleteness, fluidity, and ambiguity. Alas, this becomes something other than machine or automation as we know it.
Cartesianism, in nuanced forms, remains pervasive in various fields of enquiry from the physical to the human sciences, and computation and AI are no exception (Dreyfus, 2007; Weizenbaum, 1976; Winograd & Flores, 1986). Nonetheless, not all computation and AI is Cartesian. Grounded in a dynamically interactive, embodied, distributed, and fluid understanding of the world, various approaches address questions of AI in a manner that goes beyond Cartesianism. The broadly construed field of Artificial Life (ALife), for example, is concerned with generating artificial systems—via computer simulations, robotic agents, or biochemical processes—that behave like living organisms (Langton, 1997). Within ALife-inspired approaches, technology is conceived of as dynamic, interactive, and embedded in social systems. Through a proposal for dynamic interactive artificial intelligence, Dotov and Froese (2020), for example, call for systems that emphasize user–machine inter-dependence over autonomy. In recognition of the adaptive and self-organizing nature of technology, the term living technologies has gained momentum within the field of ALife (Aguilar et al., 2014; Bedau et al., 2010; Gershenson, 2013). And as technology becomes more “living”, the question of safety also becomes more central (Gershenson, 2013). According to Aguilar et al. (2014), the morphing of technology into society brings both benefits and challenges. This, in turn, the authors argue, calls for the establishment of ethical principles for artificial life. Similarly, Bedau et al. (2010) have argued that the creation of living technologies requires the consideration of ethical issues, the development of safeguards, as well as “proper mechanisms to prevent its misuse” (p. 95). Echoing similar sentiments, Helbing et al. (2012) have pointed out the risk of such technologies benefiting only a few stakeholders instead of all humanity.
Although ALife puts dynamicity, embeddedness, and reciprocal and interactive technology–society relationships at its core, the social, moral, ethical, and political concerns of technology remain barely explored. Although ethics and safety concerns are gaining more attention, they largely remain peripheral, and studies rarely treat these subjects in depth and in their own right. This article builds on the fluid technology–society relationship underlying ALife, but focuses on the impossibility of predicting human behaviour. Furthermore, it examines the ethical consequences of attempting such predictions, as well as the concrete impact on specific populations.
Machine learning (ML) systems increasingly pervade the social, political, legal, and commercial spheres, sorting, classifying, and predicting human behaviour. Networked and ubiquitous AI systems such as the Internet of Things (IoT) and smart technologies that pervade day-to-day life reduce every corner of lived experience to behavioural data to be used as input to such systems (Zuboff, 2019). Patterns discerned from this huge volume of data by ML systems are used to infer and predict human behaviour and actions. The practice of sorting, classifying, and predicting using ML tools is often applauded as a beacon of technological progress and a revolutionary marvel that provides answers to long-standing problems. In a world marked by complexity, change, and uncertainty, shortcuts and simple answers are often championed (Birhane, 2021). Analytics companies boast of their ability to provide insight into the human psyche and predict human behaviour (e.g., Qualtrics [https://www.qualtrics.com/uk/]). Some even go so far as to claim to have built AI systems that are able to map and predict “human states” based on speech analysis, images of faces, and other behavioural data (e.g., Affectiva [https://www.affectiva.com/]). Such practices of sorting, organizing, and forecasting the world necessarily have actionable impact with grave consequences. ML systems are not only an academic research endeavour, but a multi-billion dollar business where these tools are deployed into the real world in high-stakes decision-making, including in hiring (Ajunwa et al., 2016; Sánchez-Monedero et al., 2020); medicine (Ferryman & Pitcan, 2018; Obermeyer et al., 2019); and criminal justice systems (Angwin et al., 2016; Lum & Isaac, 2016).
In this article, I place machine categorization and predictions within the broader and historical Western science and philosophy that aspires to pin down, taxonomize, and simplify the complex and interconnected world. Although ML and AI1 deal explicitly in probabilities and risks rather than in Newtonian determinacies, I contend that machine categorization and prediction of social outcomes limit possibilities and create a world partially determined by prediction itself. The social world is messy and fluctuating but also inundated with persistent social norms, power asymmetries, and historical injustice. Historical norms and traditions are often unkind and unjust to individuals and groups at the margins of society, and accordingly, attempts to find stable patterns with which to sort and categorize the social world pick up these deeply ingrained norms and injustices. Far from being static, social realities are continually co-constructed, and the integration of ML systems into day-to-day life increasingly plays a crucial role in influencing the kind of social reality that exists. In Barad's words, “Reality is sedimented out of the process of making the world intelligible through certain practices and not others. Therefore, we are not only responsible for the knowledge that we seek but, in part, for what exists” (Barad, 1998, p. 105). As systems that interact with and are inextricably linked to the social sphere, ML systems partly create social orders. However, while recognizing ML systems as practices that alter the social world, it is also important to acknowledge that the responsibility and opportunity to create social orders are unequally distributed. Social, economic, and other privileges mean that a small homogeneous group is entrusted with the creation of ML systems, and thus in part with what exists, contributing to the maintenance of the status quo, while the least privileged are forced to live in the realities that the few create, oftentimes subject to machine harm and injustice.
The rest of the article is organized as follows. Section 2 outlines the underlying Cartesian tendencies of current ML systems that strive for stability, order, and predictability. In section 3, I argue that machine classification and prediction impose determinability and limit possibilities. This is followed by section 4, where I illustrate the fluid, ambiguous, and non-determinable nature of people and social systems. I review current research that illustrates how prediction is a self-fulfilling prophecy in section 5. Next, I look at how machine-imposed determinability, opportunity, and harm are distributed disproportionately within society in section 6, and I examine how the very practice of sorting and predicting is inherently political in section 7. Section 8 takes a brief look at creativity, which stands outside determinability, as a potential transformative force towards a just world, and I close in section 9.
2 Cartesian and Newtonian Inheritances
Traditional science in the Age of the Machine tended to emphasize stability, order, uniformity, and equilibrium…[whereas] most of reality, instead of being orderly, stable, and equilibrial, is seething and bubbling with change, disorder, and process. (Prigogine & Stengers, 1984, pp. xiv & xv)
Certainty and order have always been highly sought after in Western philosophy and sciences. Through the process of eliminating all things that can be doubted, Descartes attempted to rid himself of unreliable and fallible human intuitions, senses, and emotions. This was fundamental in the quest to establish a secure foundation for absolute knowledge based solely on solid grounds: reason and rational thought (Descartes, 1984). Central to Descartes' work was uncovering the permanent structures beneath the changeable and fluctuating phenomena of nature, on which he could build an edifice of unshakable knowledge. The view of the person that emerged from such a worldview was a primarily rational, static, self-contained, and self-sufficient subject that contemplates the external world from afar in a “purely cognitive” manner as a disembodied and disinterested observer (Gardiner, 1998, p. 129). In the desire to establish timeless and absolute certainty, cognitive capabilities and mental processes were privileged as of primary importance to what it means to be a person. Complete understanding, control, order, manipulation, formalization, and prediction find a comfortable home in this worldview. Although few, if any, scholars identify with the Cartesian view as originally proposed by Descartes, this worldview still prevails today in subtle forms.
In a similar vein, and with a similar fundamental influence as Cartesianism, the Newtonian worldview aspired to impose order and to arrive at universal and objective knowledge in a supposedly observer-free and deterministic world. This worldview sees the world as containing discrete, independent, and isolated atoms. Within the physical world, Newtonian mechanistic descriptions allowed precise predictions of systems at any particular moment in the future, given knowledge of the current position, speed, and acceleration of a system. This view fared poorly, however, when it came to the messy, interactive, fluid, and ambiguous world of the living, who are inherently context bound, socially embedded, and in continual flux. In a worldview that aspires for certainty and predictability, the very idea of ambiguity, complexity, and multivalence—the essence of being, so far as there can be any—is not tolerated. Despite the inadequacy of the billiard ball model of Newtonian science in approaching complex adaptive systems such as human affairs, its residue prevails today, directly or indirectly (Juarrero, 2000).
Descartes and Newton did not single-handedly carve out the lasting worldviews that have come to dominate much of Western thought. Nonetheless, they represent the quintessential figures that envisaged an objective, universal, and relatively static worldview governed by laws. This striving for a universal law, Daston argues (Gross, 2020), is a predicament that fails when confronted with unanticipated particulars, since no universal ever fits the particulars. Commenting on current ML practices, Daston explains:
I think machine learning presents an extreme case of a very human predicament, which is that the only way we can generalize is on the basis of past experience. And yet we know from history—and I know from my lifetime—that our deepest intuitions about all sorts of things, and in particular justice and injustice, can change dramatically. (Gross, 2020, para. 45)
ML systems embody the core values of the Cartesian and Newtonian worldviews, where historical, fluctuating, and interconnected behaviour is presumed to be formalized, clustered, and predicted in a value-free and neutral manner. The historic Bayesian framework of prediction is a primary example (Bayes, 1763). This framework has played a central role in establishing explanations of behaviour based on “rational principles alone” (Jones & Love, 2011, p. 169; see also Hahn, 2014). Bayes' approach, which is increasingly used in various areas including data science, machine learning, and cognitive science (Jones & Love, 2011; Seth, 2014), played a pivotal role in establishing the cultural privilege associated with statistical inference and the supposed “neutrality” of mathematical predictions. Bayes' essay, which was published after his death, included a note that Bayes' method of prediction “shows us, with distinctness and precision, in every case of any particular order or recurrency of events, what reason there is to think that such recurrency or order is derived from stable causes or regulations in nature, and not from any irregularities of chance” (Bayes, 1763, p. 374). However, despite the association of Bayes with rational predictions, Bayesian models are prone to spurious relationships and the amplification of socially held stereotypes, a point I expand on in sections 6 and 7. Horgan (2016, para. 30) notes, “Embedded in Bayes' theorem is a moral message: If you aren't scrupulous in seeking alternative explanations for your evidence, the evidence will just confirm what you already believe” [emphasis in original].
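Horgan's point can be made concrete with a minimal sketch of Bayes' rule (the numbers below are illustrative assumptions, not drawn from any of the sources cited): a modeller who discounts alternative explanations for the evidence finds that the same evidence overwhelmingly confirms the prior belief.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not-H)P(not-H)]."""
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

# A modeller who grants alternative explanations real weight is barely moved:
careful = posterior(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.6)    # ~0.57
# One who discounts alternatives finds the same evidence "decisive":
credulous = posterior(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.1)  # ~0.89
```

The gap between the two conclusions comes entirely from the term the modeller chooses for P(E|not-H), that is, from how scrupulously alternative explanations are entertained.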
3 Machine Imposed Determinability
Current AI and ML tools that are increasingly becoming an integral aspect of the social world are direct descendants of the Cartesian and Newtonian worldview insofar as they are tools that impose order and pin down the fluctuating nature of human behaviour through taxonomies, classifications, and predictions. These tools force determinability, limit possibilities, and in the process, create a world that resembles the past. Historical patterns and socially accepted norms are rife with histories of discrimination and injustice, and the implication of automating a future that resembles the past for those historically disadvantaged is dire. I discuss this in sections 6 and 7. Below, I examine how ML systems are tools that create a certain type of future through prediction.
Technological developments and their intimate connection to what it means to be a social, dynamic, embodied living being are not new but go as far back as the history of humankind itself, to prehistoric tools such as stone and spear. However, the current mass-scale development and deployment of AI and ML systems pose new and unprecedented challenges, with marginalized communities disproportionately bearing the negative impacts. Technological artifacts constitute a crucial part of the socio-technological milieu. They mediate and enrich our living world but also hold invisible and unprecedented power in shaping and altering reality. In other words, the design of technology is the design of possibilities and constraints (Suchman, 2007).
Technological tools constrain or enable actions while making day-to-day life seamless. A GPS application on a device, for example, can make travelling from point A to B considerably easier. In some circumstances, technological tools form crucial components that sustain lives—pacemakers, for example. Technological developments, especially ML systems, are not something that stand above and over humans but are integral parts of the active, fluid, and dynamic environment of complex, adaptive, self-organizing social systems. Through their power to classify and predict, ML systems direct behaviours and actions towards some things and away from others.
ML systems work by identifying patterns in vast amounts of data. Given immense, messy, and complex data, an ML system can sort, classify, and cluster similarities based on seemingly shared features. Feed a neural network labelled images of faces and it will learn to discern faces from not-faces. Not only do ML systems detect patterns and cluster similarities, they make predictions based on the observed patterns (O'Neil & Schutt, 2013; Véliz, 2020). Machine learning, at its core, is a tool that predicts. It reveals statistical correlations but with no understanding of causal mechanisms.
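This pattern-matching core can be sketched with a hypothetical nearest-centroid classifier (all data and names below are invented for illustration): labels are assigned purely by statistical proximity to previously seen examples, with no representation of why the features co-occur.

```python
import math

def nearest_centroid(train, labels, point):
    """Assign the label whose mean feature vector is closest to `point`.
    The decision rests entirely on statistical similarity, not causation."""
    centroids = {}
    for lab in set(labels):
        rows = [x for x, l in zip(train, labels) if l == lab]
        centroids[lab] = [sum(col) / len(rows) for col in zip(*rows)]
    return min(centroids, key=lambda lab: math.dist(point, centroids[lab]))

# Two clusters of 2-D points; a new point is labelled by proximity alone.
train = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
labels = ["A", "A", "B", "B"]
print(nearest_centroid(train, labels, (0.1, 0.3)))  # → A
```

Nothing in the procedure asks what the features mean or how they came to cluster together; the correlation is the whole model.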
Furthermore, machine classification and prediction are practices that act directly upon the world and result in tangible impact (McQuillan, 2018). Various companies, institutes, and governments use ML systems across a variety of areas. These systems process and datafy people's behaviours, actions, and the social world at large. Machine-detected patterns often provide answers to fuzzy, contingent, and open-ended questions. These “answers” neither reveal any causal relations nor provide explanations of why or how (Pasquale, 2015). Crucially, the more socially complex a problem is, the less capable ML systems are of “accurately”2 or reliably classifying or predicting it. Narayanan (2019) broadly maps the application of AI systems into three crude categories: Perception (e.g., face recognition), Automating Judgment (e.g., detecting spam), and Predicting Social Outcomes (e.g., predictive policing). There has been rapid progress in the first category and “the fundamental reason for progress is that there is no uncertainty or ambiguity in these tasks” (Narayanan, p. 7). Automating Judgment, such as toxic language detection, presents a somewhat contested practice because the correct decision can often be subject to disagreement. The task of predicting social outcomes, however, remains fundamentally dubious, involving “a lot of snake oil” (Narayanan, p. 9), and is marked by numerous drawbacks and harms. Similarly, in recent work, Salganik et al. (2020) examined the predictability of the social trajectories of children from vulnerable families. A team of 160 ML researchers built predictive models using a rich dataset. The authors found that not one model made accurate predictions and that the best predictions were only slightly better than those from a simple benchmark model. Thus, Salganik et al. caution those considering using predictive models to forecast social outcomes.
Nonetheless, predictive systems continue to pervade decision-making of social outcomes with disastrous consequences.
4 Indeterminability of the Person
Traditional cognitive science, so far as its desires for a universal, objective, and predictable science of the mind goes, is the heir to the Cartesian and Newtonian worldviews. The continually fluctuating and interconnected state of human affairs finds a stable point through the conjecture that positions the individual person as the seat of knowledge. As such, the individual person is often isolated and taken as the unit of analysis. Great emphasis is placed on her individual mental capabilities as the mind is assumed to be the property of the single individual (Linell, 2009; Marková, 2016). The nature of experimental design in scientific psychology, for example, illustrates the subtle remnant of the desire for cleansing thinking of cultural influences and political dimensions. Research in memory testing, for instance, to a large extent, proceeds from the assumption that memory is a purely cognitive process that resides in the brain (Harris et al., 2011). The individual is removed from her lifeworld and tasked with recalling a series of images or words (often meaningless to the person) using flashcards or a screen in the artificial confines of a laboratory. Subsumed by objective and universalizable formulations, cognitivist approaches paint a picture of the person that equates persons with brains. Emphasis on dynamic relations, contextual and historical embeddings, and messy interactions, on the other hand, are perceived as a threat that blurs and contaminates neat classifications and universalizable conceptions.
Individualistic and reductionist approaches are irrevocably ingrained in Western thought. It is a continual struggle, even for the most aware researcher and practitioner, to steer clear of them. Taking the individual self as the unquestioned origin of knowledge of the world and of others is a legacy of this tradition (Linell, 2009). Traditional social cognition research, supposedly an endeavour that turns attention to the social, falls short of recognizing the dynamic and entangled nature of bodies and environments, and how each influences the other. The individual person, in social cognition, is portrayed as the meaning generator and is the primary interest of study (Marková, 2016). Pushing back against these individualistic traditions, the broadly conceived approaches of embodied and enactive cognitive science offer views of persons, brains, and nature of reality in general that are active, dynamic, and inextricably connected with environments (and others).
Fluidity, multivalence, and precariousness are not perceived as obstacles that stand in the way of final and universal understanding, but are acknowledged and celebrated as necessary conditions for existence. The embodied and enactive turn (Chemero, 2011; Kyselo, 2014; McGann & De Jaegher, 2009; Varela et al., 2016), at its core, places living bodies, with their peculiarities, fluidity, and messiness, at centre stage. Living bodies are not stationary entities that can be captured in neat taxonomies, rather they are active, dynamic, historical, social, cultural, gendered, politicized, and contextualized organisms. People are not solo cognizers that manipulate symbols in their heads and perceive their environment in a passive way, but they actively engage with the world around them in a meaningful and unpredictable way. Living bodies, according to Di Paolo et al. (2018), are processes, practices, and networks of relations which have “more in common with hurricanes than with statues” (p. 7). They are unfinished and always becoming, marked by “innumerable relational possibilities, potentialities and virtualities” (p. 6) and not calculable entities whose behaviour can neatly be automated and predicted in a precise way. Bodies “grow, develop, and die in ongoing attunement to their circumstances.… Human bodies are path-dependent, plastic, nonergodic, in short, historical. There is no true averaging of them” (Di Paolo et al., 2018, p. 97).
Universalizable theories of bodies, taxonomies, and statistical predictions of future behaviours all rely on similarities and abstraction of features that are common among particulars. Unique, contingent, and idiosyncratic features and behaviours pose challenges when it comes to deriving elegant taxonomies. However, idiosyncrasies and peculiarities make someone the particular, novel, and creative person they are. Living bodies each face unique challenges defined by the particular trajectory of history of enactments, history of adaptations, and social circumstantial interactions as they continually navigate the social world. Social interactions themselves, De Jaegher and Di Paolo (2007) contend, are active and dynamic engagements that take on a life of their own in an unpredictable way. They shift in moods, aims, and levels of intimacy, without the participants intentionally seeking these changes. Most fundamentally, “Our most sophisticated knowing is full of uncertainty, inconsistencies, ambiguity, and contradictions [emphasis added]. These characterize how we most often deal with the world, ourselves, and each other” (De Jaegher, 2019, sec. 1, para. 4). Furthermore, Buccella (2020) singles out indeterminacy as a key factor that is important in understanding human perceptual experience. Perception is necessarily open-ended and the environment presents unlimited possibilities and offers many ways of life (Merleau-Ponty, 1945/2012; Nonaka, 2020).
On a similar note, examining the problem of meaning in artificial beings, Froese and Taguchi (2019) single out indeterminability as the key characteristic of humans that differentiates them from artificial beings. Emphasizing the futility of reductionist approaches to complex adaptive systems, Cilliers (posthumously noted by Preiser, 2016, p. 64) further points out that, “From the argument for the conservation of complexity—the claim that complexity cannot be compressed—it follows that a proper model of a complex system would have to be as complex as the system itself.” Precise3 predictions of behaviours and actions, therefore, are impossible and, when enforced, dire ethical consequences emerge. This raises the question of whether people and social systems are unpredictable in principle, or whether unpredictability is only a practical limitation, a matter of ever more data and compute power. Following Cilliers' argument for the “conservation of complexity,” I contend that people and social systems are unpredictable in principle. That said, this is not to dismiss those who might aspire towards and attempt “accurate” predictions. The argument remains, however, that so long as classification and predictive systems operate within a white straight ontology (Ahmed, 2007), precise and accurate prediction risks measuring how closely behaviours or actions adhere to socially and historically held stereotypes.
Indeterminability and unpredictability do not, by any means, mean that people and social systems wander aimlessly without pattern, habit, or relatively stable behaviour. For any given society there exist socially and culturally accepted norms and historically established conventions. People self-organize within these dynamic and contextual constraints, which serve to reduce possibilities (Juarrero, 2000). However, these relative stabilities and habitual patterns do not mean an individual person can be rendered fully knowable and predictable with precision. Any prediction of future behaviour based on past patterns is at best a statistical probability. We may, therefore, be able to predict a person's general dynamics, under certain conditions, within a certain context and time, but precise prediction of a person's specific behaviour and action, due to nonlinear interactions and endless possibilities, is impossible. Moreover, as discussed in section 7, relatively stable patterns and established conventions and norms are charged with social, political, and power asymmetries that benefit or disadvantage groups and individuals depending on one's position in society. When ML systems “pick up” stable patterns, they also identify harmful current and historical norms, prejudices, and injustices. Taking such historical and current patterns as the ground truth from which to model the future brings forth a machine-determined world that resembles the past and raises a host of ethical and justice issues.
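The distinction between predicting general dynamics and predicting a specific trajectory can be illustrated with the logistic map, a standard toy model from complexity science (the parameters below are illustrative): two starting points differing by one part in ten billion share the same overall dynamics, yet their trajectories soon bear no relation to one another.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, which is chaotic at r = 4."""
    return r * x * (1 - x)

def trajectory(x0, steps):
    """Iterate the map from x0, keeping the whole path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)  # an imperceptibly different start
# Early on the trajectories are indistinguishable; later they diverge completely.
print(abs(a[1] - b[1]), abs(a[-1] - b[-1]))
```

The general dynamics (the map itself, the long-run range of values) are stable and knowable; the precise pathway of any particular run, given unavoidable uncertainty in the starting point, is not.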
5 Prediction, a Self-fulfilling Prophecy
In the age of ubiquitous and interconnected systems, it has become outdated to conceive of the digital and the physical as separable realities. Outcomes from ML systems are used to justify action in the social world. Who we are is not signified by our bodies alone but also by algorithmic identifications, often assigned to us without our knowledge or consent. Marketing and web analytics companies that gather (and/or purchase) huge amounts of data construct our algorithmic identities from quantifiable attributes that emerge from various input data, observed patterns, and algorithmic inferences and extrapolations. Traces of data and metadata, including behavioural data such as statistics from visited websites, online purchase history, device location, and data from cameras, sensors, and IoT devices that proliferate in public and private spaces, all contribute to the construction of the “person” in the digital realm. Algorithmic identifications that are assigned to an individual or a group carry tangible implications, as such identifications increasingly play a central role in determining the outcomes of various aspects of an individual's endeavours.
As the hiring process becomes more and more automated, for example, algorithmic hiring tools become highly consequential in determining key aspects of a person's life, such as how much one earns and where one works and lives. Like the automation of any social process, automating the selection of best candidates is prone to automating and reproducing historically and socially held inequities and stereotypes, all while providing a veneer of objectivity (Raghavan et al., 2020). Depending on how appropriate or fit candidates are deemed, job opportunities are automatically surfaced to some and withheld from others. More accurately, automated hiring systems serve as tools to reject applicants who do not fit in certain boxes (Ajunwa & Greene, 2019). And given that best, appropriate, or fit applicants are often defined and measured by past success, candidates that do not fit within that box risk exclusion. A case in point is the hiring algorithm that Amazon deployed and disbanded in 2018 upon discovering that the tool had been discriminating against women (Dastin, 2018). Amazon's hiring tool was trained to identify the best candidates based on observed patterns in résumés submitted to the company over a 10-year period. Given the male dominance of the tech industry, most résumés came from men. Automating such patterns is then, by definition, a process of automating historical inequities. Predictions based on past hiring decisions reproduce patterns of inequity even when tools explicitly ignore race, gender, age, and other protected attributes (Bogen & Rieke, 2018).
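The mechanism can be sketched with a toy scoring model (the data and feature names below are invented for illustration; this is not a reconstruction of Amazon's system). Even though gender is never an input, a model fit to biased past decisions penalizes features that merely correlate with it:

```python
from collections import defaultdict

def feature_hire_rates(resumes, hired):
    """Learn each feature's empirical hire rate from past decisions."""
    count, hits = defaultdict(int), defaultdict(int)
    for features, outcome in zip(resumes, hired):
        for f in features:
            count[f] += 1
            hits[f] += outcome
    return {f: hits[f] / count[f] for f in count}

def score(rates, features):
    """Score a candidate as the mean learned rate of their features."""
    return sum(rates.get(f, 0.5) for f in features) / len(features)

# Hypothetical biased history: résumés mentioning a women's club were rejected.
history = [({"cs_degree", "womens_chess_club"}, 0),
           ({"cs_degree", "womens_chess_club"}, 0),
           ({"cs_degree"}, 1),
           ({"cs_degree"}, 1)]
rates = feature_hire_rates([r for r, _ in history], [h for _, h in history])

# Two equally qualified new candidates; gender was never an input feature.
print(score(rates, {"cs_degree"}))                       # 0.5
print(score(rates, {"cs_degree", "womens_chess_club"}))  # 0.25
```

The proxy feature carries the historical bias into the present, which is the general shape of the problem Bogen and Rieke (2018) describe: removing the protected attribute does not remove its correlates.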
ML tools are not simply methods that sort and classify people and the social world; they are also apparatuses that act directly upon the world, transforming social realities and producing certain subjectivities (and not others) (McQuillan, 2018). For instance, faced with an automated assessment system, a job seeker is likely to alter her behaviour in a manner that guarantees a positive outcome; the awareness that one's social media posts may affect one's perceived character or fitness for a job can itself alter one's actions and behaviour.
Algorithmic classifications, sortings, and predictions, when enacted in the world, create a social order. For any individual person, group, or situation, algorithmic processes give advantage or they inflict suffering. Jobs are made and lost (Ajunwa et al., 2016; Sánchez-Monedero et al., 2020). Who is visible and legible is legitimized through algorithmic predictions as some subjectivities (and not others) are recognized as a pedestrian (Wilson et al., 2019), or hire-able (Ajunwa et al., 2016; Speicher et al., 2018), or in need of medical care (Obermeyer et al., 2019), or likely to engage in criminal acts (Angwin et al., 2016; Lum & Isaac, 2016).
Furthermore, the very practice of forecasting the future acts, in part, directly upon the world: machine prediction plays a role in creating what exists whenever such predictions inform decision-making. In a recent paper, Perdomo et al. (2020) illustrate that a prediction often influences the very outcome it is trying to predict. They refer to this practice as “performative prediction”: “Traffic predictions influence traffic patterns, crime location prediction influences police allocations that may deter crime, recommendations shape preferences and thus consumption, stock price prediction determines trading activity and hence prices” (p. 7599). Similarly, Benjamin (2019) argues that “crime prediction algorithms should more accurately be called crime production algorithms” (p. 83) because predictive policing software predominantly targets historically underserved communities, where hyper-surveillance partly produces crime, creating a self-fulfilling prophecy.
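The feedback dynamic can be shown with a toy simulation written in the spirit of (but not taken from) Perdomo et al. (2020) and Lum and Isaac (2016); the allocation rule, the rates, and the superlinear exponent are illustrative assumptions only:

```python
# Two districts share the SAME true incident rate. Patrols are allocated
# according to predicted (i.e., previously recorded) incidents, and only
# patrolled incidents get recorded, so the prediction feeds itself.

TRUE_RATE = 10.0   # identical underlying incidents per period
DISCOVERY = 0.5    # fraction of incidents recorded per unit of patrol
EXPONENT = 1.5     # mildly superlinear allocation (illustrative assumption)

records = {"A": 6.0, "B": 5.0}   # district A starts with a small head start

for _ in range(20):
    weights = {d: records[d] ** EXPONENT for d in records}
    total = sum(weights.values())
    patrols = {d: 2.0 * weights[d] / total for d in records}  # 2 units total
    records = {d: TRUE_RATE * DISCOVERY * patrols[d] for d in records}

# Despite identical true rates, nearly all recorded "crime" ends up in A:
# the prediction has produced the evidence that justifies it.
print(records)
```

The model's forecast is "confirmed" each round, yet the confirmation is an artefact of where attention was directed, not of any underlying difference between the districts.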
In another recent study, Milano et al. (2020) examined ubiquitous recommender systems and found that they shape user preferences and guide choices at both the individual and social levels. The authors contend that attempts to predict preferences have socially transformative effects and impinge on personal autonomy. Furthermore, machine classification and prediction is a multi-billion dollar business, meaning that business objectives play an active role in the direction in which social outcomes and behaviours are moulded. Through “nudging,” persuasion, and limiting the range of options available to individuals, recommender systems steer people in particular directions, often in a way that maximizes profit. Recommender systems frequently have commercial objectives and are developed for business applications. Consequently, by predicting preferences, recommender systems not only shape individual experience and social interactions, they also have a transformative impact on society in a manner that aligns with commercial values (Milano et al., 2020).
6 Imposed Determinability in Unequal Measures
Reality is sedimented out of the process of making the world intelligible through certain practices and not others. Therefore, we are not only responsible for the knowledge that we seek but, in part, for what exists. (Barad, 1998, p. 105)
On the one hand, machine–human relations and interactions constitute a systems-level organization. On the other, it is vital to acknowledge the power asymmetries within this systems-level organization. Influence does not flow equally in both directions, and benefits, disadvantages, and negative impacts are distributed unequally among individuals and communities. In the case of recommender systems, for example, the commercial agents behind the development and deployment of these systems exert power in moulding future realities. The individual person, or end user, has little to no direct influence.
In fact, when ML is used to rank, sort, score, and predict social outcomes, what Narayanan (2019, p. 9) calls “AI snake oil,” those being ranked and scored are rarely aware of it, let alone of why they are given certain scores. This makes it difficult to contest and negotiate algorithmically assigned identities and scores. Furthermore, the very practice of scoring, characterizing, and assigning algorithmic identities without people's awareness risks treating people like objects. As Maturana (2004, p. 108) remarks,
If you deprive people of the opportunity [to contest and protest against their characterization], you treat them like freely disposable objects; they have the status of slaves, compelled to function without the opportunity of complaining when they do not like what is happening to them.
Predictive models, because they rely on historical data, are inherently conservative. They reproduce and reinforce the norms, practices, and traditions of the past. Historical norms and traditions are often unkind and unjust to individuals and groups at the margins of society. Decisions made in the past align with the maintenance of the status quo. The practice of constructing predictive models based on the past and deploying them directly for decision-making amounts to constructing a programmed vision of the future based on an unjust and socially conservative past. Through the application of predictive systems in the social sphere, historically and socially unjust norms, stereotypes, and practices are reinforced. A robust body of research on algorithmic injustice (Benjamin, 2019; Birhane, 2021; Eubanks, 2018) shows that predictive systems perpetuate societal and historical injustice. In a landmark study, Buolamwini and Gebru (2018) evaluated gender classification systems used by commercial industries. They found huge disparities in image classification accuracy: lighter-skinned males were classified with the highest accuracy while darker-skinned females were the most misclassified group. Similarly, object detection systems designed to detect pedestrians display higher error rates when identifying dark-skinned pedestrians, while light-skinned pedestrians are identified with higher precision (Wilson et al., 2019). The use of these systems ties the recognition of subjectivity to skin tone. Recidivism algorithms unfairly score black defendants as higher risk than white defendants with similar criminal histories (Angwin et al., 2016). Hiring tools tend to disproportionately disadvantage women (Ajunwa et al., 2016). Additionally, the notion of gender that ML systems depend on is a fundamentally essentialist one that operationalizes gender in a trans-exclusive way, resulting in disproportionate harm to trans people (Barlas et al., 2021; Hamidi et al., 2018; Keyes, 2018).
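The importance of disaggregated evaluation can be made concrete with a minimal sketch; the numbers below are invented for illustration and are not Buolamwini and Gebru's actual figures:

```python
# (group, correctly_classified) outcomes for a hypothetical classifier
outcomes = (
    [("lighter_male", True)] * 99 + [("lighter_male", False)] * 1
    + [("darker_female", True)] * 65 + [("darker_female", False)] * 35
)

def accuracy(rows):
    return sum(correct for _, correct in rows) / len(rows)

overall = accuracy(outcomes)            # looks respectable in aggregate
by_group = {
    group: accuracy([r for r in outcomes if r[0] == group])
    for group in ("lighter_male", "darker_female")
}
print(overall, by_group)
```

In this invented example an aggregate accuracy of 82% conceals a 34-point gap between the two groups, which is why audits of deployed classifiers report error rates disaggregated by subgroup rather than a single headline number.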
Machine classification and prediction, thus, negatively impact individuals and groups at the margins the most.
7 Sorting and Predicting Are Moral and Political
A central tenet of the linear, stable, and predictable Cartesian–Newtonian worldview is the idea of objectivity: the assumption that observation, description, and classification of the world can be done from a “View from Nowhere” (Nagel, 1989). Heinz von Foerster famously decried, “Objectivity is a subject's delusion that observing can be done without him [sic]. Invoking objectivity is abrogating responsibility—hence its popularity” (Glasersfeld, 1992, p. 3). The practice of categorizing, ordering, and forecasting a future necessarily entails making moral and ethical choices about what is “correct” from a given point of view. Through the very act of clustering similarities, boundaries are drawn around which behaviours and actions are good or acceptable and which are not. Furthermore, through their performative powers, predictive systems cast certain ways of being as “normal” while others are deemed “deviant” and in need of correction. Practices that were previously understood to be moral and political, and that historically required a great deal of dialogue and negotiation, are obfuscated as apolitical endeavours with the advent of machine classification and prediction. Moreover, the veneer of objectivity with which ML is entrusted makes it even harder to see machine classification and prediction as anything but a technical and mathematical task.
Categorization by human beings itself arises within contexts and goal-directed activities. Human categories, rather than carving nature at its joints, are developed on the fly to serve goal-directed actions. Categories, therefore, are dynamic, unstable, contextualized, and inherently embedded in ongoing activities (Barsalou, 1991, 2009). Machine categorization, likewise, can only be created within the context of a broader goal; it is never an austere, abstract, and purely mechanical process. Creating categories and drawing boundaries is not primarily a technical choice or a purely scientific question but necessarily an ethical and moral one, especially when such practice has a direct and tangible impact on vulnerable lives. Acknowledging this is a crucial step in taking responsibility for what exists. Having said that, responsibility needs to be selectively attributed. Benjamin (2019), in Race After Technology, notes that
it might be tempting to point to the smart technologies we carry around in our pockets and exclaim that “we are all caught inside the digital dragnet!” But the fact is, we do not all experience the danger of exposure in equal measure. (Benjamin, 2019, p. 111)
Although, as Barad (1998, 2007) argued, reality is something we create together through active practices, only a few have a genuine say in what kind of world is co-created while many others are forced to live in it. Existing social, political, and financial power dynamics mean that those at the bottom of societal hierarchies have little say, if any at all, in the co-creation of realities.
It is impossible to operate in a value-free space. The type of concerns, questions, and designs all reflect the motivations, commitments, and interests of those at the helm of creating ML systems. When values are not explicitly laid out, the values taken as “universal” or “neutral” are those that represent the status quo and that are implicit within a given field (Ahmed, 2007; Collins, 2002). Within both the academic fields and the corporate industry currently developing ML systems en masse, the values taken as “universal” are predominantly the values and interests of Western white men. Computer science (as well as its subfield, machine learning), since its conception as an academic field in the 1950s in the US, has always been a field that strove for impactful application within the military, education, and the general social sphere. The field has since come to exert unprecedented social, political, and economic power. Within major technology corporations, from Microsoft to Amazon, Western white men with homogeneous backgrounds and conservative leanings (Cohen, 2018) remain the predominant powerful figures who influence and redefine social realities (Broussard, 2018). What we find, then, is a huge power disparity between powerful corporations (and the individuals behind them) and end users whose agency, opportunities, and options are limited in the process of algorithmic classification and prediction. This process amounts to financial and personal gain for the former at the expense of the latter. In fact, as Zuboff (2019) argues, the technology industry is built on the capitalization and monetization of lived experience and on building tools of surveillance.
Given this massive power disparity, those engaged in the practices of designing, developing, and deploying ML systems (effectively shoehorning individual people and their behaviours into predefined stereotypical categories) carry a great proportion of the responsibility for creating what exists. This demographic decides what questions are worthy of investigation, what problems need to be “solved”, and what counts as sufficient performance for a model to be deployed into the world. Consequently, this group bears much greater responsibility and accountability.
Scientific enquiry carries inherent ethical and moral dimensions. The more a topic of enquiry veers towards human and social affairs, the more apparent its moral and ethical dimensions become. The apparent dissociation of science from ethics has historically allowed science to evade accountability and responsibility, and algorithmic systems will do the same if allowed to. The ubiquitous deployment of ML models in high-stakes situations creates a political and economic world that benefits the most privileged and harms the vulnerable. It also creates a social world where the status quo is maintained and historical injustice perpetuated. For most scholars working at the intersections of algorithmic injustice and science and technology studies, it has become common knowledge that saying an ML model “works well” often equates to saying it “picks up historical patterns.” The social world as it is, is filled with beauty, ugliness, and cruelty. And as Benjamin (2019) notes, to think that one can feed a model all the world's beauty, ugliness, and cruelty and expect only beauty is a fantasy.
Within the context of complex systems thinking, Artificial Life, and similar fields of enquiry primarily concerned with the creation and/or simulation of intelligent systems, the notion of ethics often revolves around the moral status of the “intelligent system.” Should the supposedly intelligent system have moral or legal rights on a par with a human being? Does the experience of pain or the capacity for a “theory of mind” differ depending on whether the entity is carbon- or silicon-based? Does working towards ethical systems come down to Isaac Asimov's laws of robotics? How do we prepare humanity for the Singularity? These concerns, for the most part, focus on hypothetical and/or future “First World Problems” (Birhane & van Dijk, 2020). Such quests might be valid intellectual exercises in and of themselves, but in light of the mass integration of ML systems into society and the harms they impose on vulnerable individuals and communities, I argue that attention regarding machine ethics should primarily focus on current and tangible concerns.
8 On Creativity
Human creativity is marked by imagination and by thinking of things that were not thought of before. Creative innovations that have come to define and revolutionize the world, from music to medicine, are often marked by surprise, spontaneity, and uncertainty. Creativity, Juarrero (2000) reminds us, stands in stark opposition to certainty and predictability. It requires unexpected and spontaneous behaviour rather than the repetition of past patterns and trajectories. Creativity, by definition, defies expectation.4
As ML systems attempt to order the spontaneous and non-determinable social world, they create a future that resembles the past, leaving us no room to be different. Such classifications and predictions reinforce stereotypes and impinge on the inherent open-endedness of being, limiting a person's potential by defining them by what people like them have done or liked, or how people like them have behaved in the past. When future behaviours are predicted based on past stereotypes, individuals are deprived of the opportunity to challenge stereotypes and to realize their full potential.
In the age of ML, where accurate categorization and precise prediction are highly valued, unexpected and spontaneous behaviour poses a challenge and is seen as a deviation to be corrected, not as an inherent, indeterminable part of human beings that should be celebrated. In fact, the idea of the “average” can be traced to the origins of statistics in sociology, crime, and public health in Quetelet's work in the 1800s, where the “average” was considered an ideal. “Quetelet applied the same thinking to his interpretation of human averages: He declared that the individual person was synonymous with error, while the average person represented the true human being” (Rose, 2016, p. 26). To this day, uniquely and idiosyncratically expressed unrepeatable behaviours that defy systemic rules or clusters of patterns, behaviours fundamental to creativity, are seen as undesired anomalies and edge cases. ML processes codify the past. They do not invent the future. Doing that, O'Neil (2016, p. 204) emphasizes, “requires moral imagination, and that's something only humans can provide.”
Creativity, which stands outside machine determinability, holds the potential to transform the way we approach and use algorithmic systems. Organized resistance, strict regulation, disrupting the current capitalist ecosystem, and imagining a fundamentally new type of technology that celebrates differences instead of forcing uniformity (discussed below in the conclusion) all require creativity. As opposed to accepting a machine-determined world as inevitable, creativity is fundamental to imagining an alternative world and the disruptive technologies needed to actualize such a world.5 Artificial Life, underpinned by a fundamental recognition of uncertainty and non-determinability as inherent conditions of life and by a distinctly creative understanding of humans and society, holds promising paths towards a just world.
9 Conclusion and Discussion
It is essential for the thing and for the world to be presented as “open,” to send us beyond their determinate manifestations, and to promise us always “something more to see.” (Merleau-Ponty, 1945/2012, p. 348)
ML systems, tools that fundamentally classify, order, and predict, are, I argue, practices that reincarnate the Cartesian and Newtonian worldviews which seek stable, predictable, and complete understanding. But people (and the social systems in which they are embedded) are partially open, indeterminable, and fluctuating: a complete understanding would imply the death of the person or that the social system had come to a standstill. Automation, which requires complete understanding, thus stands at odds with human behaviour, which is inherently incomplete and unfinalizable, making machine classification and prediction of social behaviour futile.
Arguably, systems of classification are inherent to humans and part of all cultures, although modern Western culture has produced more of them than most, often without realizing it (Bowker & Star, 2000). I therefore do not argue against machine classification altogether. However, given that people and social systems are dynamic and unpredictable, and that social structures are hierarchical and saturated with power asymmetries, forcing order and taxonomies upon them brings harm and injustice to those at the margins. Furthermore, as Narayanan (2019) remarks, predicting social outcomes is a fundamentally dubious endeavour with many disadvantages and problems but few, if any, benefits. In this regard, those behind ML systems (from conception, to design, to development, to deployment), individuals and corporations alike, bear the responsibility for the unjust and harmful social realities they are creating.
Knowledge of self and of the world is fluid, dynamic, and continually moving. Any understanding of the person–society–technology relationship is partially open and evolving. This accommodates the uncertainties of ongoing change and provides room for dialogue, negotiation, reiteration, and revision of any claims and positions. It means that how algorithmic systems are designed and implemented requires continual negotiation between different stakeholders. The views and input of vulnerable communities who are disproportionately and negatively impacted by algorithmic decision-making need to be central at all stages of the design, development, and deployment process. One way of moving towards a just society is to envisage a fundamentally different kind of technology, one grounded in ambiguity, fluidity, and diversity of experience. In their proposal for Diversity Computing, Fletcher-Watson et al. (2018) envisage fundamentally new kinds of computing devices that reflect, promote, and embrace differences rather than eliminating them. This requires a fundamental shift away from striving for uniformity, towards a technology that embraces the inherent diversity and indeterminability of the world.
The discourse surrounding ML development and deployment is marked by a rush to “solve problems” with little, if any, thought, consensus, or reflection on what the problems actually are. Machine learning, in this regard, produces “thoughtlessness, the inability to critique instructions, the lack of reflection on consequences, a commitment to the belief that a correct ordering is being carried out,” McQuillan (2020, p. 3) argues. Understanding contingent and underlying factors is crucial and needs to be prioritized over prediction. This requires asking questions such as “Why are we finding these clusters and similarities?” and investigating them further, instead of using the patterns we find to build predictive models (Birhane, 2021). As a more radical way forward, McQuillan (2020) proposes a “non-fascist AI.” A non-fascist AI, according to McQuillan, requires resistance to toxic applications of AI through self-organization and broader worker mobilization towards an alternative social vision. This vision aspires to an AI that confronts us with the injustice of current systems and to tools that form part of the movement for social liberation. Similarly, Kalluri (2020) notes that current ML systems centralize power where it is already concentrated. Consequently, ML models that shift power from the most to the least powerful hold the key to social realities that serve the least privileged.
Finally, the main points I have discussed in this section are crucial for a radical transformation of AI, one that embraces uncertainties and indeterminabilities and serves the most disadvantaged. Although acknowledging responsibility and accountability is an important first step, pleading with the powerful to take responsibility and be considerate to the vulnerable is simply not sufficient. Just as science has found ways to evade ethical responsibility by means of a systematic separation of “objective science” from ethics, AI, if allowed, will do the same; indeed, in some respects, it has been doing so with impunity. In fact, global technology giants spend millions (Molla, 2019) actively lobbying to influence legislation in their favour, at the cost of reduced agency for, and greater harm to, the masses. The capitalist ecosystem in which ML systems are built and deployed presents one of the greatest challenges. Even for the most well-meaning technologists, the incentive structures pressure individuals to develop technology that maintains the status quo, entrenches existing power, and produces maximum profit. Technology that envisages a radical shift in power (from the most to the least powerful) stands in stark opposition to current technology that maximizes profit and efficiency. It is an illusion to expect technology giants to develop AI that centres the interests of the marginalized. Strict regulations, social pressure through organized movements, strong reward systems for technology that empowers the least privileged, and completely new applications of technology (which require vision, imagination, and creativity) all pave the way to a technologically just future.
The author would like to thank Alicia Juarrero, Anthony Ventresque, Dan McQuillan, Elayne Ruane, Hanne De Jaegher, Johnathan Flowers, Marek McGann, Os Keyes, Sergio Graziosi, Thomas Laurent, Tony Chemero, and Vinay Uday Prabhu for their useful feedback on an earlier version of this manuscript.
This work was supported, in part, by Science Foundation Ireland grant 13/RC/2094 and co-funded under the European Regional Development Fund through the Southern & Eastern Regional Operational Programme to Lero—the Irish Software Research Centre (www.lero.ie).
AI can generally be conceived of within two broad categories: narrow AI and general AI, with the symbolic approaches that aspired to the latter commonly known as Good Old Fashioned Artificial Intelligence (GOFAI). The use of AI throughout this paper refers to narrow AI, specifically ML systems that deal in mathematical probabilities and are increasingly used in decision-making in the social sphere.
The term accuracy vaguely denotes how closely models or data represent “the ground truth,” or things as they are in the world. However, critical and genealogical examination of the use of this term remains scarce in the ML literature. In the absence of such examination, “accurate classification or prediction,” especially in the context of social affairs, risks equating representations with stereotypically held views.
The very term precision (in prediction) assumes an observer-free “ground truth,” a correct description of reality, and a correct trajectory against which things can be compared. This line of thinking follows Cartesian logic. Precision, accordingly, marks proximity to the presumed “ground truth” or the “correct description of reality” while deviation from it might signal lack of precision. Contrary to these presumptions, descriptions of reality or “ground truth” are never given in an observer-free manner (see section 4).
Having said that, the view of creativity as the creation of something novel is not to remove the creative process from its historical, social, and contextual embeddings.
This is not to succumb to techno-solutionism, where technology is sought as the only viable source of answers. Far from it; in some cases, the option of no technology at all can be the most efficient way to a just world.