Abstract
This article discusses a project under development called “Inventing Indicators of Interdisciplinarity,” as an example of work in methodology development that combines quantitative methods with interpretative approaches in social and cultural research. Key to our project is the idea that Science and Technology Indicators not only have representative value, enabling empirical insight into fields of research and innovation, but simultaneously have organizing capacity, as their deployment enables the curation of communities of interpretation. We begin with a discussion of concepts and methods for the analysis of interdisciplinarity in Science and Technology Studies (STS) and scientometrics, stressing that both fields recognize that interdisciplinarity is contested. To make possible a constructive exploration of interdisciplinarity as a contested—and transformative—phenomenon, we sketch out a methodological framework for the development and deployment of “engaging indicators.” We characterize this methodology of indicating as participatory, abductive, interactive, and informed by design, and emphasize that the method is inherently combinatory, as it brings together approaches from scientometrics, STS, and humanities research. In a final section, we test the potential of our approach in a pilot study of interdisciplinarity in AI, and offer reflections on digital mapping as a pathway towards indicating interdisciplinarity.
1. INTRODUCTION
The ambition to move beyond the divide between quantitative and qualitative methods is a familiar one in the social studies of science and technology (STS). It goes back to at least the 1980s, when an interdisciplinary group of researchers in Paris combined anthropological fieldwork with statistical methods of network analysis to study innovation, and developed well-known maps of “actor-networks”: emergent associations among actors in science, engineering, and society (Callon, Courtial, et al., 1983; Teil & Latour, 1995; for a discussion, see the editorial in this special issue: Leydesdorff, Ràfols, & Milojević, 2020). This enthusiasm for moving across the quantitative-qualitative divide was revived in the late 1990s and early 2000s, when the World Wide Web introduced qualitatively trained scholars in STS to a new type of research environment (Rogers & Marres, 2000; Wouters, Vann, et al., 2008). Unlike the scientific and engineering laboratories in which STS researchers had conducted their ethnographic studies during the decades before, the data-intensive settings of the 1990s Web, with its graphical user interface and sprawling civic and literary content, had emerged out of the combination of quantitative and qualitative traditions—in design, engineering, computer science, civic culture, and literary experimentation. As such, the Web rendered many of us comfortable with the idea of mixed competencies in computational knowledge practices, and with diversity of methods in data-intensive social research on science and technology (cf. Hine, 2005; Thelwall, Vaughan, & Björneborn, 2005).
Today we may be entering yet another period of experimentation across the divides between quantitative—or, more widely defined, scientific—and qualitative—interpretative—approaches. On the one hand, many of us have by now grown more aware of the irreversible, even irreconcilable, differences that derive from different methodological backgrounds in either primarily scientific or predominantly interpretative traditions. On the other hand, the potential of collaboration is today recognized in most if not all fields, and many of us remain committed to the possibilities of combining the methodological strengths of diverse approaches across the sciences and humanities. We have become only more curious about the possible results—not just the findings, but also in terms of methods, tools, concepts, and environments—that we might be able to realize if we learn how to do this work of “methodological recombination.” Furthermore, in the social studies of science and technology, we today find new and different interlocutors at the table compared with 10 or 20 years ago. Over recent years, design research, for instance, has emerged as a crucial force of methodological innovation in this area, with creative methodologies such as prototyping enabling new forms of exchange between diverse actors and forms of knowledge (Dantec & DiSalvo, 2013; Vertesi, Ribes, et al., 2016).
In this article, we discuss a project currently under development called “Inventing Indicators of Interdisciplinarity”[1] as an example of this type of boundary-crossing work in methodology development, which seeks to connect quantitative methods with interpretative approaches developed in STS and creative forms of humanities research. The project originated from a shared interest among STS scholars, scientometricians, and humanities scholars in digital metrics and indicators as a site of methodological innovation. Recent work in each of these areas has proposed that such measures can be deployed to enable generative, inclusive, and interactive ways of exploring and evaluating research (Fochler & de Rijcke, 2017; Holtrop, 2018; Lury, Fensham, et al., 2018; Marres, 2017; Ràfols, 2019). The development of such engaging indicators, however, is likely to require interdisciplinary collaboration of its own, and to entail changes in the (inter)disciplinary embedding of the indicators themselves. Crucially, it means recognizing that indicators are designed entities, which in their material, visual, and interactive forms engage and include or exclude actors—scientists, administrators, policy-makers, stakeholders, activists, and publics—in evaluation processes. We will argue that this recognition has methodological implications for how indicators can be used to explicate phenomena such as interdisciplinarity.
Key to our project is the idea that indicators not only have representative value, enabling empirical insights into fields of research and innovation, but simultaneously have organizing capacity, as their deployment enables the active curation of communities of interpretation. Asking whether and how indicators are capable of configuring and reconfiguring audiences and assembling communities opens indicators up to issues that are specifically addressed in the areas of participatory methods, user studies, and design research (Lezaun, Marres, & Tironi, 2016).
It is important, then, that we assess how the relevant fields—scientometrics, STS, and design methods—could work together to develop engaging indicators. We selected Science and Technology Indicators of interdisciplinarity as the empirical focus for this work of methodological experimentation. “Interdisciplinarity” is both a current area of indicator development in scientometrics and a topic of special interest to STS and humanities scholars. For the latter, interdisciplinary research is a key site where the role of interpretative forms of enquiry in the academy and society is being negotiated today (Biagioli, 2009). As we will discuss, it is also a topical area in which the need for, and benefits of, indicators capable of engaging diverse stakeholders and publics is more readily apparent than in others. In a series of meetings and workshops, we then reviewed the capacities of methods developed in diverse fields—scientometric analysis of the Web of Science, social media analysis, and playful mapping—to indicate interdisciplinarity in a specific area of research and innovation (artificial intelligence), and on this basis we identified key features and requirements for what we here call engaging indicators. Building on this, we outline our proposal for a combinatory methodology for the development of engaging indicators. We reflect on what is at stake in this project for us as STS scholars working in interpretative traditions, and briefly indicate the methodological potential of engaging indicators through a discussion of our shared efforts to indicate interdisciplinarity in AI.
2. INTERDISCIPLINARITY AS A CATEGORY AT STAKE: APPROACHES IN STS AND SCIENTOMETRICS
In recent years, research evaluations using Science and Technology Indicators have been identified as a key site where the shape and viability of interdisciplinary research are being contested (Ràfols, Leydesdorff, et al., 2012; Wagner, Roessner, et al., 2011). Interdisciplinary contributions to knowledge are often evaluated less positively than disciplinary research, in peer-review panels for journals and grants, and in evaluation exercises such as the United Kingdom’s Research Excellence Framework (REF). It has been argued that performance indicators such as journal impact factors exemplify and consolidate this bias against interdisciplinary research, insofar as core disciplinary journals tend to perform better according to this popular metric. On a more general level, performance indicators, such as those derived from citation data, have been criticized for driving the metricization of research culture (De Rijcke, Holtrop, et al., 2019): The de facto reliance on indicators such as the impact factor in evaluation processes creates a situation in which scientists, assessors, and policy-makers are encouraged, and become more inclined, to value research in terms of quantifiable markers of recognition and significance (citation, impact), rather than in substantive terms. As Ismael Ràfols (2019) put it: “[A] compelling argument of ‘what went wrong with indicators’ (Barré, 2019) … is that, although indicators were developed in the spirit of enlightenment as tools that would inform decision-making, they have become ‘ignorance producing devices’, in the sense that they are used as horse blinkers that reduce the issues taken into consideration.” At the same time, and precisely for these reasons, the design and deployment of indicators has been identified as a potential site for methodological innovation: Developing indicators of interdisciplinarity could be an effective way to counter the devaluation of interdisciplinary research and enable the articulation and valuation of interdisciplinary research agendas (Holtrop, 2018; see also Fochler & de Rijcke, 2017).
Before discussing indicators of interdisciplinarity in more detail, it is worth noting that interdisciplinarity holds special importance in both STS and scientometrics. In STS, interdisciplinarity has significance on multiple levels: as a methodological approach, an object of enquiry, a strategic site, a normative commitment, and an epistemic ideal. First, STS as a field itself tends to be defined as interdisciplinary, bringing together social science and humanities disciplines such as sociology and anthropology with the sciences and engineering. Second, STS seeks to understand empirical transformations of science and innovation and their roles in society and culture, and this notably includes the increased valorization of interdisciplinarity in some research communities, and by policy-makers and research funders over the last decades (Biagioli, 2009; Nowotny, Scott, & Gibbons, 2003). Third, interdisciplinarity has been welcomed in STS on strategic grounds: It presents a knowledge ideal that can help to render the humanities newly relevant in a changing research and innovation landscape in which the organization of knowledge is increasingly problem-centered (Biagioli, 2009; see also Davidson & Goldberg, 2004). Fourth, STS has a normative investment in interdisciplinarity. The encounter between diverse knowledge communities is valued by many in the field as a mechanism for rendering monodisciplines more accountable, both to one another and to society. It is understood to be aligned with the wider agenda of the democratization of knowledge (Nowotny et al., 2003; Barry, Born, & Weszkalnys, 2008). Fifth, and finally, interdisciplinarity is also valued in STS on epistemological grounds: as a way of realizing adventures in knowledge (Savransky, 2016).
One recent contribution to the study of interdisciplinarity (Barry et al., 2008) is especially relevant to our purposes here. Drawing on ethnographic fieldwork in interdisciplinary research laboratories, Barry et al. identify different types of interdisciplinarity, from hierarchical forms of collaboration, where one discipline makes up for a lacuna in another (master) discipline, to interdisciplinary research that operates in the ontological mode. In the latter type, it becomes the objective of interdisciplinary research “to re-conceive both the object(s) of research and the relations between research subjects and objects” (Barry et al., 2008, p. 25). When introducing this typology, Barry et al. note that interdisciplinarity is best treated as an agonistic category. That is, they conceive of interdisciplinarity—and disciplinarity—not just as a given attribute of existing fields of knowledge but as a contested category: the forms that interdisciplinary research will take—the division of labor between fields; where key concepts are derived from; the relations between data and method—are the focus of disagreement and power struggle. Building on this proposition, we want to propose that interdisciplinarity can be productively understood as a category at stake: the shape of interdisciplinary research is not only contested on the level of discourse, as the notion of agonism highlights, it is called into question as part of interdisciplinary knowledge practices themselves, and must therefore be understood as at least partly an accomplishment—an outcome—of interdisciplinary research collaborations, exchanges, and communication. We call this the transformative understanding of interdisciplinarity.[2] We think that this understanding of interdisciplinarity can connect with scientometrics in interesting ways, not least because the latter field is methodologically and technically equipped for the analysis of knowledge dynamics and the detection of interdisciplinary fields and communities in formation (Leydesdorff & Schank, 2008).
In tune with the significant interest in interdisciplinarity in science policy and in STS, scientometricians have, over the last 15 years or so, developed a variety of ways to measure interdisciplinarity. In a review of this literature, Wagner et al. (2011) describe different scientometric measures developed to assess interdisciplinarity. The most common approach is citation analysis, which measures engagement between fields by detecting “the occurrence of what are considered discipline-specific citations pointing to other fields” (Wagner et al., 2011, p. 19). Such methods provide a relational approach to the detection of interdisciplinarity, and this is one of the key advantages that we, as Science and Technology Studies scholars working in interpretative traditions, find in them.
Relying on methods of network analysis such as citation analysis—and related measures such as keyword co-occurrences—research in scientometrics has addressed substantive questions about the organization of research—such as the relative closedness and openness of research fields—in terms of relations among authors, topics, and outputs. On a general level, such a methodological approach seems to us well aligned with the transformative understanding of interdisciplinarity discussed above, which is concerned with changing relations between fields, between the subjects and objects of research, and, we add, between data, methods, and concepts (Marres & Gerlitz, 2012). From this perspective, the key advantage of scientometrics is that one does not have to assume a fixed ontology at the outset of research—say, assuming “disciplines” as already constituted ex ante. One can recognize that ontologies are dynamic, so that one can not only treat as an empirical question what the relevant entities are, what their relations are, and what their attributes are, but also recognize that these very categories are in question in the empirical realities under study.[3]
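To make this relational logic concrete, the following minimal sketch (in Python, using the networkx library; the bibliographic records are invented for illustration) builds a keyword co-occurrence network of the kind such analyses rest on. The point is that the structure of the field emerges from relations in the data rather than from categories fixed ex ante:

```python
import itertools
import networkx as nx

# Toy keyword lists standing in for bibliographic records; in practice these
# would be harvested from a bibliographic database.
records = [
    ["neural networks", "deep learning", "computer vision"],
    ["deep learning", "natural language processing"],
    ["science studies", "deep learning", "ethics"],
]

G = nx.Graph()
for keywords in records:
    for a, b in itertools.combinations(sorted(set(keywords)), 2):
        # Increment the edge weight for every record in which two keywords co-occur.
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Relational structure, not ex ante classification: clusters and bridges can now
# be read off the weighted graph.
print(sorted(G.edges(data="weight")))
```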
Ràfols et al. (2012) proposed to use network-analytic measures to operationalize Barry et al.’s conception of interdisciplinarity, and show how such measures can help to foreground the contributions to knowledge of actors located at the margins of or in-between fields. These contributions are “seldom captured in conventional classification categories” (Ràfols et al., 2012, p. 10). “Interdisciplinarity” is thus defined not ex ante but operationally. To this operationalization of “agonistic interdisciplinarity,” we would like to add that scientometrics as a relational methodology can also be used to elaborate the understanding of interdisciplinarity as contested and transformative: Scientometrics can help to render visible, and explorable, interdisciplinarity as a dynamic space, a space of possible transformation of the relations between disciplines, and between concepts, methods, and data. Furthermore, such an analysis of transformative interdisciplinarity opens up distinctive methodological opportunities for the evaluation of interdisciplinarity. With the help of indicators, the empirical study of different types and logics of interdisciplinarity can be turned into an opportunity for reconstructive engagement. Indicators can enable the kind of active negotiation and contestation of interdisciplinary knowledge and innovation that the notion of transformative interdisciplinarity denotes.
Reviewing indicators of interdisciplinarity, we find both alignment and divergences between, on the one hand, measures of interdisciplinarity put forward in scientometrics, and, on the other, the concepts of agonistic and transformative interdisciplinarity developed in STS and related fields. We have been struck by the relative emphasis on knowledge integration in scientometric definitions of interdisciplinarity, which frames interdisciplinarity as a process of integrating different bodies of knowledge (Porter & Ràfols, 2009; Wagner et al., 2011, p. 16). In our view, this focus on the production of newly coherent fields of knowledge risks displacing attention away from the dynamics of epistemological and ontological contestation of concepts, methods, and object delineations between diversely composed fields and communities. However, a number of indicators of interdisciplinarity have been proposed that foreground diversity in the composition of fields of knowledge and connectivity across fields, thus pointing precisely beyond the preoccupation with the unification of scientific fields. This notably includes the compositional measure of “diversity”—which foregrounds the “heterogeneity of the bibliographic set,” of the outputs composing a field of research and innovation (Ràfols & Meyer, 2010; Stirling, 2007)—and the citation-based measure of interdisciplinarity in terms of intermediation and betweenness (Leydesdorff, Wagner, & Bornmann, 2019; see also Wang & Schneider, 2019). The latter foregrounds the location of interdisciplinary research between different fields, and the relational capacities of this research to bring diverse thematic communities into relation. Such compositional and relational concepts and measures hold significant promise as ways to translate transformative understandings of interdisciplinarity, insofar as they direct attention to shifting connections on multiple levels: between outputs, clusters of outputs, and potentially thematic and methodological communities. This makes possible an appreciation of the relational dynamics through which ontological and epistemological transformations play out at the intersections of science, innovation, and society.
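To give a concrete sense of these two families of measures, the following sketch (Python; the disciplinary proportions and cognitive-distance values are hypothetical) computes Rao-Stirling diversity, understood as the sum over pairs of distinct disciplines i and j of p_i p_j d_ij (Stirling, 2007; Ràfols & Meyer, 2010), alongside betweenness centrality on a toy graph:

```python
import networkx as nx

# Hypothetical proportions of a paper's references falling into three disciplines,
# and a symmetric cognitive-distance matrix between those disciplines (1 = far apart).
proportions = {"CS": 0.5, "Sociology": 0.3, "Linguistics": 0.2}
distance = {
    ("CS", "Sociology"): 0.8,
    ("CS", "Linguistics"): 0.6,
    ("Sociology", "Linguistics"): 0.4,
}

def rao_stirling(p, d):
    """Rao-Stirling diversity: sum of p_i * p_j * d_ij over all pairs i != j."""
    total = 0.0
    disciplines = list(p)
    for i, a in enumerate(disciplines):
        for b in disciplines[i + 1:]:
            d_ab = d.get((a, b), d.get((b, a), 0.0))
            total += 2 * p[a] * p[b] * d_ab  # counts both (a, b) and (b, a)
    return total

print(f"diversity = {rao_stirling(proportions, distance):.3f}")

# Betweenness foregrounds intermediation: here a toy output bridging two fields.
G = nx.path_graph(["field A", "bridging paper", "field B"])
print(nx.betweenness_centrality(G))
```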
Scientometric research on interdisciplinarity demonstrates significant awareness of the contested nature of this category. The measurement of interdisciplinarity is marked by a lack of consensus (Ràfols et al., 2012), and, as Wagner et al. (2011, p. 15) observe, the term interdisciplinary is “notable for conflicting meaning.” Ràfols and Meyer (2010, p. 263) noted that a specific difficulty in developing indices of interdisciplinarity is that “research areas qualifying as interdisciplinary tend to be marked by ‘multidimensional[-ity] and inherent conflict with categorisation’”. Attempts to map interdisciplinarity using scientometric methods, then, are not exempt from the challenges posed by the contested and transformative nature of interdisciplinarity. Importantly, scientometric work has made it clear that these challenges of defining interdisciplinarity not only play out on the level of ideas—in how we conceptualize interdisciplinarity—they equally trouble its operationalization in indicators and metrics. This raises the question of whether and how the process of the detection of interdisciplinarity can itself be organized so as to facilitate the exploration of interdisciplinarity as a zone of epistemological and ontological contestation and transformation. Rather than deploying scientometric measures to close down the debate, the objective then becomes to deploy indicators so as to facilitate exploration of knowledge in (trans-)formation.
3. FROM INDICATORS TO INDICATING INTERDISCIPLINARITY
In his paper “Indicators in the Wild,” Ràfols (2019) proposed that indicators can be purposefully deployed as participatory devices, provided that “the development of STI indicators should take place not only in ‘secluded spaces’ such as scientometric labs, but with the participation of stakeholders so as to take in consideration their contexts, both in terms of relevant social spaces and values.” In proposing this, Ràfols offers a key articulation of indicators as a special type of interface between measurement and participation, which has the capacity to assemble and engage diverse actors through processes of interpretation of metrical outputs across boundaries (sciences and the humanities, university and society). Ràfols’ proposal to treat indicators as participatory devices resonates in interesting ways with recent work by Celia Lury and colleagues, in which they seek to clarify the double role of methods in society and culture as “means by which the social world is not only investigated, but may also be engaged” (Lury & Wakeford, 2012, p. 6). In a recent paper, Gerlitz and Lury (2014) extend this perspective to social media metrics: They point out that metrics such as the Klout score—a popular, aggregate measure of influence across different online platforms such as Twitter and Facebook—may operate simultaneously in two registers: On the one hand, they perform an epistemic operation by ordering relations between social media users through mathematical operations of quantification (producing, among other things, influence rankings). On the other hand, they enable participation as they assemble users into a dynamic collective, by enabling these users, as well as third parties, to compare, contrast, and relate to one another by way of Klout scores.
Asking whether and how indicators are capable of configuring audiences and assembling communities opens indicators to issues that are commonly addressed in fields other than STS or scientometrics, such as user studies and design research. The question also implies a recognition that indicators are more than forms of measurement. By drawing attention to their material, visual, and interactive form, we recognize that indicator development is likely to require different competencies, such as those of design. And by noting that indicators not only represent the worlds of research and innovation but are and can be deployed to configure communities of interpretation, we signal that their specification will require taking their context of application into account: where indicators are deployed, whom they address, and for what purpose. Here, we will not be able to pursue all of these multiple questions, but will instead sketch a methodological framework to render them tractable in empirical research.
Key to our methodological approach to indicators is a constructive valuation of disagreement. In scientific methodology, a lack of agreement can easily be interpreted in negative terms, as preventing the stabilization of a concept or a measure, and thus its operationalization or application. However, in the social sciences and humanities, disagreement, or contestability, must in some cases be understood as constitutive of the empirical phenomenon in question, and here controversy has been valued as an occasion in which relations, divisions of labor, and ideas may be undergoing transformation (Latour, 2005). This includes interdisciplinary research, in particular that of the transformative type.
To capture this contestability, sociologists have put forward the notion of heterarchy (Stark, 2011; Weber, 1946), to highlight situations in which multiple principles intersect and can be contested. In our opinion, this term is applicable to the lack of consensus that becomes noticeable when defining interdisciplinarity. However, we want to emphasize that recognizing heterarchy has methodological implications for the design and deployment of indicators of interdisciplinarity. When indicators are used, the methodological assumption often is that the concept in question—interdisciplinarity—can be clearly and robustly defined before the act of measurement takes place. In other words, the framework for indicator usage is deductive. However, in the case of an “essentially contested” and transformative category such as interdisciplinarity, it would not be appropriate to assume that the definition of interdisciplinary research can be agreed a priori, and a deductive approach is thus unlikely to work in this case.
This could be taken as a weakness of indicator-based methodologies, but it is also possible to turn this assessment around: The measurement of interdisciplinarity provides interesting opportunities to redefine, reconstruct, or reinvent the use of indicators in the evaluation of research and innovation. We find inspiration for such an approach in recent work by De Rijcke et al. (2019), which puts forward the “evaluative inquiry,” an experimental approach to evaluation that treats it as a knowledge production process. A key step towards this framework is to let go of reproducing the conventional role of analysts as detached accountants who treat indicators as external proxies for quality. Instead, evaluative inquiry treats analysts as engaged experts, who take part in formulating the broader projects of ensuring accountability and autonomy with which an evaluation is always intertwined (De Rijcke et al., 2019, p. 178). Building on this, we claim that a participatory, abductive, interactive, and design-informed approach to indicator development and deployment is promising, in light of the objective defined above, namely to facilitate a collective exploration of interdisciplinarity as a “zone of epistemological and ontological contestation.” Below we define each of these terms in turn.
1. Participatory. Rather than assuming that the development of an indicator is strictly separate from the process of its interpretation, we propose to approach these tasks as interrelated. Understanding indicators as designed entities suggests that both the context of application and the community of evaluation need to be actively curated—or even scripted—and, in turn, that indicators can be deployed to actively assemble actors that do not already form a social group into an interpretative community. The interpretation of indicators—in evaluative situations—may be treated dramaturgically, as a situation in which different participants are assigned a role and brought into relation according to a basic script. As a provisional name for this script we propose the term indicating (more about which below). This means that we approach participation in evaluation as a methodological proposition: It is not merely something to strive for because it is considered the correct thing to do. Instead, actors with different kinds of expertise in, and relevant experiences of, an area of research and innovation are explicitly invited to inform, at designated moments, the process of interpretation in which indicators play a role.
2. Abductive. Timmermans and Tavory (2012, p. 167), drawing on the work of the pragmatist philosopher C. S. Peirce, define abduction as “a creative inferential process aimed at producing new hypotheses and theories based on surprising research evidence.” It provides an alternative to both deductive and inductive approaches in that it proposes an iterative framework for data analysis: a moving back and forth between empirical material and concepts that can assist in interpretation. Such an iterative approach enables a constructive recognition of the limitations of existing indicators and finds in these limitations the starting point for their elaboration. Timmermans and Tavory (2012, p. 173) also note that “abductive analysis arises from actors’ social and intellectual positions” but can be elaborated through data analysis. As such, the approach is also in line with the participatory commitment above, as it values participants’ perspectives, and exchanges among them, as a potential resource for the interpretation of the indicator under scrutiny.
3. Interactive. This third commitment complements the former two. It foregrounds that the productive deployment of indicators in a participatory and abductive manner requires methodological attention to how indicators facilitate interaction among different points of view, data sources, measures, concepts, and contexts. This methodological interactivity goes beyond interaction in the social sense: It denotes not just exchange between actors but a process of active interarticulation between data sets, measures, perspectives, and contexts (Marres & Gerlitz, 2012). Deploying an indicator interactively, methodologically speaking, means that indicators are used not just to facilitate an exchange between different points of view but to bring to the surface different types of interdisciplinarity, in a shared process of data selection, analysis, and evaluation. This methodological commitment connects with the sociotechnical requirements that today’s digital, networked data infrastructures and tools place on the use of indicators: The increased availability of underlying data sources and contextual information, and the growing uptake of interactive, visual user interfaces in today’s digital culture, facilitate a more exploratory approach to rendering data interpretable and analyzable (for a discussion, see Waltman & Van Eck, 2016).[4]
4. Designed. A shift from indicators to indicating offers an opportunity to open up indicator development to design methodology, and to approach indicating as a situated, equipped, and/or visual process. The interactive design of indicators can be informed by design research, where creative methods such as rapid prototyping are used in workshop settings to explicate understandings of interdisciplinarity with diverse stakeholders, using simple materials such as paper and ribbon. Here play with materials—and data?—serves the methodological purpose of “surfac[ing] implicit assumptions and values as well as to communicate and test ideas” (Lockton, Brawley, et al., 2019, p. 3). To be sure, there are significant challenges in connecting such an approach with scientometric methods, as the success of the latter depends on the capacities of data, concept, and method to act as constraints on the process of interpretation.
Still, when approached as participatory, abductive, and interactive, the process of indicating can be significantly enriched by the deployment of material and visual—and not necessarily metrical—proxies. This also has the benefit of opening up our understanding of the empirical content of indicators to the nonmetric. The German STS scholar Tahani Nadim, who works in the area of biodiversity research, for example, reminded us of the existence of biological indicators, and told us that “stink bugs are very good indicators—their markings change markedly and look differently when something goes wrong in their living environment” (Figure 1). In this approach, the empirical content of an indicator derives at least in part from its indexicality: An index is valued as a “material trace” (Schuppli, 2012) rather than as a representation of a phenomenon, and this trace can be read as an indication of something that is latent in a particular situation (e.g., a latent danger).
Figure 1. Garden bugs, Kussaberg (Germany), close to Leibstadt nuclear power plant, by C. Hesse-Honegger (Raffles, 2010).
Taken together, the above four commitments enable an approach to indicators that is inherently combinatory, in that it brings together methodological propositions from scientometrics, STS, design research, and interpretative social and cultural research methodology. We describe this approach as indicating to highlight something that each of the four terms above has in common: They frame the development and use of indicators as a process. This is key insofar as it enables us to understand the assembly of communities of interpretation as an ongoing process, one that spreads out across the design and deployment of indicators. In referring to our approach as “indicating,” we also build on the Routledge Handbook of Interdisciplinary Research Methods (Lury et al., 2018), in which the editors proposed that the development of interdisciplinary methods can benefit from a focus on the “doing of methods” (p. 2). In contrast to the definition of method as formalized procedure, method is here understood as referring to a generative process that unfolds through specific socioepistemic and sociomaterial processes such as mapping, prototyping, and visualizing, and as such, offers a way of working with intractable differences. We now add the term indicating to this list.
4. TEST CASE: INDICATING INTERDISCIPLINARITY IN AI
To test the possibility of combining scientometrics with other approaches in this way, we conducted a pilot study of interdisciplinarity in artificial intelligence (AI). AI is an interesting case, because interdisciplinarity has a contradictory status in this area of research and innovation: In recent years, interdisciplinarity has been identified as both an existing strength and a major challenge for AI research. Thus, on the one hand, the crossing of boundaries between disciplines and between the sciences and humanities is viewed as inherent to this area of research and innovation. As a recent report by Elsevier (2018) noted: “Research in AI is both theoretical and applied, and transcends traditional disciplinary boundaries, bringing together experts from diverse fields of study.” On the other hand, recent efforts to map AI research have highlighted important current barriers to interdisciplinary collaboration. A report by the Nuffield Foundation, “Ethical and societal implications of algorithms, data, and AI: a roadmap for research,” observed that efforts to advance interdisciplinarity in AI are currently held back, not least because interdisciplinarity itself is understood in diverse ways across different areas of AI research (Whittlestone, Nyrup, et al., 2019). We then sought to combine scientometrics with other methods, such as evaluative enquiry in STS, visualization, and playful methods in order to open up for investigation—and constitute as matters of reflection (Venturini & Meunier, 2019)—the contradictory status of interdisciplinarity in AI.
In investigating this, we were especially interested to establish whether and how AI can be considered as a potential site of transformative interdisciplinarity. Based on a cocitation analysis of Web of Science publications in the area of AI, Cardon, Cointet, et al. (2018) suggested that with the rise of neural networks and deep learning a connective paradigm is now superseding previous mentalist and cognitive frameworks in AI, and that this new paradigm may bring AI research in the computational sciences closer to the social and human sciences. Castelle (2020) explores this further through fieldwork methods in his study of the uptake of pragmatist and sociological concepts of learning and communication in current research on adversarial neural networks. This work thus suggests to us that relations between scientific, social-scientific, and humanities fields may currently be at stake in AI as an area of research and innovation.
In the international workshop organized as part of the Inventing Indicators of Interdisciplinarity project,[5] we posed the following questions: Is interdisciplinary AI principally concerned with the application of computational methods to social and cultural data? Which other forms of collaboration across disciplines can we detect in AI as an interdisciplinary area of research and innovation? And what is the potential contribution that new types of disciplinary and interdisciplinary ensembles spanning the faculties seek to make in this area? The workshop yielded various insights, but here we want to highlight a single one: the importance of digital mapping—or data cartography—as the methodological framework for indicating interdisciplinarity in this area, one that can enable the exploration of transformative interdisciplinarity.
On a general level, mapping is well attuned to an abductive approach to data analysis, one where data selection, analysis, and visualization are performed recurrently, in a process of continuous exploration. During our workshop, Ludo Waltman clarified the concrete implications of endorsing a mapping methodology for the design of the “pipeline” of bibliometric research. Identifying three key steps in bibliometric mapping—field delineation, indicator selection, and data interpretation (labeling)—Waltman proposed that there are ways to adapt the pipeline to affirming—rather than overcoming—the challenge of reaching consensus in each of the three steps. For example, in the case of field delineation, the question is: What belongs, and what does not belong, to AI research?
Challenges to consensus can come from various directions: For one, outputs that do not mention the term AI may nevertheless be key to the field. Waltman also highlighted a consequential step in the Elsevier study discussed above, which relied on a small group of experts to identify the keywords used to train the machine learning algorithm guiding field delineation. Based on the fields that the study eventually identified—Search and Optimization, Fuzzy Systems, Natural Language Processing and Knowledge Representation, Computer Vision, Machine Learning and Probabilistic Reasoning, Planning and Decision Making, and Neural Networks—it seems likely that the keyword selection did not include topics that could help to surface AI research in the social sciences and humanities, with the result that such research was underrepresented in the training of the machine learning algorithm.
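A minimal sketch can illustrate how consequential such seed-keyword choices are for field delineation (the titles and seed terms below are invented for illustration; real delineation pipelines are, of course, far more elaborate than simple string matching):

```python
# Hypothetical output titles; the point is how the seed keyword set
# changes which outputs count as belonging to "AI research".
titles = [
    "Deep learning for image recognition",
    "Machine learning and probabilistic reasoning",
    "Algorithmic accountability in public administration",
    "Classification situations: life-chances in the age of scoring",
]

def delineate(seed_terms):
    """Return the subset of outputs matching at least one seed term."""
    return [t for t in titles if any(term in t.lower() for term in seed_terms)]

narrow_seed = {"deep learning", "machine learning"}
broad_seed = narrow_seed | {"algorithmic", "scoring"}

print(len(delineate(narrow_seed)), "outputs with the narrow seed")   # 2
print(len(delineate(broad_seed)), "outputs with the broadened seed")  # 4
```

Broadening the seed to include terms salient in the social sciences and humanities doubles the delineated field in this toy example, which is precisely the kind of selection effect discussed above.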
Similar challenges to consensus can be raised in relation to indicator selection and the labeling of clusters in the process of field delineation. Presenting a mapping of the field of AI using VOSviewer at our workshop, Waltman identified two clusters, “AI and Society” and “Big Data and Society,” as areas where interdisciplinary exchanges between the sciences and humanities were growing (Figure 2). However, this act of labeling, too, could be interpreted as introducing a partial perspective: Why privilege science/social sciences/humanities interaction as a modality of interdisciplinarity in AI? Answering this question is likely to take us beyond the map into a space of deliberation.
Figure 2. Interdisciplinarity in AI, Leiden field delineation, VOSviewer (Waltman, presented at the Indicating Interdisciplinarity in AI workshop, University of Warwick, February 2020).
Such investigations of “selection effects” in scientometric analysis could be taken as expressing concerns with bias, opening up the analysis to criticisms to the effect that objective and thus shareable frameworks of reference are not being secured through scientometric analysis. However, if we recognize interdisciplinarity in AI as contested and transformative, a different methodological interpretation of these debates is opened up. From this perspective, the issue of selection can be treated constructively: The moment of selection presents a participatory occasion, one in which experts, stakeholders, and other participants can be invited into the process of identifying the boundaries and composition of the field on methodological grounds. If these boundaries and composition are inherently contested, then the identification of multiple keywords, multiple ways of labeling clusters, is not only appropriate; it is likely to strengthen the process of interpretation. This is what we mean when saying that indicating involves not only the construction of a proxy but also the curation of a script for the interplay between data, measures, questions, and participants.[6]
5. CONCLUSION: MAPPING AS A PATH TOWARDS INDICATING
One important implication of the participatory approach to mapping interdisciplinarity is that it entails a move away from the classic opposition between maps and indicators in the sociology of science and innovation. In institutional sociology, indicators safeguard the realist view; they secure the externality of social reality. From this realist perspective, to move beyond measurement to the construction of an indicator means to construct a theory of what a variable is a measure of. It is to posit a proxy.
Performative approaches, by contrast, traditionally rely on mapping to facilitate iterative processes of data exploration (selection, analysis, visualization). This makes it possible to affirm that the specification of the object of enquiry—interdisciplinarity in AI—is accomplished through this iterative process (Marres & Moats, 2015). In a scientometric analysis of interdisciplinarity in AI, we could then recognize that the “appropriate” delineation of the research field of AI, and the concept of interdisciplinarity appropriate to its exploration, emerges from the interplay between bibliometric data (say, all abstracts containing the word AI in a database) and method (say, the Leiden method of field delineation through clustering). From this performative perspective, the move from mapping to indicating involves more than the construction of a proxy. Crafting—or curating—is needed to configure the context of application: What is the mapping for? In other words, the move from mapping to indicating is a move from the relatively open-ended exploration of relations between entities to scripting interactions between questions, data, measures, concepts, and participants—in such a way that the issues at stake can be surfaced and receive a formulation.
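For readers who want a concrete handle on this interplay between data and method, the following sketch gestures at clustering-based delineation using the leidenalg implementation of the Leiden algorithm. The Zachary karate club network stands in for a real citation network, and the actual Leiden field delineation methodology involves far more than this single clustering step; this is an illustrative sketch only:

```python
import igraph as ig
import leidenalg as la

# A well-known toy graph standing in for a citation network of AI outputs.
g = ig.Graph.Famous("Zachary")

# Partition the graph into clusters; in bibliometric mapping, such clusters
# would then be labeled ("AI and Society", "Computer Vision", ...) -- the
# interpretative step where, as argued above, consensus is again at stake.
partition = la.find_partition(g, la.ModularityVertexPartition, seed=42)
print(f"{len(partition)} clusters found")
```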
The move from mapping to indicating also enables us to move beyond an appreciation of scientometrics in terms of its relational approach to data exploration, which we ourselves invoked above. Rather than celebrating the methodological contribution of relational methods such as keyword co-occurrences and citation analysis in terms of the emergent entities these methods render traceable, to move to indicating is to welcome the task of actively configuring the context of evaluation, and the community of interpretation, that we produce our mapping for (Marres, 2017). This could include identifying prevalent themes and ambitions in AI research and innovation, their operationalization, the people and resources that are mobilized, and the outputs this generates. In emphasizing process and engagement in indicating interdisciplinarity in AI, indicators may play a role in characterizing forms of interdisciplinarity still in formation. They may be scripted to enable the negotiation of interdisciplinarity among diverse participants, data sources, and methodologies, and amidst multiple epistemic commitments, which are not already congealed into a research community; for example, when interdisciplinarity involves the creation of new combinations and connections between sciences and humanities. To explore different ways in which indicators can be used to enable negotiation, it may be helpful to start with more open-ended spaces of exploration, such as network visualizations and forms of relational mapping. Thus, perhaps somewhat surprisingly, if our wider aim is to move from indicators to indicating, it is good to start with mapping. In doing so, new ways of indicating interdisciplinarity in AI research and innovation might emerge, which support and highlight exchanges across boundaries rather than suppress them.
COMPETING INTERESTS
The authors have no competing interests.
FUNDING INFORMATION
This article has benefitted from a workshop grant from The Alan Turing Institute (JES no. 64678).
ACKNOWLEDGMENTS
We would like to thank contributors to the “Inventing Indicators of Interdisciplinarity” project: Anne Beaulieu, Rodrigues Costa Comesana, Thomas Franssen, Tjitske Holtrop, Sybille Lammes, Celia Lury, Greg McInerny, Ismael Ràfols, and Ludo Waltman. We are grateful to the reviewers, Loet Leydesdorff, and James McNally for their comments on an earlier version of this article. We would also like to thank participants in the Indicating Interdisciplinarity in AI workshop that took place at the University of Warwick in February 2020 and the Alan Turing Institute, which funded the event.
Notes
1. This pilot project started in 2018 and is a collaboration between the Centre for Interdisciplinary Methodologies (CIM) at the University of Warwick and the Centre for Science and Technology Studies (CWTS) at Leiden University: https://warwick.ac.uk/fac/cross_fac/cim/research/inventing-indicators-of-interdisciplinarity/.
2. In their formulation of agonistic interdisciplinarity, Barry et al. rely on a political concept, the idea of agonism put forward by the political theorist Chantal Mouffe. As such, their conceptualization leaves under-specified the epistemological and ontological process through which the relations between the objects and subjects of research, and between concepts, methods, and data, are redefined on the level of knowledge practice.
3. Note that mapping does not necessarily presume a flat ontology: In scientometrics, the use of cartographic methods goes hand in hand with clear ontological assumptions, for instance, positing authors, journals, and citations as constitutive of scientific research. (With thanks to Ràfols, pers. communication.)
4. Such an approach does not endorse a perspectival understanding of interdisciplinarity (“everyone understands it differently”): To define the articulation of interdisciplinarity as a collective task is to recognize that different definitions of interdisciplinarity do not merely coexist but are likely to pose a challenge to one another, insofar as the endorsement of one definition may reduce opportunities for the pursuit of another. This is also why we insist that interdisciplinarity is at stake in AI research and innovation.
5. Indicating Interdisciplinarity in AI, workshop funded by the Alan Turing Institute, University of Warwick, February 6, 2020; https://warwick.ac.uk/fac/cross_fac/cim/events/indicating-interdisciplinarity-in-ai/.
6. See Lammes and Wilmott (2018) on the map as playground and on playful methods taking us one step below the level of the “finding.”
Author notes
Handling Editors: Loet Leydesdorff, Ismael Rafols, and Staša Milojević