The coming decade of digital brain research: A vision for neuroscience at the intersection of technology and computing

Abstract: In recent years, brain research has indisputably entered a new epoch, driven by substantial methodological advances and digitally enabled data integration and modelling at multiple scales, from molecules to the whole brain. Major advances are emerging at the intersection of neuroscience with technology and computing. This new science of the brain combines high-quality research, data integration across multiple scales, a new culture of multidisciplinary large-scale collaboration, and translation into applications. As pioneered in Europe's Human Brain Project (HBP), a systematic approach will be essential for meeting the coming decade's pressing medical and technological challenges. The aims of this paper are to: develop a concept for the coming decade of digital brain research; discuss this new concept with the research community at large, identify points of convergence, and derive common scientific goals therefrom; provide a scientific framework for the current and future development of EBRAINS, a research infrastructure resulting from the HBP's work; inform and engage stakeholders, funding organisations and research institutions regarding future digital brain research; identify and address the transformational potential of comprehensive brain models for artificial intelligence, including machine learning and deep learning; and outline a collaborative approach that integrates reflection, dialogue and societal engagement on ethical and societal opportunities and challenges as part of future neuroscience research.


Preamble
Brain research has in recent years indisputably entered a new epoch, driven by substantial methodological advances and digitally enabled data integration and modeling at multiple scales, from molecules to the whole system. Major advances are emerging at the intersection of neuroscience with technology and computing. This new science of the brain integrates high-quality basic research, systematic data integration across multiple scales, a new culture of large-scale collaboration and translation into applications. A systematic approach, as pioneered in Europe's Human Brain Project (HBP), will be essential in meeting the pressing medical and technological challenges of the coming decade. The aims of this paper are:
• To develop a concept for the coming decade of digital brain research
• To discuss it with the research community at large, with the aim of identifying points of convergence and common goals
• To provide a scientific framework for current and future development of EBRAINS
• To inform and engage stakeholders, funding organizations and research institutions regarding future digital brain research
• To identify and address key ethical and societal issues
While we do not claim that there is a 'one size fits all' approach to addressing these aspects, we are convinced that discussions around the theme of digital brain research will help drive progress in the broader field of neuroscience.

Introduction
Research in the last two decades has yielded impressive new insights into our understanding of the human brain. In unravelling brain complexity, researchers have studied the brain at different levels of organization, from processes at the level of single molecules and cells, through networks of cells, up to the brain as a whole organ with areas, nuclei and their networks, involved in a variety of cognitive functions. In endeavouring to understand how the brain works, we are inevitably confronted with the complexity of the organ and its sheer size, but also with legitimate ethical limitations that do not allow all of the necessary datasets to be acquired directly from human material. This poses challenges for both empirical research and digital approaches to data analysis, artificial intelligence (AI), data-driven models and simulation. It also compels neuroscientists to interact in more collaborative ways and has consequences for the underlying methods and tools.
Combinations of different methods, e.g., structural and functional neuroimaging with MEG or EEG, have successfully been applied to identify biological correlates of vision, motor control and executive function. For many brain diseases, mechanisms of genetic control have been elucidated, with concrete relevance for diagnostics and therapy. Further, the molecular and cellular mechanisms of a number of signal transduction pathways have been deciphered. Nevertheless, we still lack important insights into brain organization, the relationship between brain structure, function, dynamics and behaviour, brain reorganization during learning and sleep, and the conditions leading to brain disease as well as the circumstances required for maintaining mental health. While the multiscale architecture of the brain accounts for its resilience and adaptive capacity, it also makes the elucidation of mechanisms a huge challenge and contributes significantly to the inter-individual variability found at all levels of brain organization, albeit to varying degrees. Filling these gaps across scales is thus important for developing a deeper mechanistic understanding of brain function, which will ultimately lead to improved diagnostics and personalized therapies.
Innovative neuroimaging and advances in microelectronics and optical methods have in recent years enabled the spatial mapping of brain function at ever-higher resolution and the study of brain activity over ever-longer periods of time and at higher temporal resolution, allowing scientists to better capture the dynamics of change. At the same time, large cohorts of thousands of subjects have been described with extensive data sets, which has facilitated the identification of factors implicated in ageing, disease and lifestyle, their genetic regulation, and their interplay. Such empirical research has resulted in significant volumes of highly structured data.
The question before us now is what can be done with the data and how best to interpret it. Sydney Brenner summarized this well during his 2002 Nobel lecture, 'Nature's Gift to Science'53: 'We are drowning in a sea of data and starving for knowledge. The biological sciences have exploded, largely through our unprecedented power to accumulate descriptive facts ... We need to turn data into knowledge, and we need a framework to do it.' This is further complicated by the fact that the research aims and methods used in individual laboratories are generally very diverse. It has therefore become clear that defining and achieving ambitious scientific goals will require close collaboration between laboratories with expertise in different brain regions and techniques, for example, specialists in image analysis, neuroanatomy, data analysis, computation and physiology.
Such close collaboration across different domains of brain research is a defining feature of big international projects like the HBP54. The HBP is a European Flagship project in the field of Future and Emerging Technologies. Started in 2013, it was one of the first large-scale brain research projects worldwide and played a pioneering role in transforming digital brain research into a discipline that is more collaborative, reproducible and ethically and socially responsible (Amunts et al., 2022).
The HBP has developed foundations for scientific workflows that enable FAIR (findable, accessible, interoperable and reusable (Wilkinson et al., 2016)) comparison among multi-scale, multi-species experimental data and theoretical and data-driven models (Schirner et al., 2022). Research in the project has led to new insights into the mechanisms of learning, visuo-motor control, consciousness, sleep, spatial navigation, predictive coding and perception, and has resulted in new theoretical concepts and analysis methods.
The HBP has also empowered the neuroscience community to take advantage of the most recent developments in computing, simulation and artificial intelligence. Experimental data, tools, instruments and dedicated hardware such as neuromorphic computing have been created in the project and made available with the intention of significantly speeding up developments in brain medicine and research. The HBP has developed and is operating EBRAINS as a collaborative research platform, with the aim of bringing brain research to the next level and of further developing applications in medicine and neuro-inspired technologies. EBRAINS has successfully applied for inclusion on the ESFRI Roadmap and is now being developed as a sustainable research infrastructure, by scientists for scientists. In this context, the digital twin (Tao et al., 2019) is a useful concept that can be exploited in many fields of research, including neuroscience, medicine and neuro-inspired technology.
Looking to the next decade, we here identify gaps in our knowledge of the brain based on what has been achieved, and articulate research goals for the future.We are convinced that efforts towards achieving these goals will benefit from progress in digital brain research as well as recent developments at the interface of technology and computing.

Neuroscience: state of the art
To understand where we are in neuroscience research, it is critical to consider where we have come from and also to look into the future. Modern neuroscience was born in the last two decades of the 19th century, when the brain went from being regarded as a basically unstructured mass to being seen as an intricate network of neurons specialized for different areas of the nervous system. An understanding of structure led to the elucidation of function, and the full-brain electroencephalograms of the 1930s paved the way for intracellular electrophysiological recordings in the 1950s and a basic understanding of the physiology of neurons and synapses. Explorations of the sensory (mainly visual) and motor systems in the 1960s and 1970s, and parallel advances in their anatomy, provided valuable insights, giving rise to an updated view of the brain that we now nevertheless understand was somewhat naïve and simplistic. The 1980s saw great advances in our understanding of neuronal membrane biophysics and the functioning of receptors and ionic channels, while in the 1990s the advent of full-brain imaging techniques kickstarted a period of intense progress that continues to this day. The beginning of the 21st century saw the development of new tools to control brain circuits, such as optogenetics, which through activation or silencing allowed the role of specific neuronal types to be investigated for the first time. In parallel with all these neuroscientific techniques, molecular biology, genetics, pharmacology, psychophysics, neuroimaging, electronics and computing have progressively enriched brain studies.
Consequently, our conceptual understanding of particular brain functions has also become richer and more complex. To give but one example of the different levels of this complexity: sensory systems went from being considered relatively simple feedforward systems to being acknowledged as feedforward-feedback systems; whereas they were first studied under anaesthesia, they could later be studied under waking and behavioural conditions; the study of isolated sensory systems gave way to multisensory integration; passive sensory processing was exchanged for active sensing; and, finally, sensory systems went from being regarded as silent systems at which a stimulus arrives to active systems with self-organized activity, and are no longer considered simple circuits but rather circuits integrated into large interconnected networks.
As this increasing complexity emerged, the importance of computational neuroscience became more obvious. David Marr (Marr, 1982) recognized that, above the level of neural implementation, two further levels of organization are situated: the algorithmic and the computational. The need to involve computational neuroscience has grown in parallel with computational capabilities, which have expanded in the 21st century to the point where computational neuroscience has become an essential companion of experimental and clinical neuroscience. From the modelling of concrete processes or computations, we can now move on to more ambitious, larger and more integrative models. Computational approaches enable large and complex datasets to be analyzed efficiently, supported by artificial neural networks, theory, modelling and simulation, to link brain structure and function. Simulation of cellular-level, molecular-level and/or system models can facilitate the testing of specific hypotheses or the prediction of properties of brain structures, while integrating findings from different researchers obtained with various techniques. This, in turn, is critical for translating findings from neuroscience into digital medicine, for proposing new intervention strategies and for empowering neuro-inspired technologies that take advantage of a growing body of insights into perception, plasticity, learning and memory.
Further conceptual insights have been gleaned using novel animal models. Classically, neuroscientists have studied mammals, as their brains are similar to those of humans. Other species, like the zebrafish or the fruit fly, are selectively employed to understand genetic or ontogenetic mechanisms that cannot be properly tested in mammals. Increasingly, however, neuroscientists also study animal models like birds at the systems level to understand deeper principles of vocalization learning and cognition. Birds have been chosen because, while their brains are vastly different from those of mammals, their cognitive abilities are similar. Such comparisons can yield basic insights into the links between brain structure and function and offer an unprecedented chance of gaining deep conceptual insights into fundamental brain functions. These studies could identify a core of identical neural mechanisms in the brains of birds and mammals that possibly constitute hard-to-replace components of advanced cognition (Güntürkün et al., 2021). This bird's-eye view of modern neuroscience illustrates several important points: 1) advances in neuroscience are not only the result of conceptual advances but are tightly linked to new methods and technologies; 2) new techniques allow a better understanding of the brain, but at the same time expose a new level of complexity and open up new questions; 3) there is an increasing need for integration of knowledge and collaboration across different domains of neuroscience research.

Instrumentation
Although much progress has been made, the instrumentation now available to neuroscience researchers means that the potential for further advances in our understanding of the brain remains breathtaking. We have new tools to look into the brain, new capabilities to control and repair brain function, and considerable computational power at our disposal to analyze data and simulate brain function.
For the first time in the history of neuroscience, we have a dedicated research infrastructure, EBRAINS55, which gathers tools, methods, theories and data that were previously fragmented and distributed between different labs into a joint, digital, open, interoperable platform. This provides the technological basis and tools for a new type of international, collaborative neuroscience and represents a large-scale interface for collaborative projects, e.g., those organized in the International Brain Initiative (IBI)56. EBRAINS operates according to FAIR data principles and encompasses atlasing, simulation, brain-inspired technologies, medical data analytics and dedicated tools for collaboration. In addition, EBRAINS incorporates innovative neuromorphic computing and allows the execution of experiments in virtual robots. All data and tools have access to Fenix57, an infrastructure coordinated by leading European centres for high-performance computing, which greatly facilitates research with high computing and storage demands. Through Fenix, neuroscientists can also reach other research communities to jointly develop new software and solutions in the broader domains of data- and computationally intensive research.
The EBRAINS research infrastructure attracts a broad community of users and stakeholders, ranging from experienced application/service developers and senior neuroscientists to young researchers and students.The inherent diversity of the community is reflected in the heterogeneity of the EBRAINS research infrastructure services.The platform puts significant emphasis on the ease of use of its tools and services and on their ability to facilitate collaborative work, by allowing their combination and linking in a flexible manner to form arbitrary computational workflows that pursue solutions to diverse problems.In that sense, EBRAINS is changing the research paradigm scientists use to study the brain, both for large-scale neuroscience and for individual projects.
Computational workflows should be characterized by accessibility, shareability, automation, reproducibility, interoperability, portability and openness. In this context, the use of the Knowledge Graph58 as a workflow registry is of particular importance. Its strengths include its multi-modal information representation as well as the following 'independence' features of EBRAINS workflows (the first of which is illustrated by the sketch below):
• Independence of tools and services from the workflows in which they are used. The inputs of tools and services are parameterized so that they may produce different outputs depending on the other tools and services with which they are (re-)used in diverse workflows.
• Independence of workflows from the underlying infrastructure in which they are executed: the Common Workflow Language (CWL) is adopted for describing workflows in a common, standard fashion, offering transparent execution in infrastructures with different requirements, dependencies and configurations.
• Independence of workflows from the underlying workflow management system. Several such systems are compatible with CWL for executing workflow steps, monitoring their execution, handling failures, automatically fetching logs and outputs and performing other relevant actions.
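To make the tool/workflow independence idea concrete, here is a minimal Python sketch. It is purely illustrative, not the actual EBRAINS or CWL API, and all tool names and file names are hypothetical: tools are parameterized steps, and a workflow is just an ordered composition that any compatible engine could execute.

```python
from typing import Any, Callable, List

Tool = Callable[..., Any]

def skull_strip(volume: str) -> str:
    """Hypothetical preprocessing tool; returns a path-like handle to its output."""
    return f"{volume}.stripped"

def parcellate(volume: str, atlas: str = "julich-brain") -> str:
    """Hypothetical atlas-based parcellation step; the atlas is a parameter."""
    return f"{volume}.parcellated[{atlas}]"

def connectome(parcels: str) -> str:
    """Hypothetical connectivity-extraction step."""
    return f"{parcels}.connectome"

def run_workflow(steps: List[Tool], datum: str) -> str:
    """A workflow is an ordered composition of independent tools; a
    CWL-compatible engine could run the same description on different backends."""
    for step in steps:
        datum = step(datum)
    return datum

# The same tools recombine, unchanged, into different workflows:
print(run_workflow([skull_strip, parcellate], "sub-01_T1w.nii"))
print(run_workflow([skull_strip, parcellate, connectome], "sub-01_T1w.nii"))
```

The point of the design is that no tool knows which workflow it belongs to; reusability falls out of parameterization rather than bespoke glue code.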

What is missing?
Further insights into brain function and dysfunction are not only now possible but also urgently needed. Neurological and neuropsychiatric diseases create a significant burden for patients, relatives and society. Neurological disorders are by themselves the second leading cause of death after heart disease and account for 276 million DALYs (disability-adjusted life-years; GBD 2019). In 2010, the total cost of brain disorders in Europe came to €798 billion (Olesen et al., 2012). In addition, achieving progress in these areas is motivated by the more philosophical but no less weighty questions of knowing and understanding our own nature, our own consciousness and our own cognition, and of understanding the basis of brain health and the border between brain life and death. Ethical, philosophical, legal, regulatory, cultural and political challenges will need to be addressed in parallel.
Progress in brain medicine is tightly linked to advances in basic research, where fundamental questions remain open. To name but a few examples: the interplay of electrical and molecular-biochemical mechanisms of signal transduction at synapses, the formation and interaction of functions like memory and consciousness, the role of different brain states in the life-long reorganization of synaptic structure, and the relevance of brain architecture for supporting a concrete cognitive function are all areas where we have much to learn. Further, the need for interaction with the brain (both 'reading' and stimulation/manipulation), originally driven by clinical requirements, has opened up novel and expanding fields such as consciousness assessment, brain-machine interfaces, cognitive enhancement and sense-expanding technologies, which have relevance beyond the medical sector.
There is also a need for brain recording and activity-control techniques with high temporal and spatial resolution that are at the same time minimally or non-invasive. These technological advances require interdisciplinary work between neuroscience and areas such as micro- and nanoelectronics, optics, light-controlled drugs, nanorobotics and new materials (e.g., graphene). It is to be anticipated that advances in security, biocompatibility, reactive changes in the brain (e.g., gliosis, cell death), signal-to-noise ratio, problems related to invasiveness (surgery, infections) and closed-loop control of brain function will be made in the near future, and these will bring with them consequences in terms of legal and ethical issues.
While progress in these fields has been impressive, a comprehensive understanding of the underlying processes requires the integration of each system (e.g., visual) with the rest of the brain, with the body and with the environment. Furthermore, it requires integration of the molecular, subcellular, cellular and systems levels to reach a 'multiscale' understanding that incorporates the emergent properties of all these complex relationships. All these levels act differently in different brain states, and they can also malfunction, resulting in a large variety of neurological and neuropsychiatric diseases. To understand the process holistically, one needs to understand all the individual steps, which today is in many cases difficult or impossible.
The newest computational models are now able to integrate microscopic features, such as specific ion channels, synaptic receptors and neuromodulators, and to evaluate their impact at the population level.
Recently, this approach has even been extended to the brain-scale level, through studies of the molecular targets of anaesthetics such as propofol and their impact on large-scale activity. For example, changing the K+ conductance, or the kinetics of inhibitory (GABA-A) synaptic receptors, can induce a switch of brain activity to synchronized slow waves, similar to the effect of anaesthetics.59 This is an example of an area where computational models can make a real contribution, by identifying mechanisms through which microscopic changes can be causally linked to macroscopic behaviour.
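As an illustration of this class of model, the following self-contained sketch is a deliberately minimal mean-field caricature with illustrative parameters (not one of the published HBP models). Strengthening an adaptation current, a crude stand-in for an increased K+ conductance, switches a recurrently coupled population from sustained 'wake-like' firing to slow up/down alternation:

```python
import numpy as np

def f(u, theta=3.0, k=0.35):
    """Sigmoidal population transfer function."""
    return 1.0 / (1.0 + np.exp(-(u - theta) / k))

def simulate(b, w=6.0, I=3.2, tau_r=10.0, tau_a=500.0, dt=0.5, T=6000.0):
    """Population rate r with slow adaptation a (a proxy for a K+ current);
    b scales adaptation strength. Times in ms, rates normalized to [0, 1]."""
    n = int(T / dt)
    r, a = 0.8, 0.0
    trace = np.empty(n)
    for i in range(n):
        r += dt * (-r + f(w * r - a + I)) / tau_r
        a += dt * (b * r - a) / tau_a
        trace[i] = r
    return trace

for b in (1.0, 6.5):
    tail = simulate(b)[4000:]                  # discard the transient
    print(f"adaptation b={b}: rate range [{tail.min():.2f}, {tail.max():.2f}]")
# Weak adaptation -> sustained firing (narrow range); strong adaptation ->
# slow up/down alternation (range spanning ~0 to ~1), qualitatively
# resembling anaesthesia-induced slow waves.
```

The slow variable destabilizes the otherwise stable high-activity state, producing relaxation oscillations on the timescale of the adaptation; this is the generic mechanism behind the microscopic-to-macroscopic switch described above.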
Models can additionally be used to investigate how physiological mechanisms are perverted in pathological conditions, where, again, microscopic changes can lead to aberrant behaviour or clinical symptoms. Among the best-understood cases are the epilepsies, where several microscopic targets have been identified that lead to abnormally high excitability. This may result in seizures at the behavioural level, which can be focal or generalized. As seizures and their spatial profiles can be measured precisely by electrical recordings, computational models can be particularly precise for these disorders. However, the brain signals of many other pathologies, such as schizophrenia, are more subtle, and computational models also have a potentially important role to play here, not only in identifying mechanisms but also in predicting potentially informative macroscopic features.
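A minimal example of such an excitability-to-seizure link is a two-variable 'Epileptor-like' node, sketched below with purely illustrative parameters (a caricature, not the full clinical models used in the trials discussed later). A single excitability parameter x0 decides whether the node rests or cycles through seizure-like events:

```python
import numpy as np

def node(x0, T=10000.0, dt=0.05, tau=2857.0, I=3.1):
    """Two-variable Epileptor-like node: fast activity x, slow 'permittivity'
    z; the excitability parameter x0 controls rest vs. seizure-like cycling."""
    x, z = -1.8, 3.5
    n = int(T / dt)
    ictal = 0
    for _ in range(n):
        dx = -x**3 - 2.0 * x**2 + 1.0 - z + I
        dz = (4.0 * (x - x0) - z) / tau
        x += dt * dx
        z += dt * dz
        ictal += x > 0.0                       # crude marker of ictal activity
    return ictal / n

for x0 in (-2.5, -1.6):
    print(f"x0 = {x0}: fraction of time ictal ~ {node(x0):.2f}")
# Low excitability (x0 = -2.5): the node stays at rest (fraction ~ 0).
# Higher excitability (x0 = -1.6): recurring seizure-like oscillations.
```

In network versions of such models, regionally varying excitability (estimated from a patient's recordings) is what turns a generic model into a patient-specific one.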
This type of interplay between experimental measurements and modelling predictions is very powerful and has led to impressive advances in understanding network-level phenomena such as oscillations and waves. Extending such an approach to the level of the whole brain, however, is more challenging because of the high level of complexity involved, as well as the still insufficient temporal and spatial resolution of non-invasive human imaging techniques. Linking these models with imaging requires a deep biophysical understanding of the different signals involved. This aspect is key if computational models are to quantitatively predict imaging data and thus create precise loops between computational models and clinical neuroimaging, which should ultimately lead to a better understanding of neurological diseases.

Technological and computational challenges
Brain research poses enormous technological and computational challenges for brain interfacing, analysis and mechanistic understanding, data interpretation and modelling of brain processing. To cite but some examples:
• The complexity of data (multi-level brain organization, hierarchies, parallel information processing, redundancy, electrochemical processing, etc.). A key aspect of this complexity is the relationship between different scales, which speaks to the kind of graining (and accompanying data) that is most apt for elucidating these relationships. One approach from physics is the notion of 'renormalization'; namely, the conservation of laws from one scale to the next (e.g., sparse coupling, hierarchical dynamics, computational principles, etc.).
• A large proportion of data is not directly accessible and remains unknown (e.g., reactions at the cellular level cannot yet be measured in the living human brain).
• The specific spatial and temporal resolution of datasets, given the multiscale nature of brain activity in space and time. Scale integration is challenging (from micro- and nanometre scales, through meso- to macroscale), as is the need to capture brain dynamics. This requires representation of different scales in a common framework according to the topography of the findings, i.e., in a multi-level and multi-scale atlas.
• The large size of 'subsystems' (e.g., large molecules such as neurotransmitter receptors with many atoms and complex, dynamic structures, large networks, the whole-brain perspective, large cohorts).
• The wide spectrum of response patterns, dynamics, plasticity and behaviour of the system, not only in physiological but also in pathological conditions.
• The changing nature of the system, which manifests as plasticity at different spatial scales (from dendritic spines to large networks; processes such as spike adaptation, LTP and LTD) or as neuroregeneration after lesions.
• The accuracy and reliability of predictions and analyses, which is particularly critical for translating applications into brain medicine.
• The lack of a comprehensive brain theory, or of a selection of such theories competing with each other. Such a theory, even if it later turns out to be wrong, is tremendously helpful for systematically planning new experiments and interpreting their results.

Brain models and digital twins as enablers for future brain research
The acceleration of information and communication technologies in the past two decades has not only supported the development of simulation and machine learning technologies, it has also made data and models interoperable within a common ecosystem, leading to a novel type of brain model. Directly tapping into the results stemming from basic research on the brain, simulation of brain models is expected to play a fundamental role in understanding essential aspects of brain processes (by demonstrating the capacity to reproduce them in silico): decision-making, sensorimotor integration, memory formation, etc. One may also envision the potential use of such models and simulations to address questions that currently cannot be studied experimentally, such as the roles of particular genes during specific periods of brain development, or the creation of virtual cohorts of patients for rare diseases. From there, it is easy to envision how generic brain models can be customized to capture some of the distinct features of a given patient's brain. For example, an individual's structural and functional brain imaging data may constrain a generic digital brain model and render it subject-specific, thus enabling its use as a personalized analysis template or in silico simulation platform. A concrete instance of such an approach is the Virtual Epileptic Patient, wherein neuroimaging data inform in silico simulations of an epileptic patient's brain to support diagnostic and therapeutic interventions, clinical decision-making, and prediction of consequences (Jirsa et al., 2017).
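A minimal sketch of the personalization step follows, assuming a hypothetical subject connectivity matrix and a deliberately generic linear network model (this is not the Virtual Epileptic Patient pipeline, which uses far richer neural-mass dynamics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a subject's tractography-derived structural connectivity;
# in practice this would be loaded from data, e.g. np.load("sub-01_sc.npy").
n = 68                                         # e.g., a cortical parcellation
sc = rng.random((n, n))
np.fill_diagonal(sc, 0.0)
sc /= np.abs(np.linalg.eigvals(sc)).max()      # normalize spectral radius to 1

def simulate(sc, g, tau=10.0, dt=0.1, T=2000.0, sigma=0.1):
    """Generic linear stochastic model: each region relaxes to baseline and
    receives connectome-weighted input from the others (g: global coupling,
    kept below the stability limit g < 1/tau)."""
    steps, n = int(T / dt), sc.shape[0]
    x = np.zeros(n)
    out = np.empty((steps, n))
    for i in range(steps):
        x += dt * (-x / tau + g * sc @ x) + sigma * np.sqrt(dt) * rng.normal(size=n)
        out[i] = x
    return out

fc_sim = np.corrcoef(simulate(sc, g=0.08).T)   # simulated functional connectivity
print("simulated FC:", fc_sim.shape)
# Personalization then amounts to tuning g (and regional parameters) until
# fc_sim best matches the subject's empirical FC, yielding a subject-specific
# in silico model.
```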
Such personalized 'virtual brains' can be seen as a stepping-stone towards something even more theoretically and technically challenging, but also better adapted to the ever-changing nature of brain activity across all time scales.We indeed see the logical culmination of personalized brain simulation in a model that is continuously informed and updated by real-world data, a type of model referred to as a 'digital twin'.
Historically, the concept of the digital twin originated in the realm of industry and manufacturing (Grieves, 2019; Grieves and Vickers, 2017) and comprises three components: the physical object, its virtual counterpart, and the data flowing back and forth between the two. Empirical data measured for the physical object are passed to the model, and information and processes from the model are passed to the physical object. The digital twin provides 'a real mapping of all components in the product lifecycle using physical data, virtual data and interaction between them' (Tao et al., 2019). The digital twin is thus more than a general simulation model. It is the specific instance of the general model for an individual object, fed with empirical data from that specific object, e.g., an airplane engine in the industrial domain.
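The following toy sketch (our illustration, with an arbitrary one-dimensional state and correction gain) captures the three-component pattern in code: a virtual model that predicts forward, assimilates measurements flowing in from the physical object, and feeds recommendations back:

```python
class DigitalTwin:
    """Toy one-dimensional digital twin: a virtual counterpart plus a
    two-way data flow to and from its physical object."""

    def __init__(self, state: float, gain: float = 0.3):
        self.state = state      # the virtual counterpart's current estimate
        self.gain = gain        # how strongly measurements correct the model

    def predict(self, dt: float, drift: float = -0.05) -> None:
        """Forward step of the virtual model (here: simple exponential drift)."""
        self.state += drift * self.state * dt

    def assimilate(self, measurement: float) -> None:
        """Data flow physical -> virtual: nudge the model toward reality."""
        self.state += self.gain * (measurement - self.state)

    def recommend(self) -> str:
        """Data flow virtual -> physical: the simulation informs action."""
        return "intervene" if self.state > 1.0 else "observe"

twin = DigitalTwin(state=1.2)
for measurement in (1.15, 1.10, 0.90, 0.70):   # hypothetical sensor stream
    twin.predict(dt=1.0)
    twin.assimilate(measurement)
    print(f"estimate = {twin.state:.2f} -> {twin.recommend()}")
```

Real twins replace the scalar state with a high-dimensional model and the fixed gain with proper data assimilation, but the loop structure is the same.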
In constructing a 'digital twin' of a living organ, one is confronted by important challenges over and above those encountered when constructing the digital twin of an inanimate object. The concept of the 'digital twin' in this context therefore needs to be carefully defined, to provide clarity on its limitations and to avoid creating unrealistic expectations of exact fidelity or even counterproductive hype (Evers and Salles, 2021). The digital twin as discussed here should be understood as a virtual model designed to adequately represent an object or process, constrained by data from its physical counterpart, that provides simulation data to guide choices and anticipate their consequences. The digital twin is thus a copy in the practical sense, usually associated with a model of a function or process, and its power lies in its usefulness for dealing with relevant problems faced by its physical counterpart, without the need (and certainly not the claim) to capture every single detail thereof. In a neuroscience context, a 'digital twin' of a brain in the above sense holds much promise as an approach for continuously adapting interventions in functional neurorehabilitation or for tailoring neurotechnology-based interventions. It is clear that applications making use of a high-fidelity digital twin of a human brain updated in quasi-real time will require technical developments (e.g., ecological immersion of that twin brain in simulated environments; high-bandwidth, stable brain-machine interfaces) that do not yet exist; as such, they remain a long-term objective for a rather distant future. This is not to say, however, that digital twins cannot already be applied in neuroscience and medicine today, provided they adequately consider the intrinsic limitations of current brain models and of available personalization processes, as well as those faced by current technologies in updating them at the required frequency. The digital twin thus defines the current horizon of our digital neuroscience roadmap and must be appropriately taken into account as a driver for future developments in both the EBRAINS architecture and neurotechnology.
While the use of digital twins of the brain in concrete applications may still seem some way off, the era of the digital brain has, without question, already started. The digital brain is a central concept under which data, models, theory, methods and computational technology are integrated for all research and development efforts undertaken in the framework of the HBP. It enables researchers to address some of the major challenges that have hindered progress in neuroscience for decades. These challenges include our understanding of intra- and inter-subject variability, the non-identifiability of mechanisms and multiscale complexity. EBRAINS provides the infrastructure and user interfaces that allow the required components of data, models and methods to interoperate; in doing so, it de facto establishes the operational basis for the concept of the digital brain to take centre stage in neuroscience research.
We propose that there are three distinct areas where digital brain simulations of all kinds (statistical average, personalized, digital twin) could be fruitfully applied in the short to medium term: (1) basic brain research, (2) applications in medicine, and (3) neuro-inspired technologies.

(1) Basic brain research
The digital twin concept should not replace basic research and knowledge accumulation; rather, it can be thought of as a useful 'engineering' tool that currently functions as an in-progress predictive model with a dual purpose: (1) putting current knowledge to the test, and (2) anticipating the desired and undesired effects of interventions on a single individual. The latter is appealing because the number of interventional methods is expanding (deep brain stimulation (DBS), neuromodulation, transcranial magnetic stimulation, transcranial direct current stimulation, drugs, optogenetics, photopharmacology), but these are currently applied 'semi-empirically', with the available information about electrode location; circuit connectivity, function and electrical models; genetic promoters of neuronal types; expression patterns of neuroreceptors and their signalling pathway models, etc. The digital twin may allow rational decision-making regarding these parameters and the testing of outcomes, followed by re-evaluation of the model, and so forth. Moreover, it could become the platform for integrating cumulative knowledge and making it available for statistical/AI treatments.
To be successful, the underlying models must be biologically realistic, i.e., anatomically adequate and functionally comprehensive. This creates challenges for the human brain, where many features are not directly accessible for measurement, particularly in the living brain. Different strategies are used in the HBP to overcome this problem; they require the integration of highly heterogeneous data, including in vivo and ex vivo data, in the same reference framework. In an alternative, complementary approach, the BRAIN Initiative Cell Census Network (BICCN) undertakes in-depth characterization of (small-scale) components of the brain, e.g., the most detailed and comprehensive multi-modal model of the primary motor cortex, including single-cell transcriptomes, chromatin accessibility, DNA methylomes, spatially resolved single-cell transcriptomes, morphological and electrophysiological properties and cellular-resolution input-output mapping (Callaway et al., 2021). Since it is currently not possible to completely map the human brain at such resolution, it is necessary to predict features of other brain regions based on these data, as well as on sparse but region-specific data and predictions of relationships between brain regions. In a comparable way, predictions may also rely on data obtained in animal brains. This approach has many similarities with a bottom-up approach, or 'emergentism', i.e., the expectation that system features such as self-organized behaviour will emerge from assembling suitably realistic neurons and neuronal networks. To develop a realistic digital twin strategy, the integration of such an approach might well be necessary and useful. Is it also sufficient? Evidence suggests that such a predictive approach should be supplemented by the definition of behaviour from an optimization perspective, followed by the search for the computational realization of optimization processes and their underlying anatomical, molecular and genetic structures in the data. The BigBrain model is a unimodal anatomical whole-brain model at 20-micron resolution, which serves as a scaffold for the spatial integration of data sets from both top-down and bottom-up perspectives (Amunts et al., 2013).
Brain simulation plays a key role in elucidating brain complexity by allowing hypotheses about the brain's multi-level organization to be tested; moreover, it will become more and more important to interconnect simulations executed at different spatial levels (e.g., the EBRAINS simulation engines GROMACS at the molecular level, Arbor and NEURON at the cellular level, NEST at the systems level and TVB at the whole-brain level). Closed-loop simulations would allow the constant updating of models with experimental data, which will improve accuracy.
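A toy sketch of such scale-bridging follows (our own illustration, not an actual production co-simulation interface between the engines named above): a coarse rate network in which one region is replaced by a small spiking pool, with signals exchanged in both directions at every step.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, dt = 10, 0.1
W = rng.random((n_regions, n_regions)) * 0.05   # illustrative coupling weights
rates = np.zeros(n_regions)

# 'Detailed' model standing in for region 0: a leaky integrate-and-fire pool.
n_cells = 200
v = np.zeros(n_cells)

for step in range(2000):
    # Coarse -> detailed: network input drives the spiking pool.
    drive = 1.5 + W[0] @ rates
    v += dt * (drive - v) + 0.3 * np.sqrt(dt) * rng.normal(size=n_cells)
    spikes = v > 1.0
    v[spikes] = 0.0                              # reset after spiking
    # Update the coarse rate network.
    rates += dt * (-rates + np.tanh(W @ rates + 0.5))
    # Detailed -> coarse: the pool's firing replaces region 0's rate variable.
    rates[0] = spikes.mean() / dt
print(f"region-0 rate recovered from the spiking pool: {rates[0]:.2f} (a.u.)")
```

The engineering challenge in real multiscale co-simulation is exactly this exchange step: converting between representations (spikes, rates, concentrations) and keeping the simulators synchronized in time.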
The multiscale complexity of the living brain, its limited accessibility for measurements and our incomplete understanding of brain processes make the realization of the digital twin approach difficult, to say the least. The BigBrain is an anatomical model of the brain of a body donor and serves as the scaffold for the integration of twin data in a strict sense, as well as data from other sources (typically based on experimental population data), synthetic data simulated by models, and data from different brains.
Based on theory, the integration of these highly heterogeneous sources of information and knowledge is enabled by different workflows. These also identify the limitations and ranges of validity of the digital twin strategy, which is crucial for the responsible use of, and subsequent trust in, the technology. Nevertheless, such data-driven models may represent the closest digital representation of a living human brain that is achievable at any given point in time.

(2) Twins in brain medicine
From such digital twins, personalized twins can be derived with the aim of improving diagnostics and therapy for patients. Analogous to cardiac digital twins (Gillette et al., 2021), i.e., digital replicas of patient hearts derived from clinical data that match all available clinical observations, human electrophysiological replicas have great potential for informing clinical decision-making and for facilitating the cost-effective, safe and ethical testing of novel device therapies. Twins in medicine address a defined spatial scale with a defined granularity, consider a defined time interval and serve a dedicated purpose. An application of the digital twin approach for Alzheimer's disease has been proposed just recently (Stefanovski et al., 2021); while careful consideration of data privacy, security and safety aspects will be required, personalized twins might also offer a uniquely powerful strategy for treating such conditions.
The Virtual BigBrain enables the construction of individual connectomes based on the neuroimaging and EEG data of a subject and anatomical data from the BigBrain model (Jirsa et al., 2017). The ongoing EPINOV clinical trial employing TVB represents a major step forward in this regard; scientists have developed individual models of the brains of patients undergoing epilepsy surgery to guide the procedure and predict the best seizure outcome (Proix et al., 2017). The strategy, again, is to combine population data with data from an individual brain to develop a model, a twin, that is realistic enough to allow simulation of the intervention prior to surgery.
Other applications would also be possible using this approach. For example, DBS, and neurosurgery in general, would benefit massively from personalized twins, which could help in planning procedures so as to ensure maximum protection and preservation of healthy tissue. In particular, a twin could inform surgeons of the need to re-compute the model based on the brain signals detected by sensors during surgery. This would require running simulations nearly on the fly, e.g., before or during surgery, which implies stringent requirements for high-availability service and security; certainly, this is not yet within close reach. Applications in intensive care units following stroke or traumatic brain injury would have similar requirements. Beyond invasive therapeutic interventions, a digital twin would be a powerful tool for predicting the consequences of brain lesions and pathophysiology, which is sometimes described in terms of computational neuropsychology, namely, characterizing lesion-deficit relationships in silico using synthetic lesions (Parr et al., 2018). This could revolutionize our capacity to personalize neurorehabilitation, while integrating complex information generated by virtual reality and robot-based therapies together with fine measurements of patients' responses and progress.
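As a minimal illustration of the synthetic-lesion idea (our sketch, not the generative models of Parr et al., 2018): disconnect one region of a stable linear network model and quantify how a crude functional readout changes.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
n = 30
sc = rng.random((n, n))
np.fill_diagonal(sc, 0.0)
sc /= np.abs(np.linalg.eigvals(sc)).max()        # normalize spectral radius

def total_covariance(sc, g=0.08, tau=10.0):
    """Crude functional readout: total stationary covariance of the stable
    linear network dx = (-x/tau + g*sc@x)dt + noise (illustrative only)."""
    A = -np.eye(n) / tau + g * sc
    P = solve_continuous_lyapunov(A, -np.eye(n))  # solves A P + P A^T = -Q, Q = I
    return np.trace(P)

baseline = total_covariance(sc)
for node in (0, 1, 2):
    lesioned = sc.copy()
    lesioned[node, :] = 0.0                      # synthetic lesion: disconnect
    lesioned[:, node] = 0.0                      # the region in silico
    print(f"lesion of node {node}: readout change "
          f"{total_covariance(lesioned) - baseline:+.2f}")
```

Mapping such simulated deficits against observed ones is the in silico counterpart of classical lesion-deficit studies, with the advantage that lesions can be placed anywhere, repeatedly and harmlessly.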
Other applications could employ simulations to test 'clinical' simulated populations that could be far larger than real ones, thereby providing data amplification through the creation of cohorts of 'digital patients'. This could be particularly interesting for evaluating rare diseases, studying the influence of sex or predicting the progression of diseases (Maestú et al., 2021).
Another perspective would involve testing the effects of drugs in a virtual environment to uncover their mechanisms not only at the molecular but also at the systemic level. Considering that quantum mechanics/molecular mechanics (QM/MM) simulations are computationally highly demanding, such an approach at the system level would require highly scalable tools run on the most powerful supercomputers.

(3) Neuro-inspired technologies
A fundamental neurocomputational challenge is to establish what level of granularity in brain modelling is actually required to support the emergence of a variety of cognitive and sensorimotor functions.Models of the human brain, simulated in embodied settings (i.e., having the ability to control virtual bodies interacting with virtual but physically realistic environments), and receiving time-dependent input streams to produce behavioural outputs, represent a uniquely attractive platform to investigate the links between brain structure, brain activity and cognitive and functional performance.
One field expected to benefit greatly from this approach is neurorehabilitation, where highly realistic models of brain-body interactions will be useful in elucidating the neural mechanisms at play.The combination of highly detailed brain models together with models of the spinal cord and of the musculoskeletal system indeed affords special opportunities, such as investigating the relation between neural activity and resulting motor behaviour in a detailed, quantitative manner.Personalized models could thus be integrated into decision-support systems to guide the choice and combination of rehabilitation strategies by a physician.They may also support breakthrough developments in central nervous system (including spinal cord) stimulation technology and functional electrical stimulation, improving the efficacy of these techniques and expanding their relevance to a greater breadth of conditions.
Similarly, the combination of high-fidelity models of both the human musculoskeletal and central nervous systems is also expected to support the emergence of enabling in silico technologies for so-called electroceuticals, i.e., medical devices that provide neurostimulation for therapeutic purposes (e.g., in Parkinson's disease, epilepsy, etc.). There is little doubt that the medical device industry would have a fundamental interest in tools guiding their product design, generating predictions regarding efficacy and overall de-risking of the whole product development process. With the brain atlases and the multiscale brain simulators created by the HBP, it thus seems timely to consider the collection and integration of new data (e.g., dielectric properties) as a prelude to the development of simulation tools and services geared towards the evaluation of electroceuticals.
From a technological perspective, the human brain is also the most promising 'Rosetta stone' for the implementation of advanced cognitive abilities in artificial systems. Modern artificial agents are characterized by limited levels of intelligence, an inability to generalize beyond provided training sets and a strikingly superficial understanding of their environment. The lack of generalizability implies either the necessity for large data sets (the resource-intensive big data paradigm), continuous human supervision (remotely controlled systems) or extensive, rigid mission planners accounting for any allowable occurrence (for planetary or ocean exploration). The superficiality of perception implies a lack of robustness of, and trust in, artificial perception systems, a known obstacle to the emergence of, e.g., effective driving automation. To overcome such limitations, brain-inspired multi-area model architectures must be developed in conjunction with new embodied and incremental learning algorithms, with a view to finding those that best emulate the functional mechanisms underlying human perceptual cognition. Harnessing such mechanisms and understanding the emergence of cognitive functions will be essential to create explainable (and thus reliable) AI.
Finally, neuromorphic technologies, where both data transfer and processing are event- (i.e., spike-) based, provide special opportunities for edge computing, mobile robotics and neuroprosthetics. Considering current trends in the automation of mobile systems and the deployment of 'always-on' sensor arrays in particular, neuromorphic devices are expected to deliver enhanced, low-latency capacities for perception, cognition and action, while reducing the impact of onboard operations on the system's energy consumption. For example, combining spike-driven processing units with spike-generating sensors (e.g., dynamic vision sensors) into complete neuromorphic systems (sensors + processing units) will make it easier to perform data fusion and alleviate constraints related to the heterogeneity of data sources. Advances in the neurocomputational understanding of learning by neuronal circuits, especially through synaptic plasticity, will also provide new ways of endowing neuromorphic circuits with ever-more complex functionalities at a lower training cost (e.g., one-shot and continuous online learning). The HBP has supported the SpiNNaker many-core and BrainScaleS physical-emulation neuromorphic computing platforms, establishing the first open neuromorphic computing services, and has contributed to the further development of these technologies. Future developments in neural networks for artificial intelligence applications will see a convergence between mainstream AI and neuromorphic technologies; basic brain science will be key in informing the development of these technologies.
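For instance, the PyNN API, used as a common interface to such platforms, lets the same spiking-network description target software simulators and neuromorphic hardware. A minimal sketch, assuming PyNN with a NEST backend installed (SpiNNaker would be addressed via its own pyNN module), might look like this:

```python
import pyNN.nest as sim    # swap in a neuromorphic backend to run on hardware

sim.setup(timestep=0.1)    # ms

# A small population of conductance-based integrate-and-fire neurons
# driven one-to-one by Poisson spike sources.
neurons = sim.Population(100, sim.IF_cond_exp(), label="excitatory")
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=50.0))
sim.Projection(noise, neurons, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.02, delay=1.0))

neurons.record("spikes")
sim.run(500.0)             # ms

spiketrains = neurons.get_data().segments[0].spiketrains
print(f"{sum(len(st) for st in spiketrains)} spikes from {len(spiketrains)} neurons")
sim.end()
```

Backend portability of this kind is what allows an algorithm prototyped in simulation to be moved onto event-based hardware without rewriting the model.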

Responsible research and innovation
Digital brain research is driven by the desire to promote society's best interests and reflect societal priorities, including a better understanding of the brain, the development of better diagnostic tools and more effective treatment of brain diseases.To ensure that societal concerns are addressed and reflected in the research and its outcomes and that research and innovation processes are carried out responsibly, future research programmes must integrate anticipatory practices, neuroethical reflection, multi-stakeholder and citizen engagement and support ongoing compliance with current legislation, regulation and good research practice.This includes careful consideration of the role of gender and diversity in the production of neuroscientific content and governance of research, attention to potential dual-use research of concern or misuse of neuroscientific findings as well as reflection on the ethical sustainability of the research, its impact on human rights and its long-term societal and political implications.
The framework of responsible research and innovation (RRI) defines a multidisciplinary approach to tackling the ethical, philosophical, societal and regulatory challenges that accompany the vision of future digital brain research.Furthermore, RRI-inspired research and practices can be useful in building a future where responsible digital brain research is proactive in its recognition of existing and emerging societal and ethical challenges.
The concept of the 'digital twin' applied to brain models raises significant philosophical (e.g., what is the relation between the brain and its 'twin'?) and ethical (e.g., are there potentially problematic applications of the technology? Who is involved in the analysis and decisions on potential applications?) questions. Reflection on these questions is important for managing societal expectations and for determining future directions for ethical analysis and policy-making. Philosophical and neuroethical analysis adds important insight into the meaning and adequacy of the concepts involved. Some of the conceptual issues to address include the following: is it reasonable to describe a model of a brain as a 'twin'? If so, why? What are the limits of the term in this context? In what sense is the brain model a 'twin'? Conceptual clarity is a prerequisite for informed debates on the ethical issues raised by digital brain research and is essential for avoiding hype, misconceptions and misguided societal expectations. The latter can lead to distrust in science, which will negatively impact the quality of the research and its applications (Evers and Salles, 2021).
Additional social and legal issues to be considered in relation to digital brain research include those raised by data protection and General Data Protection Regulation-compliant data governance, social desirability, acceptability, and sustainability of digital brain models and issues raised by the possibility of advanced artificial cognition, brain-inspired computing and neurorobotics research, among others.In one example, the intersection of neuroscience and technology is likely to lead to new approaches to AI.Digital brain research must ensure adequate representation of diversity in data (sex/gender, age, ethnicity/race etc.) on brain health and brain architecture as well as in the involved scientists, practitioners, and stakeholders; this diversity will help ensure that the discipline remains vigilant to the much-discussed issues linked to the reproduction of biases in AI and can proactively engage with new concerns that may arise from novel approaches, technologies and applications.
To meet the challenges described above, the HBP is developing the EBRAINS research infrastructure to incorporate responsibility by design: to proactively anticipate, reflect, engage and undertake network-wide action on these and future neuroethical, philosophical, societal and legal challenges (Stahl et al., 2021). This approach aims to incorporate principles and practices of RRI into the infrastructure through a multi-pronged approach aimed at both the governance and the research level. Elements include neuroethical reflection and research; proactive governance structures, including foresight, public outreach and dialogue activities; data governance; diversity and equal-opportunities research and governance support; and support for proactively addressing issues of dual-use research of concern, misuse and commercialisation of EBRAINS research and its outcomes.

Conclusion
An improved understanding of brain function depends on a better appreciation of fundamental mechanisms: the actual biological processes, their relationships and the rules that govern them. Only then can we target prevention, therapies and mechanism-based diagnoses. Although now feasible, digital twins of the brain are still at an early stage and have to undergo rigorous testing and validation before they can meaningfully address brain disorders and become the basis for disruptive new health technologies. Further, brain twins raise major ethical questions that we will need to address in an open dialogue with society. Twins can be seen as a kind of endpoint for ongoing developments of brain models and analytics.
With this goal in mind, a digital infrastructure that can host such digital brain twins, and that provides interoperability, information security, multi-level data, and access to knowledge-based computing resources including high-performance computing and other relevant technologies, may foster progress in understanding these rules and in refining our digital brain twins to the point where they pass validation testing and become useful for clinical translation. EBRAINS is an infrastructure that is capable of hosting such developments.
Structuring data and knowledge such that they can easily be recombined and integrated by the research community towards an infinity of digital brain twins, together with delivering the powerful technology with which complex simulations of these twins can be performed, may in itself represent a disruptive technology for generating scientific insight.

Scientific Goals - a Roadmap
Short term
• Combining and closely mapping bottom-up and top-down models to speed up theory formation on how information processing is mechanistically implemented in the vertebrate brain
• Building on a whole set of complex high-resolution regional models and integrating them into cognitive architectures
• Using the existing expertise of EBRAINS to create state-of-the-art brain atlases for novel animal models like birds
• Leveraging the potential of next-generation multilevel human brain atlases for neuroimaging, modelling and clinical applications
• Extending the multi-level human brain atlas with physiological and functional knowledge, e.g., providing the foundations for a curated and standardized perturbational atlas of effective connectivity, built on, for example, intracortical stimulation and simultaneous intracranial (stereo EEG) and extracranial (high-density EEG) recordings before and after lesions, for maps of cortical reactivity and connectivity (TMS-EEG), and for multimodal neuroimaging data sets in healthy controls and patients
• Developing the first models integrating neuronal data with those from glial cells, as well as molecular and immunological signalling pathways
• Based on developments in the field of epilepsy and the ongoing EPINOV trial, developing personalized brain models for several other brain diseases, using neuroimaging and physiological data of individual brains in combination with atlas data from post-mortem and in vivo brains
• Producing more complex combined models of several brain regions (e.g., cerebellum and striatum with cortex) or of different scales (e.g., whole-brain activity with detailed cellular models) to represent biologically realistic circuits, simulating these models and comparing the results with empirical data
• Approaching multiscale dynamics and deciphering how they relate to structure
• Providing the most comprehensive model of the hippocampus as a key region involved in Alzheimer's disease, integrating macro-, meso- and microscopical levels, to derive therefrom a twin of a rodent's hippocampus
• Applying detailed anatomo-physiological models for brain medicine in first use cases, e.g., the spinal cord
• Compiling an inventory of currently discussed brain theories and corresponding strategies for aligning experiment and theory

Middle term
• Creating ultra-high-resolution integrated models of the brain's cellular and axonal structure as a spatial framework to enrich the multilevel brain atlas down to the submicrometre scale with imaging data incorporating proteomics, genomics, connectomics and molecular information
• Developing more complex combined models, across several regions and/or scales, and predicting features of brain structure and function based on the simulation platform of EBRAINS
• Implementing first behavioural, information-theoretic and graph-theoretic analyses, complemented by whole-brain computational approaches, based on a multiscale, causal (stimulation and recording) approach for the diagnosis and treatment of stroke, disorders of consciousness, and neurodegenerative and mental diseases
• Developing a set of criteria to quantify the goodness-of-fit and the predictive power of large-scale brain models, paying special attention to the notorious problem of overfitting small empirical data sets
• Generating whole-brain models of sleep-wake states and attention, and using twins in diagnostics to monitor consciousness
• Generating useful new principles for the machine learning field and reaching the 'tipping point' at which the level of understanding of brain mechanisms behind perception, memory, cognition, decision-making and motor control will significantly impact the fields of artificial intelligence, robotics and neuromorphic technology
• Using general models of neuronal function to study species differences

Long term
• Developing realistic, large-scale brain models of sensorimotor, cognitive, perceptual and language function
• Simulating basic and more complex mouse behaviour in robots, using whole-brain models
• Evaluating emerging models of learning and adaptivity in biological brains for their potential to inspire new algorithms for machine learning and artificial intelligence, and novel engineering applications (e.g., new materials, artificial life, replacing and enhancing brain function)
• Computing data-driven models of brain ontogeny, development and ageing at population and individual levels as a prerequisite for basic science, improving brain medicine in children and adolescents, and informing technology, e.g., evolutionary algorithms used to emulate learning
• Applying detailed anatomo-physiological models in brain medicine (e.g., preparation of brain surgery, diagnostics, monitoring of rehabilitation)
• Applying simplified yet reasonably realistic twins for different functional systems in brain medicine, with the option to update them on the fly with information from real-life sensor data, e.g., during diagnostics, rehabilitation or in acute situations such as surgery
• Developing combined, multi-organ models that hasten the advent of Patient Twins by reflecting mechanisms of regulation of the nervous system with respect to the function of inner organs and the body