Luiz Olavo Bonino da Silva Santos: 1-11 of 11 journal articles
Journal Articles
Publisher: Journals Gateway
Data Intelligence (2024) 6 (2): 429–456.
Published: 01 May 2024
Since 2014, “Bring Your Own Data” workshops (BYODs) have been organised to inform people about the process and benefits of making resources Findable, Accessible, Interoperable, and Reusable (FAIR), a process known as FAIRification. The BYOD workshops’ content and format differ depending on their goal, context, and the background and needs of participants. Data-focused BYODs educate domain experts on how to make their data FAIR to find new answers to research questions. Management-focused BYODs promote the benefits of making data FAIR and instruct project managers and policy-makers on the characteristics of FAIRification projects. Software-focused BYODs gather software developers and FAIR experts to implement or improve software resources that support FAIRification. Overall, these BYODs intend to foster collaboration between the different types of stakeholders involved in data management, curation, and reuse (e.g. domain experts, trainers, developers, data owners, data analysts, FAIR experts). The BYODs also serve as an opportunity to learn what kind of support for FAIRification different communities need and to develop teaching materials based on practical examples and experience. In this paper, we detail the three different structures of the BYODs and describe examples of early BYODs related to plant breeding data and to rare disease registries and biobanks, which have shaped the structure of the workshops. We discuss the latest insights into making BYODs more productive, drawing on almost ten years of training experience in these workshops, including successes and encountered challenges. Finally, we examine how participants’ feedback has motivated research on FAIR, including the development of workflows and software.
Journal Articles
Publisher: Journals Gateway
Data Intelligence (2023) 5 (1): 163–183.
Published: 08 March 2023
Metadata, i.e. data about other digital objects, play an important role in FAIR, with a direct relation to all of the FAIR principles. In this paper we present and discuss the FAIR Data Point (FDP), a software architecture that aims to define a common approach to publishing semantically rich, machine-actionable metadata according to the FAIR principles. We present the core components and features of the FDP, its approach to metadata provision, the criteria used to evaluate whether an application adheres to the FDP specifications, and the service to register, index, and search the metadata content of available FDPs.
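The layered publication model the abstract describes can be sketched in a few lines of Python. This is a toy illustration only: all URIs, titles, and the predicate names linking the layers are hypothetical stand-ins, and a real FDP uses full RDF tooling with the vocabulary fixed by the FDP specification.

```python
# Toy illustration of the FDP's layered metadata model: repository -> catalog
# -> dataset -> distribution, each layer described with DCAT-style terms.
# All URIs, titles, and predicate names below are hypothetical examples.

LAYERS = [
    # (subject URI, type, title, predicate linking to the child layer, child URI)
    ("<https://fdp.example.org>", "r3d:Repository", "Example FDP",
     "fdp:metadataCatalog", "<https://fdp.example.org/catalog/c1>"),
    ("<https://fdp.example.org/catalog/c1>", "dcat:Catalog", "Example catalog",
     "dcat:dataset", "<https://fdp.example.org/dataset/d1>"),
    ("<https://fdp.example.org/dataset/d1>", "dcat:Dataset", "Example dataset",
     "dcat:distribution", "<https://fdp.example.org/distribution/x1>"),
    ("<https://fdp.example.org/distribution/x1>", "dcat:Distribution",
     "CSV download", None, None),
]

def to_turtle(layers):
    """Serialise the layer hierarchy as Turtle triples (prefixes omitted)."""
    lines = []
    for subject, rdf_type, title, child_predicate, child in layers:
        lines.append(f"{subject} a {rdf_type} ;")
        lines.append(f'    dct:title "{title}"' + (" ;" if child_predicate else " ."))
        if child_predicate:
            lines.append(f"    {child_predicate} {child} .")
    return "\n".join(lines)

print(to_turtle(LAYERS))
```

Each layer points to the one below it, which is what lets a harvester start at a repository's root URI and walk down to individual distributions.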
Journal Articles
Oussama Mohammed Benhamed, Kees Burger, Rajaram Kaliyaperumal, Luiz Olavo Bonino da Silva Santos, Marek Suchánek ...
Publisher: Journals Gateway
Data Intelligence (2023) 5 (1): 184–201.
Published: 08 March 2023
While the FAIR Principles do not specify a technical solution for ‘FAIRness’, it was clear from the outset of the FAIR initiative that it would be useful to have commodity software and tooling to simplify the creation of FAIR-compliant resources. The FAIR Data Point is a metadata repository that follows the DCAT(2) schema and uses the Linked Data Platform to manage the hierarchical metadata layers as LDP Containers. A recent flurry of development activity around the FAIR Data Point has significantly improved its power and ease of use. Here we describe five specific tools (an installer, a loader, two Web-based interfaces, and an indexer) aimed at maximizing the uptake and utility of the FAIR Data Point.
Journal Articles
Sascha Welten, Laurenz Neumann, Yeliz Ucer Yediel, Luiz Olavo Bonino da Silva Santos, Stefan Decker ...
Publisher: Journals Gateway
Data Intelligence (2021) 3 (4): 528–547.
Published: 25 October 2021
In recent years, implementations enabling Distributed Analytics (DA) have gained considerable attention due to their ability to perform complex analysis tasks on decentralised data by bringing the analysis to the data. These concepts propose privacy-enhancing alternatives to data centralisation approaches, whose applicability is restricted in the case of sensitive data due to ethical, legal or social aspects. Nevertheless, an inherent problem of DA-enabling architectures is the black-box-like behaviour of the highly distributed components, originating from the lack of semantically enriched descriptions, particularly the absence of basic metadata for data sets or analysis tasks. To address these problems, we propose a metadata schema for DA infrastructures, which provides a vocabulary to enrich the involved entities with descriptive semantics. We initially perform a requirement analysis with domain experts to reveal the necessary metadata items, which form the foundation of our schema. Afterwards, we transform the obtained domain-expert knowledge into user stories and derive the most significant semantic content. In the final step, we enable machine-readability via RDF(S) and SHACL serialisations. We deploy our schema in a proof-of-concept monitoring dashboard to validate its contribution to the transparency of DA architectures. Additionally, we evaluate the schema's compliance with the FAIR principles. The evaluation shows that the schema succeeds in increasing transparency while complying with most of the FAIR principles. Because a common metadata model is critical for enhancing compatibility between multiple DA infrastructures, our work lowers data access and analysis barriers. It represents an initial, infrastructure-independent foundation for the FAIRification of DA and the underlying scientific data management.
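The core idea of such a schema, i.e. required, typed metadata items for the entities in a DA infrastructure, can be mimicked in plain Python. The field names below are illustrative stand-ins, not the schema's actual vocabulary; the real schema is expressed in RDF(S), with SHACL shapes doing the validation.

```python
# Plain-Python analogue of a metadata schema for Distributed Analytics:
# each entity type has required, typed metadata items. Field names are
# illustrative only; the actual schema uses RDF(S) and SHACL.

REQUIRED_FIELDS = {
    "dataset": {"identifier": str, "title": str, "access_rights": str},
    "analysis_task": {"identifier": str, "description": str, "author": str},
}

def validate(entity_type, record):
    """Return a list of violations, in the spirit of a SHACL validation report."""
    violations = []
    for field, ftype in REQUIRED_FIELDS[entity_type].items():
        if field not in record:
            violations.append(f"missing required field: {field}")
        elif not isinstance(record[field], ftype):
            violations.append(f"wrong type for {field}: expected {ftype.__name__}")
    return violations

print(validate("dataset", {"identifier": "ds-1", "title": "Cohort A"}))
```

A monitoring dashboard like the paper's proof of concept could surface exactly such violation reports to make the otherwise opaque components inspectable.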
Journal Articles
Publisher: Journals Gateway
Data Intelligence (2020) 2 (1-2): 238–245.
Published: 01 January 2020
The FAIR principles, an acronym for Findable, Accessible, Interoperable and Reusable, are recognised worldwide as key elements of good practice in all data management processes. To understand how the Brazilian scientific community is adhering to these principles, this article reports on Brazilian adherence to the GO FAIR initiative through the creation of the GO FAIR Brazil Office and on the manner in which its implementation networks are created. To contextualise this understanding, we provide a brief presentation of open data policies in Brazilian research and government, and finally we describe the model that has been adopted for the GO FAIR Brazil implementation networks. The Brazilian Institute of Information in Science and Technology is responsible for the GO FAIR Brazil Office, which operates in all fields of knowledge and supports thematic implementation networks. Today, GO FAIR Brazil-Health is the first implementation network in operation; it works across all health domains and serves as a model for other fields, such as agriculture, nuclear energy, and digital humanities, which are in the process of negotiating their adherence. This report demonstrates the strong interest and effort of the Brazilian scientific communities in implementing the FAIR principles in their research data management practices.
Journal Articles
Publisher: Journals Gateway
Data Intelligence (2020) 2 (1-2): 276–284.
Published: 01 January 2020
The industry sector is a very large producer and consumer of data, and many companies traditionally focused on production or manufacturing now rely on the analysis of large amounts of data to develop new products and services. As many of the required data sources are distributed and outside the company, FAIR data will have a major impact, both by reducing the existing internal data silos and by enabling efficient integration with external (public and commercial) data. Many companies are still in the early phases of internal data “FAIRification”, providing opportunities for SMEs and academics to apply and develop their expertise on FAIR data in collaborations and public-private partnerships. For a global Internet of FAIR Data & Services that also involves industry to thrive, professional tools and services are essential. FAIR metrics and certifications covering individuals, data, organizations, and software must ensure that data producers and consumers have independent quality metrics on their data. In this opinion article we reflect on some industry-specific challenges of FAIR implementation to be dealt with when choices are made regarding “Industry GOing FAIR”.
Journal Articles
Publisher: Journals Gateway
Data Intelligence (2020) 2 (1-2): 10–29.
Published: 01 January 2020
The FAIR principles have been widely cited, endorsed and adopted by a broad range of stakeholders since their publication in 2016. By intention, the 15 FAIR guiding principles do not dictate specific technological implementations, but provide guidance for improving the Findability, Accessibility, Interoperability and Reusability of digital resources. This has likely contributed to the broad adoption of the FAIR principles, because individual stakeholder communities can implement their own FAIR solutions. However, it has also resulted in inconsistent interpretations that carry the risk of leading to incompatible implementations. Thus, while the FAIR principles are formulated at a high level and may be interpreted and implemented in different ways, true interoperability requires convergence on implementation choices that are widely accessible and (re)usable. We introduce the concept of FAIR implementation considerations to assist accelerated global participation and convergence towards accessible, robust, widespread and consistent FAIR implementations. Any self-identified stakeholder community may either choose to reuse solutions from existing implementations or, when they spot a gap, accept the challenge of creating the needed solution, which, ideally, can be reused by other communities in the future. Here, we provide interpretations and implementation considerations (choices and challenges) for each FAIR principle.
Journal Articles
Annika Jacobsen, Rajaram Kaliyaperumal, Luiz Olavo Bonino da Silva Santos, Barend Mons, Erik Schultes ...
Publisher: Journals Gateway
Data Intelligence (2020) 2 (1-2): 56–65.
Published: 01 January 2020
The FAIR guiding principles aim to enhance the Findability, Accessibility, Interoperability and Reusability of digital resources such as data, for both humans and machines. The process of making data FAIR (“FAIRification”) can be described in multiple steps. In this paper, we describe a generic step-by-step FAIRification workflow to be performed by a multidisciplinary team guided by FAIR data stewards. The FAIRification workflow should be applicable to any type of data; it has been developed and used for “Bring Your Own Data” (BYOD) workshops, as well as for the FAIRification of, e.g., rare disease resources. The steps are: 1) identify the FAIRification objective, 2) analyze data, 3) analyze metadata, 4) define semantic model for data (4a) and metadata (4b), 5) make data (5a) and metadata (5b) linkable, 6) host FAIR data, and 7) assess FAIR data. For each step we describe how the data are processed, what expertise is required, which procedures and tools can be used, and which FAIR principles they relate to.
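The seven steps above can be sketched as an ordered pipeline. This is only a scaffold: the step labels come from the abstract, while the logging function and the resource name are placeholders, since in practice each step is carried out by a multidisciplinary team rather than code.

```python
# The FAIRification workflow's steps as an ordered pipeline. Steps 4 and 5
# each split into a data (a) and a metadata (b) substep, giving nine entries.

WORKFLOW = [
    "1. identify the FAIRification objective",
    "2. analyze data",
    "3. analyze metadata",
    "4a. define semantic model for data",
    "4b. define semantic model for metadata",
    "5a. make data linkable",
    "5b. make metadata linkable",
    "6. host FAIR data",
    "7. assess FAIR data",
]

def run_workflow(resource, steps=WORKFLOW):
    """Record each step completed for the resource (placeholder logic)."""
    log = []
    for step in steps:
        log.append(f"{resource}: {step}")
    return log

for entry in run_workflow("rare-disease-registry"):
    print(entry)
```

The ordering matters: the objective chosen in step 1 determines which semantic models are needed in step 4, and the assessment in step 7 checks the hosted result of step 6 against that objective.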
Journal Articles
Publisher: Journals Gateway
Data Intelligence (2020) 2 (1-2): 87–95.
Published: 01 January 2020
Since their publication in 2016 we have seen a rapid adoption of the FAIR principles in many scientific disciplines where the inherent value of research data and, therefore, the importance of good data management and data stewardship, is recognized. This has led to many communities asking “What is FAIR?” and “How FAIR are we currently?”, questions which were addressed respectively by a publication revisiting the principles and the emergence of FAIR metrics. However, early adopters of the FAIR principles have already run into the next question: “How can we become (more) FAIR?” This question is more difficult to answer, as the principles do not prescribe any specific standard or implementation. Moreover, there does not yet exist a mature ecosystem of tools, platforms and standards to support human and machine agents to manage, produce, publish and consume FAIR data in a user-friendly and efficient (i.e., “easy”) way. In this paper we will show, however, that there are already many emerging examples of FAIR tools under development. This paper puts forward the position that we are likely already in a creolization phase where FAIR tools and technologies are merging and combining, before converging in a subsequent phase to solutions that make FAIR feasible in daily practice.
Journal Articles
Publisher: Journals Gateway
Data Intelligence (2020) 2 (1-2): 96–107.
Published: 01 January 2020
In recent years, as newer technologies have evolved around the healthcare ecosystem, more and more data have been generated. Advanced analytics applied to the data collected from numerous sources, both from healthcare institutions and generated by individuals themselves via apps and devices, could lead to innovations in the treatment and diagnosis of diseases, improve the care given to patients, and empower citizens to participate in decision-making regarding their own health and well-being. However, the sensitive nature of health data prohibits healthcare organizations from sharing it. The Personal Health Train (PHT) is a novel approach that aims to establish a distributed data analytics infrastructure enabling the (re)use of distributed healthcare data while data owners stay in control of their own data. The main principle of the PHT is that data remain in their original location and that analytical tasks visit the data sources and are executed there. The PHT provides a distributed, flexible approach to using data in a network of participants, incorporating the FAIR principles. It facilitates the responsible use of sensitive and/or personal data by adopting international principles and regulations. This paper presents the concepts and main components of the PHT and demonstrates how it complies with the FAIR principles.
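The PHT's main principle, that the analysis visits the data rather than the reverse, can be illustrated with a toy example. The station names and records below are fabricated, and a real PHT deployment involves containerised tasks, authentication, and station-side policy checks that this sketch omits.

```python
# Minimal sketch of the Personal Health Train idea: the analysis task (the
# "train") visits each data source (a "station") and runs locally, so only
# aggregate results, never record-level data, leave a station.
# All station names and records are fabricated toy data.

STATIONS = {
    "hospital_a": [{"age": 54}, {"age": 61}, {"age": 47}],
    "hospital_b": [{"age": 70}, {"age": 66}],
}

def train_task(records):
    """Runs inside a station: return only aggregates (count and age sum)."""
    return {"n": len(records), "age_sum": sum(r["age"] for r in records)}

def run_train(stations):
    """Visit every station, collect aggregates, and combine them centrally."""
    totals = {"n": 0, "age_sum": 0}
    for records in stations.values():
        result = train_task(records)  # record-level data never leaves the station
        totals["n"] += result["n"]
        totals["age_sum"] += result["age_sum"]
    return totals["age_sum"] / totals["n"]

print(run_train(STATIONS))  # mean age across stations without pooling records
```

The central coordinator ends up with the global mean age, yet no individual patient record ever crosses a station boundary, which is the property that makes the approach attractive for sensitive health data.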
Journal Articles
Publisher: Journals Gateway
Data Intelligence (2020) 2 (1-2): 47–55.
Published: 01 January 2020
In order to provide responsible access to health data by reconciling the benefits of data sharing with privacy rights and ethical and regulatory requirements, Findable, Accessible, Interoperable and Reusable (FAIR) metadata should be developed. According to the H2020 Program Guidelines on FAIR Data, data should be “as open as possible and as closed as necessary”: “open” in order to foster reusability and to accelerate research, but at the same time “closed” to safeguard the privacy of the subjects. Additional provisions on the protection of natural persons with regard to the processing of personal data have been laid down by the European General Data Protection Regulation (GDPR), Reg (EU) 2016/679, which came into force in May 2018. This work aims to solve accessibility problems related to the protection of personal data in the digital era and to achieve responsible access to, and responsible use of, health data. We strongly suggest associating each data set with FAIR metadata describing both the type of data collected and the accessibility conditions, taking into account data protection obligations and ethical and regulatory requirements. Finally, an existing FAIR infrastructure component is used as an example to explain how FAIR metadata could facilitate data sharing while ensuring the protection of individuals.
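The suggestion to pair each data set with metadata describing both its content and its access conditions can be sketched as follows. The field names and condition vocabulary here are illustrative inventions, not drawn from any particular standard; real deployments would express such conditions in a machine-readable policy language.

```python
# Sketch of FAIR metadata that records both what a data set contains and the
# conditions under which it may be accessed. Field names, the example URI,
# and the condition vocabulary are all hypothetical.

DATASET_METADATA = {
    "identifier": "https://example.org/dataset/registry-1",
    "data_type": "pseudonymised patient registry",
    "access_condition": {
        "licence": "restricted",
        "requires": ["ethics_approval", "data_use_agreement"],
    },
}

def access_allowed(metadata, credentials):
    """Grant access only if every required condition is met by the requester."""
    required = set(metadata["access_condition"]["requires"])
    return required.issubset(credentials)

print(access_allowed(DATASET_METADATA, {"ethics_approval"}))  # False: DUA missing
```

Because the conditions live in the metadata, which can itself be openly findable, a researcher can discover a closed data set and learn exactly what is needed to request access, which is the "as open as possible, as closed as necessary" balance in practice.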