This paper focuses on research e-infrastructures in the open science era. We analyze some of the challenges and opportunities of cloud-based science and introduce an example of a national solution in the China Science and Technology Cloud (CSTCloud). We selected three CSTCloud use cases in deploying open science modules, including scalable engineering in astronomical data management, integrated Earth-science resources for SDG-13 decision making, and the coupling of citizen science and artificial intelligence (AI) techniques in biodiversity. We conclude with a forecast on the future development of research e-infrastructures and introduce the idea of the Global Open Science Cloud (GOSC). We hope this analysis can provide some insights into the future development of research e-infrastructures in support of open science.

Open science refers to an innovative sharing model in which the scientific community follows established rules to open and share various resources throughout the whole research lifecycle [1]. As a future-oriented model [2, 3], open science breaks the “closed” research system, extends the “openness principle” [4], and promotes systematic changes to the science community and the larger society [5]. It enables innovative approaches, in which diverse research resources can flow smoothly under appropriate management supported by a collaborative virtual research environment [6]. We thus see great potential embedded in robust research facilities for such a collaborative virtual research environment, and conversely may also experience challenges and threats from strained e-infrastructure service capabilities. It therefore makes sense to better understand the current development and future design of research e-infrastructures.

This paper takes a national research e-infrastructure, the CSTCloud in China, as an example. Case studies from CSTCloud are introduced and analyzed, followed by a discussion of lessons learned and future directions. We hope this CSTCloud case study can shed light on the development of research e-infrastructures in the evolving open-science landscape.

As one of the key pillars of open science [7], open science infrastructure refers to open and shared research facilities [8, 9]. According to the essential components assembled [10], e-infrastructures may be classified into network and computing infrastructures; research resources infrastructures (such as open-access infrastructures for research outputs, open scientific data and reproducible research infrastructures, and open science policy and evaluation infrastructures); integrated infrastructures; and disciplinary and thematic infrastructures.

Endeavors ranging from international organizations and national governments to regional and disciplinary research institutions have contributed to the construction of various types of e-infrastructures [11], with selected examples listed in Table 1. Different open science infrastructures share similar features: they aim to be federated, accessible, interconnected, and interoperable [7]. To be federated means that distributed research resources, services, and supporting infrastructures are operated across collaborating facilities, linked to one another by effective cloud solutions [12, 13]. Accessibility, also enshrined in the FAIR principles (findable, accessible, interoperable, and reusable) [14], refers to a better flow of research resources to the science community. International interconnectivity requires research resources to be shared across borders for scientific discovery. Interoperability refers to the capability of resources and services to communicate seamlessly at different levels, including interoperability of data, technologies, services, and policies [15, 16]. In the spirit of open science, these e-infrastructures highlight practices of open sharing technologies, open services (e.g., computing, network, and storage) and resources (e.g., data and publications), open governance, and an open research community.

This shift to an open-science research paradigm also brings challenges for e-infrastructures. For example, silos between e-infrastructures are still common, while the mutual trust mechanisms and technical interoperability [17] needed to bridge platforms remain works in progress. Imbalances in data scale and data value complicate scientific research [18], as in research on the Sustainable Development Goals (SDGs) [19]. Sharing different research resources [20] remains challenging because of fuzzy data, complex algorithms, and inconsistent software, as well as concerns over security, privacy, intellectual property rights, and more. Nor can the interaction between users and infrastructure deployment be neglected [21, 22]. CSTCloud faces similar challenges. This paper therefore analyzes the barriers in specific scenarios and illustrates possible solutions through case studies.

Table 1.
Examples of worldwide research e-infrastructures.
Categories | Examples in China | Examples Elsewhere
Network and computing infrastructures | |
Research resources infrastructures | |
Integrated information infrastructures | |
Disciplinary and thematic infrastructures | |

3.1 Overview of CSTCloud

As one of the national research e-infrastructures in China, CSTCloud aims to build robust, interoperable, and sustained research services featuring open-science solutions. The CSTCloud explores cutting-edge information technologies, such as big data management, cloud computing, and artificial intelligence, to support massive data transfer, curation, computation, visualization, publishing, and long-term preservation. It enables the use of open data and other research resources for scientific discovery. Federated strategies are implemented for the scheduling, management, and monitoring of research resources and service delivery within the Chinese Academy of Sciences (CAS) and across the country. CSTCloud collaborates with national, regional, and institutional data centers, disciplinary science clouds, and thematic demonstrations to provide services covering the IaaS (infrastructure as a service), PaaS (platform as a service), and SaaS (software as a service) layers (see Figure 1).

Figure 1. Overview of CSTCloud services.

3.2 Selected Challenges

CSTCloud provides resources and services to over one million users in CAS and across the country, and typical challenges have been identified throughout its service delivery processes. First, the gap between planned and expected service capacity for research e-infrastructures is growing. Research demands are increasing sharply, especially for Internet connectivity [23], data storage capacity [24], and computing. In many scientific and technological infrastructures, near-future expectations may be ten times higher than current service capabilities [25]. Limited funding, however, makes it necessary to re-design public platforms with improved service capabilities to meet the needs of these enormous research facilities. In astronomy, for example, the times of supernova explosions, the numbers of photons received, and the intensity of cosmic microwave radiation are all random, so continuous and timely observation and effective data analysis of all possible astronomical phenomena are necessary to ensure scientific discoveries, both large and small [26]. Second, robust technologies, such as flexible and scalable cloud services, are needed to advance a healthier and more capable research ecosystem that is adaptive to diverse research scenarios [27, 28]. Third, data management is becoming more complex. In data-intensive areas such as high-energy physics, life-cycle data management is necessary to support the operation of major research e-infrastructures for scientific discovery. Data management should therefore be driven by data characteristics, with exploration proceeding under a core data management framework, in configurable workflow systems, with proper strategies for cross-domain resource scheduling and cloud service orchestration. Fourth, distributed research resources require integration into one-stop, tailored services. Currently, the CSTCloud service catalogue provides links to disciplinary platforms, many of which are operated by third-party service providers in a distributed manner, so CSTCloud must maintain a dynamic management mechanism to support single sign-on services for end users. Fifth, social alignment should follow the open-science way. Fragmentation of open science still occurs across regions [29], so it is urgent to reach a broader consensus on multi-faceted open science visions, guiding principles, and normative frameworks based on shared values. E-infrastructures like CSTCloud should exploit their full potential to break silos and enhance social engagement in open science.

3.3 CSTCloud Supported Design and Development of Open Science Exemplars

To tackle the challenges above, several actions have been taken in CSTCloud, including enhanced authentication and authorization infrastructure (CSTCloud AAI) and the development of a cloud federation service platform, the CSTCloud Federation, to meet uneven e-infrastructure demand. Examples include the exploration of scalable algorithm engineering for complex astronomical data management, the integration of resources for tailored SDG-13 research, and the coupling of AI and citizen science for enhanced data training models in biodiversity.

3.3.1 CSTCloud AAI and CSTCloud Federation for Enhanced Service Capacity

To embrace open science, CSTCloud AAI has been re-designed as an evolving platform based on the CSTCloud Passport to deliver sustained identity authentication and authorization services, enabling access to open and convergent global resources. In addition, the CSTCloud Federation platform has been developed to manage distributed computing and storage resources across institutions for tailored science deployment. This cloud federation [30, 31] supports interoperable resource aggregation, helping to reduce costs and enhance service capability. The system already serves several projects in which distributed compute and storage resources are managed by the cloud system, with autonomous resource configuration carried out smoothly. End users can log in to the CSTCloud Federation platform and create on-demand virtual machines for tailored research, with resource scheduling and orchestration handled automatically by the platform. Unified operation monitoring and usage metrics also guarantee quality of service and help demonstrate social impact. Moreover, open-science practices are shaping the disciplinary showcases described below.
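As a rough illustration of the resource orchestration described above, the following Python sketch shows the kind of placement decision a federation layer might make when a user requests an on-demand virtual machine. The site names, capacities, and greedy most-free-capacity rule are assumptions for illustration only, not the CSTCloud Federation's actual scheduling policy or API.

```python
# Illustrative only: a toy scheduler in the spirit of federated resource
# orchestration. Site names, capacities, and the selection rule are
# hypothetical; the real platform's scheduling policy is not described here.
from dataclasses import dataclass


@dataclass
class Site:
    name: str
    total_vcpus: int
    used_vcpus: int = 0

    @property
    def free_vcpus(self) -> int:
        return self.total_vcpus - self.used_vcpus


def place_vm(sites: list[Site], requested_vcpus: int) -> Site:
    """Pick the member site with the most free capacity that can host the VM."""
    candidates = [s for s in sites if s.free_vcpus >= requested_vcpus]
    if not candidates:
        raise RuntimeError("no federated site can satisfy the request")
    best = max(candidates, key=lambda s: s.free_vcpus)
    best.used_vcpus += requested_vcpus  # reserve the resources on that site
    return best


if __name__ == "__main__":
    federation = [Site("institute-a", 512), Site("institute-b", 256, used_vcpus=200)]
    site = place_vm(federation, requested_vcpus=64)
    print(f"VM scheduled on {site.name}, {site.free_vcpus} vCPUs left")
```

In a real federation the same decision would also weigh data locality, network cost, and site policies; the sketch only captures the basic idea of a shared resource pool under unified orchestration.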

3.3.2 Case 1: Scalable Cloud Services in FRB research

Open astronomical data, as in the case of FAST (the Five-hundred-meter Aperture Spherical radio Telescope) [32], enables significant research outputs worldwide, such as searches for possible exomoons [33] and pulsars [34] and the detection of ultra-high-energy particles [35]. The collaboration between CSTCloud and the FAST team seeks effective ways of handling complex data for fast radio burst (FRB) research on large-scale distributed e-infrastructures. To this end, a cloud service platform named ScaleBox has been developed jointly, with tailored algorithms developed and validated on data samples on scientists' laptops. Boosted by algorithm engineering in ScaleBox, large-scale streaming data processing and federated data learning are then carried out smoothly by computing facilities at remote sites (see Figure 2). To support the astronomical data workflow in an open-science manner, ScaleBox connects CSTCloud computing clusters and deploys the CENI (China Environment for Network Innovations) 100G network to ensure timely large-scale data transfer and computing. The new research workflow, covering optimized management of existing FAST radio bursts, pulsar search experiments, and data from telescope observations, runs steadily to support FAST data distribution, on-line processing, computing, and archiving. It has reduced redundant data distribution and enhanced the efficiency and efficacy of data processing.
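To make the block-wise, scale-out idea concrete, here is a minimal Python sketch of streaming detection over data blocks distributed to parallel workers. The block size, the z-score threshold, the synthetic data, and the injected burst are all assumptions for illustration; this is not ScaleBox's actual pipeline or the FAST FRB search code.

```python
# A schematic of block-wise streaming detection: an algorithm validated on a
# small sample is applied unchanged to a stream of blocks fanned out to workers.
from multiprocessing import Pool

import numpy as np

BLOCK_SAMPLES = 4096     # samples per streaming block (assumed)
THRESHOLD_SIGMA = 6.0    # detection threshold in standard deviations (assumed)


def stream_blocks(n_blocks: int, rng: np.random.Generator):
    """Stand-in for the data stream coming off the telescope or an archive."""
    for i in range(n_blocks):
        block = rng.normal(size=BLOCK_SAMPLES)
        if i == 3:                       # inject one artificial burst
            block[1000] += 12.0
        yield i, block


def find_candidates(item):
    """Per-block work that could run on any federated compute node."""
    idx, block = item
    z = (block - block.mean()) / block.std()
    hits = np.flatnonzero(z > THRESHOLD_SIGMA)
    return idx, hits.tolist()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    with Pool(processes=4) as pool:
        for idx, hits in pool.imap(find_candidates, stream_blocks(8, rng)):
            if hits:
                print(f"block {idx}: candidate samples {hits}")
```

The point of the pattern is that the per-block function stays identical whether it runs on a laptop sample or across remote clusters; only the dispatch layer scales.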

Figure 2. Scalable cloud services for algorithm engineering in ScaleBox.

3.3.3 Case 2: Convergence of Research Resources to Support SDG Research

The SDGs, adopted in 2015 by all United Nations member states [36], have been taken as one of the priorities of the CASEarth project [37]. As part of this activity, the CSTCloud team has been developing an SDG-13 system to effectively integrate and manage multi-source data, algorithms, and tools for on-demand SDG-13 service delivery. A first demonstration system focusing on Southeast Asia is currently under construction (see Figure 3). Contributed by different institutions, the joint cloud system includes a remote-sensing monitoring system for disaster mitigation and selected projection models and algorithms for both long-term and real-time climate prediction. Centralized computing facilities in CSTCloud and distributed data entities and algorithms from institutions are connected by APIs for on-line deployment.
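The following hypothetical Python sketch illustrates that integration pattern: a single entry point fans out to partner institutions' services over HTTP and merges their answers into one SDG-13 report. The endpoint URLs, parameters, and response fields are invented placeholders; the real SDG-13 system's APIs are not documented here.

```python
# Hypothetical sketch of pulling distributed institutional services together
# behind one entry point; the endpoints below are placeholders, not real APIs.
import concurrent.futures

import requests

SERVICES = {
    "disaster_monitoring": "https://example.org/api/remote-sensing/summary",
    "climate_projection": "https://example.org/api/projection/run",
}


def query_service(name: str, url: str, region: str) -> tuple[str, dict]:
    """Call one partner API and return its JSON result for the region."""
    resp = requests.get(url, params={"region": region}, timeout=30)
    resp.raise_for_status()
    return name, resp.json()


def build_sdg13_report(region: str) -> dict:
    """Fan out to all partner services and merge their answers into one view."""
    report = {"region": region}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(query_service, n, u, region) for n, u in SERVICES.items()]
        for fut in concurrent.futures.as_completed(futures):
            name, payload = fut.result()
            report[name] = payload
    return report


if __name__ == "__main__":
    print(build_sdg13_report("southeast-asia"))
```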

Figure 3. The preliminary technical framework for the SDG-13 case study.

3.3.4 Case 3: Artificial Intelligence and Citizen Science in Wildlife Monitoring

Dynamic changes in wildlife and their habitats are important to research and operations in nature reserves. To handle large-scale, real-time monitoring data precisely for decision making, CSTCloud collaborates with the Guangdong Chebaling Nature Reserve of China, deploying artificial intelligence technology and citizen science [38]. As shown in Figure 4, in the first stage images and videos are captured automatically by infrared cameras. The data are then transferred frequently to the cloud service platform through a 700 MHz FDD-LTE network of four base stations covering 91 km² of the Chebaling Nature Reserve. The cloud platform is maintained remotely by the CSTCloud team. Image recognition and video content analysis then help track ecological activities. Reflecting the role of citizens in this project, crowdsourcing is also included for data capture, in addition to the 700,000 valid images captured by infrared cameras. Citizen science enlarges the image and video pool and helps offset the limitations of existing sampling methods, thus contributing to feature model training.
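One plausible way to couple the AI pipeline with citizen review is sketched below in Python: confident model predictions are accepted directly, uncertain images are routed to volunteers, and citizen labels take precedence when present. The confidence cut-off and record layout are assumptions, not the Chebaling system's actual design.

```python
# A toy sketch of merging camera-trap model predictions with citizen labels
# into a training set; thresholds and record fields are illustrative only.
CONFIDENCE_CUTOFF = 0.85  # assumed threshold for accepting a model label


def merge_labels(model_outputs: list[dict], citizen_labels: dict[str, str]) -> list[dict]:
    """Build training records from camera-trap predictions plus citizen review."""
    training_set, review_queue = [], []
    for rec in model_outputs:
        image, label, conf = rec["image"], rec["label"], rec["confidence"]
        if image in citizen_labels:                 # a volunteer already answered
            training_set.append({"image": image, "label": citizen_labels[image]})
        elif conf >= CONFIDENCE_CUTOFF:             # model is confident enough
            training_set.append({"image": image, "label": label})
        else:                                       # route to citizen scientists
            review_queue.append(image)
    print(f"{len(review_queue)} images routed to citizen review")
    return training_set


if __name__ == "__main__":
    outputs = [
        {"image": "cam04_0001.jpg", "label": "muntjac", "confidence": 0.97},
        {"image": "cam04_0002.jpg", "label": "unknown", "confidence": 0.41},
    ]
    print(merge_labels(outputs, citizen_labels={"cam04_0002.jpg": "silver pheasant"}))
```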

Figure 4. Wildlife monitoring framework in the Chebaling Nature Reserve.

Table 2 summarizes the CSTCloud exemplars practicing open science. Based on CSTCloud AAI and the CSTCloud Federation, the FAST example in astronomy illustrates how we manage large-scale distributed research e-infrastructures with complex data flows: CSTCloud re-designed the workflow with scaled-up solutions so that astronomical data can run between labs, computing clusters, and data-capturing facilities. Supported by regional dataset validation, the SDG-13 case integrates multiple sources of data and algorithms into tailored cloud services. Inviting citizen science for data capture to enhance AI model performance in Case 3 provides another example of open-science engagement. These cases, however, cover only part of open-science practice. For open governance, an advisory board and a user committee are included as compulsory parts of a governance model that highlights openness.

Table 2.
Selected challenges and solutions in CSTCloud case studies.
Categories | Selected challenges | Stakeholders | Key actions
Open technologies | Expanding infrastructure development demand and robust technical requests | Infrastructure-level stakeholders, such as CSTCloud and other open science facilities | CSTCloud AAI, CSTCloud Federation, unified monitoring & metrics system
Open resources & services | Complex data management | Resource-level stakeholders, such as data curators and end users | Re-design of the research workflow and algorithm engineering (i.e., Case 1)
Open resources & services | Distributed resources vs. one-stop services required | Service-level stakeholders, among which CNIC is a key player in CAS | CSTCloud AAI, resource integration for single sign-on services (i.e., Case 2)
Open community | Social engagement | The whole research community | Social and international engagement (i.e., Case 3)
Open governance | Tailored service design and delivery in the spirit of open science | All stakeholders in the governance model | Governance model highlighting both insiders and potential outsiders, such as the advisory board and user committee

Although there are no "one-size-fits-all" solutions for open science infrastructures, we can still draw lessons from these cases. First, the expanding demand for research facilities can never be measured precisely or met in full, so cloud federation can be implemented to bridge the gaps. Demands may be exaggerated beyond actual requirements, constrained by unexpected construction conditions, or under-estimated in terms of their longer-term contributions. To guarantee the robustness of future-oriented facilities, public e-infrastructures should be enhanced to provide backup options, such as a cloud federation that connects different facilities into a shared resource pool. Under unified resource orchestration, short-term jobs are sent to whichever facilities are currently available and released for other jobs when necessary. Direct investment in facilities can be partly saved by sharing resources and services between e-infrastructures, especially to meet unsteady demand.

Second, systematic approaches and comprehensive solutions should be taken to plan and develop every infrastructure module so as to facilitate the overall research workflow. For complex data management, one efficient approach is to design effective workflows that reduce redundant management work, thereby facilitating transparent access and smart analysis of different resources.

Third, innovative ways should be explored to promote the sharing of diverse research resources. The coupling of AI and citizen science is one way; federated learning is another. In a federated model, data holders can retain their data for particular concerns, such as security and privacy, while contributing to the research by training sub-models. All sub-models are then integrated through the cloud into an enhanced global model (see the sketch below). Federated computing strengthens the scalability of the model training framework while ensuring appropriate data control [39], and is being tested in CSTCloud SDG research. Furthermore, interoperable persistent identifiers should be deployed to cover all digital research resources so that the interconnections between these resources can be described and managed appropriately.

Fourth, incentives should always be ready to promote sustained open science models. For instance, the implementation of cloud federation should apply loose coupling strategies [40, 41] to retain independence for each e-infrastructure. Contributions to the federated resource pool must be precisely monitored and measured, with appropriate reward mechanisms such as offset funding, payback, long-term investment, priority in future deployments supported by the federated resource pool, research reputation, and other social impacts. Diverse interests may be embedded in implementing open science models, and robust incentive mechanisms should include reward systems for all stakeholders. In addition, compound interoperability solutions should include both FAIR policy design and trustworthy technology development to increase the inclusiveness and usability of the collaborative research environment supported by digital research infrastructures.
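A minimal federated-averaging sketch of the sub-model idea above: each data holder fits a model on its own data and shares only the parameters, which the coordinating cloud combines weighted by sample count. The linear model, synthetic data, and single aggregation round are simplifying assumptions for illustration, not the implementation being tested in CSTCloud.

```python
# Minimal federated averaging: local fits stay on-site, only weights are shared.
import numpy as np


def local_fit(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Each institution trains on its own data; only the weights leave the site."""
    return np.linalg.lstsq(X, y, rcond=None)[0]


def federated_average(weights: list[np.ndarray], n_samples: list[int]) -> np.ndarray:
    """Cloud-side aggregation: sample-count-weighted average of the sub-models."""
    total = sum(n_samples)
    return sum(w * (n / total) for w, n in zip(weights, n_samples))


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    true_w = np.array([2.0, -1.0, 0.5])
    sites = []
    for n in (200, 800):                 # two data holders of unequal size
        X = rng.normal(size=(n, 3))
        y = X @ true_w + 0.1 * rng.normal(size=n)
        sites.append((X, y))
    local = [local_fit(X, y) for X, y in sites]
    global_w = federated_average(local, [len(y) for _, y in sites])
    print("aggregated weights:", np.round(global_w, 3))
```

Real deployments iterate this exchange over many rounds and add safeguards such as secure aggregation; the sketch only shows why raw data never needs to leave the contributing institution.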

To facilitate open science e-infrastructure development in the long run, and particularly to help bridge different open science platforms on the basis of mutual trust, the idea of co-designing and co-developing a Global Open Science Cloud (GOSC) was proposed in 2019 and is being implemented by CODATA and the CAS Computer Network Information Center [42, 43]. The objective of the GOSC Initiative is to foster international collaboration and alignment among open science cloud activities, in a robust network of trusted research e-infrastructures connecting digital research resources and all stakeholders. It can thus enable innovative scientific discovery in the evolving open science environment among global, regional, national, and institutional participants. Existing research infrastructures will undoubtedly serve as foundations for the future open science cloud, while further studies will explore the underlying technologies, policies, and governance.

This work has been supported by funding from the National Key R&D Program of China (No.2021YFE0111500), the National Natural Science Foundation of China (No.72104229), the CAS Program for fostering international mega-science (No.241711KYSB20200023), and the CAS President's International Fellowship Initiative (No.2021VTA0006). We wish to express our thanks to the CODATA GOSC Steering Group members for providing insightful ideas on developing the GOSC Initiative, with a special thanks to the GOSC SDG-13 case study.

Lili Zhang drafted the paper; Jianhui Li and Paul Uhlir revised the paper; Liangming Wen conducted the literature review; Kaichao Wu, Ze Luo, and Yude Liu provided facts and carried out the case studies.

CSTCloud AAI, https://aai.cstcloud.net

CSTCloud Federation, https://fed.cstcloud.cn

[1] Watson, M.: When will 'Open Science' become simply 'Science'? Genome Biology 16(1), 101 (2015). https://doi.org/10.1186/s13059-015-0669-2
[2] Mirowski, P.: The future(s) of open science. Social Studies of Science 8(2), 171–203 (2018). https://doi.org/10.1177/0306312718772086
[3] National Academies of Sciences, Engineering, and Medicine: Open Science by Design: Realizing a Vision for 21st Century Research. The National Academies Press, Washington, DC (2018). https://doi.org/10.17226/25116
[4] Saenen, B., et al.: Research Assessment in the Transition to Open Science: 2019 EUA Open Science and Access Survey Results. Available at: https://eua.eu/downloads/publications/research%20assessment%20in%20the%20transition%20to%20open%20science.pdf (2019). Accessed 22 January 2022
[5] Rentier, B.: Open science: a revolution in sight? Interlending & Document Supply 44(4), 155–160 (2016). https://doi.org/10.1108/ILDS-06-2016-0020
[6] Paul, M., et al.: Mapping heterogeneous research infrastructure metadata into a unified catalogue for use in a generic virtual research environment. Future Generation Computer Systems 101, 1–13 (2019). https://doi.org/10.1016/j.future.2019.05.076
[7] United Nations Educational, Scientific and Cultural Organization (UNESCO): UNESCO Recommendation on Open Science. Available at: https://en.unesco.org/science-sustainable-future/open-science/recommendation (2021). Accessed 11 September 2022
[8] Munro, C., et al.: Towards an open infrastructure for relating scholarly assets. Studies in Health Technology and Informatics 235, 491–495 (2017). https://doi.org/10.3233/978-1-61499-753-5-491
[9] SPARC Europe: Scoping the Open Science Infrastructure Landscape in Europe. Available at: https://zenodo.org/record/4159838 (2020, 30 October). Accessed 22 January 2022
[10] Ribes, D.: The kernel of a research infrastructure. In: Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, 574–587 (2014). https://doi.org/10.1145/2531602.2531700
[11] Zhao, Z., Huang, J.: Analysis of information resource construction mode of open science infrastructures. Library Development 44(3), 46–55 (2021). https://doi.org/10.19764/j.cnki.tsgjs.20201355
[12] Grossman, R.L., et al.: An overview of the Open Science Data Cloud. In: Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing, 377–384 (2010). https://doi.org/10.1145/1851476.1851533
[13] Buyya, R., Ranjan, R., Calheiros, R.N.: InterCloud: utility-oriented federation of cloud computing environments for scaling of application services. Lecture Notes in Computer Science 6081, 13–31 (2010). https://doi.org/10.1007/978-3-642-13119-6_2
[14] Mons, B., et al.: The FAIR principles: first generation implementation choices and challenges. Data Intelligence 2(1-2), 1–9 (2020). https://doi.org/10.1162/dint_e_00023
[15] Cloud Computing Standards Committee of the IEEE Computer Society: IEEE Guide for Cloud Portability and Interoperability Profiles (CPIP) (2020). Available at: https://standards.ieee.org/ieee/2301/5077/. Accessed 11 September 2021
[16] Ide, N., Pustejovsky, J.: What does interoperability mean, anyway? Toward an operational definition of interoperability for language technology. In: Proceedings of the Second International Conference on Global Interoperability for Language Resources, Hong Kong, China (2020)
[17] de Jong, et al.: Interoperability in an infrastructure enabling multidisciplinary research: the case of CLARIN. In: Proceedings of the 12th Language Resources and Evaluation Conference, 3406–3413. European Language Resources Association, Marseille, France (2020)
[18] Cook, I., Grange, S., Eyre-Walker, A.: Research groups: how big should they be? PeerJ 3, e989 (2015). https://doi.org/10.7717/peerj.989
[19] Bexell, M., Jönsson, K.: Responsibility and the United Nations' Sustainable Development Goals. Forum for Development Studies 44(1), 13–29 (2017). https://doi.org/10.1080/08039410.2016.1252424
[20] Yang, X., et al.: Cloud computing in e-Science: research challenges and opportunities. The Journal of Supercomputing 70(1), 408–464 (2014). http://doi.org/10.1007/s11227-014-1251-5
[21] Voss, A., et al.: Adoption of e-infrastructure services: configurations of practice. Philosophical Transactions of the Royal Society A 368, 4161–4176 (2010). http://doi.org/10.1098/rsta.2010.0162
[22] Barjak, F., et al.: Case studies of e-infrastructure adoption. Social Science Computer Review 27(4), 583–600 (2009). http://doi.org/10.1177/0894439309332310
[23] Ji, C., Yu, Y., Liu, Z.: Binary pulsar system acceleration search method and software improvement. Astronomical Research & Technology 2, 103–110 (2022)
[24] Zhang, H., et al.: A data processing acceleration method and system for FAST petabyte pulsar data processing. Astronomical Research & Technology 1, 129–137 (2021)
[25] Xie, G., Li, J., Liu, Y.: Introduction to an information service infrastructure for scientific big data fusion. CAS "14th Five-Year Plan" project meeting, 7 June (2021)
[26] Eggl, S., et al.: Dealing with uncertainties in asteroid deflection demonstration missions: NEOTwIST. In: Proceedings of the International Astronomical Union 10(S318), 231–238 (2015). https://doi.org/10.1017/S1743921315008698
[27] Firesmith, D.: System resilience: what exactly is it? Available at: https://insights.sei.cmu.edu/blog/system-resilience-what-exactly-is-it/ (2019, 25 November). Accessed 22 January 2022
[28] Moreno-Vozmediano, R., et al.: Implementation and provisioning of federated networks in hybrid clouds. Journal of Grid Computing 15(2), 141–160 (2017). https://doi.org/10.1007/s10723-017-9395-1
[29] Paul, A.B., et al.: Open Science for a Global Transformation. Available at: https://zenodo.org/record/3935461#.YCCvjzHis2w (2020, 8 July). Accessed 22 January 2022
[30] Villegas, D., et al.: Cloud federation in a layered service model. Journal of Computer and System Sciences 78(5), 1330–1344 (2012). https://doi.org/10.1016/j.jcss.2011.12.017
[31] Kurze, T., et al.: Cloud federation. In: Cloud Computing, 32–38 (2011)
[32] FAST: FAST Data Center Service Standard. Available at: https://fast.bao.ac.cn/cms/article/88/. Accessed 3 March 2022
[33] Lukic, D.V.: Search for possible ExoMoons with FAST telescope. Research in Astronomy and Astrophysics 17(12), 303–306 (2018). https://doi.org/10.1088/1674-4527/17/12/121
[34] Feng, Y., et al.: A single-pulse study of PSR J1022+1001 using the FAST radio telescope. The Astrophysical Journal 908(1), 105 (2021). https://doi.org/10.3847/1538-4357/abd326
[35] James, C.W., Bray, J.D., Ekers, R.D.: Prospects for detecting ultra-high-energy particles with FAST. Research in Astronomy and Astrophysics 19(2), 19 (2019). https://doi.org/10.1088/1674-4527/19/2/19
[36] United Nations Development Programme: What are the Sustainable Development Goals? Available at: https://www.undp.org/sustainable-development-goals. Accessed 11 September 2022
[37] Guo, H.: Big Earth data: a new frontier in Earth and information sciences. Big Earth Data 1(1-2), 4–20 (2017)
[38] Cheng, X., et al.: Data science and computing intelligence: concept, paradigm, and opportunities. Bulletin of Chinese Academy of Sciences 35(12), 1470–1481 (2020). https://doi.org/10.16418/j.issn.1000-3045.20201116005
[39] Samarakoon, S., et al.: Distributed federated learning for ultra-reliable low-latency vehicular communications. IEEE Transactions on Communications 68(2), 1146–1159 (2020). https://doi.org/10.1109/TCOMM.2019.2956472
[40] Margheri, A., et al.: A distributed infrastructure for democratic cloud federations. In: IEEE International Conference on Cloud Computing, 688–691 (2017). https://doi.org/10.1109/CLOUD.2017.93
[41] Lee, C.A., Bohn, R.B., Michel, M.: The NIST Cloud Federation Reference Architecture. Available at: https://doi.org/10.6028/NIST.SP.500-332 (2020). Accessed 22 January 2022
[42] China Science and Technology Cloud (CSTCloud): Global Open Science Cloud. Available at: https://www.cstcloud.net/gosc.htm. Accessed 22 January 2022
[43] CODATA: Invitation to Collaborate on the Global Open Science Cloud Initiative. Available at: https://codata.org/initiatives/decadal-programme2/global-open-science-cloud/. Accessed 17 October 2021
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.