Lung infiltration is a non-communicable condition in which materials denser than air accumulate in the parenchyma tissue of the lungs. Infiltration can be hard to detect in an X-ray scan even for a radiologist, especially at the early stages, which contributes to its being a leading cause of death. In response, several deep learning approaches have been developed to address this problem. This paper proposes the Slide-Detect technique, a Deep Neural Network (DNN) model based on Convolutional Neural Networks (CNNs) that is trained to diagnose lung infiltration with an Area Under Curve (AUC) of up to 91.47% and an accuracy of 93.85%, at relatively low computational cost.

Lung infiltration [3] is the existence of materials with higher density than air within the parenchyma tissue [22] of the lungs. These materials range from protein, pus, blood, surfactant and oedema to foreign cells [12]. Diagnosing such disorders is quite complex because clinicians rely on detecting hard-to-locate radiological abnormalities and on checking the accompanying clinical symptoms, which can be confusing and misleading. The infiltration condition is non-resolving, slow-progressing and has no accurately defined radiological features [21]. Figure 1 shows an X-ray scan of non-small-cell lung infiltration.

Figure 1.

Non-small-cell lung infiltration X-ray radio scan.


Despite the vast advances and efforts in medical science, the survival rates of lung cancer remain very low, making it a leading cause of death [1]. At some point, even resection is no longer effective unless the condition is recognized at an early stage, i.e. infiltration [10]. Aside from lung cancer, infiltration has recently been associated with asthma conditions [33].

Infiltration in lung X-ray scans [32] can be very challenging to interpret, as dense materials such as bones appear with higher pixel values than the lungs, as shown in figure 1. X-ray scans are 2D images, not volume scans like Magnetic Resonance Imaging (MRI). These facts make the infiltration condition, where the bones of the thoracic cage surround the lungs, rather difficult to distinguish, even for a radiologist.

DNN models can effectively extract the spatial features of an image. Consequently, CNNs perform very well in image classification, especially in Computer-Aided Diagnosis (CAD) [20] [6].

Although CNNs perform very well in CAD, it is hard to train a network to detect a tiny segment of infiltration in a typical X-ray scan, whose resolution is on the order of 2800×2800 pixels. Thus, the techniques in the literature had to use very deep CNN architectures to detect the presence of the unusual substance within the parenchyma tissue. Very deep CNN architectures introduce issues such as over-fitting, high training cost due to the vast number of parameters to tune, and slow inference due to the many matrix multiplications required to classify an image.

Deep neural networks are excellent pattern detectors; however, finding a visually undefined object (i.e. infiltration) covered by bones in large 2D images can be challenging. The Slide-Detect approach addresses this problem by concentrating the training dataset on the image segments known to be infected, thus making the infiltration pattern clearer for the DNN to detect.

The rest of the paper is organized as follows: The “Related Work” section reviews the different techniques for automating the detection of lung infiltration in X-ray scans. The “Methodology” section details the proposed Slide-Detect technique. The “Results” section presents the results of running Slide-Detect on the “ChestXray-NIHCC” dataset, together with a discussion of the results and a comparison with the state-of-the-art techniques. The “Conclusion” section concludes the work. Finally, the “Future Work” section discusses the limitations of the proposed methodology that can be addressed in future research.

Singhal et al. [30] tested the performance of state-of-the-art deep architectures in training a classification model for several lung conditions: “Atelectasis” [27], “Cardiomegaly” [8], “Effusion” [9], “Infiltration” [3], “Mass” [24], “Nodule” [34], “Pneumonia” [28] and “Pneumothorax” [23]. The tested DNN architectures are: AlexNet [36], a deep CNN with 5 convolutional layers and 3 fully connected multi-layer perceptron layers [7]; GoogLeNet [31], a 22-layer DNN introduced by Google in 2015; VGGNet-16 [37], a 16-layer model that takes a 224×224 RGB image as input; and ResNet-50 [11], a 50-layer residual network (the deepest variant of the ResNet family has 152 layers). Table 1 compares the performance of these architectures. The highest AUC score was 61.27%, achieved by the ResNet-50 architecture.

Table 1.

Performance of standard DNN architectures.

Network          AUC
AlexNet [30]     60.40%
GoogLeNet [30]   60.87%
VGGNet-16 [30]   58.95%
ResNet-50 [30]   61.27%

Liang et al. [18] designed a two-branch CNN called “dense networks with relative location awareness for thorax disease identification”. The first branch is a U-net [29] that masks and segments the lungs and heart from the X-ray scan, then produces a relative-location map of the generated masks based on the Euclidean distances among them. The other branch is a 121-layer DenseNet that takes the resized image (256×256 pixels) along with the extracted Euclidean distance map as input and produces the classification. The two branches are fused together. This method achieved 70.9% AUC in detecting the infiltration condition.

Ho et al. [13] proposed an ensemble feature extraction method that combines 5 techniques working in two branches. The first branch is composed of four shallow feature extraction techniques used in succession: SIFT [19], which decomposes the structural features of an image patch; GIST [26], which extracts the orientation and scale of the different objects in an image; LBP [25], which extracts the texture features of different objects in an image; and HOG [5], which extracts the histogram-based features within an image. This branch runs in parallel with a 121-layer pre-trained CNN. The outputs of the two branches are combined in a feature integration stage, whose outputs are fed to a supervised machine learning algorithm [17] to make the final classification. The AUC of this method for detecting lung infiltration is 70.3%.

Allaouzi et al. [2] proposed a method composed of three stages. Among the classifiers they tested, the Binary Relevance (BR) classifier was the fastest and achieved the highest AUC of 87%.

Kavyashree et al. [15] proposed a technique using a MobileNet CNN to classify (diagnose) lung conditions, including lung infiltration, on the “ChestXray-NIHCC” dataset [35]. MobileNet was preferred because it was designed to run effectively on embedded devices. The proposed network had over 3 million parameters to optimize and resulted in an AUC of 57%.

Chen et al. [4] proposed a two-stream collaborative network for lung condition classification. Its first stage segments each image and extracts the lungs themselves into a new image using a U-net [14]. The resulting AUC was 75.1%.

The techniques proposed in the literature [30], [13], [2], [4] and [15] were mainly based on two assumptions:

  1. One model can fit all 14 lung diseases found in the dataset.

  2. The exact same features can be used to diagnose all the diseases.

This has led to low performance in terms of AUC, especially for the lung infiltration disease. Using features like spatial distances or geometrical shapes does not yield the best results, because the key to detecting lung infiltration is density. Hence, a relatively deep neural network had to be used, which contributed to considerably slow performance.

This section discusses the proposed Slide-Detect technique and the methodology adopted for implementing and evaluating it. The approach is composed of four successive stages. The “Pre-Processing” section illustrates the procedure adopted to pre-process the “ChestXray-NIHCC” dataset. The “Feature Extraction” section shows the procedure for extracting the relevant features of the images in preparation for the classification process. The “DNN” section details the architecture and training of the classifier. Finally, the “Testing Procedure” section reviews the algorithm used to evaluate the performance of the Slide-Detect technique.

3.1 Pre-Processing

Different X-ray machines ship with different sensors, X-ray emitters, etc. Thus, the produced images have different pixel thresholds and spectrum responses. To address these problems, the images are normalized, then converted to 8-bit RGB images and saved. Algorithm 1 illustrates the image normalization procedure.

Algorithm 1.

Normalization algorithm

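Algorithm 1 appears in the paper only as a figure. A minimal sketch of the per-image normalization it describes, assuming simple min-max scaling to 8-bit RGB (the paper's exact per-equipment thresholding is not given), might look like this:

```python
import numpy as np
from PIL import Image

def normalize_to_rgb8(scan: np.ndarray) -> Image.Image:
    """Min-max normalize a raw scan to [0, 255] and convert to 8-bit RGB.

    A sketch of the normalization Algorithm 1 describes; the paper's exact
    thresholding may differ.
    """
    scan = scan.astype(np.float32)
    lo, hi = scan.min(), scan.max()
    scaled = (scan - lo) / (hi - lo + 1e-8) * 255.0  # guard against flat images
    gray = Image.fromarray(scaled.astype(np.uint8), mode="L")
    return gray.convert("RGB")  # replicate the single channel into 8-bit RGB
```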

After that, a series of rotations, translations, re-scalings, flips and zoom operations is applied to both the control and sample datasets, and the results are added to their corresponding datasets together with the original images to improve the classifier's capabilities. These operations are conducted using the ImageDataGenerator package of the Keras [16] platform. At this step, it is important to keep the sample and control datasets balanced.
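
Since the paper states that these operations use Keras's ImageDataGenerator, a configuration along the lines described could look as follows (the specific ranges below are illustrative assumptions, not values from the paper):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rotations, translations, re-scaling, flips and zooms, as described above.
# The concrete ranges are assumptions chosen for illustration only.
augmenter = ImageDataGenerator(
    rotation_range=10,       # small random rotations
    width_shift_range=0.1,   # horizontal translations
    height_shift_range=0.1,  # vertical translations
    rescale=1.0 / 255,       # pixel re-scaling
    horizontal_flip=True,    # random flips
    zoom_range=0.1,          # random zoom in/out
)

# Reading the two classes from separate sub-directories makes it easy to
# verify that the sample and control datasets stay balanced.
train_flow = augmenter.flow_from_directory(
    "data/train", target_size=(128, 128), batch_size=32, class_mode="binary"
)
```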

3.2 Feature Extraction

Learning and training are most effective when they are most concentrated. Thus, instead of training the DNN model to find a vague object, the training uses the bounding boxes located in the “BBox List 2017.csv” file of the dataset: images with positive infiltration labels are cropped around the infected area to create a sample dataset, as shown in algorithm 2.

Algorithm 2.

Sample dataset creation algorithm

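Algorithm 2 is likewise given as a figure. A sketch of the cropping it performs, assuming the bounding-box columns give the top-left corner plus width and height (the column names below follow the NIH release and may need adjusting for a local copy), could be:

```python
import pandas as pd
from PIL import Image

def create_sample_dataset(bbox_csv: str, image_dir: str, out_dir: str) -> None:
    """Crop positively labeled scans around their annotated infiltration
    bounding boxes (a sketch of Algorithm 2)."""
    boxes = pd.read_csv(bbox_csv)
    infiltrated = boxes[boxes["Finding Label"] == "Infiltration"]
    for _, row in infiltrated.iterrows():
        image = Image.open(f"{image_dir}/{row['Image Index']}")
        # Assumed layout: top-left (x, y) plus width w and height h.
        x, y, w, h = row["Bbox [x"], row["y"], row["w"], row["h]"]
        image.crop((x, y, x + w, y + h)).save(f"{out_dir}/{row['Image Index']}")
```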

To create a control class dataset, the healthy-labeled images are randomly cropped, as shown in algorithm 3. As a final preparation step, the datasets are divided into training and testing sets, each with sample and control subsets, as shown in algorithm 4.

Algorithm 3.

Creating a control class dataset

Algorithm 4.

Creating training and testing datasets

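Algorithms 3 and 4 are also shown as figures. A minimal sketch of the random control crop and the train/test split, assuming 128×128 crops (the patch size used at inference) and an 80/20 split ratio that the paper does not state, might be:

```python
import random
from pathlib import Path
from PIL import Image

CROP = 128  # matches the (128x128) patches used at inference time

def random_control_crop(path: Path) -> Image.Image:
    """Randomly crop a healthy-labeled scan (a sketch of Algorithm 3)."""
    image = Image.open(path)
    x = random.randint(0, image.width - CROP)
    y = random.randint(0, image.height - CROP)
    return image.crop((x, y, x + CROP, y + CROP))

def train_test_split(paths: list, test_fraction: float = 0.2):
    """Shuffle and split a class's images into training and testing sets
    (a sketch of Algorithm 4); the 80/20 ratio is an assumption."""
    shuffled = random.sample(paths, len(paths))
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]
```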

3.3 DNN

A 2-step pipeline is created: the first step resizes the input image to (128×128×3); the second feeds the resulting image to a 5-layer DNN. As shown in figure 2, the DNN is composed of 3 convolutional layers, each coupled with a max-pooling layer. After that, the network is flattened into a 2-layer dense Multi-Layer Perceptron (MLP) with 0.2 dropout at the last hidden layer. The optimizer used is Adam [38].

Figure 2.

Visual representation of the DNN model.

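A Keras sketch consistent with this description has three convolution + max-pooling blocks, a flatten, and a two-layer dense head with 0.2 dropout at the last hidden layer; the filter counts and kernel sizes below are assumptions, as the paper does not list them:

```python
from tensorflow.keras import Sequential, layers

model = Sequential([
    layers.Input(shape=(128, 128, 3)),  # resized input image
    # Three convolutional layers, each coupled with a max-pooling layer.
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    # Two-layer dense MLP with 0.2 dropout at the last hidden layer.
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),  # infiltrated vs. healthy patch
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
```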

Figure 3 is a block diagram that shows how the proposed Slide-Detect method processes the data (scanned images). First, the entire dataset is normalized. Then, the data classes are separated. After that, the image portions are created (each class separately). Finally, the transformations are applied before training the CNN classifier. These transformations preserve density, thus increasing the quality of the training data for lung infiltration diagnosis.

Figure 3.

Block diagram showing the Slide-Detect approach.


Figure 4 illustrates the technique used to classify new, unseen images. First, the new image is normalized. Then, a striding window is used to simulate the image cropping of the training phase. Finally, each window is fed to the CNN classifier to make a decision: if the classifier classifies 5 consecutive portions as positive, the image is classified as a sample (infiltrated) image.

Figure 4.

Block diagram showing how the Slide-Detect model is run.

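A sketch of this striding-window inference, assuming a 64-pixel stride (the paper does not state the stride) and the five-consecutive-positives rule of figure 4:

```python
import numpy as np

WINDOW, STRIDE = 128, 64  # the stride is an assumption; the window size is not

def classify_scan(image: np.ndarray, model, threshold: float = 0.5) -> bool:
    """Slide a 128x128 window over a normalized scan; flag the scan as
    infiltrated once 5 consecutive windows are classified positive."""
    consecutive = 0
    height, width, _ = image.shape
    for y in range(0, height - WINDOW + 1, STRIDE):
        for x in range(0, width - WINDOW + 1, STRIDE):
            patch = image[y:y + WINDOW, x:x + WINDOW][np.newaxis, ...]
            positive = model.predict(patch, verbose=0)[0, 0] >= threshold
            consecutive = consecutive + 1 if positive else 0
            if consecutive >= 5:
                return True
    return False
```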

3.4 Testing Procedure

To test the performance of the Slide-Detect technique, all the images marked as healthy in the dataset are loaded along with all the images marked as infected. Each image is progressively scanned in (128×128) patches. Each patch is normalized, then fed into the DNN classifier. If the DNN classifier marks at least 5 patches as abnormal, the image is classified as an infiltrated scan; otherwise, it is classified as healthy. Algorithm 5 demonstrates the procedure adopted to test these images.

Algorithm 5.

Testing the performance of the Slide-Detect technique

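Algorithm 5 also appears only as a figure; the evaluation it describes reduces to tallying the confusion-matrix cells over the healthy and infected images, e.g. reusing the classify_scan sketch above together with an assumed load helper that reads a scan and applies the Algorithm 1 normalization:

```python
def evaluate(healthy_paths, infected_paths, model, load):
    """Count confusion-matrix cells as in Algorithm 5. `load` is an assumed
    helper returning a normalized image array for a given path."""
    tp = fp = tn = fn = 0
    for path in infected_paths:
        if classify_scan(load(path), model):
            tp += 1  # infected scan correctly flagged
        else:
            fn += 1
    for path in healthy_paths:
        if classify_scan(load(path), model):
            fp += 1  # healthy scan wrongly flagged
        else:
            tn += 1
    return tp, fp, tn, fn
```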

In this section, the results of the experiments conducted on the “ChestXray-NIHCC” dataset are discussed.

The “ChestXray-NIHCC” dataset [35] is the largest publicly available chest X-ray dataset, published by the NIH Clinical Center. It contains 112,120 labeled records of 30,805 unique patients. Each record is linked to a chest X-ray image resized to 1024×1024×3 pixels. Each record may carry one or more of the following labels: “Cardiomegaly, Emphysema, Healthy, Hernia, Infiltration, Mass, Nodule, Effusion, Pneumothorax, Pleural Thickening, Consolidation, Edema, Pneumonia and Atelectasis”.

The classes are interleaved and imbalanced, as shown in figure 5. The largest class is the healthy class with 60,412 instances, followed by the infiltration class. Age is an important indicator when it comes to non-resolving conditions such as infiltration. Figure 6 shows the age distribution of the infiltration patients. The mean age is 46.198, with a standard deviation of 17.08. The number of cases peaks in the age interval (50, 60].
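
These statistics can be reproduced from the dataset's label file; a sketch assuming the NIH metadata file 'Data_Entry_2017.csv' with its 'Patient Age' and 'Finding Labels' columns (file and column names come from the NIH release, not from this paper):

```python
import pandas as pd

labels = pd.read_csv("Data_Entry_2017.csv")
infiltration = labels[labels["Finding Labels"].str.contains("Infiltration")]
ages = infiltration["Patient Age"]
print(ages.mean(), ages.std())  # reported in the paper as 46.198 and 17.08
# Decade histogram; the reported peak falls in the (50, 60] interval.
print(ages.groupby(pd.cut(ages, range(0, 101, 10))).size())
```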

Figure 5.

Class distribution in the “ChestXray-NIHCC” dataset.

Figure 6.

Age distribution of the infiltration patients.


Table 2 shows that the Slide-Detect technique outperformed the state-of-the-art techniques while offering relatively lower computational cost, and table 3 shows the confusion matrix of the Slide-Detect technique. The confusion matrix shows that the Slide-Detect model produces more false positives than false negatives, which is more suitable for medical applications than the contrary.

Table 2.

AUC comparison of the state-of-the-art techniques.

Technique                                               AUC
MobileNet [15]                                          57%
AlexNet [30]                                            60.40%
GoogLeNet [30]                                          60.87%
VGGNet-16 [30]                                          58.95%
ResNet-50 [30]                                          61.27%
Dense Networks with Relative Location Awareness [18]    70.9%
Multiple Feature Integration [13]                       70.3%
Two-stream Collaborative Network [4]                    75.1%
Slide-Detect                                            91.47%
Table 3.

The confusion matrix of the Slide-Detect technique.

             True      False
Positive     33027     9456
Negative     69331     104

Figure 7 compares Slide-Detect with the state-of-the-art techniques in terms of AUC and computational cost (number of layers used by each model). Although ResNet-50 [30] used very deep CNN layers, it did not perform much better than shallower approaches such as MobileNet [15], AlexNet [30], GoogLeNet [30] and VGGNet-16 [30]. In addition, it was far surpassed by the much shallower Two-stream Collaborative Network [4]. This is a clear indication that simply increasing CNN depth increases the computational cost without guaranteeing much better accuracy. The highest accuracies among the prior techniques were achieved by the Two-stream Collaborative Network [4], Dense Networks with Relative Location Awareness [18] and Multiple Feature Integration [13]. This indicates that feature selection has more impact on solving this problem than using complex CNN networks. It is worth noting that Dense Networks with Relative Location Awareness [18] and Multiple Feature Integration [13] have far more layers than the Two-stream Collaborative Network [4] but did not manage to achieve better accuracy. The Slide-Detect approach took advantage of both sides: it used a smaller CNN classifier and adopted a concentrated learning approach that only considered images cropped around the infection area during training, thus simplifying the model and reducing the computational cost while achieving better accuracy.

Figure 7.

Performance and cost comparison of the state-of-the-art techniques.


Slide-Detect used a highly concentrated deep learning approach to train a DNN able to diagnose lung infiltration with an AUC of up to 91.47% and an accuracy of 93.85%, outperforming the current state-of-the-art techniques. The Slide-Detect approach is highly efficient in terms of computational cost and memory compared to the state-of-the-art approaches, as it is composed of fewer layers. Slide-Detect achieves this by eliminating the logically irrelevant features, considering only features that are significant to lung infiltration diagnosis, focusing the training process on the parts of the scans labelled as positive, and using a properly sized network for the classification process.

The Slide-Detect technique is designed specifically for the lung infiltration case, which most state-of-the-art techniques failed to address because its nature differs considerably from that of other lung diseases. Future research can extend the approach to other diseases with similarly distinctive characteristics.

A limitation of the Slide-Detect technique is that it only processes 2D chest scans; an adaptation could be introduced to deal with multi-section or 3D scans. This would involve redesigning most of the algorithms used, including the normalization, the feature extraction, the cropping and the CNN classifier itself.

[1] Al-Shibli, K. I., Donnem, T., Al-Saad, S., et al.: Prognostic effect of epithelial and stromal lymphocyte infiltration in non-small cell lung cancer. Clinical Cancer Research 14(16), 5220–5227 (2008)
[2] Allaouzi, I., Ahmed, M. B.: A novel approach for multi-label chest x-ray classification of common thorax diseases. IEEE Access 7, 64279–64288 (2019)
[3] Begovatz, P., Koliaki, C., Weber, K., et al.: Pancreatic adipose tissue infiltration, parenchymal steatosis and beta cell function in humans. Diabetologia 58(7), 1646–1655 (2015)
[4] Chen, B., Zhang, Z., Lin, J., et al.: Two-stream collaborative network for multi-label chest x-ray image classification with lung segmentation. Pattern Recognition Letters 135, 221–227 (2020)
[5] Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pp. 886–893. IEEE (2005)
[6] Esteva, A., Kuprel, B., Novoa, R. A., et al.: Dermatologist-level classification of skin cancer with deep neural networks. Nature 542(7639), 115 (2017)
[7] Gardner, M. W., Dorling, S. R.: Artificial neural networks (the multilayer perceptron)—a review of applications in the atmospheric sciences. Atmospheric Environment 32(14-15), 2627–2636 (1998)
[8] Gollub, M. J., Panu, N., Delaney, H., et al.: Shall we report cardiomegaly at routine computed tomography of the chest? Journal of Computer Assisted Tomography 36(1), 67–71 (2012)
[9] Gopi, A., Madhavan, S. M., Sharma, S. K., et al.: Diagnosis and treatment of tuberculous pleural effusion in 2006. Chest 131(3), 880–889 (2007)
[10] Goya, T., Asamura, H., Yoshimura, H., et al.: Prognosis of 6644 resected non-small cell lung cancers in Japan: a Japanese lung cancer registry study. Lung Cancer 50(2), 227–234 (2005)
[11] He, K., Zhang, X., Ren, S., et al.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
[12] Heinz, W. J., Vehreschild, J. J., Buchheidt, D.: Diagnostic work up to assess early response indicators in invasive pulmonary aspergillosis in adult patients with haematologic malignancies. Mycoses 62(6), 486–493 (2019)
[13] Ho, T. K. K., Gwak, J.: Multiple feature integration for classification of thoracic disease in chest radiography. Applied Sciences 9(19), 4130 (2019)
[14] Kang, S., Iwana, B. K., Uchida, S.: Complex image processing with less data—document image binarization by integrating multiple pre-trained u-net modules. Pattern Recognition 109, 107577 (2021)
[15] Kavyashree, P. S. P., El-Sharkawy, M.: Compressed MobileNet v3: a light weight variant for resource-constrained platforms. In: 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC), pp. 0104–0107. IEEE (2021)
[16] Ketkar, N.: Introduction to Keras. In: Deep Learning with Python, pp. 97–111. Springer (2017)
[17] Kotsiantis, S. B., Zaharakis, I., Pintelas, P.: Supervised machine learning: A review of classification techniques. Emerging Artificial Intelligence Applications in Computer Engineering 160, 3–24 (2007)
[18] Liang, X., Peng, C., Qiu, B., et al.: Dense networks with relative location awareness for thorax disease identification. Medical Physics 46(5), 2064–2073 (2019)
[19] Lowe, D. G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2), 91–110 (2004)
[20] Lu, L., Zheng, Y., Carneiro, G., et al.: Deep Learning and Convolutional Neural Networks for Medical Image Computing. Advances in Computer Vision and Pattern Recognition. Springer, New York, NY, USA (2017)
[21] Menéndez, R., Torres, A.: Evaluation of non-resolving and progressive pneumonia. In: Intensive Care Medicine, pp. 175–187. Springer (2003)
[22] Morris, H., Plavcová, L., Cvecko, P., et al.: A global analysis of parenchyma tissue fractions in secondary xylem of seed plants. New Phytologist 209(4), 1553–1565 (2016)
[23] Noppen, M., De Keukeleire, T.: Pneumothorax. Respiration 76(2), 121–127 (2008)
[24] Norton, L. E., Curtis, S. N., Goldman, J. L.: A 9-year-old boy with a chest mass and eosinophilia. Journal of the Pediatric Infectious Diseases Society 5(4), 476–479 (2016)
[25] Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis & Machine Intelligence (7), 971–987 (2002)
[26] Oliva, A., Torralba, A.: Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision 42(3), 145–175 (2001)
[27] Peroni, D. G., Boner, A. L.: Atelectasis: mechanisms, diagnosis and management. Paediatric Respiratory Reviews 1(3), 274–278 (2000)
[28] Rajpurkar, P., Irvin, J., Zhu, K., et al.: CheXNet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225 (2017)
[29] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241. Springer (2015)
[30] Singhal, P., Singh, P., Vidyarthi, A.: Interpretation and localization of thorax diseases using DCNN in chest x-ray (2019)
[31] Szegedy, C., Liu, W., Jia, Y., et al.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)
[32] Traub, M., Stevenson, M., McEvoy, S., et al.: The use of chest computed tomography versus chest x-ray in patients with major blunt trauma. Injury 38(1), 43–47 (2007)
[33] Van Dyken, S. J., Garcia, D., Porter, P., et al.: Fungal chitin from asthma-associated home environments induces eosinophilic lung infiltration. The Journal of Immunology 187(5), 2261–2267 (2011)
[34] Wang, C., Elazab, A., Wu, J., et al.: Lung nodule classification using deep feature fusion in chest radiography. Computerized Medical Imaging and Graphics 57, 10–18 (2017)
[35] Wang, X., Peng, Y., Lu, L., et al.: ChestX-ray: Hospital-scale chest x-ray database and benchmarks on weakly supervised classification and localization of common thorax diseases. In: Deep Learning and Convolutional Neural Networks for Medical Imaging and Clinical Informatics, 369 (2019)
[36] Yuan, Z.-W., Zhang, J.: Feature extraction and image retrieval based on AlexNet. In: Eighth International Conference on Digital Image Processing (ICDIP 2016), volume 10033, page 100330E. International Society for Optics and Photonics (2016)
[37] Zhang, X., Zou, J., He, K., et al.: Accelerating very deep convolutional networks for classification and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 38(10), 1943–1955 (2015)
[38] Zhang, Z.: Improved Adam optimizer for deep neural networks. In: 2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS), pp. 1–2. IEEE (2018)
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.