Abstract

There are various kinds of brain monitoring techniques, including local field potential, near-infrared spectroscopy, magnetic resonance imaging (MRI), positron emission tomography, functional MRI, electroencephalography (EEG), and magnetoencephalography. Among these techniques, EEG is the most widely used due to its portability, low setup cost, and noninvasiveness. Beyond these advantages, EEG signals also help to evaluate the functionality of the smelling organ. In such studies, EEG signals recorded during smelling are analyzed to determine whether the subject lacks any smelling ability or to measure the response of the brain. The main idea of this study is to show the emotional difference in EEG signals during perception of valerian, lotus flower, cheese, and rosewater odors by means of the EEG gamma wave. The proposed method was applied to EEG signals recorded from five healthy subjects in the eyes-open and eyes-closed conditions at the Swiss Federal Institute of Technology. In order to represent the signals, we extracted features from the gamma band of the EEG trials by continuous wavelet transform with the Morlet wavelet function. Then the k-nearest neighbor algorithm was implemented as the classifier for recognizing the EEG trials as valerian, lotus flower, cheese, and rosewater. We achieved an average classification accuracy rate of 87.50% with a standard deviation of 4.3 for the subjects in the eyes-open condition and an average classification accuracy rate of 94.12% with a standard deviation of 2.9 for the subjects in the eyes-closed condition. The results show that the proposed continuous wavelet transform–based feature extraction method has great potential to classify the EEG signals recorded during smelling of the present odors. It was also established that gamma-band activity of the brain is highly associated with olfaction.

1  Introduction

Evaluating the stimulus coming from sense organs is one of the main tasks of the brain. The response of the brain to the stimulus can be measured by magnetoencephalography, functional magnetic resonance imaging, computed tomography, positron emission tomography, electrocorticography, and electroencephalography (EEG) (Polomac et al., 2015). Compared with these techniques, EEG is the most widely used, mainly owing to its affordable and easy-to-use recording equipment, which facilitates real-time operation and fine temporal resolution, together with its low setup cost and noninvasive nature (Siuly & Li, 2015). One of the most useful applications of EEG-based research is evaluating the functionality of the smelling organ. In such research, EEG signals recorded during smelling are analyzed to determine whether the subject lacks any smelling ability or to measure the response of the brain.

Previous studies have investigated the responses of gas sensors (odor receptors) to various kinds of odors (Gorji-Chakespari et al., 2016; Gromski et al., 2014; Mishra, Dwivedi, & Das, 2013). However, research that reveals the response of the human brain to different odors is limited (Galán, Sachse, Galizia, & Herz, 2004; Saha, Konar, Chatterjee, Ralescu, & Nagar, 2014; Placidi, Avola, Petracca, Sgallari, & Spezialetti, 2015). Moreover, studies vary in terms of their experimental methodology, outcomes, and the limited kinds of odors examined, which makes it difficult to draw firm conclusions (Lorig, 2000). In an odor-based EEG signal classification study, EEG signals were acquired from five subjects in the eyes-closed condition while participants smelled four kinds of odors (Yazdani, Kroupi, Vesin, & Ebrahimi, 2012). Afterward, the researchers asked the participants which of those odors they found the most pleasant and unpleasant. They then classified the pleasant and unpleasant odors for each subject and calculated an average classification accuracy (ACA) rate of 79.91%. In another odor-based EEG signal classification study, the researchers recorded EEG signals from five subjects in the eyes-open and eyes-closed conditions as they smelled valerian (V), lotus flower (LF), rosewater (R), and rotten Swiss Tomme goat cheese (C) (Kroupi, Yazdani, Vesin, & Ebrahimi, 2014). Like previous work, they also asked the participants to identify the most pleasant and the most unpleasant odors among the four. Afterward, they proposed subject-specific feature extraction and classification methods in order to classify only the most pleasant and the most unpleasant odors among the four. In their binary classification task, they obtained a classification accuracy rate of approximately 90%.

Yazdani et al. (2012) and Kroupi et al. (2014) classified the EEG signals for only the most pleasant and the most unpleasant odors. It is naturally expected that the brain will give different responses to such opposite odors.

In this work, instead of classifying EEG signals for the binary opposite odors, we focused on developing a classification model to show the emotional difference in EEG activity during perception of V, LF, C, and R, which were also used in Kroupi et al.'s (2014) study. To do so, all frequency bands—delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–30 Hz), and gamma (30–70 Hz)—were evaluated in order to obtain the most discriminative features and achieve the highest classification performance. The features were extracted from each subband of the EEG trials by continuous wavelet transform (CWT) with the Morlet wavelet function. Then the k-nearest neighbor (k-NN) algorithm was implemented as the classifier for recognizing the EEG trials as V, LF, C, and R. Based on the proposed method, ACA rates of 87.50% and 94.12% were achieved by the gamma band for all subjects in the eyes-open and eyes-closed conditions, respectively.

In section 2, the data set is described and the CWT-based feature extraction method is mathematically explained. Then the training and testing procedures of the k-NN classifier are explained. The classification accuracy results are provided in tables, including confusion matrices, in section 3. Section 4 concludes the letter and discusses the findings.

2  Materials and Methods

2.1  Data Set Description

The proposed algorithm was applied to an EEG data set collected from five healthy, right-handed male participants at the Swiss Federal Institute of Technology in Switzerland. The sampling frequency was 250 Hz, and 256 electrodes were placed at the international standard positions on the scalp. The experimental environment was quiet, and the room temperature was around 22–24°C. Participants, aged between 26 and 32 years, were informed about the protocol and the purpose of the study. However, they were not informed about the odors they would smell. The participants had no respiratory, chronic, or mental disease, and they were seated in a comfortable chair 1 meter away from a computer screen during the experiment. Neither the participants nor the experimenters wore perfumed products.

The EEG signals were recorded as the subjects smelled the four odors (V, LF, C, and R) in the eyes-open and eyes-closed conditions. For each condition, the experiment consisted of four runs. During each run, after a “smell” command, the experimenter randomly chose a bottle with an odor to place under the participant’s nose (1–2 cm under both nostrils) and kept it there for about 2 seconds, which constituted a single trial. This protocol was repeated 20 to 30 times with the same odor, resulting in 20 to 30 single trials. The time between two single trials of the same odor was adjusted to 4 seconds to prevent adaptation and subject fatigue. After one run was over, a 2-minute break was given to allow attendees to forget the odors. The odorants were put in covered bottles in order to avoid any effects from their visual characteristics. EEG trials were re-referenced to the common average, and single trials were generated so as to last for 1 second after the stimulus onset.

The total number of trials for each odor is presented in Table 1. Approximately half of the trials were randomly selected as the training set (S in the table), and the rest were selected as the testing set (T in the table). (For further information about the data set, see Kroupi et al., 2014.)

Table 1:
Total Number of Trials.
Eyes Open
Subject   V                   LF                  C                   R
1         18 (S: 9, T: 9)     18 (S: 9, T: 9)     19 (S: 10, T: 9)    21 (S: 11, T: 10)
2         17 (S: 9, T: 8)     18 (S: 9, T: 9)     19 (S: 10, T: 9)    21 (S: 11, T: 10)
3         22 (S: 11, T: 11)   23 (S: 12, T: 11)   22 (S: 11, T: 11)   23 (S: 12, T: 11)
4         23 (S: 12, T: 11)   24 (S: 12, T: 12)   26 (S: 13, T: 13)   19 (S: 10, T: 9)
5         20 (S: 10, T: 10)   19 (S: 10, T: 9)    15 (S: 8, T: 7)     22 (S: 11, T: 11)
Total     100                 102                 101                 106

Eyes Closed
Subject   V                   LF                  C                   R
1         17 (S: 9, T: 8)     18 (S: 9, T: 9)     21 (S: 11, T: 10)   20 (S: 10, T: 10)
2         17 (S: 9, T: 8)     21 (S: 11, T: 10)   19 (S: 10, T: 9)    24 (S: 12, T: 12)
3         21 (S: 11, T: 10)   20 (S: 10, T: 10)   18 (S: 9, T: 9)     20 (S: 10, T: 10)
4         21 (S: 11, T: 10)   21 (S: 11, T: 10)   21 (S: 11, T: 10)   29 (S: 15, T: 14)
5         18 (S: 9, T: 9)     20 (S: 10, T: 10)   22 (S: 11, T: 11)   21 (S: 11, T: 10)
Total     94                  100                 101                 114

Notes: S: number of training trials. T: number of testing trials.

2.2  Continuous Wavelet Transform Based Feature Extraction

Wavelet transform has been successfully applied in many pattern recognition applications and has helped to extract discriminative attributes (Liu, Si, Wen, Zang, & Lang, 2016; Junior & Backes, 2016; Karan, 2015). Compared with feature extraction methods that operate in only one domain, such as autoregressive modeling or fast Fourier transform, wavelet transform is a powerful spectral estimation technique for the time–frequency analysis of nonstationary signals such as EEG.

Mathematically, the CWT is given as the convolution between the analyzed signal and the mother wavelet function in the time domain:
\[
W_x(a, b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t - b}{a}\right) dt,
\tag{2.1}
\]
where $W_x(a, b)$ is the wavelet transform coefficient (WTC), $x(t)$ is the analyzed signal, and $\psi^{*}$ is the complex conjugate of the mother wavelet function $\psi$. In addition, $a$ and $b$ are called the dilation and translation parameters, respectively. The mother wavelet function must satisfy
\[
\int_{-\infty}^{\infty} \psi(t)\, dt = 0.
\tag{2.2}
\]

The selection of the mother wavelet function is an important step in wavelet transform to extract valuable information from the signal in the time-frequency domain (Aydemir & Kayikcioglu, 2016). The selection of the mother wavelet function is generally determined from empirical or previous experience and knowledge or depends on the similarity of the mother wavelet function and the signal to be analyzed (Aydemir & Kayikcioglu, 2011). Among other wavelets, we selected the Morlet wavelet function based on our previous experience, which demonstrated that this wavelet is well localized in the frequency domain and has the potential to extract useful information due to the similarity with the signals to be analyzed.

In this study, in order to extract features, CWT was applied to each channel. Our statistical feature analysis on the training set showed that the averages and the standard deviations of the absolute values of the WTCs of all channels can be used for classifying the EEG trials recorded during smelling V, LF, C, and R. The average and the standard deviation of WTCs were calculated by equations 2.3 and 2.4, respectively:
\[
\mu = \frac{1}{N} \sum_{i=1}^{N} \left| W_i \right|,
\tag{2.3}
\]
\[
\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( \left| W_i \right| - \mu \right)^{2}},
\tag{2.4}
\]
where $N$ is the length of the WTC (the number of wavelet transform coefficients) and $W_i$ denotes the $i$th coefficient.
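As an informal illustration of this feature extraction step, the following Python sketch computes equations 2.3 and 2.4 on the Morlet CWT coefficients of a single trial. It assumes the PyWavelets package, a trial stored as a channels-by-samples array, and an illustrative scale range; these details are assumptions for the sketch, not taken from the original implementation.

```python
import numpy as np
import pywt


def cwt_band_features(trial, fs=250.0, band=(30.0, 70.0)):
    """Mean and standard deviation of |CWT coefficients| per channel (eqs. 2.3, 2.4).

    trial : ndarray of shape (n_channels, n_samples), one 1 s EEG trial (assumed layout).
    band  : frequency band of interest in Hz (gamma by default).
    """
    # Candidate scales and their Morlet pseudo-frequencies; keep those inside the band.
    scales = np.arange(1, 129)
    freqs = np.array([pywt.scale2frequency('morl', s) for s in scales]) * fs
    scales = scales[(freqs >= band[0]) & (freqs <= band[1])]

    features = []
    for channel in trial:
        coeffs, _ = pywt.cwt(channel, scales, 'morl', sampling_period=1.0 / fs)
        mag = np.abs(coeffs)
        features.append(mag.mean())  # eq. 2.3
        features.append(mag.std())   # eq. 2.4
    return np.asarray(features)
```

Concatenating the two values per channel in this way yields one feature vector per trial, which can then be passed to the classifier described in the next section.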

2.3  Training and Testing Procedures of the k-NN Classifier

The k-NN is not only a simple algorithm to implement but also works well in practice and is useful for solving multiclass classification problems (Duda, Hart, & Stork, 2012). Moreover, it can yield competitive results compared to the most sophisticated classification algorithms (Schmah et al., 2010). The task of the k-NN method is to predict the class label of a query vector given a set of labeled instances and predefined classes. The main idea is to find the k nearest neighbor(s) of the query vector and use a majority vote to determine its class label. The example in Figure 1 illustrates a two-dimensional sample feature space with four classes of data. We consider k = 3 in this case. The unlabeled test trial (*) would be assigned to class 1 because two out of its three closest samples (neighbors) belong to class 1.

Figure 1:

An example of the k-NN classifier.


The performance of a k-NN classifier is highly dependent on the selection of k and on the distance metric used to determine the k nearest neighbor(s). Euclidean distance, which is widely used in the machine learning community, was employed to calculate the closest neighbors.

Implementing a validation process on the training feature set is a fair way to determine the parameter k. In this letter, because the number of training trials is limited, we used the leave-one-out cross-validation (LOOCV) technique to determine the value of k that maximizes classification performance. LOOCV also provides one of the best uses of the available training data. The most suitable k value, which provided the highest classification accuracy (CA), was searched for in the interval between 1 and 8 with a step size of 1. Because the lowest number of trials in a class in the training set was 8, the largest possible value of k was set to 8. The CA metric was used to measure the performance of the classifier. The CA was calculated by dividing the number of correctly classified EEG trials by the total number of considered trials and multiplying by 100 to express it as a percentage:
\[
\mathrm{CA} = \frac{\mathrm{CCT}}{\mathrm{TT}} \times 100,
\tag{2.5}
\]
where CCT indicates the number of correctly classified trials and TT indicates the total number of considered trials.
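A minimal sketch of this procedure, assuming scikit-learn and hypothetical feature arrays X_train, y_train, X_test, and y_test, might look as follows; it selects k by LOOCV on the training features and then reports the test CA of equation 2.5.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier


def select_k_and_test(X_train, y_train, X_test, y_test, k_values=range(1, 9)):
    """Pick k by leave-one-out CV on the training set, then compute the test CA (eq. 2.5)."""
    loo = LeaveOneOut()
    cv_scores = []
    for k in k_values:
        knn = KNeighborsClassifier(n_neighbors=k, metric='euclidean')
        # Each LOOCV fold scores a single held-out trial; the mean is the LOOCV accuracy.
        cv_scores.append(cross_val_score(knn, X_train, y_train, cv=loo).mean())
    best_k = list(k_values)[int(np.argmax(cv_scores))]

    # Train on all training trials with the selected k and classify the test trials.
    knn = KNeighborsClassifier(n_neighbors=best_k, metric='euclidean')
    knn.fit(X_train, y_train)
    predicted = knn.predict(X_test)
    ca = 100.0 * np.mean(predicted == y_test)  # CA = CCT / TT * 100
    return best_k, ca
```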

The general framework of the proposed method is illustrated in Figure 2. The left side of the flowchart represents the training procedure, and the right side represents the testing procedure of the proposed method. In the training phase, the feature vector set is obtained by extracting features from training trials based on CWT coefficients, and then it is used to train the k-NN classifier. In the testing phase, the trained k-NN classifier predicts the class labels according to the feature vectors extracted from the corresponding test trials. Finally, the test CA is calculated by comparing the predicted and actual labels.

Figure 2:

Flowchart of the proposed method. (A) Training part; (B) testing part.


3  Results

In this letter, the responses of the brain to the V, LF, C, and R odors in the eyes-open and eyes-closed conditions were investigated using the EEG signals of five healthy subjects. We extracted CWT-based features from the gamma band of the EEG trials in order to represent the effects of the odors on the brain.

Because of the limited database, and to show the robustness of the method, we tested the proposed algorithm 100 times by randomly splitting the data into training and testing sets. We calculated an ACA rate and its standard deviation for each subject. The average test CA results and their standard deviations for the eyes-open and eyes-closed conditions are given in Tables 2 and 3, respectively. The highest performances achieved are in bold. In Table 2, in the eyes-open condition, the best CA was achieved for subject 1 (98.97% ± 1.9%), and the lowest CA was obtained for subject 3 (75.11% ± 5.7%). The average over all subjects was 87.50% ± 4.3%. In the eyes-closed condition, the best CA was achieved for subject 2 (99.31% ± 1.6%) and the lowest CA for subject 5 (83.40% ± 5.3%). The average over all subjects was 94.12% ± 2.9%.
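A rough sketch of this repeated-split evaluation is given below. It assumes a per-subject feature matrix X with labels y and reuses the hypothetical select_k_and_test helper sketched in section 2.3; the stratified 50/50 split is an assumption matching the approximate half-and-half partition of Table 1.

```python
import numpy as np
from sklearn.model_selection import train_test_split


def repeated_evaluation(X, y, n_runs=100, test_size=0.5, seed=0):
    """Average CA and standard deviation over repeated random train/test splits."""
    rng = np.random.RandomState(seed)
    accuracies = []
    for _ in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y,
            random_state=rng.randint(1_000_000))
        # select_k_and_test is the hypothetical helper from the section 2.3 sketch.
        _, ca = select_k_and_test(X_tr, y_tr, X_te, y_te)
        accuracies.append(ca)
    return np.mean(accuracies), np.std(accuracies)
```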

Table 2:
Average Test Classification Accuracy Results in Eyes-Open Condition.
Subject     CA ± SD (%)
Subject 1   98.97 ± 1.9
Subject 2   90.86 ± 4.4
Subject 3   75.11 ± 5.7
Subject 4   92.24 ± 3.4
Subject 5   80.30 ± 6.0
Average     87.50 ± 4.3
Table 3:
Average Test Classification Accuracy Results in Eyes-Closed Condition.
Subject     CA ± SD (%)
Subject 1   95.11 ± 3.3
Subject 2   99.31 ± 1.6
Subject 3   93.62 ± 3.2
Subject 4   99.18 ± 1.1
Subject 5   83.40 ± 5.3
Average     94.12 ± 2.9

The classification results of the testing data for the eyes-open and eyes-closed conditions are also provided as confusion matrices in Tables 4 and 5, respectively. The confusion matrix provides valuable and detailed information about the percentages of correctly classified and misclassified EEG trials (Aydemir & Kayikcioglu, 2014). Each row of the matrix represents the average percentages for a predicted class, and each column represents the average percentages for an actual class over the 100 runs. Hence, each cell contains an average percentage value indicating the proportion of trials of the actual class that the model assigned to the predicted class.
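For reference, a small sketch of how such a column-normalized confusion matrix (rows: predicted class, columns: actual class, entries in percent) could be computed with scikit-learn is given below; the function and label names are illustrative rather than taken from the original implementation.

```python
import numpy as np
from sklearn.metrics import confusion_matrix


def percent_confusion(y_true, y_pred, labels=('V', 'LF', 'C', 'R')):
    """Confusion matrix with predicted classes as rows and actual classes as columns,
    each column expressed as percentages of that actual class's trials."""
    # scikit-learn returns rows = actual, columns = predicted, so transpose it.
    cm = confusion_matrix(y_true, y_pred, labels=list(labels)).T.astype(float)
    col_sums = cm.sum(axis=0, keepdims=True)
    return 100.0 * cm / np.maximum(col_sums, 1.0)
```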

Table 4:
Confusion Matrix of the Subjects in the Eyes-Open Condition: (A) Subject 1, (B) Subject 2, (C) Subject 3, (D) Subject 4, (E) Subject 5.
Table 5:
Confusion Matrix of the Subjects in the Eyes-Closed Condition: (A) Subject 1, (B) Subject 2, (C) Subject 3, (D) Subject 4, (E) Subject 5.

The most remarkable results in Tables 4 and 5 are as follows:

  1. All of the C smelling–based EEG trials were correctly classified for subject 1 in the eyes-open condition and for subjects 2 and 4 in the eyes-closed condition.

  2. All of the R and LF smelling–based EEG trials were correctly classified for subjects 1 and 4 in the eyes-closed condition, respectively.

  3. The V and LF odors were confused only with each other for subject 2 in the eyes-open condition and for subject 3 in the eyes-closed condition.

  4. There was considerable confusion in classifying the EEG trials of the C odor for subjects 3 and 5 in the eyes-open condition.

  5. There was considerable confusion in classifying the EEG trials of the LF odor for subject 5 in the eyes-closed condition.

To validate the effectiveness of the gamma band, we compared the performance of the proposed method with the results of the other subbands. Figures 3 and 4 present the average test CA and standard deviation results of the delta, theta, alpha, and beta bands in the eyes-open and eyes-closed conditions, respectively. In these figures, the bars indicate the average CA, and the lines above the bars represent standard deviations. Additionally, the average CA and standard deviation of the subjects for each subband are given on the right side of the figures. It is worth mentioning that the same training and testing procedures were applied as already noted. In Figure 3, the best CA was obtained for subject 2 at 86.11% using delta-band features, and the highest average CA over the subjects, 73.73%, was obtained with beta-band features. In the eyes-closed condition, the best CA was again achieved for subject 2 at 96.94%, and the highest average CA over the subjects, 69.85%, was obtained with beta-band features (see Figure 4).

Figure 3:

Average test classification accuracy results of other subbands in the eyes-open condition.


Figure 4:

Average test classification accuracy results of other subbands in the eyes-closed condition.


When we consider individual and overall average classification accuracies, the proposed gamma-band features always provide higher performance than the rest of the subbands. Moreover, in terms of the average CA of the subjects, gamma-band features achieved 13.77% and 24.27% higher performance than the best case of the other subbands for the eyes-open and eyes-closed conditions, respectively.

4  Conclusion

In this study, we demonstrated the emotional difference in brain activity during perception of the V, LF, C, and R odors under eyes-open and eyes-closed conditions by analyzing the gamma band of the EEG signals. The results showed that the proposed CWT-based feature extraction method has great potential for classifying EEG signals recorded during smelling of the V, LF, C, and R odors. Moreover, the results of the within-subject analysis for both eyes-open and eyes-closed conditions demonstrated that gamma-band activity in the brain is highly associated with olfaction.

In Tables 2 and 3, in the eyes-closed condition, the average CA of all subjects was 6.62% greater than in the eyes-open condition. Thus, it can be said that the response of the brain in the eyes-closed condition is more discriminative than in the eyes-open condition. We achieved a CA rate greater than 90% for three subjects in both conditions. In order to show the success of the proposed model and to make a fair comparison, we also computed the test classification accuracies for the rest of the subbands under the same training and testing conditions. The results showed that the proposed gamma-band features notably outperformed the results obtained by the other subbands. When the results of four-class EEG-based classification problems in the literature are considered, the within-subject CA results suggest that the CWT-based features in the gamma band achieve satisfactory CA and can result in excellent performance for some of the subjects.

The proposed method is not subject specific; it can be applied to all subjects in both conditions (eyes open and eyes closed). Hence, the proposed method is not time-consuming for extracting discriminative features and is capable of high recognition accuracy.

We noticed that the response of the brain to the odors was very similar in the eyes-open and eyes-closed conditions for subjects 1 and 5. The differences in the obtained average CA were only 3.86% and 3.10% for subject 1 and subject 5, respectively. However, the maximum difference between the two conditions, 18.51%, was observed for subject 3.

Because of the limited database, we ran the proposed method 100 times and calculated the average CA and its standard deviation. The small standard deviation values indicate that the proposed method is robust and stable. Based on the results, it can be stated that the proposed method can greatly contribute to EEG-based olfactory recognition. Furthermore, it could clinically help to evaluate olfactory dysfunction, and instead of using a subject-specific model, it could be generalized and applied to all subjects.

Acknowledgments

I thank the Swiss Federal Institute of Technology, Switzerland, for providing the data set. This work was supported by the Scientific and Technological Research Council of Turkey, project EEEAG-215E155.

The author declares that there are no conflicts of interest.

References

Aydemir, O., & Kayikcioglu, T. (2011). Wavelet transform based classification of invasive brain computer interface data. Radioengineering, 20(1), 31–38.

Aydemir, O., & Kayikcioglu, T. (2014). Decision tree structure based classification of EEG signals recorded during two dimensional cursor movement imagery. Journal of Neuroscience Methods, 229, 68–75.

Aydemir, O., & Kayikcioglu, T. (2016). Investigation of the most appropriate mother wavelet for characterizing imaginary EEG signals used in BCI systems. Turkish Journal of Electrical Engineering and Computer Sciences, 24(1), 38–49.

Duda, R. O., Hart, P. E., & Stork, D. G. (2012). Pattern classification. Hoboken, NJ: Wiley.

Galán, R. F., Sachse, S., Galizia, C. G., & Herz, A. V. (2004). Odor-driven attractor dynamics in the antennal lobe allow for simple and rapid olfactory pattern classification. Neural Computation, 16(5), 999–1012.

Gorji-Chakespari, A., Nikbakht, A. M., Sefidkon, F., Ghasemi-Varnamkhasti, M., Brezmes, J., & Llobet, E. (2016). Performance comparison of Fuzzy ARTMAP and LDA in qualitative classification of Iranian Rosa damascena essential oils by an electronic nose. Sensors, 16(5), 636.1–636.15.

Gromski, P. S., Correa, E., Vaughan, A. A., Wedge, D. C., Turner, M. L., & Goodacre, R. (2014). A comparison of different chemometrics approaches for the robust classification of electronic nose data. Analytical and Bioanalytical Chemistry, 406(29), 7581–7590.

Junior, J. J. M. S., & Backes, A. R. (2016). ELM based signature for texture classification. Pattern Recognition, 51, 395–401.

Karan, V. (2015). Wavelet transform-based classification of electromyogram signals using an ANOVA technique. Neurophysiology, 47(4), 302–309.

Kroupi, E., Yazdani, A., Vesin, J. M., & Ebrahimi, T. (2014). EEG correlates of pleasant and unpleasant odor perception. ACM Transactions on Multimedia Computing, Communications, and Applications, 11(1s), 13.1–13.17.

Liu, T., Si, Y., Wen, D., Zang, M., & Lang, L. (2016). Dictionary learning for VQ feature extraction in ECG beats classification. Expert Systems with Applications, 53, 129–137.

Lorig, T. S. (2000). The application of electroencephalographic techniques to the study of human olfaction: A review and tutorial. International Journal of Psychophysiology, 36(2), 91–104.

Mishra, V. N., Dwivedi, R., & Das, R. R. (2013). Classification of gases/odors using dynamic responses of thick film gas sensor array. IEEE Sensors Journal, 13(12), 4924–4930.

Placidi, G., Avola, D., Petracca, A., Sgallari, F., & Spezialetti, M. (2015). Basis for the implementation of an EEG-based single-trial binary brain computer interface through the disgust produced by remembering unpleasant odors. Neurocomputing, 160, 308–318.

Polomac, N., Leicht, G., Nolte, G., Andreou, C., Schneider, T. R., Steinmann, S., & Mulert, C. (2015). Generators and connectivity of the early auditory evoked gamma band response. Brain Topography, 28(6), 865–878.

Saha, A., Konar, A., Chatterjee, A., Ralescu, A., & Nagar, A. K. (2014). EEG analysis for olfactory perceptual-ability measurement using a recurrent neural classifier. IEEE Transactions on Human-Machine Systems, 44(6), 717–730.

Schmah, T., Yourganov, G., Zemel, R. S., Hinton, G. E., Small, S. L., & Strother, S. C. (2010). Comparing classification methods for longitudinal fMRI studies. Neural Computation, 22(11), 2729–2762.

Siuly, S., & Li, Y. (2015). Discriminating the brain activities for brain–computer interface applications through the optimal allocation-based approach. Neural Computing and Applications, 26(4), 799–811.

Yazdani, A., Kroupi, E., Vesin, J. M., & Ebrahimi, T. (2012). Electroencephalogram alterations during perception of pleasant and unpleasant odors. In Proceedings of the IEEE Fourth International Workshop on Quality of Multimedia Experience (pp. 272–277). Piscataway, NJ: IEEE.