## Abstract

In this letter, we propose two novel methods for four-class motor imagery (MI) classification using electroencephalography (EEG). We also develop a real-time Health 4.0 (H4.0) architecture for brain-controlled internet of things (IoT)-enabled environments (BCE), which uses the classified MI task to assist disabled persons in controlling IoT-enabled environments such as lighting and heating, ventilation, and air-conditioning (HVAC). The first classification method is a simple, low-complex framework combining a regularized Riemannian mean (RRM) with a linear SVM. Although this method performs better than state-of-the-art techniques, it still suffers from a nonnegligible misclassification rate. To overcome this, the second method offers a persistent decision engine (PDE) for MI classification, which improves classification accuracy (CA) significantly. The proposed methods are validated using an in-house recorded four-class MI data set (data set I, collected over 14 subjects) and the four-class MI data set 2a of BCI Competition IV (data set II, collected over 9 subjects). The proposed RRM architecture obtained average CAs of 74.30% and 67.60% when validated using data sets I and II, respectively. When combined with the proposed PDE classification framework, an average CA of 92.25% on 12 subjects of data set I and 82.54% on 7 subjects of data set II is obtained. The results show that the PDE algorithm is more reliable for four-class MI classification and is also feasible for BCE applications. The proposed low-complex BCE architecture is implemented in real time using a Raspberry Pi 3 Model B+ along with the Virgo EEG data acquisition system. The hardware implementation results show that the proposed system architecture is well suited for body-wearable devices in the H4.0 scenario. We strongly feel that this study can aid in driving the future scope of BCE research.

## 1  Introduction

With the evolution of the internet of things (IoT), artificial intelligence (AI), and low-power electronics, multiple sectors, such as smart health care, industrial automation, monitoring, and environmental sensing, are moving toward minimizing manual intervention and are offering widely available services (Da Xu, He, & Li, 2014; Wollschlaeger, Sauter, & Jasperneite, 2017; Kiran, Rajalakshmi, Bharadwaj, & Acharyya, 2014). The health care industry in particular is witnessing a revolution prominently referred to as Health 4.0 (H4.0). In the context of H4.0, health care delivery will use all the technologies noted to improve the quality of living (Thuemmler & Bai, 2017). In many developed and developing nations, aging populations and accidents are increasing, posing significant challenges due to the unavailability of adequate health care providers (Angel, Vega, & López-Ortega, 2017; Sousa et al., 2017). Motor-disabled persons in particular (whose physical impairments due to aging or accidents impede mobility) rely on the help of others to control electrical appliances, such as turning heating, ventilation, and air-conditioning (HVAC) systems and lighting on and off. H4.0 systems can aid the elderly and disabled in performing these tasks without their own physical movement or an assisting human (Thuemmler & Bai, 2017), and H4.0 technologies seem to be a promising solution for elderly care and remote patient monitoring applications (Miori & Russo, 2017; Gope & Hwang, 2016).

Fifteen percent of the world population suffers from some kind of disability (World Bank, 2019), and 13% of people worldwide are elderly, defined as over 60 years (United Nations, 2019). Recently developed technologies, such as home automation using natural language processing (NLP) and remote control, can assist those who are disabled. But in many developing countries, including India and Indonesia, people speak diverse regional languages, and NLP technology is not yet popular there. Hence, we feel that a solution in which brain activity such as motor imagery is used to interact with the environment, referred to as brain-controlled IoT-enabled environments (BCE), is more beneficial and can be adopted across populations.

Motor imagery (MI) is a promising solution for BCE to interact with electrical appliances so that those who are elderly or disabled can turn appliances on and off. However, detecting multiple patterns of MI tasks using electroencephalography (EEG) is still an active research area, and classification accuracies are limited. Other EEG techniques, such as P300 and the steady-state visual evoked potential (SSVEP), also exist to recognize brain patterns in EEG signals. For example, Rebsamen et al. (2010) developed a brain-controlled wheelchair using the P300 potential. Although a P300-based system provides multiple commands, it is limited by its low information transfer rate. Moreover, both P300 and SSVEP methods require a visual stimulus to evoke the response identified in the EEG signal.

The elderly and disabled may feel fatigue over prolonged use of these systems. Hence, to improve the care of elderly and disabled persons by reducing their dependence on the assistance of others, we propose an H4.0 framework (an MI-based BCE system) that enables them to control IoT-enabled environments. Users are instructed to imagine performing one of four MI tasks: moving their tongue, feet, left hand, or right hand. Each task is mapped to control a single electrical appliance in the IoT-enabled environment. When the required MI task is performed, the system extracts features from the collected EEG signals and classifies the task using a simple classifier with low hardware complexity. The identified task is then converted into real-time action by toggling the state of the corresponding electrical appliance (on to off or off to on) over the IoT wireless network developed. We make four primary contributions in this letter:

• We propose and develop a real-time implementable low-complex BCE architecture that uses MI tasks performed by the user to control the surrounding IoT-enabled electrical appliances.

• We validate the BCE framework in real time using the Raspberry Pi 3 (Model B+) interface along with an EEG data acquisition system and IoT network.1

• We propose and develop a novel framework for classifying MI tasks (movement of the right hand, left hand, feet, and tongue) using the regularized Riemannian mean (RRM) and a linear support vector machine (LSVM) on the collected EEG data.

• We propose a novel persistent decision-based MI classifier to reduce false alarms and improve classification accuracies (CAs).

The rest of this letter is organized as follows. Section 2 discusses the important studies in the literature and the proposed H4.0 framework with a detailed description of the individual functional units. We discuss the performance of the proposed low-complex classifier without the persistent decision engine and of the persistent decision-based classification framework using data sets I and II in section 3. We also discuss the real-time hardware implementation of the proposed architecture using the Raspberry Pi 3 and the Virgo EEG2 data acquisition system in section 3. Section 4 concludes by summarizing the work performed and discussing future scope.

## 2  Methods

### 2.1  A Primer to MI Classification and Related Work

Motor imagery is an active cognitive process involving the imagination of performing a task such as the movement of hands, feet, or tongue without any physical activity. The MI task is internally reproduced in the sensorimotor area of the human brain. The performed MI task provides unique pattern information in the collected EEG signal, which is then converted to real-time control commands in BCE applications.

Many studies have analyzed EEG signals to determine the MI tasks that users perform for various purposes, such as gaming and neuroprosthetics (Lu, Li, Ren, & Miao, 2017; Bhattacharyya, Konar, & Tibarewala, 2017). A generic architecture for determining the MI task involves feature extraction and uses the extracted features to classify the task. Two popular methods developed for MI classification are common spatial patterns (CSP) (Blankertz, Tomioka, Lemm, Kawanabe, & Müller, 2008) and the use of Riemannian manifolds (Barachant, Bonnet, Congedo, & Jutten, 2012). Basic CSP was developed for classifying two-class MI, and its performance degrades when it is employed for four-class MI classification. Multiple enhanced versions of CSP (Grosse-Wentrup & Buss, 2008; Nicolas-Alonso, Corralejo, Gomez-Pilar, Álvarez, & Hornero, 2015; Ge, Wang, & Yu, 2014; Dong, Li, Li, Du, Belkacem, & Chen, 2017) were developed to support multiclass MI data, but with limited CAs.

One of the recent and promising approaches identified for MI classification is the use of Riemannian geometry (Barachant et al., 2012; Zanini, Congedo, Jutten, Said, & Berthoumieu, 2018; Congedo, Barachant, & Bhatia, 2017). Barachant et al. (2012) and Zanini et al. (2018) proposed Riemannian geometry-based multiclass MI classification in which the signal covariances are used as the features of interest via the minimum distance to Riemannian mean (MDRM) on the Riemannian manifold. Although the methodology in Barachant et al. (2012) gives a good CA for some subjects, other subjects suffer from a high misclassification rate. Zanini et al. (2018) extended the work in Barachant et al. (2012) to improve the accuracy of cross-session and cross-subject validation, proposing a new affine transformation to improve accuracy when a model trained on one subject is used on another. For every session, the calculated covariance matrices are affine-transformed to center them with respect to a reference covariance matrix. Although this improves accuracy, it requires a predefined interval at the beginning of every session to find the affine transformation, which may not be feasible in real-time use.

Dong, Zhu, and Chen (2017) proposed four-class MI classification with 22-channel EEG signals between 3 and 24 Hz using CSP and a relevance vector machine with a combination of gaussian and Cauchy kernels. Although the performance is improved, these methods did not improve the CA of all subjects. Gaur, Pachori, Wang, and Prasad (2018) proposed a multivariate empirical mode decomposition-based filtering along with Riemannian geometry to improve the CA, but the performance analysis includes one-versus-one classification.

The systems we have noted define the classification methodology for several MI tasks. However, in this letter, for the first time to the best of our knowledge, we also define the system architecture covering end-to-end classification and real-time application, wherein users, even paralyzed patients, control IoT appliances using MI tasks. In Jagadish, Kiran, and Rajalakshmi (2017), we proposed BCE using eye blink activity. We now extend this to four-class MI, thereby incorporating more controls than in previous work.

Compared to existing studies, we propose three improvements in this letter. The first is the development of an end-to-end system architecture that uses a novel MI classification based on RRM with LSVM to control IoT-enabled environments. Second, we propose a novel persistent decision-based classification framework that significantly improves the CA and reduces false alarms in the BCE. Third, we implement the proposed architecture in real time using a Raspberry Pi 3 along with a Virgo EEG machine. To the best of our knowledge, this is the first study analyzing the performance of an H4.0 framework considering all of these advances.

### 2.2  Proposed Framework for Health 4.0 Using Low-Complex Brain-Controlled IoT Environments

Figure 1 shows the proposed framework for H4.0, comprising the novel architecture for BCE. The framework is divided into two stages. The first is a wearable EEG aggregation and processing device that aggregates 22-channel EEG data from the scalp of the subject, with montages described in sections 2.2.2 and 2.2.3. After aggregation, the features for detecting four-class MI are extracted using RRM and classified using LSVM. Depending on the MI task identified, the decision engine then generates the necessary actuation, which is communicated to the IoT environment controller over wireless communication. Depending on the actuation received at the IoT environment controller, the corresponding appliance state is toggled.

Figure 1:

Proposed novel H4.0 framework using low-complex BCE.


In this letter, we consider four MI tasks (left hand, feet, tongue, and right hand) to control four appliances (fan, HVAC, TV, and light, respectively), as shown in the architecture. We introduce an additional class, the no motor imagery task (NoMIT), to reduce the misclassification rate: when the persistent decision engine classifies a segment as NoMIT, no actuation is triggered to the IoT controller.
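The task-to-appliance mapping and the toggle behavior described above can be sketched as follows; the dictionary keys, function name, and state representation are illustrative assumptions, not the letter's implementation:

```python
# MI class label -> controlled appliance; NoMIT maps to no appliance.
# All names and the state representation are illustrative assumptions.
TASK_TO_APPLIANCE = {
    "left_hand": "fan",
    "feet": "hvac",
    "tongue": "tv",
    "right_hand": "light",
}

def actuate(task, states):
    """Toggle the appliance mapped to the classified MI task.

    `states` maps appliance name -> bool (True = on). A NoMIT (or any
    unmapped) classification triggers nothing, leaving all states
    unchanged.
    """
    appliance = TASK_TO_APPLIANCE.get(task)
    if appliance is not None:
        states[appliance] = not states[appliance]
    return states
```

Because the controller only toggles state, repeated detections of the same task alternate the appliance between on and off, matching the on-to-off/off-to-on behavior described in section 1.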

#### 2.2.1  Data Acquisition and Denoising

The performance of the proposed RRM framework is analyzed using two data sets: (1) in-house recorded MI EEG data (data set I), used to generate the model for the real-time implementation of the proposed BCE architecture, and (2) the four-class MI data set 2a of BCI Competition IV (Brunner et al., 2008), which we refer to as data set II, used to compare the performance of the proposed framework with that of existing studies. These two data sets consist of 22-channel EEG information; the electrode positions used to record the MI signals for this study are shown in Figure 1. The acquired 22-channel analog EEG signals are digitized using a high-precision ADC with a sampling frequency of 250 Hz. The maximum MI EEG information is present in the frequency band between 8 Hz and 30 Hz (Lu et al., 2017). Hence, to eliminate low-frequency noise and power-line noise and to whiten the EEG signals, we denoise both data sets using a fifth-order Butterworth bandpass filter with a passband of 8 Hz to 30 Hz.
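The denoising step can be sketched with SciPy; the zero-phase `filtfilt` filtering direction is an assumption for offline analysis, since the letter specifies only the filter order and passband:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # sampling frequency (Hz) used for both data sets

def denoise(eeg):
    """Fifth-order Butterworth bandpass (8-30 Hz) applied per channel.

    `eeg` has shape (channels, samples). Zero-phase filtering with
    filtfilt is an assumption; the letter does not specify the
    filtering direction.
    """
    b, a = butter(N=5, Wn=[8.0, 30.0], btype="bandpass", fs=FS)
    return filtfilt(b, a, eeg, axis=-1)

rng = np.random.default_rng(0)
raw = rng.standard_normal((22, 2 * int(FS)))  # 22 channels, 2 s of data
clean = denoise(raw)                          # same shape, band-limited
```

A 15 Hz component (inside the passband) survives the filter nearly unattenuated, while a 50 Hz power-line component (outside it) is strongly suppressed.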

#### 2.2.2  Description of Data Set I: In-House Recorded MI EEG Data

Fourteen healthy subjects (seven men and seven women with an average age of 23.07 years) participated in the MI data collection, and all subjects signed the consent form before participating. This study was approved by the Ethics Committee of Indian Institute of Technology Hyderabad, India. Monopolar EEG signals were recorded in one session using a commercially available 40-channel VIRGO EEG machine (see Table 1). The montages used for the MI data collection were ($Fp1$, $Fp2$, $F7$, $F3$, $Fz$, $F4$, $F8$, $C3$, $Cz$, $C4$, $P3$, $Pz$, $P4$, $T3$, $T4$, $T5$, $T6$, $O1$, $Oz$, $O2$, $A1$, $A2$).

The experimental paradigm used for MI data collection is shown in Figure 2. The subjects were seated in a comfortable armchair in front of a computer screen. At the beginning of each trial, the subject was in IDLE mode for 3 s. We guided the user to perform one MI task (imagining moving the right hand, left hand, feet, or tongue) for a duration of 3 s after the fixation cross disappeared. For every subject, we collected 240 such trials, with 60 trials for each MI task. Figure 3 shows a subject in the experimental setup before performing the MI task. The wrapper in Figure 3 is a headband used to hold all the electrodes tightly during data collection. To analyze the performance of the proposed algorithms, we considered the MI data between 5.5 s and 7.5 s of each trial and did not consider the $Fp1$ and $Fp2$ data.
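The 5.5–7.5 s analysis window reduces to a simple index computation at 250 Hz. This minimal sketch assumes trials stored as channels × samples arrays; the function name, variable names, and the 8 s trial length are illustrative assumptions:

```python
import numpy as np

FS = 250                 # sampling rate (Hz)
T0, T1 = 5.5, 7.5        # analysis window within each trial (s)

def extract_window(trial):
    """Return the 5.5-7.5 s segment of one trial (channels x samples)."""
    return trial[:, int(T0 * FS):int(T1 * FS)]

# Mock trial: 20 channels (the 22 recorded montages minus Fp1 and Fp2)
# over 8 s; the exact trial length here is an illustrative assumption.
trial = np.zeros((20, 8 * FS))
segment = extract_window(trial)   # 500 samples = 2 s at 250 Hz
```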

Figure 2:

Experimental paradigm of the in-house recorded MI EEG data.


Figure 3:

Subject preparation before the experiment.


#### 2.2.3  Description of Data Set II: Four-Class MI Data Set 2a of BCI Competition IV

Data set II was collected from nine subjects performing the same four types of MI task as in data set I: imagining movement of the right hand, left hand, feet, and tongue. Data set II was acquired using 22 leads ($Fz$, $FC3$, $FC1$, $FCz$, $FC2$, $FC4$, $C5$, $C3$, $C1$, $Cz$, $C2$, $C4$, $C6$, $CP3$, $CP1$, $CPz$, $CP2$, $CP4$, $P1$, $Pz$, $P2$, $POz$) and comprises two sessions of EEG data: session 1, the training data set, and session 2, the evaluation data set. The EEG data from these two sessions were used to train and validate the proposed algorithms, as performed in Zanini et al. (2018). In each session, every subject completed 288 trials, with 72 trials for each MI class. From the analyses performed in prior studies, subjects S1, S3, S7, S8, and S9 obtained moderate CAs ($>$50%) and thus performed reasonably well, while subjects S2, S4, S5, and S6 delivered poor performance. Hereafter, without loss of generality, we refer to subjects who delivered reasonable performance as “good subjects” and the others as “bad subjects.”

### 2.3  Proposed RRM-Based Feature Extraction and LSVM Classifier

Although the methodologies proposed in Barachant et al. (2012) and Zanini et al. (2018) perform with reasonable accuracy, a few subjects still had poor CA. The likely reason is the strong effect of outliers and noise on the location of the Riemannian mean. Hence, to reduce the impact of noise and outliers on the performance of the classifier, we propose a novel regularized Riemannian mean (RRM)-based feature extraction (FE) approach, which improves the CA for most subjects. While determining the Riemannian mean for the four MI classes, we perform regularization, which makes the feature extraction robust against outliers. After calculating the distances to the Riemannian means of all four classes, we employ an LSVM for classification (Cortes & Vapnik, 1995).

#### 2.3.1  A Primer to Riemannian Manifold

Riemannian manifolds are well studied in the literature (Barachant et al., 2012; Zanini et al., 2018; Congedo et al., 2017; Gaur et al., 2018; Moakher, 2005). In this letter, we emphasize the important properties of the Riemannian manifold used for developing the RRM. (We refer to Barachant et al., 2012, and Moakher, 2005, for a more detailed description of the Riemannian manifold.) A manifold is a nonlinear structure that maps the high-dimensional original data to an efficient low-dimensional feature space while maintaining important properties such as geometry and topology. Riemannian manifolds are smooth manifolds equipped with Riemannian metrics, which allow one to measure geometric quantities such as distance and angle.

Let $M$ be the Riemannian manifold and $S(n) = \{S \in M(n) : S^T = S\}$ be the vector space of all $n \times n$ symmetric matrices in the space of square matrices $M(n)$. Let $P(n) = \{P \in S(n) : u^T P u > 0, \ \forall u \in \mathbb{R}^n \setminus \{0\}\}$ be the set of all $n \times n$ symmetric positive definite (SPD) matrices. $P(n)$ is of the most importance when analyzing the MI data because the spatial covariance matrix (SCM) of the 22-channel EEG data belongs to the space $P(n)$. Figure 4 shows the Riemannian manifold $M$ with the SPD points $P$ and $P_i$.

Figure 4:

Riemannian manifold ($M$) comprising the SPD point $P$ with its corresponding tangent space $TP$ and tangent vector $Si$. $Γi(t)$ represents the geodesic between points $P$ and $Pi$.


The following properties of SPD matrices in the Riemannian manifold are essential to this study:

1. The matrix exponential of $P$, computed using the eigenvalue decomposition, is
$P = U \,\mathrm{diag}(\sigma_1, \ldots, \sigma_n)\, U^T,$
(2.1)
$\exp(P) = U \,\mathrm{diag}(\exp(\sigma_1), \ldots, \exp(\sigma_n))\, U^T,$
(2.2)
where $\sigma_1 > \sigma_2 > \cdots > \sigma_n > 0$ are the eigenvalues of $P$ and $U$ is the matrix of its eigenvectors.
2. The logarithm of the SPD matrix $P$ (the inverse of the exponential given in equation 2.2) is given as
$\log(P) = U \,\mathrm{diag}(\log(\sigma_1), \ldots, \log(\sigma_n))\, U^T.$
(2.3)
3. The following are additional properties of SPD matrices in the space $P(n)$:
$\forall P \in P(n), \ \det(P) > 0,$
(2.4)
$\forall P \in P(n), \ P^{-1} \in P(n),$
(2.5)
$\forall P \in P(n), \ \log(P) \in S(n),$
(2.6)
$\forall S \in S(n), \ \exp(S) \in P(n).$
(2.7)

The spatial covariance matrix space $P(n)$ is a differentiable Riemannian manifold $M$ with nonpositive curvature (Förstner & Moonen, 2003). Hence, for every point $P \in P(n)$, there exists a tangent space $T_P$ that lies in the space $S(n)$. The inner product $\langle \cdot, \cdot \rangle$ and the norm, which vary smoothly over $P(n)$, are defined as
$\langle S_1, S_2 \rangle_P = \mathrm{tr}(S_1 P^{-1} S_2 P^{-1}),$
(2.8)
$\|S\|_P^2 = \langle S, S \rangle_P = \mathrm{tr}(S P^{-1} S P^{-1}).$
(2.9)
$\Gamma_i(t)$ in Figure 4 is known as the geodesic and is the shortest path between two points in the space $P(n)$. In Figure 4, $\Gamma_i(0) = P$ and $\Gamma_i(1) = P_i$, while the length of the geodesic, also known as the Riemannian geodesic distance, is given by
$\delta_R(P, P_i) = \|\log(P^{-1} P_i)\|_F = \left[ \sum_{j=1}^{n} \log^2 \lambda_j \right]^{1/2},$
(2.10)
where $\lambda_j$ denotes the $j$th eigenvalue of $P^{-1} P_i$.
(For more insight into equation 2.10, refer to Moakher, 2005.) From Figure 4, one can also observe that the vector $S_i$ is the first derivative of $\Gamma_i(t)$ between $P$ and $P_i$ at $t = 0$. Hence, the relation between $P_i$ and $S_i$ can be defined as
$\mathrm{Exp}_P(S_i) = P_i = P^{1/2} \exp\!\left(P^{-1/2} S_i P^{-1/2}\right) P^{1/2},$
(2.11)
$\mathrm{Log}_P(P_i) = S_i = P^{1/2} \log\!\left(P^{-1/2} P_i P^{-1/2}\right) P^{1/2}.$
(2.12)
Now, using the Riemannian geodesic distance defined in equation 2.10, the Riemannian mean of the SPD matrices can be calculated as
$G_s^c(P_1, \ldots, P_I) = \arg\min_{P \in P(n)} \sum_{i=1}^{I} \delta_R^2(P, P_i),$
(2.13)
where $I$ represents the total number of trials present in a single class $c$ for a single subject $s$. The Riemannian mean obtained using equation 2.13 is sensitive to the noise and outliers present in the SCMs. Hence, to minimize their effect on the calculation of the Riemannian mean, we introduce regularization.
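A minimal sketch of the geodesic distance (equation 2.10) and the Riemannian mean (equation 2.13) follows. The letter does not state which solver it uses for equation 2.13; the fixed-point iteration below, which averages tangent vectors at the current estimate via equations 2.11 and 2.12, is one standard choice and an assumption here:

```python
import numpy as np
from scipy.linalg import eigh, eigvalsh

def riemannian_distance(P, Pi):
    """Eq. 2.10: sqrt(sum_j log^2(lambda_j)), with lambda_j the
    eigenvalues of P^{-1} Pi (posed as a generalized eigenproblem)."""
    lam = eigvalsh(Pi, P)                 # solves Pi v = lambda P v
    return np.sqrt(np.sum(np.log(lam) ** 2))

def _powm(P, alpha):
    """P^alpha for SPD P via eigendecomposition (cf. eqs. 2.1-2.3)."""
    sigma, U = eigh(P)
    return (U * sigma ** alpha) @ U.T

def _logm(P):
    sigma, U = eigh(P)
    return (U * np.log(sigma)) @ U.T

def _expm(S):
    sigma, U = eigh(S)
    return (U * np.exp(sigma)) @ U.T

def riemannian_mean(mats, iters=50, tol=1e-9):
    """Eq. 2.13 by fixed-point iteration: project the SPD points onto
    the tangent space at the current estimate (eq. 2.12), average
    there, and map back with the exponential map (eq. 2.11)."""
    G = np.mean(mats, axis=0)             # arithmetic mean as a start
    for _ in range(iters):
        G_half, G_ihalf = _powm(G, 0.5), _powm(G, -0.5)
        T = np.mean([_logm(G_ihalf @ P @ G_ihalf) for P in mats], axis=0)
        G = G_half @ _expm(T) @ G_half
        if np.linalg.norm(T) < tol:       # converged: mean tangent ~ 0
            break
    return G
```

For commuting (e.g., diagonal) matrices, the result reduces to the elementwise geometric mean, which gives a quick sanity check on the iteration.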

#### 2.3.2  Proposed Feature Extraction Using the RRM Approach

Equation 2.14 describes the approach adhered to for calculating the regularized Riemannian mean of a single class $c$ for a target subject $s$,
$\tilde{G}_s^c = (1 - \beta)\, s_c\, G_s^c + \beta\, G^c,$
(2.14)
where $\beta$ is the regularization factor and $G^c$ is a generic covariance matrix. $G^c$ is built using a weighted sum of the SCMs of the other subjects, deemphasizing covariance matrices estimated from fewer trials. Table 2 summarizes the notation used in the proposed RRM framework. Equations 2.15 and 2.16 provide the methodology for determining $G^c$ and $s_c$:
$G^c = \sum_{i \in S_s} \sum_{t=1}^{N_c^i} \frac{N_c^i}{N_c}\, C_{t,c}^i,$
(2.15)
$s_c = \frac{N_c^s}{N_c}.$
(2.16)

Because the regularization aims at improving the robustness of feature extraction in the presence of noise and outliers, it yields a less noisy estimate of the Riemannian mean than using only the information present in the target subject.
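Equations 2.14 to 2.16 can be sketched directly; the function signature and the representation of the other subjects' trials are illustrative assumptions:

```python
import numpy as np

def regularized_mean(G_s, other_subjects, N_c_s, beta):
    """Eq. 2.14: G~ = (1 - beta) * s_c * G_s + beta * G_c.

    `G_s` is the target subject's Riemannian mean for class c
    (eq. 2.13); `other_subjects` is a list of (trial_SCMs, N_c_i)
    pairs for the subjects in S_s. The generic matrix G_c weights each
    subject's trials by N_c_i / N_c (eq. 2.15), deemphasizing subjects
    with fewer trials, and s_c = N_c_s / N_c (eq. 2.16).
    """
    N_c = N_c_s + sum(N for _, N in other_subjects)
    G_c = sum((N / N_c) * C
              for trials, N in other_subjects
              for C in trials)
    s_c = N_c_s / N_c
    return (1.0 - beta) * s_c * G_s + beta * G_c
```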

Table 1:
Technical Specifications of the VIRGO EEG Data Acquisition System.

| Feature | Value |
| --- | --- |
| Number of channels | 40 |
| Sampling rate | 1024/256 Hz |
| ADC resolution | 16-bit |
| Sensitivity | 1--1000 $\mu$V/mm |
| Sweep speed | 7.5, 15, 30, 60 mm/sec |
| Common mode rejection ratio (CMRR) | 100 dB |
| Input impedance | >100 M$\Omega$ |
Table 2:
Summary of Primary Notations Used for the RRM Framework.

| Parameter | Description |
| --- | --- |
| $s$ | Target subject |
| $c$ | MI class of target subject $s$ |
| $\beta$ | Regularization factor |
| $G_s^c$ | Riemannian mean of the SPD matrices of target subject $s$ for class $c$ |
| $G^c$ | Generic covariance matrix |
| $\tilde{G}_s^c$ | Regularized Riemannian mean of target subject $s$ for class $c$ |
| $S_s$ | Set of all subjects except target subject $s$ |
| $N_c$ | Total number of trials of all subjects, including the target subject, for class $c$ |
| $N_c^s$ | Total number of trials of the target subject for class $c$ |
| $N_c^i$ | Total number of trials of subject $i$ for class $c$ |
| $C_{t,c}^i$ | Spatial covariance matrix of trial $t$ for class $c$ and subject $i$ |

#### 2.3.3  Estimation of $β$⁠, Feature Extraction Using RRM, and Classification Using LSVM

In general, the optimal $β$ used for regularization varies from subject to subject and takes values in the interval [0, 1]. To calculate the optimal $β$ that improves the CA, we adhere to a simple parameter search approach. Data set I comprises 14 subjects (seven male, seven female), and every subject performed 240 trials (60 trials for each MI task). The data of data set I are divided into a training and validation set using a 10-fold cross-validation (CV) procedure. Again, the training set is further divided into subtraining and subtesting data using a 10-fold CV procedure. The subtraining and subtesting sets are used for determining the optimal $β$ for the respective subject, and the process is explained using algorithm 1.

Initially, for a single $\beta$, the RRM matrices for all four classes are determined using the subtraining set ($subTrainingSet$) in algorithm 1. Then, for each trial in the subtraining set, the Riemannian distances to all four RRM matrices are calculated and used as a single feature vector ($featureVector$). After all the training features ($trainFeatures$) are calculated, the LSVM is trained. Using the trained LSVM and RRM matrices, we analyze performance on the subtesting set. We repeat this process for all values of $\beta$ and take the $\beta$ with the highest accuracy as optimal for the target subject. After obtaining the optimal $\beta$, we perform validation on the validation set as described in algorithm 2. The performance of the proposed RRM framework is also analyzed on BCI Competition IV data set 2a (data set II) with a similar procedure, as explained in algorithms 1 and 2.
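The $\beta$ search in algorithm 1 can be sketched as below. The `class_mean` placeholder shrinks an arithmetic mean toward the identity so the example runs standalone; the letter's method uses the RRM of equations 2.13 to 2.16 instead, and all function names here are illustrative:

```python
import numpy as np
from scipy.linalg import eigvalsh
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def riemannian_distance(P, Q):
    """Eq. 2.10 via the generalized eigenvalues of P^{-1} Q."""
    return np.sqrt(np.sum(np.log(eigvalsh(Q, P)) ** 2))

def class_mean(covs, beta):
    """Stand-in for the RRM: arithmetic mean shrunk toward identity."""
    G = np.mean(covs, axis=0)
    return (1 - beta) * G + beta * np.eye(G.shape[-1])

def features(covs, means):
    """One distance per class mean for every trial SCM."""
    return np.array([[riemannian_distance(G, C) for G in means]
                     for C in covs])

def score_beta(covs, labels, beta, n_splits=5):
    """Mean held-out LSVM accuracy for one candidate beta."""
    accs = []
    for tr, te in StratifiedKFold(n_splits).split(covs, labels):
        means = [class_mean(covs[tr][labels[tr] == c], beta)
                 for c in np.unique(labels)]
        clf = SVC(kernel="linear")
        clf.fit(features(covs[tr], means), labels[tr])
        accs.append(clf.score(features(covs[te], means), labels[te]))
    return float(np.mean(accs))

# The optimal beta is then the grid point with the highest score, e.g.:
# best_beta = max(np.linspace(0, 1, 11),
#                 key=lambda b: score_beta(covs, labels, b))
```

With four classes, each trial is described by only four distances, which is what keeps the downstream LSVM low-complex.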

## 3  Results

### 3.1  Performance Analysis of the Proposed Novel Low-Complex Classifier without the Persistent Decision Engine Algorithm

In this section, we analyze the performance of the proposed novel low-complex classifier without the PDE algorithm in terms of CA and kappa value, using data sets I and II. Row 1 of Table 3 shows the CAs for data set I using the RRM classification framework. From Table 3, it can be observed that the proposed RRM framework achieves an overall classification accuracy of 74.30% when validated using data set I.

Table 3:
Classification Accuracy of Data Set I Using the Proposed RRM Framework and the RRM Framework with the Persistent Decision Engine Algorithm, Using a 10-Fold CV Procedure.

| Method | $S1$ | $S2$ | $S3$ | $S4$ | $S5$ | $S6$ | $S7$ | $S8$ | $S9$ | $S10$ | $S11$ | $S12$ | $S13$ | $S14$ | Mean CA (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Proposed RRM framework without a persistent decision engine | 47.39 | 82.72 | 87.50 | 43.33 | 60.83 | 74.34 | 89.58 | 81.25 | 74.16 | 82.50 | 97.08 | 69.58 | 82.08 | 67.91 | 74.30 |
| Proposed RRM framework with a persistent decision engine algorithm (for 20 s MI task) | 40.50 | 98.50 | 99.50 | 39.00 | 70.50 | 93.00 | 99.50 | 94.50 | 93.50 | 98.50 | 100.00 | 85.50 | 90.00 | 84.00 | 84.75 |
| Proposed RRM framework with persistent decision engine algorithm (on 12 subjects) | – | 98.50 | 99.50 | – | 70.50 | 93.00 | 99.50 | 94.50 | 93.50 | 98.50 | 100.00 | 85.50 | 90.00 | 84.00 | 92.25 |

The performance of the proposed RRM framework is also analyzed using data set II. Table 4 compares the proposed RRM framework with other studies (Dong, Zhu et al., 2017; Zanini et al., 2018; Yang, Sakhavi, Ang, & Guan, 2015; Lawhern et al., 2018; Schirrmeister et al., 2017). In Table 4, SESN-1 and SESN-2 represent the session 1 and session 2 data sets of data set II. Zanini et al. (2018) developed an affine transformation along with the minimum distance to mean (MDM) and a mixture-of-gaussians classifier to improve the CAs. Dong, Zhu et al. (2017) considered a kernel-based mechanism to increase the MI CAs. Yang et al. (2015) and Lawhern et al. (2018) obtained four-class MI mean CAs of 69.27% and 69.00% using convolutional neural networks (CNNs), and the FBCSP approach reported in Schirrmeister et al. (2017) obtained 68.00%. The proposed RRM algorithm outperforms these previous methods, achieving a mean CA of 70.24% for the evaluation data set (SESN-2) of data set II, as can be observed from the corresponding row of Table 4.

Table 4:
Performance Comparison of the Proposed RRM Framework with Existing Studies When Validated Using BCI Competition IV Data Set 2a (Training Data Set SESN-1 $+$ Evaluation Data Set SESN-2) with a 10-Fold CV Procedure.

| Method | Session | $S1$ | $S2$ | $S3$ | $S4$ | $S5$ | $S6$ | $S7$ | $S8$ | $S9$ | Mean CA (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.5 $×$ Gaussian-K $+$ 0.5 $×$ Cauchy-K (Dong, Zhu et al., 2017) (5-fold CV) | SESN-1 CAs | 71.43 | 50.36 | 78.21 | 54.64 | 41.43 | 45.00 | 75.71 | 85.00 | 75.71 | 64.17 |
| | SESN-2 CAs | 71.43 | 46.07 | 82.50 | 62.50 | 51.79 | 46.07 | 79.64 | 83.21 | 85.00 | 67.58 |
| | Mean(SESN-1, SESN-2) | 71.43 | 48.21 | 80.35 | 58.57 | 46.61 | 45.53 | 77.67 | 84.10 | 80.35 | 65.87 |
| Gaussian mixture Bayesian model (without affine transformation) | Mean(SESN-1, SESN-2) | 73.10 | 40.10 | 73.80 | 46.40 | 32.70 | 42.90 | 69.80 | 73.10 | 77.30 | 58.80 |
| Gaussian mixture Bayesian model (with affine transformation) (Zanini et al., 2018) | Mean(SESN-1, SESN-2) | 79.60 | 50.60 | 81.50 | 51.60 | 43.20 | 44.60 | 75.20 | 82.10 | 81.80 | 65.58 |
| CNN (5-fold CV) (Yang et al., 2015) | SESN-2 CAs | 77.15 | 49.82 | 80.41 | 53.88 | 65.47 | 48.70 | 81.37 | 84.39 | 82.29 | 69.27 |
| FBCSP (Schirrmeister et al., 2017) | SESN-2 CAs | – | – | – | – | – | – | – | – | – | 68.00 |
| EEGNet (4-fold CV) (Lawhern et al., 2018) | SESN-2 CAs | – | – | – | – | – | – | – | – | – | 69.00 |
| Proposed RRM framework without persistent decision engine | SESN-1 CAs | 77.92 | 59.58 | 81.25 | 52.50 | 40.42 | 47.62 | 78.16 | 80.00 | 67.33 | 64.98 |
| | SESN-2 CAs | 74.05 | 60.42 | 82.56 | 67.50 | 45.12 | 50.00 | 86.67 | 80.83 | 85.00 | 70.24 |
| | Mean(SESN-1, SESN-2) | 75.99 | 60.00 | 81.90 | 60.00 | 42.77 | 48.81 | 82.42 | 80.42 | 76.17 | 67.60 |
| Proposed RRM framework with persistent decision engine (for 20 s MI task) | Mean(SESN-1, SESN-2) | 89.25 | 58.75 | 95.50 | 54.25 | 19.00 | 31.75 | 96.00 | 94.75 | 89.25 | 69.83 |
| Proposed RRM framework with persistent decision engine (on seven subjects, for 20 s MI task) | Mean(SESN-1, SESN-2) | 89.25 | 58.75 | 95.50 | 54.25 | – | – | 96.00 | 94.75 | 89.25 | 82.54 |

Row 6 of Table 4 provides the subject-wise CAs of the proposed RRM classification framework in comparison with Dong, Zhu et al. (2017), Zanini et al. (2018), and Yang et al. (2015) (rows 1, 2, and 3, respectively). Zanini et al. (2018) referred to subjects whose noisy EEG data led to reduced CAs as "bad subjects" and to the remaining subjects as "good subjects." We adopt this terminology when analyzing the performance of the proposed RRM framework.

On average, the proposed RRM classification framework achieves a mean CA of 67.60% when validated using data set II (two sessions: SESN-1 and SESN-2), thereby outperforming both Zanini et al. (2018) and Dong, Zhu et al. (2017). In terms of subject-wise performance, the proposed RRM framework offers better CAs for five subjects (S2, S3, S4, S6, and S7) than Zanini et al. (2018) and Dong, Zhu et al. (2017). Compared with Yang et al. (2015), the proposed RRM algorithm improves the accuracy of every subject except S1, S5, and S8, and achieves comparable CAs for S1 and S8. Yang et al. (2015) employed eight frequency-band levels and extracted a large number of features for four-class MI classification, whereas the proposed RRM algorithm provides only four discriminative features from a single frequency band and classifies four-class MI using a single SVM classifier, thereby achieving lower complexity. The regularization in the proposed RRM framework improves the CAs of the bad subjects while maintaining those of the good subjects. To the best of our knowledge, although many state-of-the-art methods for classifying four-class MI improved overall accuracy, none improved the accuracy of every subject. Moreover, one major drawback of Zanini et al.'s (2018) method is that it requires calculating an affine-transformed covariance matrix for every subject in every session of data collection, which makes it impractical to use in real time. The main advantage of the proposed RRM algorithm is that it can be implemented in real time and classifies the performed MI task with minimal latency.
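
The feature extraction step underlying the RRM framework can be sketched as follows (a minimal illustration; the function names are ours, not from the letter). Each trial's spatial covariance matrix is mapped to four discriminative features, its affine-invariant Riemannian distances to the four class-mean matrices, which are then fed to the linear SVM.

```python
import numpy as np

def riemannian_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B:
    d(A, B) = sqrt(sum_i log(lambda_i)^2), where lambda_i are the
    eigenvalues of A^{-1} B (real and positive for SPD inputs)."""
    w = np.real(np.linalg.eigvals(np.linalg.solve(A, B)))
    return np.sqrt(np.sum(np.log(w) ** 2))

def rrm_features(trial_cov, class_means):
    """Four-element feature vector: distances from one trial's covariance
    matrix to the regularized Riemannian mean of each MI class."""
    return np.array([riemannian_distance(M, trial_cov) for M in class_means])
```

A well-separated subject then yields trials whose smallest distance consistently points to the true class, which is what the feature discrimination analysis in section 3.1.2 visualizes.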

#### 3.1.1  Performance Comparison of the Proposed RRM Framework Using Kappa Coefficient

The performance of the proposed RRM framework is also evaluated in terms of kappa values and compared with the studies in Table 5. From Table 5, it can be observed that the proposed RRM framework performs well when compared with the BCI Competition results (Ang et al., 2008; Guangquan, Gan, & Xiangyang, 2008; Barachant et al., 2012). The proposed algorithm obtained a close mean kappa value when compared with the methods in Jafarifarmand et al. (2018) and Gaur et al. (2018). Gaur et al. (2018) determined the relevant MI information in offline mode for every subject in the preprocessing stage (hence, the frequency range is varied for every subject) and proposed a subject-specific multivariate empirical mode decomposition-based filtering method (SS-MEMDBF) for MI classification. Hence, the SS-MEMDBF method cannot be used as a generalized model for multiclass MI classification in real time.
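
For reference, the kappa coefficient reported in Table 5 can be computed from a confusion matrix as follows (a generic sketch of Cohen's kappa, not code from the letter):

```python
import numpy as np

def cohens_kappa(conf):
    """Cohen's kappa from a confusion matrix (rows: true class,
    columns: predicted class): kappa = (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is chance agreement."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    p_o = np.trace(conf) / n
    p_e = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

A perfect four-class classifier gives kappa = 1, while chance-level performance gives kappa near 0, which is why kappa is preferred over raw CA when comparing multiclass MI methods.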

Table 5: Performance Comparison of the Proposed RRM Framework with Existing Studies When Validated Using SESN-2 Data of BCI Competition IV Data Set 2a in Terms of Kappa Coefficient.

| Method | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 | S9 | Mean |
|---|---|---|---|---|---|---|---|---|---|---|
| Ang et al. (2008) | 0.69 | 0.34 | 0.71 | 0.44 | 0.16 | 0.21 | 0.66 | 0.73 | 0.69 | 0.52 |
| Guangquan et al. (2008) | 0.68 | 0.42 | 0.75 | 0.48 | 0.40 | 0.27 | 0.77 | 0.75 | 0.61 | 0.57 |
| Barachant et al. (2012) (30-fold CV) | 0.75 | 0.37 | 0.66 | 0.53 | 0.29 | 0.27 | 0.56 | 0.58 | 0.68 | 0.52 |
| Gaur et al. (2018) (5-fold CV) | 0.86 | 0.24 | 0.70 | 0.68 | 0.36 | 0.34 | 0.66 | 0.75 | 0.82 | 0.60 |
| Jafarifarmand et al. (2018) method 1 | 0.79 | 0.42 | 0.79 | 0.63 | 0.42 | 0.34 | 0.84 | 0.65 | 0.78 | 0.63 |
| Jafarifarmand et al. (2018) method 2 | 0.76 | 0.42 | 0.78 | 0.47 | 0.33 | 0.33 | 0.66 | 0.79 | 0.75 | 0.588 |
| Jafarifarmand et al. (2018) method 3 | 0.70 | 0.39 | 0.71 | 0.60 | 0.35 | 0.35 | 0.65 | 0.80 | 0.75 | 0.589 |
| Proposed RRM framework (10-fold CV) | 0.65 | 0.47 | 0.77 | 0.57 | 0.27 | 0.33 | 0.82 | 0.74 | 0.80 | 0.603 |

Jafarifarmand et al. (2018) proposed three methods based on an AR-BCSP $+$ SRSG-FasArt framework for multiclass MI classification. Although method 1 of Jafarifarmand et al. (2018) yields a slightly better kappa value than the proposed algorithm, its computational complexity is higher than that of the proposed RRM framework: their methods extract features using CSP and classify the four MI classes using three fuzzy-logic classifiers, whereas the proposed RRM framework extracts four discriminative features and uses only one SVM classifier. Moreover, the main advantage of the RRM algorithm lies in its feasibility for real-time implementation, with minimal latency in deciding on the performed MI task.

#### 3.1.2  Feature Discrimination (FD) Analysis of the Proposed RRM Framework

The Riemannian distances computed using the proposed RRM framework are the features used for the classification. To examine the contribution of these features in more detail, we analyzed the performance of the proposed framework using features extracted from 6-, 10-, 14- (15 for data set II), and 20- (22 for data set II) channel sampled data. Data set I subjects S1 and S11 and data set II subjects S4 and S7 are considered for this analysis. The electrode positions used to evaluate the performance of the proposed framework are shown in Figures 5 and 6.

Figure 5:

Data set I electrode positions considered for feature discrimination analysis.

Figure 6:

Data set II electrode positions considered for feature discrimination analysis.

The mean value of all the features (i.e., the Riemannian distances to the four regularized Riemannian mean matrices of the four MI classes, averaged over all trials of subject S11 of data set I) is shown in Figure 7. From Figure 7, it is observed that the mean Riemannian distances to the four classes become increasingly distinguishable as the number of channels increases. Figure 7d shows the variation among the features when 20 channels are used; the features are much more distinguishable than when fewer electrodes are used. Because subject S11 of data set I has highly distinguishable features with 20 channels, a maximum classification accuracy of 97.08% is obtained with the proposed framework (see row 1 of Table 3).

Figure 7:

Feature discrimination of subject S11 of data set I with an increased number of channels.

Subject S7 of data set II shows similar behavior when the number of channels is increased (see Figure 8). Hence, subject S7 of data set II obtained a better CA, 82.42% (see row 6 of Table 4), for 22-channel sampled data.

Figure 8:

Feature discrimination of subject S7 of data set II with an increased number of channels.

We also evaluated the performance of subject S1 of data set I and subject S4 of data set II. The feature discrimination analysis results for these subjects are shown in Figures 9 and 10. From Figure 9, it is observed that increasing the number of channels did not improve the separability of the features much. Hence, subject S1 of data set I has a low CA of 47.39% (see row 1 of Table 3). Subject S4 of data set II shows moderate variation in the mean feature value as the number of channels increases (see Figure 10), and accordingly obtained a moderate accuracy of 60.00% (see row 6 of Table 4). These observations indicate that the CAs depend on the cognitive state of the user, the electrode placement, and subject-wise performance.

Figure 9:

Feature discrimination of subject S1 of data set I with an increased number of channels.

Figure 10:

Feature discrimination of subject S4 of data set II with an increased number of channels.

### 3.2  Persistent Decision Engine for False Alarm Reduction

From row 1 of Table 3, one can observe that the average CA obtained for data set I using the proposed low-complex classifier is 74.30%. In a realistic scenario in which the proposed architecture is used by elderly or disabled persons, this accuracy creates an unpleasant user experience: because of the high misclassification rate (25.70%), the system frequently toggles the state of unintended devices. Hence, we propose a persistent decision-based classification framework that improves confidence in decision making and thereby reduces false alarms. To achieve this, we asked the users to perform the same MI task for an extended duration (varying from 4 s to 20 s) rather than 2 s. The EEG data collected over this extended duration (which we refer to as a cycle hereafter) are divided into multiple trials of 2 s each, and each trial, rather than the cycle as a whole, is classified using the RRM$+$LSVM classifier.

Using all the classified outputs, we apply a threshold-based decision to generate the control action. For example, if a user performs right-hand MI for a cycle of 20 s, the cycle is divided into 10 trials, each of which can be classified into any of the four available classes. Because of the moderate accuracy of the RRM-based classifier, not every trial will be classified as the true label (here, right hand); some trials will be classified as other classes, such as left hand, feet, or tongue. Hence, from the classifier outputs of the 10 trials, we determine the majority class. If the majority exceeds the threshold, the entire cycle is labeled with the majority class and the state of the respective appliance is toggled. If it does not, the entire cycle is discarded and labeled "NoMIT" (no motor imagery task); such cycles are also accounted for when analyzing the CA. The advantage of this mechanism is that a decision is made only when confidence is high, allowing the user to control the intended appliance; otherwise, no appliance is controlled, thereby reducing false alarms.
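
The decision rule just described can be sketched as follows (an illustrative implementation; the function name is ours):

```python
from collections import Counter

def persistent_decision(trial_labels, threshold=0.6):
    """Persistent decision engine: accept the majority class of the
    per-trial classifier outputs only when its share of the cycle's
    trials reaches the threshold; otherwise discard the cycle as
    'NoMIT' (no motor imagery task) and take no control action."""
    majority_class, count = Counter(trial_labels).most_common(1)[0]
    if count / len(trial_labels) >= threshold:
        return majority_class
    return "NoMIT"
```

For a 20 s cycle (10 trials) and a 60% threshold, at least 6 of the 10 trial labels must agree before the corresponding appliance is toggled.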

Figure 11 plots the performance of the proposed PDE algorithm for the data set I subjects with a cycle duration of 20 s, varying the threshold from 60% (at least 6 of the 10 output labels must belong to the majority class) to 100% (all 10 trials must be classified as the same class). For this analysis, we considered only the right-hand MI performed by all subjects of data set I except S1 and S4. The trial data for the 20 s duration are generated by randomly concatenating 10 right-hand MI trials of every subject. With a threshold of 60%, most subjects perform well and false alarms are reduced significantly; the average CA obtained for the right-hand MI is 94.33% (see Figure 11).

Figure 11:

Performance of data set I (except S1 and S4) using the proposed RRM framework with the PDE algorithm for the right-hand MI with a cycle duration of 20 s.

The performance of the proposed PDE algorithm is also tested with the seven subjects of data set II. Figure 12 shows the performance of the proposed PDE framework with a cycle duration of 20 s for the feet MI task. The mean CA of feet MI obtained with a threshold of 60% over 10 trials is 87.57%. When the threshold is increased to 90%, the CA reduces for subjects S4 and S8, as the chance of correctly classifying at least 9 trials from all the 10 feet trials is very low.

Figure 12:

Performance of the subjects of data set II (except S5 and S6) using the proposed RRM framework with the PDE algorithm for the feet MI with a cycle duration of 20 s.

To understand the significance of the considered cycle duration (20 s), we analyzed the performance of the proposed PDE algorithm for five cycle durations ranging from 4 s to 20 s in steps of 4 s. Although the cycle duration varies, the decision model remains the same and adheres to the majority-and-threshold approach. Hence, for cycle durations of 4 s (2 trials), 8 s (4 trials), 12 s (6 trials), 16 s (8 trials), and 20 s (10 trials), at least 2, 3, 4, 5, and 6 trials, respectively, must agree for the cycle to be classified into one of the four MI tasks; otherwise, the entire cycle is discarded and labeled NoMIT. Figures 13a to 13d show the performance of the proposed PDE algorithm on data sets I and II for the four MI tasks. From Figure 13, it can be observed that the accuracy of the four MI tasks (left hand, right hand, feet, and tongue) increases with cycle duration for both data sets, with the maximum accuracy obtained at the 20 s duration for all four MI classes.
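
For the cycle durations used here, the threshold rule reduces to requiring a strict majority of the trials, which can be expressed compactly (our formulation of the rule stated above):

```python
def min_majority(n_trials):
    """Minimum number of agreeing trials needed to classify a cycle,
    for the even trial counts used here (2, 4, 6, 8, 10 trials);
    cycles falling short are discarded and labeled NoMIT."""
    return n_trials // 2 + 1
```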

Figure 13:

Performance of the proposed RRM-based PDE algorithm for data set I (except subjects S1 and S4) and data set II (except subjects S5 and S6) with the increase in cycle time—4 s (2 trials) to 20 s (10 trials)—for four MI classes in terms of CA.

The overall mean classification accuracy of the four MI classes of data sets I (except subjects S1 and S4) and II (except subjects S5 and S6) using the PDE algorithm, for cycle durations from 4 s (2 trials) to 20 s (10 trials), is shown in Figure 14. The highest overall CA is obtained with a cycle duration of 20 s for both data sets. The performance analysis shows that a few subjects achieved high CA with a short MI interval. We strongly believe that, given the current state-of-the-art CAs, a slight increase in latency in exchange for higher classification accuracy will improve the user experience. The analysis also shows that the CAs depend on the cognitive state of the user, the electrode placement, and subject-wise performance. From row 1 of Table 3 and rows 1, 2, and 6 of Table 4, it is observed that data set I subjects S1 and S4 and data set II subjects S5 and S6 delivered poor performance. The EEG data of these subjects carry little cognitive MI information and hence yield low CA ($<$50%) with the proposed RRM framework; a similar trend is observed in the existing studies. The proposed PDE algorithm works on the principle of majority classification, so when the CA of a bad subject is below 50%, the overall CA degrades further with the use of the PDE. Therefore, we consider all the subjects of data set I except S1 and S4 and of data set II except S5 and S6 when analyzing the overall performance. For the good subjects, whose CA is greater than 50%, the false alarm reduction method improves the CA significantly: a mean CA of 92.25% is achieved for data set I (except S1 and S4) and 82.54% for data set II (except S5 and S6).

Figure 14:

Mean CA of four MI classes of data sets I (except S1 and S4) and II (except S5 and S6) using the proposed RRM-based PDE algorithm.

The mean confusion matrices (MCMs) of data sets I and II with the proposed RRM framework and the RRM-based PDE algorithm are shown in Tables 6 and 7. Because of the limited number of trials for the four MI tasks in the two data sets, we used five-fold cross-validation to analyze the performance of the RRM-based PDE algorithm. From panel C of Table 6, one can observe that the highest CA of 95.50% is obtained for the left-hand MI of data set I. Using the same algorithm, the feet MI of data set II achieved a maximum accuracy of 87.57% (see panel C of Table 7). The proposed algorithms are implemented in Matlab running on an Intel i5 processor with 16 GB RAM and a 2.8 GHz clock. The LSVM classifier is implemented using the libsvm library in Matlab.

Table 6: Mean Confusion Matrix (MCM) of Data Set I (In-House Recorded Motor Imagery Data).

A. MCM of data set I for the RRM framework without the persistent decision engine algorithm:

| True \ Predicted | L Hand | R Hand | Feet | Tongue |
|---|---|---|---|---|
| L hand | 76.25 | 15.38 | 4.89 | 3.48 |
| R hand | 11.93 | 75.88 | 8.11 | 4.08 |
| Feet | 3.81 | 8.22 | 71.17 | 16.80 |
| Tongue | 3.36 | 4.17 | 18.56 | 73.91 |

B. MCM of data set I with the RRM framework using the persistent decision engine algorithm (20 s cycle):

| True \ Predicted | L Hand | R Hand | Feet | Tongue | NoMIT |
|---|---|---|---|---|---|
| L hand | 86.29 | 0.28 | 0.00 | 0.00 | 13.43 |
| R hand | 0.43 | 87.14 | 0.00 | 0.00 | 12.43 |
| Feet | 0.00 | 0.29 | 80.00 | 1.00 | 18.71 |
| Tongue | 0.00 | 0.00 | 1.42 | 85.58 | 13.00 |

C. MCM of data set I (except subjects S1 and S4) with the RRM framework using the persistent decision engine algorithm (20 s cycle):

| True \ Predicted | L Hand | R Hand | Feet | Tongue | NoMIT |
|---|---|---|---|---|---|
| L hand | 95.50 | 0.17 | 0.00 | 0.00 | 4.33 |
| R hand | 0.17 | 94.33 | 0.00 | 0.00 | 5.50 |
| Feet | 0.00 | 0.34 | 87.33 | 1.00 | 11.33 |
| Tongue | 0.00 | 0.00 | 1.50 | 91.83 | 6.67 |
Table 7: Mean Confusion Matrix (MCM) of Data Set II (BCI Competition IV Four-Class Motor Imagery Data Set 2a).

A. MCM of data set II for the RRM framework without the persistent decision engine algorithm:

| True \ Predicted | L Hand | R Hand | Feet | Tongue |
|---|---|---|---|---|
| L hand | 65.84 | 17.20 | 9.37 | 7.60 |
| R hand | 15.47 | 65.08 | 10.75 | 8.70 |
| Feet | 8.99 | 9.08 | 69.45 | 12.48 |
| Tongue | 6.83 | 8.71 | 14.40 | 70.06 |

B. MCM of data set II with the RRM framework using the persistent decision engine algorithm (20 s cycle):

| True \ Predicted | L Hand | R Hand | Feet | Tongue | NoMIT |
|---|---|---|---|---|---|
| L hand | 66.34 | 1.78 | 0.44 | 0.00 | 31.44 |
| R hand | 1.89 | 65.33 | 1.00 | 0.00 | 31.78 |
| Feet | 0.11 | 0.67 | 73.56 | 0.44 | 25.22 |
| Tongue | 0.22 | 0.11 | 0.89 | 74.11 | 24.67 |

C. MCM of data set II (except subjects S5 and S6) with the RRM framework using the persistent decision engine algorithm (20 s cycle):

| True \ Predicted | L Hand | R Hand | Feet | Tongue | NoMIT |
|---|---|---|---|---|---|
| L hand | 78.29 | 2.00 | 0.43 | 0.00 | 19.28 |
| R hand | 1.71 | 80.71 | 1.14 | 0.00 | 16.43 |
| Feet | 0.00 | 0.86 | 87.57 | 0.57 | 11.00 |
| Tongue | 0.14 | 0.00 | 1.14 | 83.57 | 15.15 |

### 3.3  Real-Time Hardware Implementation of Proposed Architecture

The hardware prototype of the proposed RRM framework is built using a Raspberry Pi 3 Model B$+$ and the Virgo EEG data acquisition system. From the implementation, the average latency required to classify an MI task (2 s) performed by the user in real time using the proposed RRM framework is observed to be 57.667 ms; Table 8 breaks this down into the latencies of the main functional units of the framework. The incurred latency can be further reduced by implementing the proposed architecture on an ASIC (application-specific integrated circuit) platform for wearable EEG systems.

Table 8: Latency of the Proposed Low-Complex BCE Architecture in Real Time.

| Hardware Unit | Latency (ms) |
|---|---|
| Fifth-order BPF | 9.443 |
| SCM | 2.776 |
| Riemannian distance features | 43.016 |
| LSVM | 2.431 |
| Average latency | 57.667 |

#### 3.3.1  Online Data Acquisition and Real-Time Evaluation of Proposed Framework

The two offline data sets were used for model training, validation, and comparison of the proposed framework with existing studies. However, because data sets I and II have different channel configurations, the real-time implementation uses the model trained on data set I, whose channel configuration matches that of the Virgo EEG data acquisition system used in the implementation. In general, the duration required to perform an MI task depends strongly on the subject and usually extends up to 1 s. Hence, to classify the MI task reliably, the collected data should contain the entire task, which requires EEG data of at least 1 s duration. To avoid loss of information across successive windows, we considered a window duration of 2 s.
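
The windowing just described can be illustrated as follows (a sketch; the channel count, sampling rate, and array layout are our assumptions, as the letter does not fix them here):

```python
import numpy as np

def segment_stream(eeg, fs, win_s=2.0):
    """Split a continuous multichannel EEG stream (channels x samples)
    into non-overlapping windows of win_s seconds, each of which is
    classified independently by the RRM + LSVM pipeline."""
    step = int(win_s * fs)
    n_win = eeg.shape[1] // step
    return [eeg[:, i * step:(i + 1) * step] for i in range(n_win)]
```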

Figures 15a and 15b show a subject performing the MI task and the developed real-time prototype of the proposed low-complex BCE architecture, built with the Virgo EEG data acquisition system and the Raspberry Pi 3 Model B$+$. The Raspberry Pi aggregates the EEG data from the Virgo through a USART interface and classifies the performed MI task using the architecture shown in Figure 1. The average latencies of the individual operations are listed in Table 8; the resulting end-to-end average latency for decision making using the proposed framework is 57.667 ms. For the PDE algorithm, the low-complex classifier outputs are accumulated over multiple trials and fed to the PDE algorithm after 20 s. The output of the PDE algorithm is then transmitted to the receiver actuation system to toggle the desired appliance for the performed MI task.

Figure 15:

(a) A subject involved in the real-time experiment. (b) The real-time setup of proposed low-complex BCE architecture. (c) A developed array of TRIAC switches being controlled by the ESP 8266 at the receiver.

#### 3.3.2  IoT Gateway

The developed IoT gateway is shown in Figure 15c. On the receiver side, we developed a TRIAC array controlled by the NodeMCU based on the decision obtained from the proposed framework. After the decision is made, the Raspberry Pi communicates the control decision to the IoT gateway over Wi-Fi, and the NodeMCU toggles the state of the electrical appliance associated with the MI task performed by the user.
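
A minimal sketch of the sender-side command handling (the class-to-channel mapping and one-byte message format are purely our assumptions for illustration; the letter does not specify the wire protocol between the Raspberry Pi and the NodeMCU):

```python
# Hypothetical mapping from MI class to TRIAC channel index.
MI_TO_CHANNEL = {"left_hand": 0, "right_hand": 1, "feet": 2, "tongue": 3}

def encode_toggle(mi_class):
    """Encode a classified MI task as a one-byte toggle command for the
    TRIAC channel assumed to be associated with it; NoMIT cycles
    (discarded by the persistent decision engine) produce no command."""
    if mi_class == "NoMIT":
        return None
    return bytes([MI_TO_CHANNEL[mi_class]])
```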

## 4  Conclusion

In this letter, we have developed a novel real-time implementable Health 4.0 architecture for brain-controlled IoT-enabled environments. We also proposed a low-complex, persistent decision engine–based four-class MI classifier using RRM and LSVM. We considered in-house recorded MI EEG data (data set I, 14 subjects) and the four-class MI data set 2a from BCI Competition IV (data set II, 9 subjects) for analyzing the performance of the proposed system architecture. The proposed RRM framework obtained a mean CA of 74.30% for data set I and 67.60% for data set II. Compared with existing studies on data set II, the proposed low-complex classifier achieved better accuracy for five subjects. To further improve the accuracy of the proposed RRM framework, we developed a persistent decision engine algorithm that uses probabilistic inferences to improve the CA; with it, a mean CA of 92.25% is obtained for 12 subjects of data set I and 82.54% for 7 subjects of data set II. We also analyzed the performance of the proposed architecture in real time using a Raspberry Pi 3 Model B$+$. The hardware implementation results show that the proposed architecture incurs an average latency of 57.667 ms to classify the performed MI task. In future work, we would like to develop a more efficient framework that raises the classification accuracy of every subject above 50%, and to develop novel feature extraction and classification methods for four-class MI using a minimal set of electrodes.

## Acknowledgments

This work is partly supported by Visvesvaraya PhD Scheme, Media Lab Asia, MeitY, Govt. of India and partly funded by Indian Institute of Technology Hyderabad.

## References

Ang
,
K. K.
,
Chin
,
Z. Y.
,
Wang
,
C.
,
Guan
,
C.
,
Zhang
,
H.
,
Phua
,
K. S.
, …
Tee
,
K. P.
(
2008
).
BCI Competition IV Dataset 2A Results
. www.bbci.de/competition/iv/results/index.html#dataset2a
Angel
,
J. L.
,
Vega
,
W.
, &
López-Ortega
,
M.
(
2017
).
Aging in Mexico: Population trends and emerging issues
.
Gerontologist
,
57
(
2
),
153
162
.
Barachant
,
A.
,
Bonnet
,
S.
,
Congedo
,
M.
, &
Jutten
,
C.
(
2012
).
Multiclass brain–computer interface classification by Riemannian geometry
.
IEEE Transactions on Biomedical Engineering
,
59
(
4
),
920
928
.
Bhattacharyya
,
S.
,
Konar
,
A.
, &
Tibarewala
,
D.
(
2017
).
Motor imagery and error related potential induced position control of a robotic arm
.
IEEE/CAA Journal of Automatica Sinica
,
4
(
4
),
639
650
.
Blankertz
,
B.
,
Tomioka
,
R.
,
Lemm
,
S.
,
Kawanabe
,
M.
, &
Müller
,
K.-R.
(
2008
).
Optimizing spatial filters for robust EEG single-trial analysis
,
IEEE Signal Processing Magazine
,
25
(
1
),
41
56
.
Brunner
,
C.
,
Leeb
,
R.
,
Müller-Putz
,
G.
,
Schlögl
,
A.
, &
Pfurtscheller
,
G.
(
2008
).
BCI Competition 2008–Graz Data Set A
.
Graz
:
Institute for Knowledge Discovery (Laboratory of Brain-Computer Interfaces), Graz University of Technology
.
Congedo
,
M.
,
Barachant
,
A.
, &
Bhatia
,
R.
(
2017
).
Riemannian geometry for EEG-based brain-computer interfaces: A primer and a review
,
Brain-Computer Interfaces
,
4
(
3
),
155
174
.
Cortes
,
C.
, &
Vapnik
,
V.
(
1995
).
Support-vector networks
,
Machine Learning
,
20
(
3
),
273
297
.
Da Xu
,
L.
,
He
,
W.
, &
Li
,
S.
(
2014
).
Internet of things in industries: A survey
.
IEEE Transactions on Industrial Informatics
,
10
(
4
),
2233
2243
.
Dong
,
E.
,
Li
,
C.
,
Li
,
L.
,
Du
,
S.
,
Belkacem
,
A. N.
, &
Chen
,
C.
(
2017
).
Classification of multi-class motor imagery with a novel hierarchical SVM algorithm for brain–computer interfaces
.
Medical and Biological Engineering and Computing
,
55
(
10
),
1809
1818
.
Dong
,
E.
,
Zhu
,
G.
, &
Chen
,
C.
(
2017
). Classification of four categories of EEG signals based on relevance vector machine. In
Proceedings of the IEEE International Conference on Mechatronics and Automation
(pp.
1024
1029
).
Piscataway, NJ
:
IEEE
.
Förstner
,
W.
, &
Moonen
,
B.
(
2003
). A metric for covariance matrices. In
Geodesy: The challenge of the 3rd millennium
(pp.
299
309
).
Berlin
:
Springer
.
Gaur
,
P.
,
Pachori
,
R. B.
,
Wang
,
H.
, &
,
G.
(
2018
).
A multi-class EEG-based BCI classification using multivariate empirical mode decomposition based filtering and Riemannian geometry
.
Expert Systems with Applications
,
95
,
201
211
.
Ge
,
S.
,
Wang
,
R.
, &
Yu
,
D.
(
2014
).
Classification of four-class motor imagery employing single-channel electroencephalography
.
PloS One
,
9
(
6
),
e98019
.
Gope
,
P.
, &
Hwang
,
T.
(
2016
).
BSN-care: A secure IOT-based modern healthcare system using body sensor network
.
IEEE Sensors Journal
,
16
(
5
),
1368
1376
.
Grosse-Wentrup
,
M.
, &
Buss
,
M.
(
2008
).
Multiclass common spatial patterns and information theoretic feature extraction
.
IEEE transactions on Biomedical Engineering
,
55
(
8
),
1991
2000
.
Guangquan
,
L.
,
Gan
,
H.
, &
Xiangyang
,
Z.
(
2008
).
BCI Competition IV Dataset 2A Results
. www.bbci.de/competition/iv/results/index.html#dataset2a
Jafarifarmand
,
A.
,
,
M. A.
,
,
S.
,
Nazari
,
M. A.
, &
Tazehkand
,
B. M.
(
2018
).
A new self-regulated neuro-fuzzy framework for classification of EEG signals in motor imagery BCI
.
IEEE Transactions on Fuzzy Systems
,
26
(
3
),
1485
1497
.
,
B.
,
Kiran
,
M. P. R. S.
, &
Rajalakshmi
,
P.
(
2017
). A novel system architecture for brain controlled IoT enabled environments. In
Proceedings of the IEEE 19th International Conference on e-Health Networking, Applications and Services
(pp.
1
5
).
Piscataway, NJ
:
IEEE
.
Kiran
,
M. S.
,
Rajalakshmi
,
P.
,
,
K.
, &
Acharyya
,
A.
(
2014
). Adaptive rule engine based IoT enabled remote health care data acquisition and smart transmission system. In
Proceedings of the IEEE World Forum on the Internet of Things
(pp.
253
258
).
Piscataway, NJ
:
IEEE
.
Lawhern
,
V. J.
,
Solon
,
A. J.
,
Waytowich
,
N. R.
,
Gordon
,
S. M.
,
Hung
,
C. P.
, &
Lance
,
B. J.
(
2018
).
EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces
.
Journal of Neural Engineering
,
15
(
5
),
056013
.
Lu, N., Li, T., Ren, X., & Miao, H. (2017). A deep learning scheme for motor imagery classification based on restricted Boltzmann machines. *IEEE Transactions on Neural Systems and Rehabilitation Engineering*, 25(6), 566–576.

Miori, V., & Russo, D. (2017). Improving life quality for the elderly through the social internet of things (SIoT). In *Proceedings of the IEEE Global Internet of Things Summit* (pp. 1–6). Piscataway, NJ: IEEE.

Moakher, M. (2005). A differential geometric approach to the geometric mean of symmetric positive-definite matrices. *SIAM Journal on Matrix Analysis and Applications*, 26(3), 735–747.

Nicolas-Alonso, L. F., Corralejo, R., Gomez-Pilar, J., Álvarez, D., & Hornero, R. (2015). Adaptive stacked generalization for multiclass motor imagery–based brain computer interfaces. *IEEE Transactions on Neural Systems and Rehabilitation Engineering*, 23(4), 702–712.

Rebsamen, B., Guan, C., Zhang, H., Wang, C., Teo, C., Ang, M. H., & Burdet, E. (2010). A brain controlled wheelchair to navigate in familiar environments. *IEEE Transactions on Neural Systems and Rehabilitation Engineering*, 18(6), 590–598.

Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F., … Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. *Human Brain Mapping*, 38(11), 5391–5420.

Sousa, K. d. M., Oliveira, W. I. F. d., Melo, L. O. M. d., Alves, E. A., Piuvezam, G., & Gama, Z. A. d. S. (2017). A qualitative study analyzing access to physical rehabilitation for traffic accident victims with severe disability in Brazil. *Disability and Rehabilitation*, 39(6), 568–577.

Thuemmler, C., & Bai, C. (2017). *Health 4.0: How virtualization and big data are revolutionizing healthcare*. New York: Springer.

Wollschlaeger, M., Sauter, T., & Jasperneite, J. (2017). The future of industrial communication: Automation networks in the era of the internet of things and industry 4.0. *IEEE Industrial Electronics Magazine*, 11(1), 17–27.

World Bank. (2019). Disability inclusion. https://www.worldbank.org/en/topic/disability/

Yang, H., Sakhavi, S., Ang, K. K., & Guan, C. (2015). On the use of convolutional neural networks and augmented CSP features for multi-class motor imagery of EEG signals classification. In *Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society* (pp. 2620–2623). Piscataway, NJ: IEEE.

Zanini, P., Congedo, M., Jutten, C., Said, S., & Berthoumieu, Y. (2018). Transfer learning: A Riemannian geometry framework with applications to brain–computer interfaces. *IEEE Transactions on Biomedical Engineering*, 65(5), 1107–1116.