Abstract
1. Introduction
Wireless sensors and human-computer interaction have been applied in many fields, from medicine to the military, in recent years. The electroencephalogram (EEG) has become increasingly popular owing to its low cost and high precision. EEG measures electrical signals from the human scalp in real-time Brain Computer Interface (BCI) systems, and a common, effective feature extraction method for EEG-based secured wireless sensor networks is desired to facilitate BCI [1–7]. The human learning process is strongly influenced by emotional state: students who are attentive and experiencing pleasant feelings tend to produce positive results. Normally, teachers observe students' expressions in face-to-face communication, which helps them judge each student's current state. However, such observation is highly subjective and demands considerable effort from the teacher. Furthermore, students may participate in distance learning over the Internet, which makes it even harder to assess student attention remotely. The neurons of the human brain are continuously active and produce measurable electromagnetic wave patterns; these patterns are recorded as EEG signals by EEG-enabled machines.
EEG-based emotion research is a challenging field within brain sensing and signal processing, and several studies have explored human emotion through EEG signal patterns. Du et al. [8] examined different emotional responses in the alpha band using EEG power spectrum analysis, aiming to identify the physiological features of human brain wave patterns. They employed independent component analysis and time-frequency analysis to characterize EEG power dynamics, and found emotion-related EEG activity with the help of event-related spectral perturbation (ERSP). On the basis of the arousal-valence model, their ERSP maps showed high EEG activity in the lower alpha band for all selected emotions except the neutral state. Jirayucharoensak et al. [9] proposed a deep learning network (DLN) to discover hidden features in EEG signals, using a stacked autoencoder (SAE) for hierarchical feature learning. Power spectral densities of 32 EEG channels from 32 subjects were extracted as input features, and the DLN classified three levels of arousal and valence with accuracies of 46.03% and 49.52%, respectively. Jatupaiboon et al. [10] used EEG signals to detect happy and unhappy emotions evoked by emotional pictures and music, with Power Spectral Density (PSD) features and an SVM classifier; they reported average accuracies of approximately 75.62% for the subject-dependent model and 65.12% for the subject-independent model. Cho and Lee [11] presented a Brain Computer Interface based methodology for users of first-person shooter (FPS) games, a common video game genre focused on gun and projectile weapon combat from a single player's perspective.
FPS game designers strive to heighten players' interest through graphic effects, the interface, sound, and so on. Generally, EEG data arrive from secured sensors as a single wide frequency band (0.5 to 70 Hz). Previous studies [9–12] established that the desired results are difficult to obtain from this raw band because of noise artifacts and other unknown signal patterns. We therefore considered five frequency bands in our research: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–30 Hz), and gamma (30–50 Hz), also denoted δ, θ, α, β, and γ, respectively.
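The five-band decomposition described above can be sketched in Python. This is only an illustration under stated assumptions (SciPy band-pass filters and the paper's 500 Hz sampling rate), not the authors' implementation; the function and dictionary names are ours.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 500  # sampling rate used in the experiment (samples per second)

# Band edges in Hz, as listed in the text
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 50)}

def split_into_bands(signal, fs=FS, order=4):
    """Band-pass a raw 0.5-70 Hz EEG trace into the five classical bands.

    Second-order sections (SOS) keep the very-low-frequency delta filter
    numerically stable; sosfiltfilt gives zero-phase filtering.
    """
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, signal)
    return out
```

A 10 Hz test tone, for example, should survive the alpha filter almost unchanged while being strongly attenuated by the gamma filter.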
The goal of this study is to determine the maximum EEG channel strength in different frequency bands. The band in which the EEG channels are strongest is a promising candidate as the single most useful frequency wave for real-time EEG-based emotion recognition systems. We selected five emotions for this analysis: happy, calm, sad, scared, and neutral, and each EEG channel contains signal patterns for all five. We computed the signal regularity of every channel through the Hjorth parameters [13, 14] for each emotion separately. We then applied the balanced one-way ANOVA method to select the EEG channels that discriminate among emotions, and summed the selected channels into a channel strength for each frequency wave. Although previous work is closely related to this research area, our proposed method is novel in its feature selection over five emotions in brain signals; the studies in [9, 10] presented emotion recognition methods for three levels of arousal-valence and for happy-unhappy states, respectively. The research methodology is presented in Section 2, the results and discussion in Section 3, and the conclusion in Section 4.
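The per-channel analysis described above (Hjorth parameters per emotion, then a balanced one-way ANOVA to keep channels that discriminate among emotions) can be sketched as follows. This is a hedged illustration, not the authors' code: the array layout (one `(trials, channels)` feature matrix per emotion) and the function names are our assumptions.

```python
import numpy as np
from scipy.stats import f_oneway

def hjorth(x):
    """Return the Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx, ddx = np.diff(x), np.diff(x, n=2)
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    activity = var_x                                  # signal power
    mobility = np.sqrt(var_dx / var_x)                # mean frequency proxy
    complexity = np.sqrt(var_ddx / var_dx) / mobility # bandwidth proxy
    return activity, mobility, complexity

def discriminative_channels(feature_by_emotion, alpha=0.05):
    """feature_by_emotion: list of (n_trials, n_channels) arrays, one per
    emotion (balanced design).  Keep channels whose Hjorth-derived feature
    differs across emotions at significance level alpha (one-way ANOVA).
    """
    n_channels = feature_by_emotion[0].shape[1]
    keep = []
    for ch in range(n_channels):
        _, p = f_oneway(*[g[:, ch] for g in feature_by_emotion])
        if p < alpha:
            keep.append(ch)
    return keep
```

For a pure sinusoid the Hjorth complexity is close to 1, which is a convenient sanity check on the implementation.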
2. Materials and Methods
Thirty healthy males aged 23 to 25 years were recruited as subjects. All were right-handed and had good vision. All subjects were undergraduate students of the same institution and were informed about the purpose of this research; they were given a brief introduction to the study and the experimental procedure after completing consent forms.
2.1. EEG Sensors
The internationally recognized 10/20 electrode placement system was used to collect the EEG signals through sensors. The 10/20 system is based on the underlying areas of the cerebral cortex [16] and defines the relationship of the electrode points located on the scalp, as shown in Figure 1.

EEG channel distribution based on the 10/20 electrode placement system.
A Brain Vision amplifier (Brain Products, Germany) was used to record the EEG signals, with silver/silver-chloride (Ag/AgCl) electrodes mounted in the Easy Cap system. Eighteen electrodes (Fp1, Fp2, F3, F4, Fz, F7, F8, Cz, C3, C4, P3, P4, P7, P8, T7, T8, O1, and O2) were placed on the scalp, as shown in Figure 2. EEG data were recorded over 0.5–70 Hz at a sampling rate of 500 samples per second. Subjects were instructed to remain calm and not to blink or move their eyes during the recording.

The Brain Vision amplifier used for recording electrophysiological signals.
2.2. Experiment Setting
A common method of evoking distinct emotions in subjects is to present emotional pictures with corresponding content [17–20]. The experiment was designed to induce emotions within the valence-arousal space shown in Figure 3. The International Affective Picture System (IAPS) database, which is standardized on the arousal-valence model [21, 22], was used. Five affective states were selected to make a clear distinction among emotions: low arousal-low valence (LA_LV), low arousal-high valence (LA_HV), high arousal-high valence (HA_HV), high arousal-low valence (HA_LV), and middle arousal-middle valence (MA_MV).

The valence-arousal model [15] adopted in this study.
These states correspond to five emotions: happy, scared, calm, sad, and neutral. Guided by the arousal-valence model, we selected a total of 70 pictures from the IAPS database; the selected pictures are marked by black circles in Figure 4.

Scatter plot of International Affective Picture System (IAPS) images in the valence-arousal model; circles mark the images selected for our experiment.
Figure 5 shows the timeline of the presentation of the selected emotional images. The image slideshow was presented twice to each subject and lasted 296 seconds. The pictures were presented in random order for 4 seconds each, and a blurred image was shown for 4 seconds between consecutive emotional pictures. A fixation mark was presented for 8 seconds at the beginning and at the end of the experiment.

Timing diagram of emotional stimuli for each subject.
2.3. Data Preprocessing
EEGLAB is an open-source toolbox developed by the SCCN Lab [23]. It runs in the MATLAB environment and is used for both preprocessing and analysis of EEG signals, providing functions for data import, management of EEG channels and epochs, and visual analysis. In this phase of our analysis we performed artifact rejection [24], filtering [25], epoch selection, and averaging of the signals, with the artifact rejection method [24] applied using its specified parameters.
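The epoch selection and averaging steps can be sketched in plain NumPy. This is an illustration of the idea, not EEGLAB's implementation; the 4 s epoch length and 500 Hz sampling rate come from the experiment description, while the function name and array layout are our assumptions.

```python
import numpy as np

def epoch_and_average(data, onsets, fs=500, epoch_s=4.0):
    """Cut a fixed-length epoch from each stimulus onset and average them.

    data   : (n_channels, n_samples) continuous EEG
    onsets : sample indices at which a stimulus picture appeared
    returns: (n_channels, epoch_samples) averaged evoked response
    """
    n = int(round(epoch_s * fs))
    epochs = np.stack([data[:, s:s + n] for s in onsets])
    return epochs.mean(axis=0)
```

Averaging aligned epochs reinforces stimulus-locked activity while uncorrelated noise tends to cancel, which is why epoching precedes the per-channel feature computation.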

Artifact rejection of EEG signals.
2.4. Proposed Method for Selecting Prominent Frequency Wave
Emotions are subjective and vary from one subject to another; the same picture does not evoke an identical emotional response in every subject. We therefore analyzed 4-second segments of EEG data through a sliding window with different overlap sizes, considering the five frequency waves present in EEG signals: delta, theta, alpha, beta, and gamma.
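The sliding-window segmentation can be sketched as follows, with the window and overlap given in milliseconds to match the window/overlap pairs used in the results (e.g. a 400 ms window with 100 ms overlap). This helper is illustrative, not the authors' code.

```python
def sliding_windows(n_samples, fs=500, win_ms=400, overlap_ms=100):
    """Return (start, stop) sample-index pairs for overlapping windows
    over a recording of n_samples samples.

    The hop between consecutive windows is (window - overlap).
    """
    win = int(round(win_ms * fs / 1000))
    step = win - int(round(overlap_ms * fs / 1000))
    return [(s, s + win) for s in range(0, n_samples - win + 1, step)]
```

For one 4 s epoch at 500 Hz this yields, for example, 13 windows at 400/100 ms, 19 at 400/200 ms, and 15 at 500/250 ms.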
3. Results and Discussion
The aim of this research is to identify the prominent frequency wave for recognizing human emotion from secured sensors. Selecting the appropriate frequency is very important in real-time systems because of the limited time available for signal processing. The problem is that real-time EEG signals are encoded in a single frequency band contaminated by artifacts such as muscle activity, eye movements, and eye blinks. We therefore set out to explore the EEG signals and identify the useful frequency waves. In this section we present our results visually in Figures 7 and 8, with the five frequency waves listed in the legends. We selected three pairs of timing window and overlap size: [400, 200], [400, 100], and [500, 250] ms.

Visual analysis of the energy function of all frequency waves for the selected timing windows and overlap sizes.

Visual analysis of the energy function of all frequency waves for the selected timing windows and overlap sizes.
Next, we investigate our approach in more detail through visual analysis. Figures 9 to 14 show the strength of the EEG channels for each frequency wave and timing window. We processed the different cases to extract the maximum number of EEG channels, considering each combination of timing window and overlap size in turn.

Visual analysis of EEG channel strength for the selected frequency waves, with a timing window of 400 ms and an overlap size of 200 ms.

Visual analysis of EEG channel strength for the selected frequency waves, with a timing window of 400 ms and an overlap size of 100 ms.

Visual analysis of EEG channel strength for the selected frequency waves, with a timing window of 500 ms and an overlap size of 250 ms.

Visual analysis of EEG channel strength for the selected frequency waves, with a timing window of 400 ms and an overlap size of 200 ms.

Visual analysis of EEG channel strength for the selected frequency waves, with a timing window of 400 ms and an overlap size of 100 ms.

Visual analysis of EEG channel strength for the selected frequency waves, with a timing window of 500 ms and an overlap size of 250 ms.
Figures 9–14 present the detailed computations behind the results in Figures 7 and 8. Figure 9 shows the total number of selected EEG channels in the time and frequency domains. We then sum the selected EEG channels to obtain the total energy value of the corresponding frequency band: the energy value of an individual frequency band in each timing window is the sum over the selected EEG channels, as given in (5); the energy of the delta band, for example, was computed in this way.
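The original equation (5) was lost in extraction; from the verbal description (a band's energy is the sum of selected EEG channels over the timing windows), its form is plausibly the following. The symbols $N_f(w)$ and $W$ are our notation, not the paper's.

```latex
% Assumed reconstruction of Eq. (5):
% N_f(w) = number of ANOVA-selected EEG channels for band f in timing window w,
% W      = total number of timing windows in the epoch.
E_f = \sum_{w=1}^{W} N_f(w),
\qquad \text{e.g.} \qquad
E_{\delta} = \sum_{w=1}^{W} N_{\delta}(w).
```

Under this reading, a band whose channels pass the ANOVA test in many windows accumulates a high energy value, which is exactly how the delta band stands out in the figures.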
Figures 10 and 11 present the detailed computations behind Figure 7 for timing window and overlap sizes of 400/100 ms and 500/250 ms, respectively, while Figures 12, 13, and 14 present those behind Figure 8 for 400/200 ms, 400/100 ms, and 500/250 ms, respectively. In every case the energy value of an individual frequency band in each timing window is the sum over the selected EEG channels via (5), and the energy values of all five frequency bands (δ, θ, α, β, and γ) were computed in the same manner.
4. Conclusion
Our research applies time and frequency domain analysis to EEG data from controlled human subjects. We computed signal regularity for the viewed emotional conditions (neutral, sad, happy, calm, and scared) by applying the Hjorth parameters to each EEG channel separately, and then computed the energy function of all frequency waves over different timing windows and overlap sizes. We successfully identified a prominent frequency band that may help researchers recognize emotional behavior in human subjects. Specifically, all frequency bands showed a high number of selected EEG channels when the timing window and overlap size were 400 ms and 100 ms, respectively. From the results we conclude that the delta frequency wave has the highest energy value in most cases and can serve as a single frequency band in real-time systems. This is a good indication that a single frequency wave can suffice for emotion recognition in a real-time EEG system, and it motivates us to carry the prominent frequency wave selection method forward into emotion classification.
