A feasibility study: Application of brain-computer interface in augmentative and alternative communication for non-speaking individuals with neurodevelopmental disabilities

Institute on Community Integration, University of Minnesota

Maryam Mahmoudi, Ph.D., University of Minnesota

“Nonspeaking does not mean Nonthinking”

~Emily Grodin, I Have Been Buried Under Years of Dust: A Memoir of Autism and Hope


The goal of this project is to use an electroencephalogram (EEG)-based brain-computer interface (BCI) as an augmentative and alternative communication (AAC) method for nonspeaking individuals with neurodevelopmental disabilities (NDD).


For this protocol, 10 AAC images from different categories (e.g., fun activities, food, animals) will be developed based on the top 10 most frequently recommended items from our autism community members and our expert team (speech-language pathologist, neuroscientist, and psychologists). In each trial, 4 of these images will be randomly selected and presented on a monitor in front of the participant, with 4 LEDs placed at the four corners of the monitor (Figure 1). The LEDs will flicker at 8, 10, 12, and 15 Hz, respectively. An eye-tracker mounted on the monitor will record gaze position, so we can determine whether classification errors are related to where participants are, or are not, looking. Participants will be asked to select one of the 4 images on the monitor (the output command). To do this, they must pay attention to a visual cue (i.e., an arrow pointing at the target picture).

Each session will include 100 trials, and each picture will be presented 10 times. Each trial lasts 5 seconds: a 3-second picture presentation followed by a 2-second rest (black screen). Three sessions will be run in total, with a 5-minute break between sessions (Figure 1).
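As a rough illustration, the session structure can be sketched in a few lines of Python. The randomization scheme, seed, and all names below are our assumptions for illustration; only the counts and durations come from the protocol:

```python
import random

N_PICTURES = 10      # AAC images in the set
REPEATS = 10         # presentations per picture -> 100 trials per session
TRIAL_S = 5          # 3 s picture presentation + 2 s rest
SESSIONS = 3
BREAK_S = 5 * 60     # 5-minute break between sessions

def session_schedule(seed=None):
    """Randomized target order in which each picture appears REPEATS times."""
    schedule = [pic for pic in range(N_PICTURES) for _ in range(REPEATS)]
    random.Random(seed).shuffle(schedule)
    return schedule

trials = session_schedule(seed=1)
session_s = len(trials) * TRIAL_S                          # 500 s per session
total_s = SESSIONS * session_s + (SESSIONS - 1) * BREAK_S  # 2100 s overall
```

Under these numbers, a session takes about 8.3 minutes of stimulation, and the full experiment (including breaks) about 35 minutes.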

Figure 1: Schematic presentation of task

Figure 1: The monitor in front of each participant will display 4 images at a time, one near each corner. Each trial consists of a 3-second image presentation followed by a 2-second rest (black screen), for a total of 5 seconds per trial. The images will be selected and arranged randomly, so participants will not know which combination is coming next. In each corner of the monitor there is an LED (a small light) that flickers at a different frequency: the LED in the top left corner flickers at 8 Hz (meaning it turns on and off 8 times per second), the LED in the top right corner flickers at 10 Hz, the LED in the bottom left corner flickers at 12 Hz, and the LED in the bottom right corner flickers at 15 Hz.
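For illustration only, the on/off state of each corner LED at a given moment can be modeled as a 50% duty-cycle square wave. The corner-to-frequency mapping follows the caption; the function and variable names are ours:

```python
# Corner -> flicker frequency (Hz), matching the figure caption
CORNER_HZ = {"top-left": 8, "top-right": 10, "bottom-left": 12, "bottom-right": 15}

def led_on(t, freq):
    """True when an LED flickering at `freq` Hz (50% duty cycle) is lit at time t (seconds)."""
    return (t * freq) % 1.0 < 0.5

def led_states(t):
    """On/off state of all four corner LEDs at time t."""
    return {corner: led_on(t, hz) for corner, hz in CORNER_HZ.items()}
```

At t = 0 all four LEDs are in their "on" half-cycle; because the frequencies differ, the four on/off patterns quickly diverge, which is what lets the SSVEP response distinguish them.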

An 8-channel g.USBamp series B amplifier will acquire EEG data using the standard 10-20 electrode placement system. Data will be collected at a sampling rate of 256 Hz from the occipital and parietal areas. The right ear and Fpz will serve as the reference and ground electrodes, respectively (Figure 2). Brain signal patterns in response to the visual stimuli (10 pictures) will be detected using a steady-state visual evoked potential (SSVEP)-based BCI and processed offline with BCI2000.
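The SSVEP principle is that attending to a stimulus flickering at f Hz elevates spectral power at f over occipital electrodes. A minimal sketch of detecting the attended flicker frequency from one 3-second epoch follows; the synthetic signal, noise model, and all names are our assumptions, not part of the protocol:

```python
import numpy as np

FS = 256                       # sampling rate (Hz)
FLICKER_HZ = [8, 10, 12, 15]   # candidate LED frequencies

def dominant_flicker(epoch, fs=FS, candidates=FLICKER_HZ):
    """Return the candidate flicker frequency with the largest spectral amplitude."""
    freqs = np.fft.rfftfreq(len(epoch), 1 / fs)
    amp = np.abs(np.fft.rfft(epoch))
    # amplitude at the FFT bin nearest each candidate frequency
    scores = [amp[np.argmin(np.abs(freqs - f))] for f in candidates]
    return candidates[int(np.argmax(scores))]

# Synthetic 3-second "occipital" trace dominated by a 12 Hz SSVEP plus noise
t = np.arange(3 * FS) / FS
sig = np.sin(2 * np.pi * 12 * t) + 0.3 * np.random.default_rng(0).standard_normal(len(t))
```

A 3-second epoch at 256 Hz gives a frequency resolution of 1/3 Hz, so all four candidate frequencies fall on exact FFT bins.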

Figure 2: The location of EEG electrodes

Figure 2: An EEG cap connected to an 8-channel g.USBamp series B amplifier will be used to acquire data on the standard 10-20 system, with electrodes placed at specific scalp locations defined by that system. Data will be collected specifically from the occipital and parietal areas, located toward the back and top of the head, respectively. The right ear and Fpz (on the forehead, between the eyebrows) will serve as the reference and ground electrodes, respectively, to establish a stable baseline for the recordings.

EEG data will be preprocessed with baseline correction and appropriate bandpass filters applied offline. The data for each trial will then be extracted using triggers/synchronization pulses. The extracted signals of each trial will be analyzed with time, frequency, and time-frequency methods. Linear discriminant analysis (LDA) will be applied to classify the signal patterns and determine the output command, which will subsequently be translated into audio output presented via a phone app or computer. A general block diagram of the experimental setup and data analysis is illustrated in Figure 3.
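A simplified offline sketch of this pipeline (trigger-based epoch extraction, spectral features at the four flicker frequencies, and a minimal LDA classifier) applied to synthetic data is shown below. Everything here, including the hand-rolled LDA and the synthetic signals, is an illustrative assumption; the actual analysis will be carried out with BCI2000:

```python
import numpy as np

fs = 256                   # sampling rate (Hz)
epoch_len = 3 * fs         # 3-second picture-presentation window
flicker = [8, 10, 12, 15]  # LED frequencies (Hz)

def extract_epochs(signal, triggers, n=epoch_len):
    """Cut one epoch per trigger (sample index) out of a continuous 1-D signal."""
    return np.stack([signal[i:i + n] for i in triggers])

def band_power(epoch, f, bw=0.5):
    """Summed spectral amplitude within f +/- bw Hz."""
    freqs = np.fft.rfftfreq(len(epoch), 1 / fs)
    amp = np.abs(np.fft.rfft(epoch))
    return amp[(freqs >= f - bw) & (freqs <= f + bw)].sum()

def features(epoch):
    """One spectral feature per candidate flicker frequency."""
    return np.array([band_power(epoch, f) for f in flicker])

class LDA:
    """Minimal multi-class LDA: shared pooled covariance, equal priors."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
        centered = np.concatenate(
            [X[y == c] - m for c, m in zip(self.classes, self.means)])
        cov = centered.T @ centered / (len(X) - len(self.classes))
        self.icov = np.linalg.pinv(cov)
        return self

    def predict(self, X):
        # Linear discriminant score per class: x' S^-1 mu_k - 0.5 mu_k' S^-1 mu_k
        scores = X @ self.icov @ self.means.T - 0.5 * np.einsum(
            "ki,ij,kj->k", self.means, self.icov, self.means)
        return self.classes[np.argmax(scores, axis=1)]

# Synthetic demo: each trial is a sine at the attended LED's frequency plus noise.
rng = np.random.default_rng(0)
labels = np.tile([0, 1, 2, 3], 10)   # 40 trials, balanced classes
t = np.arange(epoch_len) / fs
trial_parts, triggers = [], []
for i, lab in enumerate(labels):
    trial_parts.append(np.sin(2 * np.pi * flicker[lab] * t)
                       + 0.5 * rng.standard_normal(epoch_len))
    triggers.append(i * epoch_len)
signal = np.concatenate(trial_parts)

X = np.array([features(e) for e in extract_epochs(signal, triggers)])
clf = LDA().fit(X[:32], labels[:32])
acc = (clf.predict(X[32:]) == labels[32:]).mean()
```

On clean synthetic SSVEPs like these the features are strongly separated, so the classifier is near-perfect; real EEG will be far noisier.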

Two well-known criteria will be used to validate the analysis: accuracy (Acc) and information transfer rate (ITR). Accuracy is calculated by dividing the number of correctly classified commands by the total number of classified commands. ITR is a general evaluation metric devised for BCI systems that quantifies the amount of information conveyed by a system's output. In bits per trial, ITR = log2(N) + P log2(P) + (1 - P) log2((1 - P)/(N - 1)), where N is the number of targets and P is the classification accuracy.
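The ITR can be computed directly from the standard Wolpaw formula, ITR = log2(N) + P log2(P) + (1 - P) log2((1 - P)/(N - 1)). A small helper (the function name is ours):

```python
import math

def itr_bits_per_trial(n_targets, accuracy):
    """Wolpaw information transfer rate, in bits per trial."""
    n, p = n_targets, accuracy
    if n < 2 or not 0.0 < p <= 1.0:
        raise ValueError("need at least 2 targets and 0 < accuracy <= 1")
    bits = math.log2(n)
    if p < 1.0:  # both accuracy-dependent terms vanish in the limit P -> 1
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits

# Perfect accuracy with 4 targets yields log2(4) = 2 bits per trial;
# chance-level accuracy (P = 1/N) yields 0 bits.
```

With 5-second trials (12 trials per minute), bits per minute is obtained by multiplying bits per trial by 12.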

Figure 3: General block diagram of the experimental setup and data analysis

Figure 3: During the experiment, a participant is seated in front of a monitor that displays four pictures simultaneously, with an LED light at each corner flickering to elicit distinct patterns of brain activity. Brain signals are recorded from EEG electrodes placed on the head while the participant looks at the monitor. The recorded EEG signals then pass through several processing stages: preprocessing to remove unwanted noise and artifacts, feature extraction to identify the patterns of brain activity associated with the target image, and classification to assign the signals to the different target categories.



We will recruit participants (N=15, ages 12-18) from disability-related communities and organizations in Minnesota. Participants may speak minimally or not be able to speak; for those with minimal speech, word counts will be assessed based on a guideline to define the level of speech. Inclusion criteria: participants should have a formal diagnosis of either autism or another neurodevelopmental disability. Those with secondary mild or moderate intellectual disability (ID), as well as those without ID, will be included. Participants should be willing to share their diagnostic files and should have a Peabody report. Participants should have normal or corrected-to-normal vision (not less than 20/40 on a Snellen test). Photosensitivity will be assessed before enrollment using the Visual Light Sensitivity Questionnaire-8. Exclusion criteria: participants without the formal diagnoses above, those with a history of epilepsy, those with metallic cranial implants, and those with the most significant levels of ID will be excluded. Once the IRB application is approved, recruitment will begin. Consent (parent) and assent (youth with NDD) letters will be provided to those who voluntarily declare their interest in the study.


The Peabody Picture Vocabulary Test, 5th Edition (PPVT-5): the score will be collected from each participant's diagnostic file to define the level of receptive language. Further, before each experiment, participants' comprehension of (receptive attention to) each experimental picture will be checked by asking them to point to each picture when its name is spoken. The number of distinct words produced by minimally speaking participants will be reported, giving us a phenotype of participants' current levels of receptive language. Further, the PPVT and the Social Communication Questionnaire (SCQ) will be analyzed as moderator variables to control for their effects on the experiment.


I would like to thank the following collaborators, mentors, and consultants:

Renata Ticha, Ph.D., University of Minnesota

Brian Abery, Ph.D., University of Minnesota

Vassilios Morellas, Ph.D., University of Minnesota

Eric Feczko, Ph.D., University of Minnesota

Kevin Pitt, Ph.D., University of Nebraska-Lincoln

Theresa Vaughan, B.A., NCAN, Stratton VA Medical Center

Oscar Miranda Dominguez, Ph.D., University of Minnesota

Jolene Hyppa Martin, Ph.D., University of Minnesota

I would also like to thank Connie Burkhart, graphic designer at the Institute on Community Integration (ICI), for helping to develop this accessible webpage and for designing the poster for the CNS conference.