A feasibility study: Application of brain-computer interface in augmentative and alternative communication for non-speaking individuals with neurodevelopmental disabilities
Maryam Mahmoudi, Ph.D., University of Minnesota
“Nonspeaking does not mean Nonthinking”
~Emily Grodin, I Have Been Buried Under Years of Dust: A Memoir of Autism and Hope
The goal of this project is to use an electroencephalogram (EEG)-based brain-computer interface (BCI) in augmentative and alternative communication (AAC) as a communication method for nonspeaking individuals with neurodevelopmental disabilities (NDD).
For this protocol, 10 AAC images from different categories (e.g., fun activities, food, animals) will be developed based on the 10 most frequently recommended items from our autism community members and our expert team (speech pathologists, neuroscientists, and psychologists). In each trial, 4 of the images will be presented in random order on a monitor in front of the participant, with 4 LEDs placed at the four corners of the monitor (Figure 1). The LEDs will flicker at 8, 10, 12, and 15 Hz, respectively. An eye-tracker mounted on the monitor will record gaze position so we can determine whether classification errors arise from where participants are, or are not, looking. Participants will be asked to select one of the 4 images on the monitor (the output command). To do this, they must attend to a visual cue (i.e., an arrow pointing at the target picture).
Each session will include 100 trials, and each picture will be presented 10 times. Each trial lasts 5 seconds: 3 seconds of picture presentation followed by a 2-second rest (black screen). Three sessions will be run in total, with a 5-minute break between sessions (Figure 1).
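To make the timing concrete, the session structure above can be sketched as a randomized trial schedule. This is an illustrative assumption, not the finalized protocol: the image names, random seeding, and the pairing of on-screen positions with LED frequencies are all hypothetical.

```python
import random

# Hypothetical sketch of one session: 10 AAC images, 100 trials,
# 4 images shown per trial, each image serving as the cued target
# 10 times. Names and structure here are illustrative assumptions.
IMAGES = [f"image_{i}" for i in range(10)]
LED_FREQS_HZ = [8, 10, 12, 15]          # one flicker frequency per corner
PRESENT_S, REST_S = 3, 2                # 3 s stimulus + 2 s rest = 5 s/trial

def build_session(seed=0):
    rng = random.Random(seed)
    targets = IMAGES * 10               # each image is the target 10 times
    rng.shuffle(targets)                # 100 trials in random order
    trials = []
    for target in targets:
        distractors = rng.sample([im for im in IMAGES if im != target], 3)
        shown = [target] + distractors
        rng.shuffle(shown)              # randomize on-screen positions
        # each displayed image is paired with one corner LED frequency
        trials.append({"target": target,
                       "layout": dict(zip(LED_FREQS_HZ, shown)),
                       "duration_s": PRESENT_S + REST_S})
    return trials

session = build_session()
print(len(session), session[0]["duration_s"])  # 100 trials, 5 s each
```

A fixed seed makes the schedule reproducible across sessions, which simplifies later alignment of EEG triggers with the presented stimuli.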
An 8-channel g.USBamp (series B) amplifier will acquire data with electrodes placed according to the standard 10-20 system. Data will be collected at a sampling rate of 256 Hz from the occipital and parietal areas. The right earlobe and Fpz will serve as the reference and ground electrodes, respectively (Figure 2). Brain signal patterns in response to the visual stimuli (10 pictures) will be detected using a steady-state visual evoked potential (SSVEP)-based BCI and processed offline with BCI2000.
EEG data will be preprocessed with baseline correction and appropriate offline bandpass filters. The data for each trial will then be extracted using triggers/synchronization pulses. The extracted signals of each trial will be analyzed in the time, frequency, and time-frequency domains. Linear discriminant analysis (LDA) will be applied to classify the signal patterns and determine the output command, which will subsequently be translated into audio output presented via a phone app or computer. A general block diagram of the experimental setup and data analysis is illustrated in Figure 3.
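As an illustration of this analysis chain, the following minimal sketch (using SciPy and scikit-learn on synthetic data, rather than the project's actual BCI2000 pipeline) bandpass-filters 3-second epochs, extracts spectral power at the four flicker frequencies as features, and classifies with LDA. The filter band, feature choice, and simulated signal-to-noise level are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256  # sampling rate (Hz), matching the g.USBamp configuration

def bandpass(data, lo=5.0, hi=40.0, fs=FS, order=4):
    """Zero-phase bandpass over an assumed 5-40 Hz SSVEP band."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=-1)

def band_power_features(epoch, freqs=(8, 10, 12, 15), fs=FS):
    """Spectral power at each flicker frequency -> one feature per target."""
    n = epoch.shape[-1]
    spectrum = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    fft_freqs = np.fft.rfftfreq(n, d=1 / fs)
    idx = [np.argmin(np.abs(fft_freqs - f)) for f in freqs]
    return spectrum[..., idx].mean(axis=0)  # average over channels

# Illustrative run on synthetic 8-channel epochs (channels x samples):
rng = np.random.default_rng(0)
epochs, labels = [], []
for label, f in enumerate((8, 10, 12, 15)):
    for _ in range(20):
        t = np.arange(3 * FS) / FS                      # 3 s presentation
        ssvep = np.sin(2 * np.pi * f * t)               # simulated response
        epoch = rng.normal(size=(8, t.size)) + ssvep    # noise + SSVEP
        epochs.append(band_power_features(bandpass(epoch)))
        labels.append(label)

clf = LinearDiscriminantAnalysis().fit(epochs[::2], labels[::2])
acc = clf.score(epochs[1::2], labels[1::2])             # held-out accuracy
```

Because the flicker frequencies fall on exact FFT bins for a 3-second epoch at 256 Hz, the target frequency's power dominates the feature vector, and LDA separates the four classes cleanly on this synthetic data.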
Two well-known criteria will be measured to validate the analysis: accuracy (Acc) and information transfer rate (ITR). Accuracy is calculated by dividing the number of correctly classified commands by the total number of classified commands. ITR, a general evaluation metric devised for BCI systems, quantifies the amount of information conveyed by a system's output. In bits per trial, ITR = log2(N) + P log2(P) + (1 - P) log2[(1 - P)/(N - 1)], where N is the number of targets and P is the classification accuracy.
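A minimal sketch of the ITR calculation (the standard Wolpaw formula, in bits per trial, scaled to bits per minute using this protocol's 5-second trial length):

```python
import math

def wolpaw_itr(n_targets, p, trial_s=5.0):
    """Wolpaw information transfer rate in bits per minute.

    Bits per trial: log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    then scaled by the 5-second trial length used in this protocol.
    """
    if p <= 1.0 / n_targets:
        return 0.0                      # at or below chance: no information
    if p == 1.0:
        bits = math.log2(n_targets)
    else:
        bits = (math.log2(n_targets)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n_targets - 1)))
    return bits * 60.0 / trial_s        # bits per minute

# e.g. 4 targets at 90% accuracy:
print(round(wolpaw_itr(4, 0.90), 2))   # -> 16.47
```

Note that the formula assumes all targets are equally likely and all errors are equally distributed among the non-target classes; below-chance accuracy is clamped to zero bits.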
We will recruit participants (N=15, ages 12-18) from disability-related communities and organizations in Minnesota. Participants may be minimally speaking or nonspeaking. For minimally speaking participants, word counts will be assessed according to a guideline to define the level of speech. Inclusion criteria: participants must have a formal diagnosis of either autism or another neurodevelopmental disability. Those with secondary conditions of mild or moderate intellectual disability (ID), as well as those without ID, will be included. Participants must be willing to share their diagnostic files and must have a Peabody report. Participants must have normal or corrected-to-normal vision (no worse than 20/40 on a Snellen test). Photosensitivity will be assessed before enrollment with the Visual Light Sensitivity Questionnaire-8. Exclusion criteria: participants without the above formal diagnoses, those with a history of epilepsy, those with metallic cranial implants, and those with more significant levels of ID will be excluded. Once the IRB application is approved, recruitment will begin. Consent (parent) and assent (youth with NDD) forms will be provided to those who voluntarily express interest in the study.
The Peabody Picture Vocabulary Test, 5th Edition (PPVT-5) score will be collected from participants' diagnostic files to define the level of receptive language. In addition, before each experiment, participants' comprehension of (i.e., receptive attention to) each experiment picture will be checked by asking them to point to each picture when its name is spoken. The number of distinct words produced by minimally speaking participants will be reported. This will give us a phenotype of participants' current receptive language levels. Further, the PPVT-5 and the Social Communication Questionnaire (SCQ) will be analyzed as moderator variables to control for their effects on the experiment.
I would like to thank the following collaborators, mentors, and consultants:
Renata Ticha, Ph.D., University of Minnesota
Brian Abery, Ph. D., University of Minnesota
Vassilios Morellas, Ph.D., University of Minnesota
Eric Feczko, Ph.D., University of Minnesota
Kevin Pitt, Ph.D., University of Nebraska-Lincoln
Theresa Vaughan, B.A., NCAN, Stratton VA Medical Center
Oscar Miranda Dominguez, Ph.D., University of Minnesota
Jolene Hyppa Martin, Ph.D., University of Minnesota
I would also like to thank Connie Burkhart, graphic designer at the Institute on Community Integration (ICI), for helping to develop this accessible webpage and design the poster for the CNS conference.