Non-invasive magnetic brain-computer interface enables hand-gesture decoding using deep learning
May 20, 2023.
Deep-learning model combines spatial and temporal features
Researchers from the University of California San Diego have developed a way to distinguish among the hand gestures a person is making using noninvasive magnetoencephalography (MEG) brain imaging, a technique that could benefit people with paralysis, amputated limbs or other physical challenges.
The research, published in the journal Cerebral Cortex, represents the best results so far in distinguishing single-hand gestures using a non-invasive technique, according to Mingxiong Huang, PhD, co-director of the MEG Center at the Qualcomm Institute at UC San Diego.
MEG uses a helmet with an embedded 306-sensor array to detect the magnetic fields produced by electric currents moving between neurons in the brain. Existing brain-computer interface techniques are based on electrocorticography (ECoG), which requires surgical implantation of electrodes on the brain surface; or scalp electroencephalography (EEG), which locates brain activity less precisely.
Deep learning model
The study evaluated the ability to use MEG to distinguish among hand gestures made by 12 volunteer subjects. The volunteers wore the MEG helmet and were randomly instructed to make one of the gestures used in the game Rock Paper Scissors. MEG functional information was superimposed on MRI images, which provided structural information on the brain.
To interpret the data generated, Yifeng (“Troy”) Bu, an electrical and computer engineering PhD student in the UC San Diego Jacobs School of Engineering and first author of the paper, wrote a high-performing deep learning model called MEG-RPSnet.
“The special feature of this network is that it combines spatial and temporal features simultaneously,” said Bu. “That’s the main reason it works better than previous models.”
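To make that idea concrete, here is a minimal sketch in PyTorch of a network that fuses spatial and temporal features from MEG data. This is a hypothetical illustration, not the published MEG-RPSnet architecture: one branch convolves along the time axis of the 306-channel recording, another mixes across the sensor array, and the two feature vectors are concatenated before classification into the three Rock Paper Scissors gestures. The input shape of (batch, 306 sensors, 250 time samples) is an assumption made for the example.

```python
# Hypothetical sketch (not the published MEG-RPSnet): a minimal PyTorch model
# that combines spatial and temporal MEG features for gesture classification.
import torch
import torch.nn as nn

class SpatioTemporalMEGNet(nn.Module):
    def __init__(self, n_sensors=306, n_classes=3):
        super().__init__()
        # Temporal branch: 1-D convolution along the time axis to capture
        # waveform dynamics, then pooled to a fixed-size feature vector.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_sensors, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # -> (batch, 64, 1)
        )
        # Spatial branch: a linear map across the sensor array at each time
        # step to capture the spatial pattern of the magnetic field.
        self.spatial = nn.Sequential(
            nn.Linear(n_sensors, 64),
            nn.ReLU(),
        )
        # Classifier over the concatenated features: rock, paper, scissors.
        self.classifier = nn.Linear(64 + 64, n_classes)

    def forward(self, x):                      # x: (batch, sensors, time)
        t_feat = self.temporal(x).squeeze(-1)  # (batch, 64)
        s_feat = self.spatial(x.transpose(1, 2)).mean(dim=1)  # (batch, 64)
        return self.classifier(torch.cat([t_feat, s_feat], dim=1))

model = SpatioTemporalMEGNet()
logits = model(torch.randn(8, 306, 250))  # 8 trials -> (8, 3) class scores
```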
The researchers found that their techniques could be used to distinguish among hand gestures with more than 85% accuracy, comparable with an (invasive) ECoG brain-computer interface.
The team also found that MEG measurements from only half of the brain regions sampled could generate results with only a small (2–3%) loss of accuracy, indicating that future MEG helmets might require fewer sensors.
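As a rough illustration of that kind of sensor-reduction analysis (not the paper's actual region-selection procedure), one could re-evaluate a trained classifier after silencing half of the channels and compare accuracies. The snippet below reuses the hypothetical `SpatioTemporalMEGNet` sketched above; the trial data, labels, and channel split are placeholders.

```python
# Illustrative only: compare accuracy on the full 306-sensor input versus the
# same trials with half of the channels zeroed out.
import torch

model = SpatioTemporalMEGNet().eval()
x = torch.randn(100, 306, 250)    # 100 placeholder trials
y = torch.randint(0, 3, (100,))   # placeholder gesture labels

x_half = x.clone()
x_half[:, 153:, :] = 0.0          # silence the second half of the sensor array

with torch.no_grad():
    acc_full = (model(x).argmax(dim=1) == y).float().mean()
    acc_half = (model(x_half).argmax(dim=1) == y).float().mean()
print(f"full-array accuracy: {acc_full:.2%}, half-array: {acc_half:.2%}")
```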
The researchers are associated with the US Veterans Administration San Diego Healthcare System, UC San Diego School of Medicine and UC San Diego. The work was supported in part by Merit Review Grants from the US Department of Veterans Affairs, the Naval Medical Research Center's Advanced Medical Development program, and Congressionally Directed Medical Research Programs/Department of Defense.
Citation: Yifeng Bu et al. Magnetoencephalogram-based brain–computer interface for hand-gesture decoding using deep learning, 13 May 2023, Cerebral Cortex, https://doi.org/10.1093/cercor/bhad173