
Non-invasive magnetic brain-computer interface enables hand-gesture decoding using deep learning

May 20, 2023.
2 min. read.

Deep-learning model combines spatial and temporal features

About the Writer

Amara Angelica


Amara Angelica is Senior Editor of Mindplex

MEG helmet (credit: MEG Center at UC San Diego Qualcomm Institute)

Researchers from the University of California San Diego have developed a way to distinguish among hand gestures made by people with paralysis, amputated limbs or other physical challenges, using noninvasive magnetoencephalography (MEG) for brain imaging.

The research, published in the journal Cerebral Cortex, represents the best results so far in distinguishing single-hand gestures using a non-invasive technique, according to Mingxiong Huang, PhD, co-director of the MEG Center at the Qualcomm Institute at UC San Diego.

MEG uses a helmet with an embedded 306-sensor array to detect the magnetic fields produced by electric currents moving between neurons in the brain. Existing brain-computer interface techniques are based on electrocorticography (ECoG), which requires surgical implantation of electrodes on the brain surface, or scalp electroencephalography (EEG), which locates brain activity less precisely.

Deep learning model

The study evaluated the ability to use MEG to distinguish between hand gestures made by 12 volunteer subjects. The volunteers were equipped with the MEG helmet and randomly instructed to make one of the gestures used in the game Rock Paper Scissors. MEG functional information was superimposed on MRI images, which provided structural information on the brain.


In the study, volunteers were equipped with a MEG helmet and randomly instructed to make one of the gestures used in the game Rock Paper Scissors. A high-performing deep learning model interpreted the MEG data, distinguishing among hand gestures with more than 85% accuracy (credit: MEG Center at UC San Diego Qualcomm Institute)

To interpret the data generated, Yifeng (“Troy”) Bu, an electrical and computer engineering PhD student in the UC San Diego Jacobs School of Engineering and first author of the paper, wrote a high-performing deep learning model called MEG-RPSnet.

“The special feature of this network is that it combines spatial and temporal features simultaneously,” said Bu. “That’s the main reason it works better than previous models.”
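To make that idea concrete, here is a minimal, hypothetical sketch in PyTorch (not the authors' actual MEG-RPSnet code) of a classifier with two parallel branches: one mixes information across the 306 MEG sensors at each time point (spatial), the other convolves along the time axis (temporal), and the two feature sets are concatenated before classifying the three Rock Paper Scissors gestures. The layer sizes and input length are illustrative assumptions.

# Hypothetical sketch of a spatial + temporal MEG gesture classifier (not MEG-RPSnet)
import torch
import torch.nn as nn

class SpatialTemporalNet(nn.Module):
    def __init__(self, n_sensors=306, n_classes=3):
        super().__init__()
        # Spatial branch: kernel size 1 mixes information across the 306 sensors
        # at each individual time point.
        self.spatial = nn.Sequential(
            nn.Conv1d(n_sensors, 64, kernel_size=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Temporal branch: wider 1-D convolutions slide along the time axis to
        # capture how the signal unfolds over time.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_sensors, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Classifier over the concatenated spatial and temporal features
        # (rock, paper, scissors -> 3 classes).
        self.classifier = nn.Linear(64 + 64, n_classes)

    def forward(self, x):                      # x: (batch, n_sensors, n_times)
        s = self.spatial(x).squeeze(-1)        # (batch, 64)
        t = self.temporal(x).squeeze(-1)       # (batch, 64)
        return self.classifier(torch.cat([s, t], dim=1))

# Example: classify a batch of 8 simulated MEG trials of 250 time samples each.
logits = SpatialTemporalNet()(torch.randn(8, 306, 250))
print(logits.shape)  # torch.Size([8, 3])

Combining the two branches lets the classifier use both where activity appears on the sensor array and how it evolves over time, which is the property Bu describes.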

The researchers found that their techniques could be used to distinguish among hand gestures with more than 85% accuracy, comparable to that of an (invasive) ECoG brain-computer interface.

The team also found that MEG measurements from only half of the brain regions sampled could generate results with only a small (2–3%) loss of accuracy, indicating that future MEG helmets might require fewer sensors.

The researchers are associated with the US Veterans Affairs San Diego Healthcare System, UC San Diego School of Medicine and UC San Diego. The work was supported in part by Merit Review Grants from the US Department of Veterans Affairs, the Naval Medical Research Center's Advanced Medical Development program, and Congressionally Directed Medical Research Programs/Department of Defense.

Citation: Yifeng Bu et al. Magnetoencephalogram-based brain–computer interface for hand-gesture decoding using deep learning, 13 May 2023, Cerebral Cortex, https://doi.org/10.1093/cercor/bhad173

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.

