A brain-scale neuromorphic supercomputer

Would it be possible one day to create a supercomputer capable of emulating networks at the scale of the human brain?

Yes. It’s called DeepSouth. And researchers from the International Centre for Neuromorphic Systems (ICNS) at Western Sydney University plan to have it operational by April 2024, they announced at this week’s 2023 ICNS NeuroEng Workshop.

Mimics biological processes, so less power required

DeepSouth uses a neuromorphic system that mimics biological processes, using hardware to efficiently emulate large networks of spiking neurons at 228 trillion synaptic operations per second—rivaling the estimated rate of operations in the human brain.

DeepSouth is purpose-built to operate like networks of neurons, so it will require less power and enable greater efficiencies, says ICNS Director Professor André van Schaik. This contrasts with supercomputers optimized for more traditional computing loads, which are power-hungry.

“Progress in our understanding of how brains compute using neurons is hampered by our inability to simulate brain-like networks at scale,” he said. “Simulating spiking neural networks on standard computers using graphics processing units (GPUs) and multicore central processing units (CPUs) is just too slow and power intensive. Our system will change that.”
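For readers unfamiliar with spiking neural networks, here is a minimal sketch of a single leaky integrate-and-fire (LIF) neuron in Python. It is a textbook illustration of the kind of unit such networks are built from, not DeepSouth’s hardware model; all parameter values are illustrative.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, Euler integration.
# Illustrative parameters only; not DeepSouth's hardware design.
dt = 1e-4          # time step (s)
tau = 0.02         # membrane time constant (s)
v_rest = -0.070    # resting potential (V)
v_thresh = -0.050  # spike threshold (V)
v_reset = -0.075   # reset potential after a spike (V)
r_m = 1e7          # membrane resistance (ohm)

t = np.arange(0.0, 0.5, dt)
i_in = 2.5e-9 * (t > 0.1)   # step input current of 2.5 nA after 100 ms

v = np.full_like(t, v_rest)
spikes = []
for k in range(1, len(t)):
    dv = (-(v[k - 1] - v_rest) + r_m * i_in[k - 1]) / tau
    v[k] = v[k - 1] + dv * dt
    if v[k] >= v_thresh:        # threshold crossed: emit a spike, then reset
        spikes.append(t[k])
        v[k] = v_reset

print(f"{len(spikes)} spikes in {t[-1]:.1f} s of simulated time")
```

Hardware like DeepSouth emulates very large numbers of units of roughly this kind in parallel, rather than iterating over them in software as the loop above does.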

Large-scale AI and other applications

“This platform will progress our understanding of the brain and develop brain-scale computing applications in diverse fields, including sensing, biomedical, robotics, space, and large-scale AI applications.”

Practically, this will lead to advances in smart devices, such as mobile phones, sensors for manufacturing and agriculture, and less power-hungry and smarter AI applications, he said. It will also enable a better understanding of how a healthy or diseased human brain works.

Western Sydney University’s ICNS team developed this ground-breaking project with partners across the neuromorphic field, including researchers from the University of Sydney, the University of Melbourne, and the University of Aachen in Germany.

The supercomputer is aptly named DeepSouth, paying homage to IBM’s TrueNorth system, which initiated efforts to build machines simulating large networks of spiking neurons, and to Deep Blue, the first computer to defeat a world chess champion. The name is also a nod to its geographical location.


Mind-reading AI turns thoughts into text

Researchers at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text.

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as operating a bionic arm or robot.

The research at the GrapheneX-UTS Human-centric Artificial Intelligence Centre was led by Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre.

AI learning process

In the study, participants silently read passages of text while wearing a cap that recorded electrical brain activity through their scalp, using an electroencephalogram (EEG). An AI model called DeWave, developed by the researchers, translates these EEG signals into words and sentences.


“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding,” Lin said.
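To give a flavor of what “discrete encoding” can mean in this context, here is a toy vector-quantization sketch in Python: continuous feature windows are mapped to the nearest entry in a codebook, producing a sequence of discrete tokens that a language decoder could consume. This illustrates the general technique only, not the DeWave architecture; the codebook size, feature dimension, and random data are all assumptions.

```python
import numpy as np

# Toy illustration of "discrete encoding": map continuous EEG feature
# windows to the nearest entry in a codebook (vector quantization).
# NOT the DeWave model; sizes and data are made-up placeholders.
rng = np.random.default_rng(0)

codebook = rng.normal(size=(512, 64))        # 512 discrete codes, 64-dim features
eeg_windows = rng.normal(size=(100, 64))     # 100 windows of (pretend) EEG features

# Nearest-neighbour assignment: each window becomes one discrete token id.
dists = np.linalg.norm(eeg_windows[:, None, :] - codebook[None, :, :], axis=-1)
token_ids = dists.argmin(axis=1)

print(token_ids[:10])  # a discrete token sequence a language decoder could consume
```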

No need for invasive Neuralink-type implants or an MRI machine

He also said the new system eliminates the need for earlier approaches to translating brain signals into language, which have required either surgery to implant electrodes in the brain, as with Elon Musk’s Neuralink, or scanning in an MRI machine, which is large, expensive, and difficult to use in daily life.

The UTS research was carried out with 29 participants, so it’s likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals at a time.

Accuracy to be improved

The translation accuracy score is currently around 40% on BLEU-1 (a number between zero and one that measures the similarity of the machine-translated text to a set of high-quality reference translations). The researchers hope to see this improve to a level that is comparable to traditional language translation or speech recognition programs, which is closer to 90%.
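For reference, BLEU-1 is essentially clipped unigram precision scaled by a brevity penalty. A minimal sketch, with invented example sentences:

```python
from collections import Counter
import math

def bleu1(candidate, reference):
    """Unigram BLEU (BLEU-1): clipped unigram precision times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    clipped = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    precision = clipped / len(cand)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

# Invented example, not from the UTS study:
print(bleu1("the patient wants coffee", "the patient would like a coffee"))  # ~0.45
```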

The research follows on from previous brain-computer interface technology developed by UTS in association with the Australian Defence Force, which uses brainwaves to command a quadruped robot, as demonstrated in an ADF video.

A paper on the research will be presented at the NeurIPS conference, to be held in New Orleans on December 12, 2023.



Ultrasound microrobots guide medications to treat brain tumors, other conditions

Brain tumors, brain hemorrhages and neurological and psychological conditions are often hard to treat with medication. Even when effective drugs are available, they tend to have severe side effects because they circulate throughout the brain, not just the area they are meant to treat.

So researchers at ETH Zurich, the University of Zurich and University Hospital Zurich have now developed mini-transporters that can be guided through the dense maze of blood vessels in the brain of an animal, using ultrasound. Compared to magnetic fields and other methods, “ultrasound is widely used in the medical field, is safe, and penetrates deep into the body,” said Daniel Ahmed, Professor of Acoustic Robotics at ETH Zurich and supervisor of the study.

Ultrasound-guided microbubbles “likely to be approved”

The gas-filled, 1.5-micrometer microbubbles are coated in lipids, the substances that biological cell membranes are made of. The bubbles are used as contrast agents and are guided through blood vessels.

“Since these bubbles, or vesicles, are already approved for use in humans, it’s likely that our technology will be approved and used in treatments for humans more quickly than other types of microvehicles currently in development,” says Ahmed, who was awarded a Starting Grant by the European Research Council ERC in 2019 for his project to research and develop this technology.

Ultrasound-guided microbubbles also dissolve in the body once they’ve done their job, unlike magnetic-based microvehicles, and are small and smooth. “This makes it easy for us to guide them along narrow capillaries,” says Alexia Del Campo Fonseca, a doctoral student in Ahmed’s group and lead author of the study.

Tests in mice

Ahmed’s team has tested this method on blood vessels in the brains of mice. They also used ultrasound to hold the vesicles in place and guide them through the brain vessels against the direction of blood flow, including convoluted blood vessels.

The researchers attached four small transducers to the outside of each mouse’s skull. These devices generate vibrations in the ultrasonic range, which spread through the brain as waves.

To create the imaging for this study, the researchers used two-photon microscopy; in the future, they want to use ultrasound itself for imaging. Their next step is to attach drug molecules to the outside of the bubble casing for transport, with the aim of treating cancer, stroke and psychological conditions. They want to refine the entire method to the point at which it can be used in humans, hoping it will one day provide the basis for the development of new treatments.

Citation: Alexia Del Campo Fonseca, Chaim Glück, Jeanne Droux, Yann Ferry, Carole Frei, Susanne Wegener, Bruno Weber, Mohamad El Amki & Daniel Ahmed (2023). Ultrasound trapping and navigation of microrobots in the mouse brain vasculature. Nature Communications, volume 14, article number 5889 (21 September 2023). https://www.nature.com/articles/s41467-023-41557-3 (open-access)


Blue light exposure can damage neuronal function and induce aging

In a study on fruit flies, daily low-intensity blue-light exposure—similar to that experienced daily by billions of humans in the form of LED lighting and device screens—changed flies at the sub-cellular level, affecting processes related to aging and circadian rhythms.

Xiaoyun Wang and colleagues exposed fruit flies (Drosophila melanogaster) to different durations of daily low-intensity blue light, and then analyzed the consequences to the cellular makeup of the insects, compared to flies raised in darkness.

Blue light’s effects on flies

The authors measured blue light’s effects on N6-Methyladenosine (m6A), an RNA modification that is common across the tree of life and plays a role in a wide range of processes.

The authors found that blue light induced transcriptomic** (m6A epitranscriptomic) and metabolomic reprogramming.

Damage to neuronal function

The RNA profiles of 25-day-old male fly heads exposed to blue light were significantly different from 25-day-old male dark-raised flies’ heads, say the authors, and the types of genes that were up- and down-regulated* suggest that blue light exposure can damage neuronal function.

In comparisons of the flies’ whole-body transcriptomes**, however, age created bigger differences between groups than light exposure did, suggesting that the eye and brain are the major Drosophila tissues affected by blue light.

Differences in mRNA m6A levels and other m6A-related markers between light- and dark-raised flies also indicate that m6A methylation (which regulates reproductive system development) was involved in the impacts of blue light on Drosophila.

Hazards with blue light

According to the authors, attention should be paid to the potential hazards of cumulative blue light exposure in humans because blue light can influence the circadian rhythm and processes mediated by melanopsin-expressing intrinsically photosensitive retinal ganglion cells.

* The process by which a cell increases and decreases (respectively) the production and quantities of its cellular components, such as RNA and proteins, in response to an external stimulus

** The set of all the ribonucleic acid (RNA) molecules (called “transcripts”) expressed in some given entity, such as a cell, tissue, or organism

Citation: Jia Huang, Fan Zhou, Huanchan Zhou, Xiaoqi Zheng, Zhengyi Huo, Meng Yang, Zihe Xu, Runzhou Liu, Luoluo Wang, Xiaoyun Wang, Systematic assessment of transcriptomic and metabolic reprogramming by blue light exposure coupled with aging, PNAS Nexus, Volume 2, Issue 12, December 2023, pgad390, https://doi.org/10.1093/pnasnexus/pgad390 (open-access)


Solar-powered atmospheric water harvesting

Researchers from Shanghai Jiao Tong University in China have developed a new solar-powered atmospheric water harvesting technology that could help provide enough drinking water for people to survive in dryland areas in water-stressed countries.

The United Nations estimates that 3.5 million people die every year from water-related diseases, and the areas most in need of improved drinking water are also located in some of the sunniest places in the world.

Multiple uses

“This atmospheric water harvesting technology can be used to meet daily water supply needs, such as household drinking water, industrial water, and water for personal hygiene,” said author Ruzhu Wang.

The researchers synthesized a super-hygroscopic gel from plant derivatives and hygroscopic salts that is capable of absorbing and retaining an unparalleled amount of water. One kilogram of dry gel could adsorb 1.18 kilograms of water in arid atmospheric environments and up to 6.4 kilograms in humid ones, they estimate. The gel is simple and inexpensive to prepare, making it suitable for large-scale production.
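As a rough, back-of-the-envelope illustration only (assuming, hypothetically, one full adsorption-release cycle per day, complete recovery of the captured water, and a 3-liter daily drinking requirement), the reported capacities would translate into the following gel masses per person:

```python
# Back-of-the-envelope estimate only. Assumes one full adsorption/release
# cycle per day and complete recovery of the captured water -- both
# simplifying assumptions, not figures from the study.
capacity_arid = 1.18   # kg water per kg dry gel (arid air, from the study)
capacity_humid = 6.4   # kg water per kg dry gel (humid air, from the study)
daily_need = 3.0       # kg (~liters) of drinking water per person per day (assumed)

print(f"Arid climate:  {daily_need / capacity_arid:.1f} kg of gel per person")   # ~2.5 kg
print(f"Humid climate: {daily_need / capacity_humid:.2f} kg of gel per person")  # ~0.47 kg
```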

In addition to daily water production, sorbent materials that harvest atmospheric water could also play an important role in future applications such as dehumidification, agricultural irrigation, and thermal management for electronic devices, according to the researchers.

In an outdoor prototype demonstration, the team found that the gel released adsorbed water even in the morning and afternoon, when sunlight is weak, and that the system could achieve simultaneous adsorption and desorption during the daytime.

The team now plans to drive simultaneous adsorption and desorption with renewable energy, to maximize the daily water yield per unit mass of adsorbent.

Citation: Chengjie Xiang, Xinge Yang, Fangfang Deng, Zhihui Chen, and Ruzhu Wang, Dec. 5, 2023, Applied Physics Reviews. https://doi.org/10.1063/5.0160682 (open-access)


Wearable helmet with magnetic sensors records brain functions while patient is moving

A new wearable brain scanner can accurately record magnetic fields generated by brain activity while people are in motion. It will enable researchers to learn more about brain development and neurological conditions that can affect movement, including autism, epilepsy, stroke, concussion, and Parkinson’s disease, say researchers. 

“This advance could also make it easier to conduct brain scans in young children and individuals with neurological disorders, who can’t always remain still in conventional scanners,” said Niall Holmes, Ph.D., a Mansfield Research Fellow in the School of Physics and Astronomy at the University of Nottingham, who led the research. “Unconstrained movement during a scan opens a wealth of possibilities for clinical investigation and allows a fundamentally new range of neuroscientific experiments.”

How magnetic fields are recorded

When neurons interact, they generate a small electric current. This current produces a magnetic field that can be detected, recorded, and analyzed by sensitive magnetic sensors, using a technique called magnetoencephalography (MEG).

MEG technology can record brain signals every millisecond. By overlaying the neuronal sources of these magnetic fields onto an anatomical image of the brain, clinicians can visualize where and when specific brain activities originate. However, current MEG systems are bulky and rigid, like an old-fashioned hair dryer under which you must keep your head still for a while. Also, their sensors require cryogenic cooling, so they can’t be placed directly on your scalp.

Now researchers at the University of Nottingham have used a new generation of magnetic field sensors called “optically pumped magnetometers” (OPMs), which operate at room temperature and can be placed close to the head, enhancing data quality. They are also flexible, allowing children and adults to move during scanning. 

The researchers also designed a magnetic shielding system that would cancel out or compensate for external magnetic fields (see “Technical background: Designing a matrix coil system” below).

Research applications

A company co-founded by Holmes and his colleagues is selling the OPM-MEG systems (which include a magnetically shielded room) to research centers in North America and Europe to conduct a variety of neuroscientific experiments.

One of the U.S. centers to use the OPM-MEG system is Virginia Polytechnic Institute and State University (Virginia Tech), which collaborated with the Nottingham team on another study to determine how well the OPM-MEG helmet worked when two individuals each wore one and then interacted. To conduct this proof-of-concept study, the researchers ran two experiments involving social interaction.

“To really study how the human brain works, we have to embed people in their favorite natural environment—that’s a social setting,” said Read Montague, Ph.D., the principal investigator of the Virginia Tech team and director of the university’s Center for Human Neuroscience Research. The research was published this year in Sensors.

For social interactions, two participants stroked each other’s hands and then played a game of ping pong.  Both experiments showed that despite large and unpredictable motions by participants, each person’s brain activity was clearly recorded.

The company is collecting data to obtain approval from regulatory bodies, including the FDA, to deploy the system in clinical populations, which can take up to five years.

Technical background: Designing a matrix coil system

The Nottingham research team constructed a system of electromagnetic coils to shield against the background magnetic-field noise and positioned them on two panels around the participant. Prior research published in Nature shows that eight large coils cancelled the background magnetic fields, but at a fixed position that only allowed small head movements.

Holmes and his team instead designed a new matrix coil system that features 48 smaller coils on two panels positioned around the participant. The coils can be individually controlled and continually recalibrate to compensate for the magnetic field changes experienced by the moving sensors, ensuring high-quality MEG data are recorded.

“This enables magnetic field compensation in any position, which makes OPM-MEG scans more comfortable for everyone and allows people to walk around,” said Holmes.
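Conceptually, this kind of active shielding amounts to choosing coil currents that best cancel the field measured at the sensors. Here is a minimal least-squares sketch of that idea, using random placeholder numbers rather than the Nottingham system’s calibration data:

```python
import numpy as np

# Conceptual sketch of active field compensation with a coil array:
# choose coil currents that cancel the field measured at the sensors,
# in the least-squares sense. Dimensions and the coupling matrix are
# random placeholders, not the Nottingham system's calibration data.
rng = np.random.default_rng(1)

n_sensors, n_coils = 64, 48
coupling = rng.normal(size=(n_sensors, n_coils))   # field at sensor i per unit current in coil j
background = rng.normal(size=n_sensors)            # measured background field at each sensor

# Least-squares currents that best cancel the background field.
currents, *_ = np.linalg.lstsq(coupling, -background, rcond=None)

residual = background + coupling @ currents
print(f"residual field norm: {np.linalg.norm(residual):.3e} "
      f"(vs {np.linalg.norm(background):.3e} uncompensated)")
```

In the real system, the coupling between coils and sensors changes as the wearer moves, which is why the coils must be continually recalibrated rather than solved once.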

The researchers demonstrated the capabilities of the new matrix coil system with four experiments. They first wanted to show that the stationary helmet (not worn by anyone) placed inside the two coil panels could reduce background magnetic fields, which it did. Then a healthy participant wore the helmet, demonstrating that the OPMs recorded his brain function when he moved his head and that the coils cancelled the magnetic fields.

A third experiment used a wire coil as a proxy for brain cell activity because it produces magnetic fields when electric currents are applied. The wire coil attached to the helmet with OPM sensors showed that the matrix coil compensated for motion-related changes, ensuring accurate measurements. The last experiment showed that the helmet worn by a second healthy participant could produce a high-quality recording of brain activity when walking around.

“By taking advantage of recent OPM-MEG technology and designing a new magnetic shielding system, this helmet represents a novel magnetoencephalography approach that could help reveal more about how the brain works,” said Shumin Wang, Ph.D., a program director in the U.S. National Institute of Biomedical Imaging and Bioengineering (NIBIB) Division of Applied Science & Technology (Bioimaging).

Citations: Holmes, N, et al. Enabling ambulatory movement in wearable magnetoencephalography with matrix coil active magnetic shielding. NeuroImage. (2023), https://www.sciencedirect.com/science/article/pii/S1053811923003087?via%3Dihub (open-access); and Holmes N, et al. Naturalistic Hyperscanning with Wearable Magnetoencephalography. Sensors. (2023), https://www.mdpi.com/1424-8220/23/12/5454 (open-access)


Scientists build tiny healing biological robots from human cells

These multicellular bots move around and help heal “wounds” created in cultured neurons (and other possible uses)

Researchers at Tufts University and Harvard University’s Wyss Institute have created tiny biological robots that they call Anthrobots from human tracheal cells. They can move across a surface and encourage the growth of neurons across a region of damage in a lab dish.

The self-assembling, multicellular robots, which range in size from the width of a human hair to the point of a sharpened pencil, have been shown to have a remarkable healing effect on other cells. The discovery is a starting point for the researchers’ vision of using patient-derived biobots as new therapeutic tools for regeneration, healing, and treatment of disease.

The work follows from earlier research in the laboratories of Michael Levin, Vannevar Bush Professor of Biology at Tufts University School of Arts & Sciences, and Josh Bongard at the University of Vermont in which they created multicellular biological robots (biobots) from frog embryo cells called Xenobots. These are capable of navigating passageways, collecting material, recording information, healing themselves from injury, and even replicating for a few cycles on their own.

In the current study, published in Advanced Science, Levin, along with PhD student Gizem Gumuskaya, discovered that such bots can be created from adult human cells without any genetic modification and demonstrate some capabilities beyond what was observed with the Xenobots.

Repairing neurons

The researchers coaxed human tracheal cells to reorganize into new multicellular shapes that move over a surface of human neurons grown in a lab dish, encouraging new growth to fill in gaps caused by scratching the layer of cells. (We all experience the work of ciliated tracheal cells when we expel particles and excess fluid by coughing or clearing our throats.)

Exactly how the Anthrobots encourage growth of neurons is not yet clear, but the researchers confirmed that neurons grew under the area covered by a clustered assembly of Anthrobots, which they called a “superbot.”

“The cellular assemblies we construct in the lab can have capabilities that go beyond what they do in the body,” said Levin, who also serves as the director of the Allen Discovery Center at Tufts and is an associate faculty member of the Wyss Institute.

“It is fascinating and completely unexpected that normal patient tracheal cells, without modifying their DNA, can move on their own and encourage neuron growth across a region of damage,” he said. “We’re now looking at how the healing mechanism works, and asking what else these constructs can do.”

The advantages of using human cells include the ability to construct bots from a patient’s own cells to perform therapeutic work without the risk of triggering an immune response or requiring immunosuppressants.

How to make Anthrobots

Each Anthrobot starts out as a single cell, derived from an adult donor. The cells come from the surface of the trachea and are covered with hairlike projections called cilia that wave back and forth. The cilia help the tracheal cells push out tiny particles that find their way into the air passages of the lung.

Earlier studies by others had shown that when the cells are grown in the lab, they spontaneously form tiny multicellular spheres called organoids.

Potential other vital uses

Levin says that if other features could be added to the Anthrobots (for example, contributed by different cells), they could be designed to respond to their environment, travel to and perform functions in the body, or help build engineered tissues in the lab.

Other uses of the bots could include clearing plaque buildup in the arteries of atherosclerosis patients, repairing spinal cord or retinal nerve damage, recognizing bacteria or cancer cells, or delivering drugs to targeted tissues.

A swarm of Anthrobots. Credit: Gizem Gumuskaya, Tufts University

Citation: Gumuskaya, G., Srivastava, P., Cooper, B. G., Lesser, H., Semegran, B., Garnier, S., & Levin, M. Motile Living Biobots Self-Construct from Adult Human Somatic Progenitor Seed Cells. Advanced Science, 2303575. https://doi.org/10.1002/advs.202303575 (open access)


Astronomers discover star with six planets in orbits with ‘rhythmic beat’

Astronomers have discovered six planets that orbit the bright star HD 110067 with a rhythmic beat, a rare case of an “in sync” gravitational lockstep that could offer “deep insight into planet formation and evolution,” according to NASA.

An international team of astronomers, led by Rafael Luque, of the University of Chicago, published a paper online on the discovery, “A resonant sextuplet of sub-Neptunes transiting the bright star HD 110067,” in the journal Nature on Nov. 29.

Precise musical patterns

The strange family of six “sub-Neptune” planets (possibly smaller versions of our own Neptune) is moving in a cyclic rhythm. “This orbital waltz repeats itself so precisely it can be readily set to music,” notes NASA.

“The planet closest to the star makes three orbits for every two of the next planet out—called a 3/2 resonance, a pattern that is repeated among the four closest planets.” A pattern of four orbits for every three of the next planet out (a 4/3 resonance) is repeated twice.

“And these resonant orbits are rock-solid: The planets likely have been performing this same rhythmic dance since the system formed billions of years ago.”
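To make the resonance chain concrete, the following small calculation builds the relative orbital periods implied by the ratios described above (three successive 3:2 ratios, then two 4:3 ratios). The starting period is an arbitrary placeholder, not the measured period of the innermost planet.

```python
# Illustrative only: build the chain of orbital periods implied by the
# resonances described above (3:2 between each of the four inner planets,
# then 4:3 twice). The starting period is an arbitrary placeholder, not
# the measured period of HD 110067 b.
ratios = [3/2, 3/2, 3/2, 4/3, 4/3]   # outer period / inner period, innermost pair first

periods = [10.0]                      # days (placeholder for the innermost planet)
for r in ratios:
    periods.append(periods[-1] * r)

for i, p in enumerate(periods, start=1):
    print(f"planet {i}: {p:6.2f} days")
```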

This animation shows six “sub-Neptune” exoplanets in rhythmic orbits around their star, with a musical tone as each planet passes a line drawn through the system. The line is where the planets cross in front of (“transit”) their star from Earth’s perspective. In these rhythms, known as “resonance,” the innermost planet makes three orbits for every two of the next planet out. Among the outermost planets, a pattern of four orbits for every three of the next planet out is repeated twice. Animation credit: Dr. Hugh Osborn, University of Bern

Citation: Luque, R., Osborn, H. P., Leleu, A., Pallé, E., Bonfanti, A., Barragán, O., Wilson, T. G., Broeg, C., Cameron, A. C., Lendl, M., Maxted, P. F., Alibert, Y., Gandolfi, D., Delisle, J., Hooton, M. J., Egger, J. A., Nowak, G., Lafarga, M., Rapetti, D., . . . Zingales, T. (2023). A resonant sextuplet of sub-Neptunes transiting the bright star HD 110067. Nature, 623(7989), 932-937. https://doi.org/10.1038/s41586-023-06692-3


Google DeepMind invents nearly 400,000 new compounds for use by researchers

Google DeepMind has contributed nearly 400,000 new compounds to the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) Materials Project for use by researchers.

Think future materials like lighter alloys that improve fuel economy in cars, more efficient solar cells to boost renewable energy, and faster transistors for the next generation of computers. 

The Materials Project, an open-access database founded at Berkeley Lab in 2011, computes the properties of both known and predicted materials for use in future technologies. The new entries include how the atoms of a material are arranged (the crystal structure) and how stable it is (formation energy).

“We have to create new materials if we are going to address the global environmental and climate challenges,” said Kristin Persson, the founder and director of the Materials Project at Berkeley Lab and a professor at UC Berkeley. “With innovation in materials, we can potentially develop recyclable plastics, harness waste energy, make better batteries, and build cheaper solar panels that last longer, among many other things.”

Deep learning tool

To generate the new data, Google DeepMind developed a deep learning tool called Graph Networks for Materials Exploration, or GNoME. Researchers trained GNoME using workflows and data that were developed over a decade by the Materials Project, and improved the GNoME algorithm through active learning.

GNoME researchers ultimately produced 2.2 million crystal structures, including 380,000 that they are adding to the Materials Project and predict are stable, making them potentially useful in future technologies. The new results from Google DeepMind are published today in the journal Nature.
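Schematically, an active-learning loop of this kind alternates between training a surrogate model, screening a large pool of candidates, verifying the most promising ones with an expensive calculation, and folding the results back into the training set. The toy sketch below illustrates that loop only; the model, candidate generator, and verification step are stand-ins, not GNoME, DFT, or Materials Project code.

```python
import numpy as np

# Schematic active-learning loop. Everything below is a toy stand-in
# (random numbers and least squares), not GNoME or DFT.
rng = np.random.default_rng(0)

def toy_model_predict(x, weights):
    """Stand-in 'stability' predictor: a linear score on candidate features."""
    return x @ weights

def toy_verify(x):
    """Stand-in for an expensive first-principles check (DFT in the real pipeline)."""
    return x.sum(axis=1) + rng.normal(scale=0.1, size=len(x))

weights = rng.normal(size=8)
train_x = rng.normal(size=(32, 8))
train_y = toy_verify(train_x)

for round_ in range(3):
    # 1. Fit the surrogate model on everything verified so far (least squares here).
    weights, *_ = np.linalg.lstsq(train_x, train_y, rcond=None)
    # 2. Generate a large batch of candidate structures (random features here).
    candidates = rng.normal(size=(1000, 8))
    # 3. Pick the candidates the model predicts to be most stable.
    scores = toy_model_predict(candidates, weights)
    picked = candidates[np.argsort(scores)[-20:]]
    # 4. Verify the picks with the expensive check and fold them back into training data.
    train_x = np.vstack([train_x, picked])
    train_y = np.concatenate([train_y, toy_verify(picked)])
    print(f"round {round_}: training set now {len(train_x)} verified examples")
```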

Quick discovery

Some of the computations from GNoME were used alongside data from the Materials Project to test A-Lab, a facility at Berkeley Lab where artificial intelligence guides robots in making new materials. A-Lab’s first paper, also published today in Nature, showed that the autonomous lab can quickly discover novel materials with minimal human input. 

To make the novel compounds predicted by the Materials Project, A-Lab’s AI created new recipes by combing through scientific papers and using active learning to make adjustments. Data from the Materials Project and GNoME were used to evaluate the materials’ predicted stability.

The Materials Project is the most widely used open-access repository of information on inorganic materials in the world. The database holds millions of properties on hundreds of thousands of structures and molecules, information primarily processed at Berkeley Lab’s National Energy Research Scientific Computing Center.

More than 400,000 people are registered as users of the site and, on average, more than four papers citing the Materials Project are published every day. The contribution from Google DeepMind is the biggest addition of structure-stability data from a group since the Materials Project began.

New data freely available

The Materials Project is now processing the compounds from Google DeepMind and adding them into the online database. The new data will be freely available to researchers, and also feed into projects such as A-Lab that partner with the Materials Project.

By following promising leads from data in the Materials Project over the past decade, researchers have experimentally confirmed useful properties in new materials across several areas.

These include carbon capture (pulling carbon dioxide from the atmosphere), photocatalysts (materials that speed up chemical reactions in response to light and could be used to break down pollutants or generate hydrogen), thermoelectrics (materials that could help harness waste heat and turn it into electrical power) and transparent conductors (which might be useful in solar cells, touch screens, or LEDs).

Citation: Szymanski, N. J., Rendy, B., Fei, Y., Kumar, R. E., He, T., Milsted, D., McDermott, M. J., Gallant, M., Cubuk, E. D., Merchant, A., Kim, H., Jain, A., Bartel, C. J., Persson, K., Zeng, Y., & Ceder, G. (2023). An autonomous laboratory for the accelerated synthesis of novel materials. Nature, 1-6. https://doi.org/10.1038/s41586-023-06734-w (open access)


Immersive ‘presence’ in mixed reality can be measured using reaction time

With mixed-reality devices (which combine virtual and augmented reality), such as Meta’s Quest 3 or Apple’s forthcoming Vision Pro, the sense that what you are experiencing is real is called “presence.”

Now researchers at the University of Massachusetts Amherst have identified reaction time as a possible way to measure presence in real time. That’s better than asking the user to respond to a questionnaire after exiting, they note.

To test this, the researchers put participants in modified “Fruit Ninja” mixed-reality scenarios. In one experiment, they adjusted the place illusion: the fruit appeared either as lifelike fruit or as abstract cartoons. In another experiment, they manipulated the plausibility illusion by showing mugs filling up with coffee, either in the correct upright position or sideways.

They found that people reacted more quickly to the lifelike fruit than to the cartoonish-looking food.
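In practice, a comparison like this boils down to collecting reaction times under each condition and testing whether the means differ. A minimal sketch with invented numbers (not the UMass Amherst data):

```python
import numpy as np
from scipy import stats

# Toy illustration of a reaction-time comparison between two conditions.
# The reaction times below are invented, not the study's data.
rng = np.random.default_rng(42)
rt_lifelike = rng.normal(loc=0.42, scale=0.05, size=30)   # seconds, high-presence condition
rt_cartoon = rng.normal(loc=0.47, scale=0.05, size=30)    # seconds, low-presence condition

t_stat, p_value = stats.ttest_ind(rt_lifelike, rt_cartoon, equal_var=False)
print(f"mean RT lifelike: {rt_lifelike.mean():.3f} s, cartoon: {rt_cartoon.mean():.3f} s")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```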

Important in medical procedures

Yasra Chandio, doctoral candidate in computer engineering and lead study author, gives medical procedures as an example of the importance of this real-time presence calibration: If a surgeon needs millimeter-level precision, they may use mixed reality as a guide to tell them exactly where they need to operate.  

“If we just show the organ in front of them, and we don’t adjust for the height of the surgeon, for instance, that could be delaying the surgeon and could have inaccuracies for them,” she says. Low presence can also contribute to cybersickness, a feeling of dizziness or nausea that can occur when a user’s bodily perception does not align with what they’re seeing.

However, if the mixed-reality system is internally monitoring presence, it can make adjustments in real time, like moving the virtual organ rendering to eye level. 

Citation: Y. Chandio, N. Bashir, V. Interrante and F. M. Anwar, “Investigating the Correlation Between Presence and Reaction Time in Mixed Reality,” in IEEE Transactions on Visualization and Computer Graphics, doi: 10.1109/TVCG.2023.3319563
