AI model designed to improve human-machine understanding

Researchers in Finland have developed a model that they say enables computers to interpret and understand human emotions. The model could improve the interface between humans and smart technologies (including AI systems), making it more intuitive and responsive to user feelings.

In the future, the model could also let a computer predict when the user is feeling happiness, boredom, irritation, rage, despair or anxiety, according to Jussi Jokinen, Associate Professor of Cognitive Science at the University of Jyväskylä, in a statement. It could also be integrated into AI systems.

Jokinen’s project uses mathematical psychology to solve the misalignment problem between intelligent computer systems and their users.

Next step: influence user emotions

The research is anchored in a theory postulating that emotions are generated when human cognition evaluates events from various perspectives.

Jokinen elaborates: “Consider a computer error during a critical task. This event is assessed by the user’s cognition as being counterproductive. An inexperienced user might react with anxiety and fear due to uncertainty about how to resolve the error, but an experienced user might feel irritated and annoyed at having to waste time resolving the issue. Our model predicts the user’s emotional response by simulating this cognitive evaluation process.”
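To make the idea concrete, here is a minimal Python sketch of an appraisal rule of this kind. The thresholds, the appraisal dimensions (a reinforcement-learning-style prediction error standing in for "unexpectedness") and the experience parameter are our own illustrative assumptions, not the authors' actual model, which integrates appraisal with reinforcement learning far more rigorously:

```python
# Minimal illustrative sketch (not the authors' model): appraise an event by
# an RL-style prediction error ("how unexpected/counterproductive was it?")
# and let the user's experience modulate the resulting emotion label.

def appraise(expected_progress: float, observed_progress: float,
             user_experience: float) -> str:
    """Map a crude cognitive appraisal of an event to an emotion label.

    expected_progress / observed_progress: task progress predicted vs. seen.
    user_experience: 0.0 (novice) to 1.0 (expert); assumed here to decide
    whether a setback is felt as anxiety (novice) or irritation (expert).
    """
    prediction_error = observed_progress - expected_progress  # RL-style surprise
    if prediction_error >= 0:                   # things went as well or better
        return "happiness" if prediction_error > 0.2 else "boredom"
    if user_experience > 0.5:                   # expert: knows how to recover
        return "irritation" if prediction_error > -0.8 else "rage"
    return "anxiety" if prediction_error > -0.8 else "despair"

# Jokinen's example: a computer error during a critical task (error = -0.7).
print(appraise(0.8, 0.1, user_experience=0.2))  # novice -> "anxiety"
print(appraise(0.8, 0.1, user_experience=0.9))  # expert -> "irritation"
```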

Handling negative emotions

The next phase of this project will also explore potential applications of this emotional understanding. “With our model, a computer could preemptively predict user distress and attempt to mitigate negative emotions,” Jokinen suggests.

“This proactive approach could be utilized in various settings, from office environments to social media platforms, improving user experience by sensitively managing emotional dynamics … a glimpse into a future where computers are empathetic partners in user interaction.”

Citation: Jiayi Eurus Zhang, Bernhard Hilpert, Joost Broekens, Jussi P. P. Jokinen. Simulating Emotions With an Integrated Computational Model of Appraisal and Reinforcement Learning. CHI ’24: Proceedings of the CHI Conference on Human Factors in Computing Systems, May 2024, Article No. 703, pp. 1–12. https://doi.org/10.1145/3613904.3641908 (open access)


AI helps medical professionals read confusing brain EEGs

Researchers at Duke University have developed an assistive machine-learning model that they say greatly improves the ability of medical professionals to read the brain electroencephalography (EEG) charts of intensive-care patients.

EEG readings are the only method for knowing when unconscious patients are in danger of suffering a seizure or are having seizure-like events, and the new computational tool could help save thousands of lives each year, the researchers say.

The results appear online May 23 in the New England Journal of Medicine AI.

Interpreting EEGs

EEGs use small sensors attached to the scalp to measure the brain’s electrical signals, producing a long line of up and down squiggles. When a patient is having a seizure, these lines jump up and down dramatically like a seismograph during an earthquake—a signal that is easy to recognize.

But other medically important anomalies called seizure-like events are much more difficult to discern, even by highly trained neurologists.  

“Interpretable” machine learning algorithms

To build a tool to help make these determinations, the doctors turned to colleagues specializing in developing “interpretable” machine learning algorithms. (Most machine-learning models are “black boxes” that make it impossible for a human to know how they reach their conclusions; interpretable machine learning models essentially must show their work.)

The research group started by gathering EEG samples from over 2,700 patients and having more than 120 experts pick out the relevant features in the graphs, categorizing them as either a seizure, one of four types of seizure-like events or “other.”

Patterns showing seizure-like events

The algorithm places each EEG chart on a continuum spanning these categories (the ictal-interictal-injury continuum). When displayed visually, that continuum looks something like a multicolored starfish swimming away from a predator. Each differently colored arm represents one type of seizure-like event the EEG could represent. The closer the algorithm puts a specific chart to the tip of an arm, the surer it is of its decision, while charts placed closer to the central body are less certain.

The algorithm also points to the patterns in the brainwaves that it used to make its determination and provides three examples of professionally diagnosed charts that it sees as being similar.

This lets a medical professional quickly look at the important sections and either agree that the patterns are there or decide that the algorithm is off the mark, the researchers say. “Even if they’re not highly trained to read EEGs, they can make a much more educated decision.”
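A minimal sketch of one ingredient of such example-based explanation, assuming a learned feature representation and using simple cosine similarity to retrieve the three most similar expert-labeled charts (illustrative only, not the Duke team's code):

```python
import numpy as np

# Illustrative sketch (not the authors' code) of retrieving the three most
# similar expert-labeled EEG charts so a clinician can compare a new chart
# against known, professionally diagnosed cases.

rng = np.random.default_rng(0)

# Stand-in feature vectors for 2,700 expert-labeled EEG samples; in a real
# system these would come from a learned, interpretable representation.
library_features = rng.normal(size=(2700, 64))
library_labels = rng.integers(0, 6, size=2700)  # the six categories

def three_nearest_examples(query: np.ndarray):
    """Return indices, labels and similarities of the 3 closest labeled charts."""
    # Cosine similarity between the query and every library sample
    sims = (library_features @ query) / (
        np.linalg.norm(library_features, axis=1) * np.linalg.norm(query))
    top3 = np.argsort(sims)[-3:][::-1]          # highest similarity first
    return [(int(i), int(library_labels[i]), float(sims[i])) for i in top3]

query_chart = rng.normal(size=64)
for idx, label, sim in three_nearest_examples(query_chart):
    print(f"similar chart #{idx}: category {label}, similarity {sim:.2f}")
```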

Testing the algorithm

Putting the algorithm to the test, the collaborative team had eight medical professionals with relevant experience categorize 100 EEG samples into the six categories, once with the help of AI and once without. The performance of all of the participants greatly improved with AI, with their overall accuracy rising from 47% to 71%.

This work was supported by the National Science Foundation, the National Institutes of Health, and the Nebraska Stem Cell Grant.

Citation: Alina Jade Barnett et al. “Improving Clinician Performance in Classification of EEG Patterns on the Ictal-Interictal-Injury Continuum using Interpretable Machine Learning.” NEJM AI, May 23, 2024. DOI: 10.1056/AIoa2300331 (open access)


Bio-inspired cameras and AI help drivers detect pedestrians and obstacles faster

University of Zurich (UZH) researchers have combined a novel bio-inspired camera with AI to develop a system that can detect obstacles around a car much more quickly than current systems.

Quicker detection using less computational power

Cameras currently used for driver assistance on cars typically capture 30 to 50 frames per second. An artificial neural network can be trained to recognize objects in their images. But if something happens during the 20 or 30 milliseconds between two snapshots, the camera may see it too late, according to the researchers.

The solution would be increasing the frame rate, but that translates into more data that needs to be processed in real-time and more computational power, the scientists note in an open-access Nature paper published today.

Combining the best of two camera types with AI to emulate human eyes

Event (“neuromorphic”) cameras are a recent innovation based on a different principle. Instead of a constant frame rate, they record information every time they detect fast movements, similar to how human eyes perceive images. But they can miss things that move slowly and their images are not easily converted into the kind of data that is used to train the AI algorithm.

The researchers created a hybrid system that combines the best of both worlds. It includes a standard camera that collects 20 images per second (a relatively low frame rate), whose images are processed by a convolutional neural network trained to recognize cars and pedestrians. The data from the event camera is coupled to a different type of AI system, called an asynchronous graph neural network, which is particularly apt for analyzing 3-D data that change over time.

Detections from the event camera are used to anticipate detections by the standard camera and also boost its performance. The result is a visual detector that can detect objects just as quickly as a standard camera taking 5,000 images per second would do but only requires the same bandwidth as a standard 50-frame-per-second camera.
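A conceptual sketch of this fusion idea in Python, with hypothetical placeholder detectors standing in for the convolutional network and the asynchronous graph neural network (the UZH implementation is far more sophisticated):

```python
# Conceptual sketch of the hybrid idea (placeholder detectors; not the UZH
# implementation): a slow frame-based CNN anchors detections every 50 ms,
# and event-based updates carry them forward in the gaps between frames.

FRAME_PERIOD_MS = 50          # standard camera at 20 frames per second
EVENT_BATCH_MS = 10           # assumed event-batch interval (illustrative)

def cnn_detect(frame):
    """Stand-in for the convolutional network run on each full frame."""
    return [{"label": "pedestrian", "x": 12.0}]          # dummy detection

def event_update(detections, events):
    """Stand-in for the asynchronous graph neural network: nudge each
    detection by the motion encoded in the event stream."""
    shift = 0.1 * len(events)                            # dummy motion cue
    return [{**d, "x": d["x"] + shift} for d in detections]

detections = []
for t_ms in range(0, 200, EVENT_BATCH_MS):
    if t_ms % FRAME_PERIOD_MS == 0:
        detections = cnn_detect(frame=None)   # anchor on a full frame
    else:                                     # between frames: events only
        detections = event_update(detections, events=list(range(5)))
    print(t_ms, detections)
```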

One hundred times faster detections using less data

The team tested their system against the best cameras and visual algorithms currently on the automotive market. They found that it delivers one-hundred-times-faster detections while reducing both the amount of data that must be transmitted between the camera and the onboard computer and the computational power needed to process the images, without affecting accuracy.

Crucially, the system can effectively detect cars and pedestrians that enter the field of view between two subsequent frames of the standard camera, providing additional safety for both the driver and traffic participants, which can make a huge difference, especially at high speeds.

According to the scientists, the method could be made even more powerful in the future by integrating cameras with LiDAR sensors, like the ones currently used on self-driving cars.

Citation: Gehrig, D., Scaramuzza, D. Low-latency automotive vision with event cameras. Nature 629, 1034–1040 (2024). https://doi.org/10.1038/s41586-024-07409-w (open access)


AI headphones let you listen to only a single person in a crowd

A University of Washington team has developed an AI system that lets a user wearing headphones look at a person speaking for three to five seconds and then listen only to that person (“enroll” them).

Their “Target Speech Hearing” app then cancels all other sounds in the environment and plays just the enrolled speaker’s voice in real time, even if the listener moves around in noisy places and no longer faces the speaker.

How it works

To use the system, a person wearing off-the-shelf headphones fitted with microphones taps a button while directing their head at someone talking. The sound waves from that speaker’s voice should then reach the microphones on both sides of the headset simultaneously.

The headphones send that signal to an on-board embedded computer, where the team’s machine learning software learns the desired speaker’s vocal patterns. The system latches onto that speaker’s voice and continues to play it back to the listener, even as the pair moves around. The system’s ability to focus on the enrolled voice improves as the speaker keeps talking, giving the system more instant training data.
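The enrollment cue itself can be illustrated with a few lines of Python: a sound source straight ahead of the wearer produces a cross-correlation peak between the two microphone channels at roughly zero delay, while off-axis sources peak at a nonzero delay. This is a toy sketch of the cue, not the team's code:

```python
import numpy as np

# Illustrative sketch of the enrollment cue only (not the team's code): when
# the wearer faces the speaker, that voice arrives at the left and right
# microphones at nearly the same time, so its cross-correlation peaks near
# zero delay; off-axis sounds peak at a clearly nonzero delay.

rate = 16_000                                  # assumed sample rate (Hz)
rng = np.random.default_rng(1)
voice = rng.normal(size=rate)                  # 1 s stand-in for a voice

def interaural_delay(left: np.ndarray, right: np.ndarray) -> int:
    """Delay (in samples) between channels at the cross-correlation peak."""
    corr = np.correlate(left, right, mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

print(interaural_delay(voice, voice))               # 0 -> speaker dead ahead
print(interaural_delay(voice, np.roll(voice, 20)))  # nonzero -> off-axis
```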

This work builds on the team’s previous “semantic hearing” research, which allowed users to select specific sound classes—such as birds or voices—that they wanted to hear, and automatically cancel other sounds in the environment.

The team plans to use the Target Speech Hearing app with earbuds and hearing aids in the future. The code for the proof-of-concept device is available for others to build on, but not commercially available.

Citation: Bandhav Veluri, Malek Itani, Tuochao Chen, Takuya Yoshioka, Shyamnath Gollakota. Look Once to Hear: Target Speech Hearing with Noisy Examples. CHI ’24: Proceedings of the CHI Conference on Human Factors in Computing Systems, May 2024, Article No. 37, pp. 1–16. https://doi.org/10.1145/3613904.3642057 (open access)


Euclid telescope reveals amazing images from the Universe’s distant past

Scientists have just released the first set of scientific data captured with the Euclid telescope, showing an exciting glimpse of the Universe’s distant past.

The mission seeks to unlock mysteries of dark matter and dark energy and reveal how and why the Universe looks as it does today.

Five never-before-seen images of the Universe

Early observations, described in a series of 10 scientific papers published today (May 23, 2024), include five never-before-seen images of the Universe: free-floating new-born planets, newly identified extragalactic star clusters, new low-mass dwarf galaxies in a nearby galaxy cluster, the distribution of dark matter and intracluster light in galaxy clusters, and very distant bright galaxies from the first billion years of the Universe.

Most precise map of our Universe over time

The images obtained by Euclid are at least four times sharper than those that can be taken from ground-based telescopes. They cover large patches of sky at unrivalled depth, looking far into the distant Universe using both visible and infrared light.

The Euclid telescope is designed to provide the most precise map of our Universe over time and demonstrates Euclid’s ability to unravel the secrets of the cosmos.


Machine-learning algorithm tracks dementia-related protein clumping in real time

More than 55 million people around the world live with dementia-related neurodegenerative disorders such as Alzheimer’s and Parkinson’s.

Now chemists at the University of Copenhagen have developed a machine-learning algorithm that they say could lead to developing new drugs and therapies to combat these diseases, which are caused by proteins clumping together and destroying vital functions.

The algorithm can track clumping under a microscope in real-time, automatically mapping and tracking the important characteristics of the clumped-up proteins that cause neurodegenerative disorders.

The research has just been published in the journal Nature Communications.

Detecting and tracking microscopic proteins in real time

The algorithm can spot protein clumps down to a billionth of a meter in microscopy images in real time. Their exact shape can vary depending on the disorder they trigger.
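As a rough illustration of the general approach (not the SEMORE code itself), one can segment localization points into clumps with a density-based clustering step and then compute simple shape fingerprints for each clump; all data and parameters below are made up:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative sketch of the general approach (not the SEMORE code itself):
# segment localizations from a super-resolution image into clumps, then
# compute simple shape "fingerprints" for each clump.

rng = np.random.default_rng(2)
# Stand-in for super-resolution localizations (nanometres): two clumps + noise
points = np.vstack([
    rng.normal([100, 100], 10, size=(200, 2)),       # roundish clump
    rng.normal([300, 100], [40, 5], size=(200, 2)),  # elongated, fibril-like
    rng.uniform(0, 400, size=(50, 2)),               # background noise
])

labels = DBSCAN(eps=15, min_samples=10).fit_predict(points)

for clump_id in sorted(set(labels) - {-1}):          # -1 = noise in DBSCAN
    clump = points[labels == clump_id]
    # Eigenvalues of the covariance give the clump's spread along its axes
    evals = np.sort(np.linalg.eigvalsh(np.cov(clump.T)))[::-1]
    eccentricity = np.sqrt(1 - evals[1] / evals[0])  # 0 = round, ->1 = long
    print(f"clump {clump_id}: n={len(clump)}, eccentricity={eccentricity:.2f}")
```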

“Our new tool can let us see how these clumps are affected by whatever compounds we add. In this way, the model can help us work towards understanding how to potentially stop or transform them into less dangerous or more stable clumps,” explains Jacob Kæstel-Hansen from the Department of Chemistry, who co-led the research team behind the algorithm, in a press release.

“In the future, the algorithm will make it much easier to learn more about why clumps form so that we can develop new drugs and therapies to combat these disorders.”

New drugs

The researchers also see potential in using the tool to develop new drugs once the microscopic building blocks have been clearly identified. They hope that their work will kickstart the gathering of more comprehensive knowledge about the shapes and functions of proteins and molecules.

The team is also using the tool to conduct experiments with insulin molecules. As insulin molecules clump, their ability to regulate blood sugar weakens.

“As other researchers around the world begin to deploy the tool, it will help create a large library of molecule and protein structures related to various disorders and biology in general. This will allow us to better understand diseases and try to stop them,” concludes Nikos Hatzakis from the Department of Chemistry.

Written in Python, the algorithm is freely available on GitHub.

Citation: Bender, S.W.B., Dreisler, M.W., Zhang, M. et al. SEMORE: SEgmentation and MORphological fingErprinting by machine learning automates super-resolution data analysis. Nat Commun 15, 1763 (2024). https://doi.org/10.1038/s41467-024-46106-0 (open access)


Study links key nutrients with slower brain aging

A new study published in the journal npj Aging has found specific nutrients that may play a pivotal role in healthy aging of the brain. Combining state-of-the-art innovations in neuroscience and nutritional science, scientists identified a specific nutrient profile in study participants who performed better cognitively.

The researchers, at the University of Nebraska–Lincoln’s Center for Brain, Biology and Behavior and the University of Illinois at Urbana-Champaign, enrolled 100 cognitively healthy participants, aged 65 to 75.

The participants completed a questionnaire with demographic information, body measurements and physical activity. Blood plasma was collected following a fasting period to analyze the nutrient biomarkers. Participants also underwent cognitive assessments and MRI scans.

The beneficial nutrient blood biomarkers in the study were a combination of fatty acids, antioxidants and carotenoids. This profile is correlated with nutrients found in the Mediterranean diet, previously associated with healthy brain aging.

Citation: Zwilling, C.E., Wu, J. & Barbey, A.K. Investigating nutrient biomarkers of healthy brain aging: a multimodal brain imaging study. npj Aging 10, 27 (2024). https://www.nature.com/articles/s41514-024-00150-8 (open access)


Algorithms to track space (and sea) objects

Cislunar space, which stretches from the Earth to just beyond the Moon’s orbit, is about to become heavily trafficked over the next 10 years. This traffic includes NASA’s planned Artemis missions and other countries joining in the cislunar space race.

So there’s a need for observing, tracking and predicting the orbits of objects like asteroids and satellites so they don’t collide with spacecraft (or with each other). But space domain awareness (SDA), the process of detecting and observing space objects, faces challenges.

The current SDA infrastructure, which is mostly Earth-based, is not equipped to provide the needed coverage in cislunar space, according to Tarek Elgohary, an associate professor of aerospace engineering at the University of Central Florida and director of the university’s Astrodynamics and Space Robotics Laboratory.

Tracking space objects

Elgohary’s team plans to create a computational framework to rapidly and accurately track space objects. Using Oracle, a satellite developed by the U.S. Air Force Research Laboratory, the researchers will conduct experiments on space-object detection and tracking in cislunar space.

The algorithms will also allow Oracle and other spacecraft to operate autonomously without requiring intervention from Earth, according to the team.

The team will also develop a similar computational framework using algorithms to allow sea vessels to detect objects in real time and predict their future locations.
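Details of the team’s framework aren’t public, but a standard building block for this kind of problem is a Kalman filter, which fuses noisy position observations and predicts an object’s future location. A minimal constant-velocity sketch in Python:

```python
import numpy as np

# Standard building block, shown for illustration (the team's framework is
# not public): a constant-velocity Kalman filter over 2-D position+velocity.

dt = 1.0
F = np.array([[1, 0, dt, 0],              # state transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],               # we observe position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                      # process noise (assumed)
R = 0.25 * np.eye(2)                      # measurement noise (assumed)

x, P = np.zeros(4), np.eye(4)             # state [px, py, vx, vy], covariance

def kalman_step(x, P, z):
    """One predict/update cycle given a noisy position measurement z."""
    x, P = F @ x, F @ P @ F.T + Q                        # predict ahead
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P  # update

rng = np.random.default_rng(3)
for t in range(1, 6):                     # object drifting at unit velocity
    z = np.array([t, t], dtype=float) + rng.normal(scale=0.5, size=2)
    x, P = kalman_step(x, P, z)
print("estimated position/velocity:", x.round(2))
print("predicted next position:", (F @ x)[:2].round(2))
```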

The work is supported by a $350,000 grant from the U.S. Air Force Office of Scientific Research Dynamic Data and Information Processing Program and a $150,000 grant from Lockheed Martin.


Do you have robot-phobia?

Some workers in the hospitality industry (such as hotels) apparently have “robot-phobia”—the fear that robots and technology will take human jobs.

Using more robots to close labor gaps in the hospitality industry may backfire and cause more human workers to quit, according to a Washington State University study.

Job insecurity and stress

The study, which involved more than 620 lodging and food service employees, found that human-like robot servers and automated robotic arms as well as self-service kiosks and tabletop devices increased workers’ job insecurity and stress.

That led to greater intentions to leave their jobs. The impact was more pronounced with employees who had real experience working with robotic technology.

Effects on hospitality workers

Published in the International Journal of Contemporary Hospitality Management, the study focuses on how the technology impacted hospitality workers. The researchers surveyed 321 lodging and 308 food service employees from across the U.S., asking a range of questions about their jobs and attitudes toward robots.

Having a higher robot-phobia was connected to greater feelings of job insecurity and stress, which were then correlated with “turnover intention” or workers’ plans to leave their jobs, they found.

The employees who viewed robots as being more capable and efficient also ranked higher in turnover intention.

Citation: Chen, C.-C. (B.) and Cai, R. (30 April 2024). “Are robots stealing our jobs? Examining robot-phobia as a job stressor in the hospitality workplace.” International Journal of Contemporary Hospitality Management. https://doi.org/10.1108/IJCHM-09-2023-1454 (open access)


Generating heat over 1,000 degrees Celsius with solar power to tackle climate change

Instead of burning fossil fuels to smelt steel and cook cement, what if we trapped solar energy directly from the Sun?

That’s what researchers at ETH Zurich, Switzerland are exploring. Their proof-of-concept study, published May 15 in the journal Device, uses synthetic quartz to trap solar energy at temperatures higher than 1,000°C (1,832°F). The research could lead to using clean energy for carbon-intensive industries, which currently account for about 25% of global energy consumption.

“To tackle climate change, we need to decarbonize energy in general,” says corresponding author Emiliano Casati of ETH Zurich, Switzerland, in a press release.

Researchers have previously explored a clean-energy alternative using solar receivers, which concentrate and build heat with thousands of sun-tracking mirrors. But that technology has difficulty transferring solar energy efficiently above 1,000°C.

Light from 136 suns

To boost the efficiency of solar receivers, Casati turned to semitransparent materials such as quartz, which can trap sunlight—a phenomenon called the “thermal-trap effect.”

The team crafted a thermal-trapping device by attaching a synthetic quartz rod to an opaque silicon disk as an energy absorber. When they exposed the device to an energy flux equivalent to the light coming from 136 suns, the absorber plate reached 1,050°C (1,922°F), while the other end of the quartz rod remained at 600°C (1,112°F). 
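A back-of-the-envelope calculation (our own rough numbers, not the paper’s) shows why trapping matters: an ideal exposed blackbody re-radiates σT⁴, so under roughly 136 kW/m² (taking one “sun” as about 1 kW/m²) it could not passively exceed about 970°C, below what the quartz-capped absorber actually reached:

```python
# Rough estimate, neglecting convection/conduction and assuming an ideal
# blackbody absorber and 1 sun ~ 1 kW/m^2 (our assumptions, not the paper's).
sigma = 5.67e-8                        # Stefan-Boltzmann constant, W/m^2/K^4
flux = 136 * 1000.0                    # incident flux at "136 suns", W/m^2
T_max = (flux / sigma) ** 0.25         # equilibrium temperature, kelvin
print(f"{T_max:.0f} K = {T_max - 273.15:.0f} degC")   # ~1245 K ~ 971 degC
# The absorber reached 1,050 degC: the quartz rod admits sunlight while
# suppressing re-radiated heat, beating this naive passive limit.
```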

“Previous research has only managed to demonstrate the thermal-trap effect up to 170°C (338°F),” says Casati. “Our research showed that solar thermal trapping works not just at low temperatures, but well above 1,000°C.”

Casati and his colleagues are now optimizing the thermal-trapping effect and investigating new applications for the method. By exploring other semitransparent materials, such as different fluids and gases, they were able to reach even higher temperatures; these materials’ ability to absorb light or radiation, they note, is not limited to solar radiation.

Citation: Casati et al. Solar thermal trapping at 1,000°C and above. Device (2024). https://www.cell.com/device/fulltext/S2666-9986(24)00235-7 (open access)
