What’s worse than global warming?

The answer, according to a new study: climate change will likely generate upward spikes of ozone at ground level by 2050.

That could push many parts of the United States, for example, out of compliance with air-quality standards, increasing risks to public health, according to a statement by James East, first author of the paper and a researcher at Harvard University. “Warmer temperatures are correlated with increases to ozone in polluted areas,” East said.

Higher air pollution

Ozone can occur naturally at ground level, but it is often formed by volatile organic compounds (VOCs) interacting with nitrogen oxides (NOx), both of which are air pollutants.

“That means areas that already have higher levels of air pollution of VOCs and NOx will likely see increases in ozone as average temperatures go up,” says East.

“That’s important because ground-level ozone can cause a wide array of health problems, including decreased lung function and inflammation of airways, contributing to hundreds of thousands of deaths each year.

“Even in the best-case scenario, we found that more variability in ozone levels is projected for 2050, meaning that we’d still expect to see an increase in the number of days where there is an exceptionally high increase in ozone, violating the air quality standard,” East said.

Worst-case scenario

In the worst-case scenario, in which the climate is highly sensitive to carbon dioxide, the study finds that the high end of ozone measurements would increase by more than 2.3 parts per billion (ppb). Many parts of the country would then see a significant increase in the number of days when ozone levels exceed air-quality standards.
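To make the exceedance idea concrete, here is a minimal sketch, not from the study itself, of how an upward shift in daily ozone values raises the count of days above a regulatory threshold. The 70 ppb threshold and the synthetic data are assumptions for illustration only.

```python
# Illustrative sketch only (not the study's analysis): count the days in a year
# whose daily ozone value exceeds a threshold, before and after a +2.3 ppb shift.
# The 70 ppb threshold and the synthetic data below are assumptions.
import numpy as np

def exceedance_days(daily_ozone_ppb: np.ndarray, threshold_ppb: float = 70.0) -> int:
    """Number of days whose ozone value exceeds the threshold."""
    return int(np.sum(daily_ozone_ppb > threshold_ppb))

rng = np.random.default_rng(42)
baseline_year = rng.normal(loc=55.0, scale=8.0, size=365)  # synthetic daily values (ppb)
warmer_year = baseline_year + 2.3                          # shift the distribution upward

# The shifted year can only have at least as many exceedance days, typically more.
print(exceedance_days(baseline_year), exceedance_days(warmer_year))
```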

“Our study finds that between 5 million and 13 million additional people will be exposed to dangerously high levels of ozone in 2050,” East says.

The paper, “Projecting Changes in the Frequency and Magnitude of Ozone Pollution Events Under Uncertain Climate Sensitivity,” is published in the open-access journal Earth’s Future.

The work was supported by the Natural Sciences and Engineering Research Council of Canada.

Citation: East, J. D., Monier, E., Saari, R. K., & Garcia-Menendez, F. (2024). Projecting Changes in the Frequency and Magnitude of Ozone Pollution Events Under Uncertain Climate Sensitivity. Earth’s Future, 12(6), e2023EF003941. https://doi.org/10.1029/2023EF003941 (open access)

Using AI to decode dog vocalizations

Have you ever wanted to understand what your (or someone else’s) dog is trying to say to you?

University of Michigan researchers are exploring the possibilities of using AI to do just that. They are developing tools to identify whether a dog’s bark conveys playfulness or aggression. The same models can also glean other information from animal vocalizations, such as the animal’s age, breed and sex.

Models trained on human speech

The study, which was conducted in collaboration with Mexico’s National Institute of Astrophysics, Optics and Electronics (INAOE) in Puebla, found that AI models originally trained on human speech can be used as a starting point to train new systems that target animal communication.

This approach enabled the researchers to tap into robust models that form the backbone of the various voice-enabled technologies we use today, including voice-to-text and language translation.

These models are trained to distinguish nuances in human speech, such as tone, pitch and accent. They convert this information into a format a computer can use to identify what words are being said, recognize the individual speaking, and more.

The researchers used a dataset of dog vocalizations (recorded from 74 dogs of varying breeds, ages and sexes in various contexts) to modify a machine-learning speech-representation model called Wav2Vec2, which was originally trained on human speech data.
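As a rough illustration of that approach, and not the authors’ code, the sketch below adapts a pretrained Wav2Vec2 checkpoint for a bark-classification task using the Hugging Face Transformers library; the checkpoint name and the two-way label set are assumptions.

```python
# Rough sketch, not the authors' pipeline: reuse a Wav2Vec2 model pretrained on
# human speech as the starting point for classifying dog barks. The checkpoint
# name and the "playful"/"aggressive" label set are assumptions.
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification

LABELS = ["playful", "aggressive"]  # hypothetical label set
extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=len(LABELS)
)  # the classification head is freshly initialized and would be fine-tuned on bark data

def classify_bark(waveform, sample_rate=16_000) -> str:
    """Predict a label for a 1-D array of audio samples."""
    inputs = extractor(waveform, sampling_rate=sample_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]
```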

Animal welfare

According to the researchers, this study also has important implications for animal welfare. Understanding the nuances of dog vocalizations could greatly improve how humans interpret and respond to dogs’ emotional and physical needs, enhancing their care and preventing potentially dangerous situations.

Citation: Abzaliev, A., Espinosa, H. P., & Mihalcea, R. (2024). Towards Dog Bark Decoding: Leveraging Human Speech Processing for Automated Bark Classification. arXiv. https://arxiv.org/abs/2404.18739 (open access)

AI model designed to improve human-machine understanding

Researchers in Finland have developed a model that they say enables computers to interpret and understand human emotions. The model could improve the interface between humans and smart technologies (including AI systems), making it more intuitive and responsive to user feelings.

In the future, a computer could use the model to predict when the user is feeling happiness, boredom, irritation, rage, despair or anxiety, according to Jussi Jokinen, Associate Professor of Cognitive Science at the University of Jyväskylä, in a statement. The model could also be integrated into AI systems.

Jokinen’s project uses mathematical psychology to solve the misalignment problem between intelligent computer systems and their users.

Next step: influence user emotions

The research is anchored in a theory postulating that emotions are generated when human cognition evaluates events from various perspectives.

Jokinen elaborates: “Consider a computer error during a critical task. This event is assessed by the user’s cognition as being counterproductive. An inexperienced user might react with anxiety and fear due to uncertainty about how to resolve the error, but an experienced user might feel irritated and annoyed at having to waste time resolving the issue. Our model predicts the user’s emotional response by simulating this cognitive evaluation process.”
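As a toy illustration of that kind of cognitive evaluation, and a simplification rather than the published model (which couples appraisal with reinforcement learning), the sketch below maps two hypothetical appraisal features to a coarse emotion label.

```python
# Toy illustration, not the published model: map a simulated user's appraisal of
# an event to a coarse emotion label. The features "goal_conduciveness" and
# "coping_potential" are hypothetical simplifications of appraisal theory.
def appraise(goal_conduciveness: float, coping_potential: float) -> str:
    """Both inputs lie in [0, 1]; returns a coarse emotion label."""
    if goal_conduciveness >= 0.5:
        return "happiness"
    # The event is counterproductive, e.g. a computer error during a critical task
    if coping_potential >= 0.5:
        return "irritation"  # experienced user: knows the fix, annoyed at lost time
    return "anxiety"         # inexperienced user: uncertain how to resolve the error

print(appraise(0.1, 0.8))  # -> irritation
print(appraise(0.1, 0.2))  # -> anxiety
```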

Handling negative emotions

The next phase of this project will also explore potential applications of this emotional understanding. “With our model, a computer could preemptively predict user distress and attempt to mitigate negative emotions,” Jokinen suggests.

“This proactive approach could be utilized in various settings, from office environments to social media platforms, improving user experience by sensitively managing emotional dynamics … a glimpse into a future where computers are empathetic partners in user interaction.”

Citation: Jiayi Eurus Zhang, Bernhard Hilpert, Joost Broekens, Jussi P. P. Jokinen (2024). Simulating Emotions With an Integrated Computational Model of Appraisal and Reinforcement Learning. CHI ’24: Proceedings of the CHI Conference on Human Factors in Computing Systems, May 2024, Article No. 703, pages 1–12. https://doi.org/10.1145/3613904.3641908 (open access)

AI helps medical professionals read confusing brain EEGs

Researchers at Duke University have developed an assistive machine-learning model that they say greatly improves the ability of medical professionals to read the brain electroencephalography (EEG) charts of intensive-care patients.

EEG readings are the only method for knowing when unconscious patients are in danger of suffering a seizure or are having seizure-like events, and the new computational tool could help save thousands of lives each year, the researchers say.

The results appear online May 23 in the New England Journal of Medicine AI.

Interpreting EEGs

EEGs use small sensors attached to the scalp to measure the brain’s electrical signals, producing a long line of up and down squiggles. When a patient is having a seizure, these lines jump up and down dramatically like a seismograph during an earthquake—a signal that is easy to recognize.

But other medically important anomalies, called seizure-like events, are much more difficult to discern, even for highly trained neurologists.

“Interpretable” machine learning algorithms

To build a tool to help make these determinations, the doctors turned to colleagues specializing in developing “interpretable” machine-learning algorithms. (Most machine-learning models are “black boxes” that make it impossible for a human to know how they reach their conclusions; interpretable machine-learning models essentially must show their work.)

The research group started by gathering EEG samples from over 2,700 patients and having more than 120 experts pick out the relevant features in the graphs, categorizing them as either a seizure, one of four types of seizure-like events or “other.”

Patterns showing seizure-like events

The algorithm places each EEG chart on a continuum spanning these categories. When displayed visually, that continuum looks something like a multicolored starfish swimming away from a predator. Each differently colored arm represents one type of seizure-like event the EEG could represent. The closer the algorithm puts a specific chart toward the tip of an arm, the surer it is of its decision, while those placed closer to the central body are less certain.

The algorithm also points to the patterns in the brainwaves that it used to make its determination and provides three examples of professionally diagnosed charts that it sees as being similar.

This lets a medical professional quickly look at the important sections and either agree that the patterns are there or decide that the algorithm is off the mark, the researchers say. “Even if they’re not highly trained to read EEGs, they can make a much more educated decision.”
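A minimal sketch of that kind of example-based explanation, assuming a learned embedding space (this is an illustration, not the Duke team’s code): retrieve the three labeled charts whose embeddings sit nearest to the chart under review.

```python
# Illustrative sketch, not the Duke team's code: case-based explanation that
# returns the three professionally labeled EEG segments whose learned embeddings
# are closest to a new chart. The embeddings and category names are placeholders.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
reference_embeddings = rng.normal(size=(2700, 64))  # embeddings of labeled charts
reference_labels = rng.choice(
    ["seizure", "event_type_1", "event_type_2", "event_type_3", "event_type_4", "other"],
    size=2700,
)
nn = NearestNeighbors(n_neighbors=3).fit(reference_embeddings)

def similar_cases(chart_embedding: np.ndarray):
    """Return (index, label, distance) for the 3 most similar diagnosed charts."""
    distances, indices = nn.kneighbors(chart_embedding.reshape(1, -1))
    return [(int(i), reference_labels[i], float(d))
            for i, d in zip(indices[0], distances[0])]

print(similar_cases(rng.normal(size=64)))
```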

Testing the algorithm

Putting the algorithm to the test, the collaborative team had eight medical professionals with relevant experience categorize 100 EEG samples into the six categories, once with the help of AI and once without. The performance of all of the participants greatly improved with AI, with their overall accuracy rising from 47% to 71%.

This work was supported by the National Science Foundation, the National Institutes of Health, and the Nebraska Stem Cell Grant.

Citation: Alina Jade Barnett et al. “Improving Clinician Performance in Classification of EEG Patterns on the Ictal-Interictal-Injury Continuum using Interpretable Machine Learning.” NEJM AI, May 23, 2024. DOI: 10.1056/AIoa2300331 (open access)

Bio-inspired cameras and AI help drivers detect pedestrians and obstacles faster

University of Zurich (UZH) researchers have combined a novel bio-inspired camera with AI to develop a system that can detect obstacles around a car much more quickly than current systems.

Quicker detection using less computational power

Cameras currently used for driver assistance on cars typically capture 30 to 50 frames per second. An artificial neural network can be trained to recognize objects in their images. But if something happens during the 20 or 30 milliseconds between two snapshots, the camera may see it too late, according to the researchers.

The solution would be to increase the frame rate, but that translates into more data that needs to be processed in real time and more computational power, the scientists note in an open-access Nature paper published today.

Combining the best of two camera types with AI to emulate human eyes

Event (“neuromorphic”) cameras are a recent innovation based on a different principle. Instead of a constant frame rate, they record information every time they detect fast movements, similar to how human eyes perceive images. But they can miss things that move slowly and their images are not easily converted into the kind of data that is used to train the AI algorithm.

The researchers created a hybrid system that combines the best of both worlds: it includes a standard camera that collects 20 images per second (a relatively low frame rate), whose images are processed by a convolutional neural network trained to recognize cars or pedestrians. The data from the event camera is coupled to a different type of AI system, called an asynchronous graph neural network, which is particularly well suited to analyzing 3-D data that change over time.

Detections from the event camera are used to anticipate detections by the standard camera and also boost its performance. The result is a visual detector that can detect objects just as quickly as a standard camera taking 5,000 images per second would do but only requires the same bandwidth as a standard 50-frame-per-second camera.
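The sketch below is a conceptual outline of that coupling, an assumption about the structure rather than the UZH implementation: detections anchored on the 20 fps frames are refined during the gap between frames using the event stream.

```python
# Conceptual sketch only, not the UZH pipeline: couple per-frame CNN detections
# with event-driven updates that cover the ~50 ms gap between 20 fps frames.
# `cnn_detect` and `event_net_update` are hypothetical stand-ins for the
# convolutional network and the asynchronous graph neural network.
from typing import Callable, Iterable, List, Tuple

Detection = Tuple[str, Tuple[float, float, float, float]]  # (label, bounding box)

def run_hybrid(
    frames: Iterable[Tuple[float, object]],               # (timestamp, image) at ~20 fps
    events_between: Callable[[float, float], list],       # events in a time window
    cnn_detect: Callable[[object], List[Detection]],
    event_net_update: Callable[[List[Detection], list], List[Detection]],
) -> List[Tuple[float, str, List[Detection]]]:
    timeline, prev_t, detections = [], None, []
    for t, image in frames:
        if prev_t is not None and detections:
            # Refine the previous detections with events recorded during the gap,
            # so objects appearing between frames are caught before frame t arrives.
            bridged = event_net_update(detections, events_between(prev_t, t))
            timeline.append((t, "events", bridged))
        detections = cnn_detect(image)                     # anchor on the full frame
        timeline.append((t, "frame", detections))
        prev_t = t
    return timeline
```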

One hundred times faster detections using less data

The team tested their system against the best cameras and visual algorithms currently on the automotive market. They found that it detects objects one hundred times faster while reducing both the amount of data that must be transmitted between the camera and the onboard computer and the computational power needed to process the images, without affecting accuracy.

Crucially, the system can effectively detect cars and pedestrians that enter the field of view between two subsequent frames of the standard camera, providing additional safety for both the driver and traffic participants, which can make a huge difference, especially at high speeds.

According to the scientists, the method could be made even more powerful in the future by integrating cameras with LiDAR sensors, like the ones currently used on self-driving cars.

Citation: Gehrig, D., Scaramuzza, D. Low-latency automotive vision with event cameras. Nature 629, 1034–1040 (2024). https://doi.org/10.1038/s41586-024-07409-w (open access)

AI headphones let you listen to only a single person in a crowd

A University of Washington team has developed an AI system that lets a user wearing headphones look at a person speaking for three to five seconds and then listen only to that person (“enroll” them).

Their “Target Speech Hearing” app then cancels all other sounds in the environment and plays just the enrolled speaker’s voice in real time, even if the listener moves around in noisy places and no longer faces the speaker.

How it works

To use the system, a person wearing off-the-shelf headphones fitted with microphones taps a button while directing their head at someone talking. The sound waves from that speaker’s voice should then reach the microphones on both sides of the headset simultaneously.
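A rough sketch of that “facing the speaker” check, an assumption rather than the UW implementation: cross-correlate the left and right microphone signals and keep enrollment audio only when the estimated inter-microphone delay is near zero.

```python
# Rough sketch, not the UW implementation: estimate the delay between the left
# and right headset microphones via cross-correlation; a near-zero delay suggests
# the wearer is facing the speaker, so the segment can be used for enrollment.
import numpy as np

def inter_mic_delay(left: np.ndarray, right: np.ndarray, sample_rate: int) -> float:
    """Estimated delay in seconds between the two channels."""
    correlation = np.correlate(left, right, mode="full")
    lag = int(np.argmax(correlation)) - (len(right) - 1)
    return lag / sample_rate

def facing_speaker(left, right, sample_rate, tolerance_s=1e-4) -> bool:
    """True if the dominant source reaches both microphones within ~0.1 ms."""
    return abs(inter_mic_delay(left, right, sample_rate)) < tolerance_s
```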

The headphones send that signal to an on-board embedded computer, where the team’s machine-learning software learns the desired speaker’s vocal patterns. The system latches onto that speaker’s voice and continues to play it back to the listener, even as the pair moves around. The system’s ability to focus on the enrolled voice improves as the speaker keeps talking, giving the system more training data.

This work builds on the team’s previous “semantic hearing” research, which allowed users to select specific sound classes—such as birds or voices—that they wanted to hear, and automatically cancel other sounds in the environment.

The team plans to use the Target Speech Hearing app with earbuds and hearing aids in the future. The code for the proof-of-concept device is available for others to build on, but the system is not commercially available.

Citation: Bandhav Veluri, Malek Itani, Tuochao Chen, Takuya Yoshioka, Shyamnath Gollakota (2024). Look Once to Hear: Target Speech Hearing with Noisy Examples. CHI ’24: Proceedings of the CHI Conference on Human Factors in Computing Systems, May 2024, Article No. 37, pages 1–16. https://doi.org/10.1145/3613904.3642057 (open access)

Euclid telescope reveals amazing images from the Universe’s distant past

Scientists have just released the first set of scientific data captured with the Euclid telescope, showing an exciting glimpse of the Universe’s distant past.

The mission seeks to unlock mysteries of dark matter and dark energy and reveal how and why the Universe looks as it does today.

Five never-before-seen images of the Universe

Early observations, described in a series of 10 scientific papers published today (May 23, 2024), include five never-before-seen images of the Universe: free-floating new-born planets, newly identified extragalactic star clusters, new low-mass dwarf galaxies in a nearby galaxy cluster, the distribution of dark matter and intracluster light in galaxy clusters, and very distant bright galaxies from the first billion years of the Universe.

Most precise map of our Universe over time

The images obtained by Euclid are at least four times sharper than those that can be taken from ground-based telescopes. They cover large patches of sky at unrivalled depth, looking far into the distant Universe using both visible and infrared light.

The Euclid telescope is designed to provide the most precise map of our Universe over time, and these early observations demonstrate its ability to unravel the secrets of the cosmos.

Machine-learning algorithm tracks dementia-related protein clumping in real time

More than 55 million people around the world live with Alzheimer’s, Parkinson’s and other dementia-related neurodegenerative disorders.

Now chemists at the University of Copenhagen have developed a machine-learning algorithm that they say could lead to developing new drugs and therapies to combat these diseases, which are caused by proteins clumping together and destroying vital functions.

The algorithm can track clumping under a microscope in real time, automatically mapping and tracking the important characteristics of the clumped-up proteins that cause neurodegenerative disorders.

The research has just been published in the journal Nature Communications.

Detecting and tracking microscopic proteins in real time

The algorithm can spot protein clumps down to a billionth of a meter in microscopy images in real time. Their exact shape can vary depending on the disorder they trigger.
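The simplified sketch below, which is not the SEMORE pipeline itself, shows the general idea with scikit-image: segment bright aggregates in one microscopy frame and record a few morphological features per clump. The thresholding method and feature choices are assumptions.

```python
# Simplified illustration, not the SEMORE pipeline: segment bright aggregates in
# a microscopy frame and extract basic morphological features for each clump.
# The Otsu threshold and the chosen features are assumptions for illustration.
import numpy as np
from skimage import filters, measure

def fingerprint_clumps(frame: np.ndarray) -> list[dict]:
    """Return area, eccentricity and centroid for each segmented aggregate."""
    mask = frame > filters.threshold_otsu(frame)   # separate clumps from background
    labels = measure.label(mask)                   # connected-component segmentation
    return [
        {"area": int(r.area), "eccentricity": float(r.eccentricity), "centroid": r.centroid}
        for r in measure.regionprops(labels, intensity_image=frame)
    ]
```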

“Our new tool can let us see how these clumps are affected by whatever compounds we add. In this way, the model can help us work towards understanding how to potentially stop or transform them into less dangerous or more stable clumps,” explains Jacob Kæstel-Hansen from the Department of Chemistry, who co-led the research team behind the algorithm, in a press release.

“In the future, the algorithm will make it much easier to learn more about why clumps form so that we can develop new drugs and therapies to combat these disorders.”

New drugs

The researchers also see potential in using the tool to develop new drugs once the microscopic building blocks have been clearly identified. They hope that their work will kickstart the gathering of more comprehensive knowledge about the shapes and functions of proteins and molecules.

The team is also using the tool to conduct experiments with insulin molecules. As insulin molecules clump, their ability to regulate blood sugar weakens.

“As other researchers around the world begin to deploy the tool, it will help create a large library of molecule and protein structures related to various disorders and biology in general. This will allow us to better understand diseases and try to stop them,” concludes Nikos Hatzakis from the Department of Chemistry.

Written in Python, the algorithm is freely available on GitHub.

Citation: Bender, S.W.B., Dreisler, M.W., Zhang, M. et al. SEMORE: SEgmentation and MORphological fingErprinting by machine learning automates super-resolution data analysis. Nat Commun 15, 1763 (2024). https://doi.org/10.1038/s41467-024-46106-0 (open access)

Study links key nutrients with slower brain aging

A new study published in the journal npj Aging has found that specific nutrients may play a pivotal role in healthy aging of the brain. Combining state-of-the-art innovations in neuroscience and nutritional science, the scientists identified a specific nutrient profile in study participants who performed better cognitively.

Researchers at the University of Nebraska–Lincoln’s Center for Brain, Biology and Behavior and the University of Illinois at Urbana-Champaign enrolled 100 cognitively healthy participants, aged 65 to 75.

The participants completed a questionnaire with demographic information, body measurements and physical activity. Blood plasma was collected following a fasting period to analyze the nutrient biomarkers. Participants also underwent cognitive assessments and MRI scans.

The beneficial nutrient blood biomarkers in the study were a combination of fatty acids, antioxidants and carotenoids. This profile is correlated with nutrients found in the Mediterranean diet, previously associated with healthy brain aging.

Citation: Zwilling, C.E., Wu, J. & Barbey, A.K. Investigating nutrient biomarkers of healthy brain aging: a multimodal brain imaging study. npj Aging 10, 27 (2024). https://www.nature.com/articles/s41514-024-00150-8 (open-access)

Algorithms to track space (and sea) objects

Cislunar space, which stretches from the Earth to just beyond the Moon’s orbit, is expected to become heavily trafficked over the next 10 years. This traffic includes NASA’s planned Artemis missions and other countries joining the cislunar space race.

So there’s a need to observe, track and predict the orbits of objects like asteroids and satellites so they don’t collide with spacecraft (or with each other). But space domain awareness (SDA), the process of detecting and observing space objects, faces challenges.

The current SDA infrastructure, which is mostly Earth-based, is not equipped to provide the needed coverage in cislunar space, according to Tarek Elgohary, an associate professor of aerospace engineering at the University of Central Florida and director of the university’s Astrodynamics and Space Robotics Laboratory.

Tracking space objects

Elgohary’s team plans to create a computational framework to rapidly and accurately track space objects. Using Oracle, a satellite developed by the U.S. Air Force Research Laboratory, the researchers will conduct experiments on space-object detection and tracking in cislunar space.
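As a minimal illustration of the kind of computation involved, and an assumption for illustration rather than Elgohary’s framework, the sketch below propagates a space object’s state under simple two-body gravity to predict where it will be an hour later.

```python
# Minimal illustration, not the team's framework: predict a space object's future
# position by integrating two-body (Earth-gravity-only) dynamics with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

MU_EARTH = 3.986004418e5  # Earth's gravitational parameter, km^3/s^2

def two_body(t, state):
    """state = [x, y, z, vx, vy, vz] in km and km/s."""
    position, velocity = state[:3], state[3:]
    acceleration = -MU_EARTH * position / np.linalg.norm(position) ** 3
    return np.concatenate([velocity, acceleration])

# A rough near-circular low Earth orbit, propagated for one hour
initial_state = np.array([7000.0, 0.0, 0.0, 0.0, 7.546, 0.0])
solution = solve_ivp(two_body, (0.0, 3600.0), initial_state, rtol=1e-9, atol=1e-9)
print(solution.y[:3, -1])  # predicted position after one hour (km)
```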

The algorithms will also allow Oracle and other spacecraft to operate autonomously without requiring intervention from Earth, according to the team.

The team will also develop a similar computational framework using algorithms to allow sea vessels to detect objects in real time and predict their future locations.

The work is supported by a $350,000 grant from the U.S. Air Force Office of Scientific Research Dynamic Data and Information Processing Program and a $150,000 grant from Lockheed Martin.
