How to create ‘wired miniature brains’

Researchers at the University of California San Diego have created highly realistic brain cortical organoids—miniature artificial brains with functioning neural networks.

The new technique, published in Nature Protocols, will enable scientists to perform more advanced research, according to Alysson Muotri, senior author and director of the UC San Diego Sanford Stem Cell Institute (SSCI) Integrated Space Stem Cell Orbital Research Center, in a statement.

Neurological disorders

The new technique opens up research on neurological disorders in which electrical activity is altered, ranging from autism to schizophrenia, as well as testing of potentially therapeutic drugs and gene therapies before patient use, and screening for efficacy and side effects.

Previous methods of creating brain organoids have not enabled researchers to study the brain’s electrical activity, says Muotri. The new method enables researchers to study neural networks created from the stem cells of patients with various neurodevelopmental conditions. These new tiny replicas of the human brain are so realistic they rival “the complexity of the fetal brain’s neural network,” he said.

Organoids in space

Muotri and researchers at the Federal University of Amazonas in Manaus, Amazonas, Brazil, are also teaming up to record and investigate Amazonian tribal remedies for Alzheimer’s disease.

In March, Muotri—in partnership with NASA—sent brain organoids to space. The organoids were made from the stem cells of patients with Alzheimer’s disease and ALS (amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease). The payload was returned in May 2024 for testing.

Microgravity

Using microgravity (as experienced by astronauts in space), the researchers plan to mimic an accelerated version of Earth-based aging, allowing them to witness the effects of several years of disease progression while studying the month-long mission’s payload, including potential changes in protein production, signaling pathways, oxidative stress and epigenetics. Other research possibilities for the brain organoids include disease modeling and understanding human consciousness, says Muotri.

This work was supported by the National Institutes of Health, California Institute for Regenerative Medicine (CIRM), a grant from the Department of Defense, and a “Humans in Space” grant by Boryung in Korea.

Citation: Fitzgerald, M. Q., Chu, T., Puppo, F., Blanch, R., Chillón, M., Subramaniam, S., & Muotri, A. R. (2024). Generation of “semi-guided” cortical organoids with complex neural oscillations. Nature Protocols, 1-27. https://doi.org/10.1038/s41596-024-00994-0

Good global-warming news

A new study published in the journal Nature Climate Change has revealed “significant progress” in reducing atmospheric levels of chemicals that destroy Earth’s protective ozone layer.

Decline of harmful hydrochlorofluorocarbons (HCFCs)

The findings show, for the first time, a notable decline in the atmospheric levels of potent “ozone-depleting substances” (ODS) called hydrochlorofluorocarbons (HCFCs). These are harmful greenhouse gases, so a reduction should also lessen global warming.

The Montreal Protocol introduced controls on the production and use of ODS, which were once widely used in manufacturing hundreds of products, including refrigerators, aerosol sprays, foams and packaging. HCFCs were developed as replacements for chlorofluorocarbons (CFCs); CFC production has been banned globally since 2010.

Replacement with non-ozone-depleting compounds

According to lead author Luke Western, Marie Curie Research Fellow at the University of Bristol’s School of Chemistry, “production of HCFCs is currently being phased out globally, with a completion date slated for 2040. These are being replaced by non-ozone-depleting hydrofluorocarbons (HFCs) and other compounds. By enforcing strict controls and promoting the adoption of ozone-friendly alternatives, the protocol has successfully curbed the release and levels of HCFCs into the atmosphere.”

The results rely on high-precision measurements at globally distributed atmospheric observatories, using data from the Advanced Global Atmospheric Gases Experiment (AGAGE) and the National Oceanic and Atmospheric Administration (NOAA).

Two forms of ozone

The good: In the stratosphere, “good” O3 (ozone) protects us from the sun’s harmful UV rays. The famous hole in the ozone layer was created by the use of CFCs; these are now banned, so the hole is shrinking.

The bad: At ground level in urban environments, bad O3 (in the form of photochemical smog) is created by the reaction of urban pollution and sunlight. It becomes a powerful urban pollutant with negative health effects, as noted in What’s Worse Than Global Warming?

Citation: Western, L. M., Daniel, J. S., Vollmer, M. K., Clingan, S., Crotwell, M., Fraser, P. J., Ganesan, A. L., Hall, B., Harth, C. M., Krummel, P. B., Mühle, J., Salameh, P. K., Stanley, K. M., Reimann, S., Vimont, I., Young, D., Rigby, M., Weiss, R. F., Prinn, R. G., . . . Montzka, S. A. (2024). A decrease in radiative forcing and equivalent effective chlorine from hydrochlorofluorocarbons. Nature Climate Change, 1-3. https://doi.org/10.1038/s41558-024-02038-7

Sugar substitute xylitol linked to increased risk of heart attack and stroke

Cleveland Clinic researchers have found that higher amounts of the sugar alcohol xylitol are associated with an increased risk of cardiovascular events such as heart attack and stroke.

The team, led by Stanley Hazen, M.D., Ph.D., found the association in a large-scale patient analysis, preclinical research models and a clinical intervention study. The findings were published in the European Heart Journal.

Xylitol is a common sugar substitute used in sugar-free candy, gums, baked goods and oral products like toothpaste. Over the past decade, the use of sugar substitutes has increased significantly in processed foods that are promoted as healthy alternatives, said Hazen in a statement.

Investigating sugar alcohols and artificial sweeteners

“This study again shows the immediate need for investigating sugar alcohols and artificial sweeteners, especially as they continue to be recommended in combatting conditions like obesity or diabetes,” said Dr. Hazen, Chair of Cardiovascular and Metabolic Sciences at Cleveland Clinic’s Lerner Research Institute and Co-Section Head of Preventive Cardiology in the Heart, Vascular & Thoracic Institute.

Research limitation: association, not causation

The authors note that further studies assessing the long-term cardiovascular safety of xylitol are warranted. The research had several limitations, including the fact that the studies demonstrate association and not causation. They recommend talking to your doctor or a certified dietitian to learn more about healthy food choices and for personalized recommendations.   

The study was partly supported by the National Institutes of Health and the Office of Dietary Supplements.

Citation: Marco Witkowski, Ina Nemet, Xinmin S Li, Jennifer Wilcox, Marc Ferrell, Hassan Alamri, Nilaksh Gupta, Zeneng Wang, Wai Hong Wilson Tang, Stanley L Hazen, Xylitol is prothrombotic and associated with cardiovascular risk, European Heart Journal, 2024, ehae244, https://doi.org/10.1093/eurheartj/ehae244

AI-powered tool lets scientists rapidly analyze complex biological images

Scientists at Chan Zuckerberg Biohub San Francisco (CZ Biohub SF) have developed Omega, an open-source software tool that significantly advances the bioimage analysis field, according to the scientists.

Omega integrates large language models (LLMs), such as OpenAI’s ChatGPT, enabling scientists to process and analyze biological images using natural-language conversations rather than issuing formal commands or writing code.

Omega was created by Loïc A. Royer and his team, and documented in a paper published June 10, 2024 in Nature Methods.

Omega is a plug-in for napari, an open-source image viewer used worldwide in diverse scientific fields, especially biomedical research. “Omega allows users to quickly generate and edit code to solve complex image processing tasks,” explained Royer, a senior group leader and director of imaging AI at CZ Biohub SF.

In one example from the paper, a user asks Omega to z-project 3D images.

Omega’s collaborative features include:

  • Interactive image analysis: Users can instruct Omega to perform specific tasks, such as segmenting cell nuclei, counting objects, and generating detailed reports, all through simple conversational prompts.
  • On-demand widget creation: Omega can create custom widgets tailored to user-defined tasks, facilitating specialized image filtering, transformations, and visualizations.
  • An AI-augmented code editor: Omega includes an intelligent code editor that enhances code management with automatic commenting, error detection, and correction features.
  • Multimodal capabilities: Beyond text, Omega can interpret visual data, integrating multiple data types to provide more comprehensive analyses.
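
The code Omega generates for a request like the z-projection example above isn’t reproduced here, but a minimal sketch of what such a task involves, written directly against the napari viewer and assuming a hypothetical TIFF stack file name, might look like this:

```python
import napari
from tifffile import imread  # assumes the 3D data lives in a TIFF z-stack

# Load a 3D image stack with shape (z, y, x); the file name is hypothetical
stack = imread("cells_3d.tif")

# A maximum-intensity projection along the z axis collapses the stack to 2D
z_projection = stack.max(axis=0)

# Show both the original stack and its projection in the napari viewer
viewer = napari.Viewer()
viewer.add_image(stack, name="original stack")
viewer.add_image(z_projection, name="max z-projection")
napari.run()
```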

Scientific community members are already using Omega, which has been available for download from a GitHub repository since May 2023. Royer said regular updates have been posted since then.

Omega source code: GitHub repository.

Citation: Royer, L. A. (2024). Omega—Harnessing the power of large language models for bioimage analysis. Nature Methods, 1-3. https://doi.org/10.1038/s41592-024-02310-w

Are you ready for embodiment with a bionic tool?

To explore this idea, researchers used virtual reality to test whether humans can feel embodiment in prosthetic “hands” resembling a pair of tweezers.

In the journal iScience, they report that participants felt just as much embodiment with the tweezer hands as with a virtual human hand. They were also faster and more accurate in completing motor tasks in virtual reality with the tweezers than when they were equipped with a virtual human hand.

Merging with tools

“For our biology to merge seamlessly with tools, we need to feel that the tools are part of our body,” says first author and cognitive neuroscientist Ottavia Maddaluno in a statement.

“Our findings demonstrate that humans can experience a grafted tool as an integral part of their own body.” Maddaluno and a team conducted the work at the Sapienza University of Rome and the Santa Lucia Foundation IRCCS.

Tweezers vs. native hand

To investigate this possibility, the researchers conducted a series of experiments in VR with healthy participants. They had either a human-like hand or a “bionic tool” resembling a large pair of tweezers grafted onto the end of their wrist.

For example, participants were asked to pop bubbles of a specific color by pinching them with either their tweezers or between their index finger and thumb. Result: participants were faster and more accurate at popping virtual bubbles when they had tweezer-hands.

Next, the team used a “cross-modal congruency task” to compare implicit or unconscious embodiment for the virtual hand and bionic tool. During this test, the researchers applied small vibrations to the participants’ fingertips and asked them to identify their stimulated fingers.

At the same time, a flickering light was displayed on the virtual reality screen, either on the same finger as the tactile stimulus or on a different finger.

By comparing the participants’ accuracy and reaction times during trials with matched and mismatched stimuli, the researchers could assess how distracted they were by the visual stimulus.
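
The strength of this implicit embodiment is usually summarized as a cross-modal congruency effect: the difference in performance between mismatched and matched trials. A minimal sketch of that computation, using invented trial records and hypothetical field names, looks like this:

```python
from statistics import mean

# Each trial records whether the visual flicker appeared on the same finger
# as the vibration, and how long the participant took to respond (ms).
trials = [
    {"congruent": True,  "rt_ms": 512},
    {"congruent": False, "rt_ms": 583},
    {"congruent": True,  "rt_ms": 498},
    {"congruent": False, "rt_ms": 601},
]

congruent_rts = [t["rt_ms"] for t in trials if t["congruent"]]
incongruent_rts = [t["rt_ms"] for t in trials if not t["congruent"]]

# A larger congruency effect (slower responses on mismatched trials)
# indicates stronger integration of the virtual limb into the body schema.
congruency_effect = mean(incongruent_rts) - mean(congruent_rts)
print(f"Cross-modal congruency effect: {congruency_effect:.1f} ms")
```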

Simplicity rules

There was a bigger difference between matched and mismatched trials when participants had tweezers rather than human hands, indicating that the non-anthropomorphic prosthesis resulted in an even greater sense of embodiment.

The researchers speculate that this is due to the tweezer-hands’ relative simplicity compared to a human-like hand, which might make them easier for the brain to compute and accept.

The researchers also note that this could relate to the “uncanny valley” hypothesis, since the virtual human hands might have been eerily similar to, yet subtly different from, the participants’ real hands, preventing full embodiment.

Robotics and prosthetic limbs

The researchers say that this study could inform robotics and prosthetic limb design. “The next step is to study if these bionic tools could be embodied in patients that have lost limbs,” says Maddaluno. “And we also want to investigate the plastic changes that this kind of bionic tool can induce in the brains of both healthy participants and amputees.”

This research was supported by the European Research Council.

Citation: Matteo Marucci et al. June 6, 2024. Rewiring the evolution of the human hand: How the embodiment of a virtual bionic tool improves behavior. iScience https://doi.org/10.1016/j.isci.2024.109937 (open access)

What’s worse than global warming?

The answer, according to a new study: spikes in ground-level ozone. The study finds that climate change will likely generate upward spikes of ozone at ground level by 2050.

That could push many parts of the United States, for example, out of compliance with air quality standards, increasing risks to public health, according to a statement by James East, first author of the paper and a researcher at Harvard University. “Warmer temperatures are correlated with increases to ozone in polluted areas,” East said.

Higher air pollution

Ozone can naturally occur at ground level, but it’s often formed by volatile organic compounds (VOCs) interacting with nitrogen oxides (NOx)—both are air pollutants.

“That means areas that already have higher levels of air pollution of VOCs and NOx will likely see increases in ozone as average temperatures go up,” says East.

“That’s important because ground-level ozone can cause a wide array of health problems, including decreased lung function and inflammation of airways, contributing to hundreds of thousands of deaths each year.

“Even in the best-case scenario, we found that more variability in ozone levels is projected for 2050, meaning that we’d still expect to see an increase in the number of days where there is an exceptionally high increase in ozone, violating the air quality standard,” East said.

Worst-case scenario

In the worst-case scenario, in which the climate is highly sensitive to carbon dioxide, the study finds that the high end of ozone measurements would increase by more than 2.3 ppb (parts per billion). This would mean that many parts of the country would see a significant increase in the number of days when ozone levels exceed air quality standards.
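
Compliance is judged by counting days on which ozone exceeds the regulatory threshold (the current U.S. 8-hour standard is 70 ppb). A minimal sketch of that bookkeeping, using invented daily values and the 2.3 ppb high-end shift mentioned above:

```python
OZONE_STANDARD_PPB = 70.0   # current U.S. 8-hour ozone standard
PROJECTED_SHIFT_PPB = 2.3   # high-end increase reported in the study

# Hypothetical daily maximum 8-hour ozone values (ppb) for one site
daily_ozone = [62.1, 68.4, 69.9, 71.2, 66.0, 69.5, 73.8, 68.9]

def exceedance_days(values, shift=0.0, standard=OZONE_STANDARD_PPB):
    """Count days whose ozone level exceeds the standard after a uniform shift."""
    return sum(1 for v in values if v + shift > standard)

print("Exceedance days today:     ", exceedance_days(daily_ozone))
print("Exceedance days with shift:", exceedance_days(daily_ozone, PROJECTED_SHIFT_PPB))
```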

“Our study finds that between 5 million and 13 million additional people will be exposed to dangerously high levels of ozone in 2050,” East says.

The paper, “Projecting Changes in the Frequency and Magnitude of Ozone Pollution Events Under Uncertain Climate Sensitivity,” is published in the open-access journal Earth’s Future.

The work was supported by the Natural Sciences and Engineering Research Council of Canada.

Citation: East, J. D., Monier, E., Saari, R. K., & Garcia-Menendez, F. (2024). Projecting Changes in the Frequency and Magnitude of Ozone Pollution Events Under Uncertain Climate Sensitivity. Earth’s Future, 12(6), e2023EF003941. https://doi.org/10.1029/2023EF003941 (open access)

Using AI to decode dog vocalizations

Have you ever wanted to understand what your (or someone else’s) dog is trying to say to you?

University of Michigan researchers are exploring the possibilities of using AI to do just that. They are developing tools to identify whether a dog’s bark conveys playfulness or aggression. The same models can also glean other information from animal vocalizations, such as the animal’s age, breed and sex.

Models trained on human speech

The study, which was conducted in collaboration with Mexico’s National Institute of Astrophysics, Optics and Electronics (INAOE) in Puebla, found that AI models originally trained on human speech can be used as a starting point to train new systems that target animal communication.

This approach enabled the researchers to tap into robust models that form the backbone of the various voice-enabled technologies we use today, including voice-to-text and language translation.

These models are trained to distinguish nuances in human speech, like tone, pitch and accent. They convert this information into a format that a computer can use to identify what words are being said, recognize the individual speaking, and more.

The researchers used a dataset of dog vocalizations (recorded from 74 dogs of varying breeds, ages and sexes in various contexts) to modify a machine-learning speech-representation model called Wav2Vec2, which was originally trained on human speech data.
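
The paper’s exact training setup isn’t described here, but the general pattern of repurposing a pretrained Wav2Vec2 speech model for audio classification looks roughly like the sketch below. The label set, checkpoint name and dummy waveform are illustrative assumptions, and the Hugging Face transformers library is used for convenience:

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

# Hypothetical bark categories; the study also inferred dog age, breed and sex.
labels = ["play", "aggression", "alert", "whine"]

# Start from a Wav2Vec2 model pretrained on human speech and attach a
# fresh classification head sized to the bark categories (to be fine-tuned).
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=len(labels)
)

# One second of silence at 16 kHz stands in for a real bark recording.
waveform = torch.zeros(16000)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted category:", labels[int(logits.argmax(dim=-1))])
```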

Animal welfare

According to the researchers, this study also has important implications for animal welfare. Understanding the nuances of dog vocalizations could greatly improve how humans interpret and respond to dogs’ emotional and physical needs, enhancing their care and preventing potentially dangerous situations.

Citation: Abzaliev, A., Espinosa, H. P., & Mihalcea, R. (2024). Towards Dog Bark Decoding: Leveraging Human Speech Processing for Automated Bark Classification. arXiv:2404.18739. https://arxiv.org/abs/2404.18739 (open access)

AI model designed to improve human-machine understanding

Researchers in Finland have developed a model that they say enables computers to interpret and understand human emotions. The model could improve the interface between humans and smart technologies (including AI systems), making it more intuitive and responsive to user feelings.

In the future, a computer using the model could predict when the user is feeling happiness, boredom, irritation, rage, despair or anxiety, according to Jussi Jokinen, Associate Professor of Cognitive Science at the University of Jyväskylä, in a statement. The model could also be integrated into AI systems.

Jokinen’s project uses mathematical psychology to solve the misalignment problem between intelligent computer systems and their users.

Next step: influence user emotions

The research is anchored in a theory postulating that emotions are generated when human cognition evaluates events from various perspectives.

Jokinen elaborates: “Consider a computer error during a critical task. This event is assessed by the user’s cognition as being counterproductive. An inexperienced user might react with anxiety and fear due to uncertainty about how to resolve the error, but an experienced user might feel irritated and annoyed at having to waste time resolving the issue. Our model predicts the user’s emotional response by simulating this cognitive evaluation process.”
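
The published model combines appraisal theory with reinforcement learning; as a loose sketch of the appraisal idea only, one can imagine mapping a few appraisal variables (invented here for illustration, not taken from the paper) to candidate emotion labels:

```python
def appraise(goal_conduciveness: float, certainty: float, control: float) -> str:
    """Very rough appraisal-to-emotion mapping, for illustration only.

    Inputs are in [0, 1]. This is not the published model; it only shows how
    a cognitive evaluation of an event can yield an emotion prediction.
    """
    if goal_conduciveness > 0.5:
        return "happiness"
    if certainty < 0.4 and control < 0.4:
        return "anxiety"       # unsure how to resolve the problem
    if certainty > 0.6 and control > 0.6:
        return "irritation"    # knows the fix, resents the wasted time
    return "frustration"

# The computer-error example from the quote above:
print(appraise(goal_conduciveness=0.1, certainty=0.2, control=0.3))  # novice
print(appraise(goal_conduciveness=0.1, certainty=0.8, control=0.9))  # expert
```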

Handling negative emotions

The next phase of this project will also explore potential applications of this emotional understanding. “With our model, a computer could preemptively predict user distress and attempt to mitigate negative emotions,” Jokinen suggests.

“This proactive approach could be utilized in various settings, from office environments to social media platforms, improving user experience by sensitively managing emotional dynamics … a glimpse into a future where computers are empathetic partners in user interaction.”

Citation: Jiayi Eurus Zhang, Bernhard Hilpert, Joost Broekens, Jussi P. P. Jokinen. 11 May 2024. Simulating Emotions With an Integrated Computational Model of Appraisal and Reinforcement Learning. CHI ’24: Proceedings of the CHI Conference on Human Factors in Computing Systems, Article No. 703, pages 1–12. https://doi.org/10.1145/3613904.3641908 (open access)

AI helps medical professionals read confusing brain EEGs

Researchers at Duke University have developed an assistive machine-learning model that they say greatly improves the ability of medical professionals to read the brain electroencephalography (EEG) charts of intensive-care patients.

EEG readings are the only method for knowing when unconscious patients are in danger of suffering a seizure or are having seizure-like events, and the new computational tool could help save thousands of lives each year, the researchers say.

The results appear online May 23 in the New England Journal of Medicine AI.

Interpreting EEGs

EEGs use small sensors attached to the scalp to measure the brain’s electrical signals, producing a long line of up and down squiggles. When a patient is having a seizure, these lines jump up and down dramatically like a seismograph during an earthquake—a signal that is easy to recognize.

But other medically important anomalies called seizure-like events are much more difficult to discern, even by highly trained neurologists.  

“Interpretable” machine learning algorithms

To build a tool to help make these determinations, the doctors turned to colleagues specializing in developing “interpretable” machine-learning algorithms. (Most machine-learning models are “black boxes” that make it impossible for a human to know how they reach conclusions; interpretable machine-learning models essentially must show their work.)

The research group started by gathering EEG samples from over 2,700 patients and having more than 120 experts pick out the relevant features in the graphs, categorizing them as either a seizure, one of four types of seizure-like events or “other.”

Patterns showing seizure-like events

The algorithm places each chart along a continuum spanning these categories (the ictal-interictal-injury continuum of the paper’s title). When displayed visually, that continuum looks something like a multicolored starfish swimming away from a predator. Each differently colored arm represents one type of seizure-like event the EEG could represent. The closer the algorithm puts a specific chart to the tip of an arm, the surer it is of its decision; charts placed closer to the central body reflect less certainty.

The algorithm also points to the patterns in the brainwaves that it used to make its determination and provides three examples of professionally diagnosed charts that it sees as being similar.
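
Those “similar, professionally diagnosed charts” behave like a nearest-neighbor lookup in the model’s learned embedding space. A minimal sketch of that retrieval step, assuming the embeddings have already been computed (all values and labels below are made up for illustration):

```python
import numpy as np

# Hypothetical 2-D embeddings of professionally labeled reference EEG segments
reference_embeddings = np.array([
    [0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8], [0.5, 0.5],
])
reference_labels = ["seizure", "seizure", "GPD", "LPD", "other"]

def nearest_examples(query: np.ndarray, k: int = 3):
    """Return the k labeled reference charts closest to the query embedding."""
    distances = np.linalg.norm(reference_embeddings - query, axis=1)
    nearest = np.argsort(distances)[:k]
    return [(reference_labels[i], float(distances[i])) for i in nearest]

# Embedding of a new, uncertain EEG segment
print(nearest_examples(np.array([0.85, 0.15])))
```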

This lets a medical professional quickly look at the important sections and either agree that the patterns are there or decide that the algorithm is off the mark, the researchers say. “Even if they’re not highly trained to read EEGs, they can make a much more educated decision.”

Testing the algorithm

Putting the algorithm to the test, the collaborative team had eight medical professionals with relevant experience categorize 100 EEG samples into the six categories, once with the help of AI and once without. The performance of all of the participants greatly improved with AI, with their overall accuracy rising from 47% to 71%.

This work was supported by the National Science Foundation, the National Institutes of Health, and the Nebraska Stem Cell Grant.

Citation: Alina Jade Barnett et al. “Improving Clinician Performance in Classification of EEG Patterns on the Ictal-Interictal-Injury Continuum using Interpretable Machine Learning.” NEJM AI, May 23, 2024. DOI: 10.1056/AIoa2300331 (open access)

Bio-inspired cameras and AI help drivers detect pedestrians and obstacles faster

University of Zurich (UZH) researchers have combined a novel bio-inspired camera with AI to develop a system that can detect obstacles around a car much more quickly than current systems.

Quicker detection using less computational power

Cameras currently used for driver assistance on cars typically capture 30 to 50 frames per second. An artificial neural network can be trained to recognize objects in their images. But if something happens during the 20 or 30 milliseconds between two snapshots, the camera may see it too late, according to the researchers.

The solution would be to increase the frame rate, but that translates into more data that needs to be processed in real time and more computational power, the scientists note in an open-access Nature paper published today.

Combining the best of two camera types with AI to emulate human eyes

Event (“neuromorphic”) cameras are a recent innovation based on a different principle. Instead of a constant frame rate, they record information every time they detect fast movements, similar to how human eyes perceive images. But they can miss things that move slowly and their images are not easily converted into the kind of data that is used to train the AI algorithm.

The researchers created a hybrid system that combines the best of both worlds: It includes a standard camera that collects 20 images per second (a relatively low frame rate), whose images are processed by a convolutional neural network trained to recognize cars or pedestrians. The data from the event camera is coupled to a different type of AI system, called an asynchronous graph neural network, which is particularly apt for analyzing 3-D data that change over time.

Detections from the event camera are used to anticipate detections by the standard camera and also boost its performance. The result is a visual detector that can detect objects just as quickly as a standard camera taking 5,000 images per second would do but only requires the same bandwidth as a standard 50-frame-per-second camera.
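
The actual fusion architecture is described in the paper; the toy sketch below only illustrates the idea, with invented stand-in functions and data: the frame-based detector anchors the detections every 50 ms, and cheap event-driven updates keep them current between frames.

```python
FRAME_INTERVAL_MS = 50  # standard camera running at 20 fps

# Hypothetical stand-ins for the two components described above.
def cnn_detect(frame):
    """Full (slow) detection on a standard-camera frame."""
    return [{"object": "pedestrian", "x": frame["pedestrian_x"]}]

def event_update(detections, event):
    """Cheap asynchronous update from a single event-camera packet."""
    return [{**d, "x": d["x"] + event["dx"]} for d in detections]

# Simulated input: one frame, then events arriving before the next frame.
frame = {"timestamp_ms": 0, "pedestrian_x": 12.0}
events = [{"timestamp_ms": t, "dx": 0.4} for t in range(5, FRAME_INTERVAL_MS, 5)]

detections = cnn_detect(frame)
for event in events:
    # Event data keeps the detection current between frames, which is how
    # the hybrid can react faster than the standard frame rate alone allows.
    detections = event_update(detections, event)
    print(event["timestamp_ms"], "ms:", detections)
```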

One hundred times faster detections using less data

The team tested their system against the best cameras and visual algorithms currently on the automotive market, finding that it delivers detections one hundred times faster while reducing both the amount of data that must be transmitted between the camera and the onboard computer and the computational power needed to process the images, without affecting accuracy.

Crucially, the system can effectively detect cars and pedestrians that enter the field of view between two subsequent frames of the standard camera, providing additional safety for both the driver and traffic participants, which can make a huge difference, especially at high speeds.

According to the scientists, the method could be made even more powerful in the future by integrating cameras with LiDAR sensors, like the ones currently used on self-driving cars.

Citation: Gehrig, D., Scaramuzza, D. Low-latency automotive vision with event cameras. Nature 629, 1034–1040 (2024). https://doi.org/10.1038/s41586-024-07409-w (open access)
