How the brain turns waves of light into experiences of color

Columbia University neuroscientists have identified, for the first time, brain-cell circuitry in fruit flies that converts raw sensory signals into color perceptions that can guide behavior, according to a paper published today in the journal Nature Neuroscience.

(“Colors” are perceptions your brain constructs as it makes sense of the longer and shorter wavelengths of light detected by your eyes, the researchers explain.)

Networks of neurons in fruit flies

The research team reports the discovery of specific networks of neurons in fruit flies that respond selectively to various hues (perceived colors associated with specific combinations of wavelengths of light). These hue-selective neurons lie within the brain area responsible for vision.

(Some people may perceive a given wavelength as violet while others perceive it as ultraviolet, which is not detectable by most humans. Detecting UV hues is important for the survival of some creatures, such as bees and perhaps fruit flies. Many plants, for example, possess ultraviolet patterns that can help guide insects to pollen.)

Fly-brain connectome

The recent availability of a fly-brain connectome has proven helpful here, the researchers say. This intricate map details how some 130,000 neurons and 50 million synapses in a fruit fly’s poppy-seed-sized brain are interconnected.

With the connectome serving as a reference, the researchers used their observations of brain cells to develop a diagram of what they suspected was the neuronal circuitry behind hue selectivity. The scientists then translated these circuits into mathematical models to simulate and probe the circuits’ activities and capabilities.

Brain circuitry involved in color perception identified

The modeling revealed that these circuits can host the activity required for hue selectivity. It also pointed to a type of cell-to-cell interconnectivity, known as recurrence, without which hue selectivity cannot happen. In a neural circuit with recurrence, outputs of the circuit circle back in to become inputs.
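To make the idea of recurrence concrete, here is a minimal sketch, not the authors’ model: a two-neuron linear rate circuit in which each neuron receives photoreceptor-driven input and inhibits the other. The connection weights and the two-wavelength-channel setup are illustrative assumptions.

```python
# Minimal sketch (not the authors' model): a linear rate circuit in which two
# neurons receive photoreceptor-driven input and inhibit each other (recurrence).
# All weights and the two-channel input are illustrative assumptions.
import numpy as np

def steady_state(inputs, W_in, W_rec):
    """Solve r = W_in @ inputs + W_rec @ r, i.e. r = (I - W_rec)^-1 @ W_in @ inputs."""
    n = W_rec.shape[0]
    return np.linalg.solve(np.eye(n) - W_rec, W_in @ inputs)

# Feedforward weights from two wavelength channels (e.g. short vs. long) to two neurons.
W_in = np.array([[ 1.0, -0.2],
                 [-0.2,  1.0]])
# Recurrent weights: mutual inhibition between the two neurons.
W_rec = np.array([[ 0.0, -0.6],
                  [-0.6,  0.0]])

stimulus = np.array([1.0, 0.4])            # relative drive from the two wavelength bands
with_recurrence = steady_state(stimulus, W_in, W_rec)
without_recurrence = W_in @ stimulus       # same circuit with the feedback removed

print("with recurrence:   ", np.round(with_recurrence, 3))    # ~[ 1.25, -0.55]
print("without recurrence:", np.round(without_recurrence, 3)) # ~[ 0.92,  0.20]
# With the feedback in place, the two responses are pulled further apart,
# i.e. each neuron becomes more selective for its preferred wavelength mix.
```

In this toy circuit, removing the feedback leaves the two outputs fairly similar, while mutual inhibition pulls them apart, which is the kind of sharpening of selectivity the modeling points to.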

And that suggested yet another experiment: “When we used a genetic technique to disrupt part of this recurrent connectivity in the brains of fruit flies, the neurons that previously showed hue-selective activity lost that property. This reinforced our confidence that we really had discovered brain circuitry involved in color perception.”

Citation: Christenson, M.P., Sanz Diez, A., Heath, S.L. et al. Hue selectivity from recurrent circuitry in Drosophila. Nat Neurosci (2024). https://doi.org/10.1038/s41593-024-01640-4 (open access)


How to get a complete understanding of the brain

It starts with zooming in to the tiniest visible subcellular level of the brain: a cubic millimeter (about the size of a grain of rice) of human temporal cortex (located on the right and left sides of your brain, near your temples).

The Harvard and Google researchers could see 57,000 cells, 230 millimeters of blood vessels, and 150 million synapses, amounting to 1,400 terabytes of data, in vivid detail for the first time, they report in the journal Science.

A complete map of the mouse brain

Their ultimate goal, supported by the National Institutes of Health BRAIN Initiative: create a high-resolution map of a whole mouse brain’s neural wiring—about 1,000 times the amount of data.

A Harvard team led by Jeff Lichtman, the Jeremy R. Knowles Professor of Molecular and Cellular Biology and newly appointed dean of science, has co-created with Google researchers the largest synaptic-resolution, 3D reconstruction of a piece of human brain to date, showing each cell.

Lichtman’s field is “connectomics,” which seeks to create comprehensive catalogues of brain structure, down to individual cells and wiring. Such completed maps would light the way toward new insights into brain function and disease, about which scientists still know very little.

AI-enhanced

Google’s state-of-the-art AI algorithms take the imaging a step further, allowing the brain tissue to be reconstructed and mapped in three dimensions. The team has also developed a suite of publicly available tools that researchers can use to examine and annotate the connectome.
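As a hedged illustration of how researchers typically pull cutouts from cloud-hosted electron-microscopy volumes like this one, here is a minimal sketch using the third-party cloud-volume library; it is not the team’s own tooling, and the dataset path, coordinates, and resolution level are hypothetical placeholders.

```python
# Minimal sketch (not the team's own tooling): fetching a small cutout of a
# cloud-hosted EM volume with the third-party cloud-volume library.
# The bucket path and coordinates below are hypothetical placeholders.
from cloudvolume import CloudVolume

vol = CloudVolume(
    "gs://example-bucket/human-cortex-em",  # hypothetical dataset path
    mip=3,                                   # a downsampled level, to keep the request small
    use_https=True,
    progress=False,
)

# Request a 512 x 512 x 16 voxel cutout (x, y, z ranges) and inspect it.
cutout = vol[10000:10512, 10000:10512, 500:516]
print(cutout.shape, cutout.dtype)
```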

Next: the team will tackle the mouse hippocampal formation, which is important to neuroscience for its role in memory and neurological disease.

Citation: Alexander Shapson-Coe et al. (21 authors). 10 May 2024. A petavoxel fragment of human cerebral cortex reconstructed at nanoscale resolution. Science, Vol. 384, Issue 6696. https://www.science.org/doi/10.1126/science.adk4858


Would you trust a robot to look after your cat?

New research finds that it takes more than a carefully designed robot to care for your cat. The robot’s environment is also vital, as is human involvement.

“Cat Royale” is a collaboration between computer scientists from the University of Nottingham and artists at Blast Theory, who worked together to create a multispecies world centered around an enclosure in which three cats and a robot arm coexist for six hours a day, as part of an artist-led project.

Designing the “world,” not just the tech

The open-access research paper “Designing Multispecies Worlds for Robots, Cats, and Humans” suggests that designing the technology and its interactions is not sufficient. It’s equally important to consider the design of the “world” in which the technology operates and human involvement.

To do that, the researchers used a robot arm to offer activities meant to make the cats happier, like dragging a “mouse” toy along the floor, raising a feather “bird” into the air, and even offering them treats to eat. The team then trained an AI to learn which games the cats liked best so that it could personalize their experiences.
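As a hedged illustration of what learning a cat’s favorite game can look like in code (not Cat Royale’s actual system), here is a minimal epsilon-greedy sketch that favors whichever activity has produced the most engagement so far; the game list and engagement measure are invented for the example.

```python
# Minimal sketch (not Cat Royale's actual system): an epsilon-greedy learner
# that picks the game a cat has engaged with most, while still exploring.
import random

GAMES = ["drag mouse toy", "raise feather bird", "offer treat"]

class GamePicker:
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon                      # how often to try a random game
        self.counts = {g: 0 for g in GAMES}
        self.mean_engagement = {g: 0.0 for g in GAMES}

    def choose(self) -> str:
        if random.random() < self.epsilon:          # explore occasionally
            return random.choice(GAMES)
        return max(GAMES, key=lambda g: self.mean_engagement[g])  # otherwise exploit

    def update(self, game: str, engagement: float) -> None:
        """engagement: e.g. seconds of play, as scored from the video."""
        self.counts[game] += 1
        n = self.counts[game]
        self.mean_engagement[game] += (engagement - self.mean_engagement[game]) / n

picker = GamePicker()
game = picker.choose()
picker.update(game, engagement=12.0)                # the cat played for 12 seconds
```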

A designed world

The researchers found they had to design the robot to pick up toys and deploy them in ways that excited the cats, while it learned which games each cat liked. They also designed the entire world in which the cats and the robot lived, providing safe spaces from which the cats could observe the robot (and sneak up on it) and decorating the enclosure so that the robot had the best chance of spotting the approaching cats.

The implication: designing robots involves interior design as well as engineering and AI. It includes the enclosure, the robot and its underlying systems, the various roles of the humans-in-the-loop, and, of course, the selection of the cats. 

Citation: Eike Schneiders et al. 11 May 2024. Designing Multispecies Worlds for Robots, Cats, and Humans. CHI ’24: Proceedings of the CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3613904.3642115 (open access)


GPT-4 fails at heart risk assessment

In a new study involving thousands of simulated cases of patients with chest pain, GPT-4 provided inconsistent conclusions, returning different heart-risk assessment levels for the same patient data.

Despite GPT-4’s reported ability to pass medical exams, it also failed to match the traditional methods physicians use to judge a patient’s cardiac risk.

These findings were published in the journal PLOS ONE.

“ChatGPT was not acting in a consistent manner,” said lead author Dr. Thomas Heston, a researcher with Washington State University’s Elson S. Floyd College of Medicine. “Given the exact same data, ChatGPT would give a score of low risk, then next time an intermediate risk, and occasionally, it would go as far as giving a high risk.”

Weakness: Built-in randomness

The authors believe the problem is likely due to the level of randomness built into the current version of the software, which helps it vary its responses to simulate natural language. However, this same randomness does not work well for healthcare uses that require a single, consistent answer, Heston said.
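To see the kind of variability at issue, here is a minimal sketch, not the study’s protocol, that sends the same question to a chat model several times at different sampling temperatures. The prompt and model name are illustrative, and it assumes the OpenAI Python SDK (v1+) with an API key configured in the environment.

```python
# Minimal sketch (not the study's protocol): querying a chat model repeatedly
# with identical input to observe response variability. The prompt below is
# invented for illustration.
from openai import OpenAI

client = OpenAI()
prompt = (
    "A 55-year-old presents with atraumatic chest pain. "
    "History and vitals are identical to the previous case. "
    "Classify cardiac risk as low, intermediate, or high. Answer with one word."
)

def ask(temperature: float) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # higher values sample more randomly
    )
    return resp.choices[0].message.content.strip()

# Default-like sampling: identical input can yield different risk levels.
print([ask(temperature=1.0) for _ in range(5)])
# Near-greedy decoding: temperature=0 reduces (but may not eliminate) variation.
print([ask(temperature=0.0) for _ in range(5)])
```

Setting the temperature to zero makes decoding closer to deterministic, one way a deployment could trade response variety for consistency.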

“We found there was a lot of variation, and that variation in approach can be dangerous,” he said. “It can be a useful tool, but I think the technology is going a lot faster than our understanding of it, so it’s critically important that we do a lot of research, especially in these high-stakes clinical situations.”

Chest pain is a common complaint in emergency rooms, requiring doctors to rapidly assess the urgency of a patient’s condition. Some very serious cases are easy to identify by their symptoms, but lower-risk ones can be trickier, Heston notes, especially when determining whether someone should be hospitalized for observation or sent home to receive outpatient care.

Potential in healthcare

Despite the negative findings of this study, Heston sees great potential for generative AI in healthcare. “ChatGPT could be excellent at creating a differential diagnosis and that’s probably one of its greatest strengths,” he said.

“If you don’t quite know what’s going on with a patient, you could ask it to give the top five diagnoses and the reasoning behind each one. So it could be good at helping you think through a problem, but it’s not good at giving the answer.”

Citation: Thomas F. Heston, Lawrence M. Lewis. April 16, 2024. ChatGPT provides inconsistent risk-stratification of patients with atraumatic chest pain. PLOS ONE. https://doi.org/10.1371/journal.pone.0301854 (open access)


How to control a robot with your mind

Brain-computer interfaces (BCIs) can make your life easier if you have movement or speech disorders. BCIs could also allow you to manipulate prosthetic limbs, use computers, and play games.

And maybe one day you could control robots to do your bidding.

But unless you want an invasive Neuralink-type device drilled into your brain (or implanted by other methods), you’re currently stuck with wearing an EEG (electroencephalogram) device. Yes, it can analyze your brain waves, but only for limited uses.

AI to the rescue

So Bin He of the Department of Biomedical Engineering at Carnegie Mellon University and colleagues turned to deep-learning decoders in hopes of improving a BCI’s performance.

The idea was to enable continuous and sustained movement of a virtual object. Twenty-eight adult participants were instructed to imagine moving their right hand to move a cursor right, their left hand to move it left, both hands simultaneously to move it up, and neither hand to move it down.
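As an aside on how a four-class imagery scheme can drive smooth, continuous motion, here is a minimal sketch (an illustrative assumption, not the paper’s decoder) that turns decoded class probabilities into a 2D cursor velocity.

```python
# Minimal sketch (an illustrative assumption, not the paper's decoder): mapping
# the four imagined-movement classes described above to a 2D cursor velocity.
# A real deep-learning decoder would output these class probabilities from EEG.
import numpy as np

# Class order: [left hand, right hand, both hands, rest]
CLASS_VELOCITIES = np.array([
    [-1.0,  0.0],   # imagine left hand  -> move left
    [ 1.0,  0.0],   # imagine right hand -> move right
    [ 0.0,  1.0],   # imagine both hands -> move up
    [ 0.0, -1.0],   # imagine neither    -> move down
])

def cursor_velocity(class_probs: np.ndarray, gain: float = 5.0) -> np.ndarray:
    """Probability-weighted velocity, giving smooth continuous control."""
    class_probs = class_probs / class_probs.sum()
    return gain * class_probs @ CLASS_VELOCITIES

# Example: the decoder is fairly confident the user imagines the right hand.
probs = np.array([0.05, 0.75, 0.15, 0.05])
print(cursor_velocity(probs))   # mostly rightward, with a slight upward component
```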

The authors evaluated two different deep-learning architectures and a traditional decoder over seven BCI sessions.

A step towards neuro-assistive robotics

Both deep-learning decoders improved throughout the study and outperformed the traditional decoder by the final session.

The participants were able to control a fast, continuously moving computer cursor—tracking randomly moving objects with a high level of performance without moving a muscle.

Citation: Forenzo, D., Zhu, H., Shanahan, J., Lim, J., & He, B. (2024). Continuous tracking using deep learning-based decoding for noninvasive brain–computer interface. PNAS Nexus, 3(4). https://doi.org/10.1093/pnasnexus/pgae145 (open-access)


Software for surveillance videos

Computer vision can be a valuable tool if you’re tasked with analyzing hours of footage. It can speed up the process of identifying individuals, such as criminal suspects.

Privacy issues

However, there are privacy concerns in this process, so Yogesh Rawat, an assistant professor at the University of Central Florida (UCF) Center for Research in Computer Vision, is developing advanced software that protects privacy and can be installed in video cameras.

His work is supported by $200,000 in funding from the National Science Foundation’s Accelerating Research Translation (NSF ART) program.

“Automation allows us to watch a lot of footage, which is not possible for humans, such as identifiable information that we can visually interpret,” Rawat says. He also aims to make the technology available on edge devices (not requiring an outside server) and in real time.

This also requires developing algorithms that can process the data as quickly as possible, so that graphics processing units (GPUs) and central processing units (CPUs) can handle the workload of analyzing footage as it is captured.
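As one hedged illustration of on-device, real-time privacy filtering (not Rawat’s software), the sketch below blurs detected faces in a live camera feed with OpenCV; the camera index, detector, and blur strength are illustrative choices.

```python
# Minimal sketch (not Rawat's software): on-device face blurring with OpenCV,
# one common way to strip identifiable information from footage in real time.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)           # first attached camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        # Blur each detected face region before the frame is stored or displayed.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    cv2.imshow("privacy-filtered", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

A lightweight detector of this kind can run on modest edge hardware, which is the sort of constraint the project describes.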

Speed and size

“We want to do this very efficiently and very quickly in real time,” Rawat says. “We also don’t want to require a lot of computing power. We are not working with large computers there, but very small devices.”

The funding will also allow Rawat to identify potential users of the technology, including nursing homes, childcare centers and authorities using surveillance cameras.

His work builds on several previous projects, including work that allows analysis of untrimmed security videos, training artificial intelligence models to operate on a smaller scale, and a patent on software that allows for the detection of multiple actions, persons and objects of interest.

Funding sources for these works also include $3.9 million from the IARPA Biometric Recognition and Identification at Altitude and Range program, $2.8 million from the Intelligence Advanced Research Projects Activity (IARPA) Deep Intermodal Video Analysis program, and $475,000 from the U.S. Combating Terrorism Technical Support Office.


A glimpse into an immersive holographic future

Researchers are a step closer to using an ordinary pair of eyeglasses to view both real and virtual high-definition 3D holographic images, according to a study led by Princeton University researchers.

The optical elements required to create these images are tiny and could potentially fit on a regular pair of glasses—no headset required, says Felix Heide, assistant professor of computer science and senior author on a paper published April 22 in Nature Communications.

Radical optical improvement

Holographic images are currently created by a small chip-like device called a “spatial light modulator.” But these modulators can only create images that are either sharp but small, or large but fuzzy.

So the researchers created a new “étendue expander” device to improve image quality. The new device filters the light from the spatial light modulator to expand the field of view while preserving the stability and fidelity of the larger image, with only a minimal drop in quality, according to the researchers.

A pattern etched into frosted glass is the key. Designed using AI and optical techniques, the etched surface scatters light created by the spatial light modulator in a very precise way. This improves the quality of the holographic image and expands the field of view.
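For intuition about why a bare spatial light modulator is so constrained, here is a back-of-the-envelope relation from standard display optics (a sketch for orientation, not an equation quoted from the paper), where N is the modulator’s pixel count, p its pixel pitch, and λ the wavelength of light:

```latex
% Standard display-optics scaling (illustrative, not quoted from the paper):
% the maximum diffraction half-angle of an SLM and its etendue E (roughly,
% image area times viewable solid angle) are capped by the pixel count N.
\[
  \sin\theta_{\max} = \frac{\lambda}{2p},
  \qquad
  E \;\approx\; A_{\mathrm{SLM}} \cdot \Omega_{\mathrm{FOV}} \;\propto\; N \lambda^{2}.
\]
```

Because the étendue is capped by the pixel count, a bare modulator must trade field of view against image size; that is the limit the etched expander is designed to push past.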

Improving virtual and augmented reality displays

The image quality isn’t yet perfect, and the fabrication process for the optical elements needs to be improved. “A lot of technology has to come together to make this feasible. But this research shows a path forward,” said Heide.

“Holography could make virtual and augmented reality displays easily usable, wearable and ultrathin,” he said. “They could transform how we interact with our environments—everything from getting directions while driving to monitoring a patient during surgery.”

The research team also includes Meta Research and the Visual Computing Center at King Abdullah University of Science and Technology.

Citation: Tseng, E., Kuo, G., Baek, SH. et al. Neural étendue expander for ultra-wide-angle high-fidelity holographic display. Nat Commun 15, 2907 (2024). https://doi.org/10.1038/s41467-024-46915-3 (open access)


Researchers create artificial cells that look and act like living cells

University of North Carolina-Chapel Hill researchers have manipulated DNA and proteins to create new functional cells. This accomplishment has implications for efforts in regenerative medicine, drug delivery systems, and diagnostic tools, the researchers say.

Bridging the gap between synthetic and living materials

Without using natural proteins, UNC-Chapel Hill researcher Ronit Freeman and her colleagues built cells with functional cytoskeletons that can change shape and react to their surroundings. To achieve this, they used a new programmable peptide-DNA technology that directs peptides, the building blocks of proteins, and repurposed genetic material to work together to form a cytoskeleton.

“With this discovery, we can think of engineering fabrics or tissues that can behave in dynamic ways,” says Freeman, whose lab is in the Applied Physical Sciences Department of the UNC College of Arts and Sciences.

Programmed biomaterial

The resulting cytoskeleton allows cells to be flexible, both in shape and in response to changes in their environment.

“DNA does not normally appear in a cytoskeleton,” Freeman says. “We reprogrammed sequences of DNA so that it acts as an architectural material, binding the peptides together. Once this programmed material was placed in a droplet of water, the structures took shape.”

Surpassing biology

The ability to program DNA in this way means scientists can now create cells to serve specific functions and even fine-tune a cell’s response to external stressors. “The synthetic cells were stable even at 122 degrees Fahrenheit [50 degrees Celsius], opening up the possibility of manufacturing cells with extraordinary capabilities in environments normally unsuitable to human life,” Freeman says.

The new materials are made to perform a specific function and then modify themselves to serve a new function, customized by adding different peptide or DNA designs to program cells in materials like fabrics or tissues. These new materials can integrate with other synthetic cell technologies—all with potential applications that could revolutionize fields like biotechnology and medicine.

“This research helps us understand what makes life,” Freeman says. “This synthetic cell technology will not just enable us to reproduce what nature does, but also make materials that surpass biology.”

Citation: Daly, M.L., Nishi, K., Klawa, S.J. et al. Designer peptide–DNA cytoskeletons regulate the function of synthetic cells. Nat. Chem. (2024). 10.1038/s41557-024-01509-w (open access)


Protecting brain cells with cannabinol

Scientists at the Salk Institute have begun exploring the clinical potential of a cannabinoid called CBN (cannabinol). Cannabinoids are compounds derived from the cannabis plant, like THC (tetrahydrocannabinol) and CBD (cannabidiol).

These compounds may help people who develop an age-related neurological disorder like Alzheimer’s or Parkinson’s. In an article in the journal Redox Biology, the scientists explain how CBN especially protects the brain against aging and neurodegeneration.

And they found that four CBN-inspired compounds were even more neuroprotective than CBN itself.

Treating traumatic brain injury

“Many neurological disorders involve the death of neurons, due to the dysfunction of their power-generating mitochondria,” says Research Professor Pamela Maher, senior author of the study. “CBN achieves its neuroprotective effect by preventing this mitochondrial dysfunction.”

One of the CBN analogs was especially effective in treating traumatic brain injury, producing the highest survival rate after the onset of the condition.

“We’re excited to see how effective these compounds might be in protecting the brain from further damage,” says Maher. “Could we one day give this CBN analog to football players the day before a big game, or to car accident survivors as they arrive in the hospital?”

Citation: Zhibin Liang, et al. 29 Mar 2024. Fragment-based drug discovery and biological evaluation of novel cannabinol-based inhibitors of oxytosis/ferroptosis for neurological disorders. Redox Biology. 10.1016/j.redox.2024.103138 (open access)


‘Skyrmions’ move at record speeds: a step toward future computing

An international research team led by scientists from French CNRS laboratories has discovered that magnetic nanobubbles known as skyrmions can be moved by electrical currents, attaining record speeds of up to 900 meters per second.

Anticipated as future bits in computer memory, these nanobubbles are just a few nanometers in size, which could provide great computing and information-storage capacity with low energy consumption.

Speedup

Until now, these nanobubbles moved no faster than 100 m/s, which is too slow for computing applications. But thanks to the use of an antiferromagnetic material as a medium, the scientists made the skyrmions move roughly 10 times faster.

These results, which were published in the journal Science, offer new prospects for developing higher-performance and less energy-intensive computing devices.

This study is part of the SPIN national research program launched in January, which supports innovative research in spintronics, with a view to helping develop a more agile and enduring digital world.

Citation: Pham, V. T., Sisodia, N., Manici, I. D., Urrestarazu-Larrañaga, J., Bairagi, K., Pelloux-Prayer, J., Guedas, R., Buda-Prejbeanu, L. D., Auffret, S., Locatelli, A., Menteş, T. O., Pizzini, S., Kumar, P., Finco, A., Jacques, V., Gaudin, G., & Boulle, O. (2024). Fast current-induced skyrmion motion in synthetic antiferromagnets. Science. https://www.science.org/doi/10.1126/science.add5751
