Potential evolutionary dead ends for humanity and ways to avoid them

Scientists have identified 14 “evolutionary traps” for human societies at large, ranging from global climate tipping points to misaligned artificial intelligence, chemical pollution, and accelerating infectious diseases.

The Anthropocene—the geological epoch shaped by us humans—is showing more and more cracks, says Peter Søgaard Jørgensen, a researcher at the Stockholm Resilience Centre at Stockholm University and the Royal Swedish Academy of Sciences.

Multiple global crises, such as the Covid-19 pandemic, climate change, food insecurity, financial crises, and conflicts have started to occur simultaneously in a “polycrisis.”

Evolutionary traps

“The human species has been too successful and, in some ways, too smart for its own future good,” says Jørgensen, lead author of a new landmark study published today in the open-access journal Philosophical Transactions of the Royal Society B, with insights from a wide range of different scientific disciplines across the natural and social sciences and humanities.

The new study shows how humanity could get stuck in “evolutionary traps”—dead ends that arise from initially successful innovations. The study identifies 14 of these, including the simplification of agriculture, economic growth that does not deliver benefits for humans or the environment, the instability of global cooperation, climate tipping points, and artificial intelligence.

Simplifying agricultural systems is an example of such a trap. Relying on a few highly productive crops such as wheat, rice, maize, and soya has caused the number of calories produced to skyrocket over the past century, but it has also left the food system highly vulnerable to environmental change, such as weather extremes or new diseases.

On the verge of getting stuck

Of the 14 evolutionary traps, 12 are in an advanced state, meaning that humankind is on the verge of getting stuck to a degree where it becomes very difficult to get out. These evolutionary traps also tend to reinforce one another.

The two dead ends that are currently less advanced, the researchers suggest, are the autonomy of technology (AI and robotics) and the loss of social capital through digitalization.

Evading the traps

The new assessment also looks into why societies struggle so hard to move out of these traps. “The evolutionary forces that created the Anthropocene do not work well in today’s global systems, where social and environmental problems grow in places that seem distant to the societies that could prevent them.

Also, “addressing them often requires global collaboration on a scale that many evolutionary forces often do not align well with,” says co-author Lan Wang-Erlandsson, researcher at the Stockholm Resilience Centre at Stockholm University and the Royal Swedish Academy of Sciences’ Anthropocene laboratory.

Transforming societies

This doesn’t mean that humanity is doomed to fail, argue the researchers. But we must start to transform our societies actively.

“It’s time for humans to become aware of the new reality and to collectively move where we want to as a species,” explains Jørgensen. “Our creativity, and our power to innovate and collaborate equip us with the perfect tools to actively design our future. We can break out of dead ends and business-as-usual, but for that, we must nurture the capacity for collective human agency and design settings where it can flourish.”

Citation: Peter Søgaard Jørgensen et al. Evolution of the polycrisis: Anthropocene traps that challenge global sustainability. Philosophical Transactions of the Royal Society B: Biological Sciences. Vol. 379, Issue 1893. https://doi.org/10.1098/rstb.2022.0261 (open-access)

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

New AI noise-canceling headphone tech could let you pick which sounds you hear

If you’ve used noise-canceling headphones, you know that hearing the right noise at the right time can be vital. You might want to block car horns when working indoors, but not when walking along busy streets. No current headphones let you choose sounds that selectively.

So a team led by researchers at the University of Washington has developed deep-learning algorithms that could let you pick which sounds filter through your headphones in real time. Using voice commands or a smartphone app, you could select which of 20 classes of sounds you want to hear, such as sirens, baby cries, speech, vacuum cleaners, and bird chirps.

The team presented its findings Nov. 1 at UIST ’23 in San Francisco. The researchers plan to develop a commercial version of the system.

“Understanding what a bird sounds like and extracting it from all other sounds in an environment requires real-time intelligence that today’s noise-canceling headphones haven’t achieved,” explained senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering.

Audio-visual synchronization

“The sounds headphone wearers hear need to sync with their visual senses. You can’t be hearing someone’s voice two seconds after they talk to you. This means the neural algorithms must process sounds in under a hundredth of a second.”

So instead of relying on cloud servers, processing must happen on a local device such as a connected smartphone. Also, sounds from different directions arrive at your ears at different times, so the system must preserve these delays and other spatial cues.
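The researchers’ actual models aren’t reproduced here, but the data flow they describe—separate the incoming binaural audio into per-class components on the device, then pass through only the classes the wearer selected, keeping both ear channels intact so spatial cues survive—can be sketched roughly as follows. In this Python sketch, `separate_into_classes` is a stand-in for the trained neural network, and the class list is an assumed subset of the 20 classes, so treat it as an illustration of the idea rather than the team’s implementation.

```python
import numpy as np

# Assumed subset of the 20 target-sound classes described in the article.
SOUND_CLASSES = ["siren", "baby_cry", "speech", "vacuum_cleaner", "bird_chirp"]

def separate_into_classes(frame_stereo: np.ndarray) -> dict:
    """Placeholder for the trained separation network.

    A real system would return one binaural stem per sound class,
    preserving the interaural time and level differences of the input.
    Here we fake it with randomly scaled copies of the input frame.
    """
    rng = np.random.default_rng(0)
    return {c: frame_stereo * rng.uniform(0.1, 1.0) for c in SOUND_CLASSES}

def semantic_filter(frame_stereo: np.ndarray, wanted: set) -> np.ndarray:
    """Keep only the user-selected classes; everything else is cancelled."""
    stems = separate_into_classes(frame_stereo)
    out = np.zeros_like(frame_stereo)
    for cls, stem in stems.items():
        if cls in wanted:
            out += stem  # both channels are kept, so spatial cues survive
    return out

# Roughly 8 ms of 48 kHz stereo audio -- short enough for a ~10 ms latency budget.
frame = np.random.randn(384, 2).astype(np.float32)
kept = semantic_filter(frame, wanted={"siren", "speech"})
print(kept.shape)  # (384, 2)
```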

Training on real-world data

Tested in environments such as offices, streets and parks, the system was able to extract sirens, bird chirps, alarms and other target sounds while removing all other real-world noise. When 22 participants rated the system’s audio output for the target sound, they said that on average the quality improved, compared to the original recording.

In some cases, the system struggled to distinguish between sounds that share many properties, such as vocal music and human speech. The researchers note that training the models on more real-world data might improve these outcomes.

Additional co-authors on the paper were Bandhav Veluri and Malek Itani, both UW doctoral students in the Allen School; Justin Chan, who completed this research as a doctoral student in the Allen School and is now at Carnegie Mellon University; and Takuya Yoshioka, director of research at AssemblyAI.

Citation: Bandhav Veluri, Malek Itani, Justin Chan, Takuya Yoshioka, Shyamnath Gollakota. 29 October 2023. Semantic Hearing: Programming Acoustic Scenes with Binaural Hearables. UIST ’23: Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. https://doi.org/10.1145/3586183.3606779 (open-access)

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Brain implant may enable fast translation of a patient’s thoughts into voiced words

A speech prosthetic developed by Duke neuroscientists, neurosurgeons, and engineers can quickly translate a person’s brain signals into voiced words they’re trying to say.

In a paper published Nov. 6 in the open-access journal Nature Communications, the researchers explained how the new technology works and might one day help people who are unable to talk (due to neurological disorders) regain the ability to communicate quickly through a brain-computer interface.

High-density brain sensors

To improve on past limitations, Gregory Cogan, Ph.D., a professor of neurology at Duke University’s School of Medicine and one of the lead researchers involved in the project, teamed up with fellow Duke Institute for Brain Sciences faculty member Jonathan Viventi, Ph.D., whose biomedical engineering lab specializes in making high-density, ultra-thin, and flexible brain sensors.

Viventi and his team packed 256 microscopic brain sensors onto a postage stamp-sized piece of flexible, medical-grade plastic. (Neurons can have wildly different activity patterns when coordinating speech, so it’s necessary to distinguish signals from neighboring brain cells to help make accurate predictions about intended speech.)

Testing implants with patients

After fabricating the new implant, Cogan and Viventi teamed up with several Duke University Hospital neurosurgeons, who helped recruit four patients to test the implants. The experiment required the researchers to place the device temporarily in patients who were undergoing brain surgery for some other condition, such as treating Parkinson’s disease or having a tumor removed. The researchers had only 15 minutes to test-drive the device in the operating room.

The task was a simple listen-and-repeat activity. Participants heard a series of nonsense words, like “ava,” “kug,” or “vip,” and then spoke each one aloud. The device recorded activity from each patient’s speech motor cortex as it coordinated nearly 100 muscles that move the lips, tongue, jaw, and larynx.

At their best, current brain-machine speech decoding systems clock in at about 78 words per minute. But people speak around 150 words per minute, so this remains a work in progress.

Acquiring speech data in seconds vs. hours

Afterwards, the researchers fed the neural and speech data from the surgery suite into a machine-learning algorithm to see how accurately it could predict what sound was being made, based only on the brain activity recordings.

Overall, the decoder was accurate 40% of the time, they found. (The speech decoding algorithm used by first author Suseendrakumar Duraivel had only 90 seconds of spoken data from the 15-minute test to work with.)
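The article doesn’t describe the decoder’s architecture, so the following is only a generic, hedged sketch of the kind of pipeline it implies: feature vectors derived from the 256 recording channels go into a classifier that predicts which sound was being spoken. The synthetic data and the simple logistic-regression stand-in below are assumptions made for illustration, not the authors’ model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for the recordings: one 256-channel feature vector per
# spoken nonsense syllable, with four possible speech-sound labels.
n_trials, n_channels, n_sounds = 400, 256, 4
X = rng.normal(size=(n_trials, n_channels))
y = rng.integers(0, n_sounds, size=n_trials)
X[np.arange(n_trials), y] += 1.5  # inject a weak class-dependent signal to learn

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

decoder = LogisticRegression(max_iter=1000)
decoder.fit(X_train, y_train)
print(f"decoding accuracy: {accuracy_score(y_test, decoder.predict(X_test)):.2f}")
```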

“We’re at the point where it’s still much slower than natural speech,” Viventi said, “but you can see the trajectory where you might be able to get there.”

“There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or ‘locked-in syndrome,’ that can impair their ability to speak,” said Cogan. “But the current tools available to allow them to communicate are generally very slow and cumbersome.”

Why? The lag between spoken and decoded speech is partly due to the relatively few brain-activity sensors that can be fused onto the paper-thin piece of material that lies atop the surface of the brain in current devices, the researchers explain. Fewer sensors mean less information available to decode.

Wireless version

Duraivel and his mentors next plan to make a wireless version of the device, funded by a recent $2.4M grant from the National Institutes of Health. It will allow patients to move around freely.

This work was supported by grants from the National Institutes of Health, the Department of Defense, the Klingenstein-Simons Foundation, and an Incubator Award from the Duke Institute for Brain Sciences.

Citation: “High-resolution Neural Recordings Improve the Accuracy of Speech Decoding,” Suseendrakumar Duraivel, Shervin Rahimpour, Chia-Han Chiang, Michael Trumpis, Charles Wang, Katrina Barth, Stephen C. Harward, Shivanand P. Lad, Allan H. Friedman, Derek G. Southwell, Saurabh R. Sinha, Jonathan Viventi, Gregory B. Cogan. Nature Communications, November 06 2023. DOI: 10.1038/s41467-023-42555-1 (open-access)

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

New gene therapy restores natural hearing for deafness at birth

Several Chinese children with deafness at birth have had their natural hearing pathway restored with a new gene therapy, MIT Technology Review reported last week.

“Shu Yilai, a surgeon and scientist at Fudan University in Shanghai, who is leading the experiment, explains that the new treatment applies only to one specific cause of deafness at birth: a defect in a gene that produces a protein called otoferlin. … In an attempt to restore this function, researchers injected harmless viruses carrying DNA for a working copy of the otoferlin gene into the children’s ears,” MIT Technology Review reports.

Hair cells + otoferlin relay sounds to the brain

“The inner ear contains about 16,000 hair cells, vibrating at different frequencies of sound. Without otoferlin, these cells cannot transmit the chemicals that relay information to the brain,” MIT Technology Review explains.

“The new treatment is designed to add a working copy of the otoferlin gene. Shu injects the loaded viruses deep into a fluid-filled chamber in a part of the ears called the cochlea. … ‘The treatment reaches maybe 60% to 65% of normal hearing,’ says Shu.”

However…

Today (Monday, November 6), MIT Technology Review clarified that “gene therapy remains relatively expensive and inaccessible. … Nova Liu, the founder of Shanghai Refreshgene Therapeutics, which financially backed the Chinese research on this gene therapy, estimated that a commercialized product could cost between $125,000 and $250,000.” 

Also, “this particular gene therapy works only when the deafness is due to a lack of otoferlin. … Only around 1% to 5% of cases of inborn deafness are caused by lack of otoferlin, so many more deaf children aren’t able to benefit from this therapy yet.”

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

First spaceship-airplane announced today

Tenacity (credit: Sierra Space)

Sierra Space announced today “Dream Chaser,” which the company calls the world’s first combined spaceship and airplane.

Dream Chaser can land at compatible commercial airports and deliver cargo (and, in the future, crew) to and from low-Earth orbit (LEO), according to the company.

Described as “the world’s first runway-capable spaceplane,” Dream Chaser is supported by a contract with NASA to resupply the International Space Station. The construction is now complete and it will ship to NASA’s Neil A. Armstrong Test Facility in Ohio for environmental testing in coming weeks, the company says.

Cargo missions to the ISS

The inaugural Dream Chaser spaceplane will participate in seven initial cargo missions to and from the International Space Station, carrying essential supplies such as food, water and scientific experiments. One of Dream Chaser’s groundbreaking features is its capability to safely return critical cargo to Earth at less than 1.5 g, ensuring the preservation of invaluable payloads, Sierra Space notes.

The launch is planned to occur from NASA’s Kennedy Space Center. Sierra Space’s launch provider, United Launch Alliance (ULA), will carry Dream Chaser into orbit on the second Vulcan Centaur rocket. Tenacity’s maiden resupply mission will culminate with the vehicle’s return, landing at the historic NASA Space Shuttle Landing Facility.

Dream Chaser spaceplane CRS-2 Mission Profile

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Physical neural network learns and remembers ‘on the fly’ like neurons for low-energy machine intelligence

Scientists have demonstrated a physical neural network that learns and remembers “on the fly,” opening a pathway toward efficient, low-energy machine intelligence for complex, real-world learning and memory tasks.

Published today in Nature Communications, the research is a collaboration between scientists at the University of Sydney and University of California at Los Angeles.

“The findings demonstrate how brain-inspired learning and memory functions using nanowire networks can be harnessed to process dynamic, streaming data,” said lead author Ruomin Zhu, a PhD student from the University of Sydney Nano Institute and School of Physics.

Nanowire networks

The system uses nanowire networks that arrange themselves into patterns mimicking physical neural networks, like those in a brain. The networks can be used to perform specific memory and learning tasks, using simple algorithms that respond to changes in electronic resistance at junctions where the nanowires overlap.

This “resistive memory switching” function is created when electrical inputs encounter changes in conductivity, similar to what happens with synapses in a brain.

In this study, researchers used the network to recognize and remember sequences of electrical pulses corresponding to images, inspired by the way the human brain processes information.

Benchmark image recognition

The network was used to perform a benchmark image-recognition task, accessing images in the MNIST database of handwritten digits, a collection of 70,000 small greyscale images used in machine learning. The nanowire neural network displayed a benchmark machine learning capability, scoring 93.4 percent in correctly identifying test images.

The memory task involved recalling sequences of up to eight digits. For both tasks, data was streamed into the network to demonstrate its capacity for online learning and to show how memory enhances that learning.

Out-performing conventional artificial neural networks

The nanowire neural network can learn and remember “on the fly,” sample by sample, extracting data online and thus avoiding heavy memory and energy usage.

“If the data is being streamed continuously, such as it would be from a sensor, for instance, machine learning that relied on conventional artificial neural networks would need to have the ability to adapt in real-time, which they are currently not optimized for,” Zhu said.
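The physical nanowire hardware itself can’t be reproduced in software, but the “online” training regime the study highlights—updating a model one sample at a time as data streams past, rather than storing and retraining on a batch—can be illustrated with a standard incremental learner. The sketch below, using scikit-learn’s SGDClassifier on a synthetic ten-class stream, is only an analogy for that learning style; it is not the authors’ method.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.arange(10)      # ten digit classes, as in the MNIST benchmark
model = SGDClassifier()      # incremental linear readout, updated per sample

correct, predictions = 0, 0
for step in range(2000):
    # One synthetic "streamed" sample: a 64-dim feature vector with a weak class signal.
    label = int(rng.integers(0, 10))
    x = rng.normal(size=(1, 64))
    x[0, label] += 2.0

    if step > 0:  # test-then-train: predict before learning from this sample
        correct += int(model.predict(x)[0] == label)
        predictions += 1
    model.partial_fit(x, [label], classes=classes)

print(f"streaming accuracy: {correct / predictions:.2f}")
```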

Citation: Zhu, R., Lilak, S., Loeffler, A., Lizier, J., Stieg, A., Gimzewski, J., & Kuncic, Z. (2023). Online dynamical learning and sequence memory with neuromorphic nanowire networks. Nature Communications, 14(1), 1-12. https://doi.org/10.1038/s41467-023-42470-5 (open-access)

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Non-invasive technology maps brain activity to investigate behavior changes in neurological disease

A research team led by Cleveland Clinic and Oregon Health and Science University (OHSU) has developed a new method for mapping how the parts of the brain “speak” to each other. This is critical to understanding behavior changes in patients with neurological disease, according to the researchers.

“Effects on behavior and personality in Alzheimer’s disease and related disorders are caused by changes in brain function,” says Cleveland Clinic’s Hod Dana, PhD. “If we can understand exactly how the changes occur, we may figure out how to slow down the process or to stop it.”

Mapping brain activity noninvasively

Recording brain activity patterns that underlie behavioral changes is the first step to bridging the gap, say the researchers.

The team recently published results in Nature Communications on using a calcium sensor system called CaMPARI (Calcium-modulated photoactivatable ratiometric integrator) to map brain activity in preclinical models while completing cognitive tasks.

Current technologies cannot map the whole brain while still resolving individual cells. The researchers were able to record cortex-wide activity without any head fixation, tethering, or attachment of a miniaturized device to the mouse’s head. Multiple cortical regions were recorded while the mouse performed a battery of behavioral and cognitive tests.

How Alzheimer’s-related genes affect learning and memory

The goal is to see how Alzheimer’s-related genes affect the way our neurons signal through our brains in learning and memory. The researchers hope to take what they learn from their results to develop tests and interventions that can improve the quality of life for patients, providing better treatment options.

“We now have the capability to study the relationship between brain activation and cognitive performance at an unprecedented level,” says Jacob Raber, PhD, an OHSU behavioral neuroscientist.

“These are the first steps in developing strategies to reverse those changes and improve cognitive performance in those affected by neurological conditions. The future of behavioral and cognitive neuroscience looks bright.”

Citation: Das, A., Holden, S., Borovicka, J., Icardi, J., Chaklai, A., Patel, D., Patel, R., Kaech Petrie, S., Raber, J., & Dana, H. (2023). Large-scale recording of neuronal activity in freely-moving mice at cellular resolution. Nature Communications, 14(1), 1-12. https://doi.org/10.1038/s41467-023-42083-y (open-access)

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Could AI machines become conscious?

Large language models (LLMs) can produce text that leaves the impression that you may be interacting with a human-like, intelligent conscious agent.

But in a paper in the journal Trends in Neurosciences, three neuroscientists—Jaan Aru at the University of Tartu in Estonia, Matthew Larkum of the Institute of Biology at Humboldt University, and James M. Shine of The University of Sydney—argue that present-day LLMs are “text-centric, whereas the phenomenological umwelt [perceived world] of living organisms is multifaceted and integrated.”

Three ways LLMs differ from human brains

Left: a schematic depicting the basic architecture of a large language model, which can have tens or even more than 100 decoder blocks arranged in a feed-forward fashion. Right: a heuristic map of the thalamocortical system, which generates complex activity patterns thought to underlie consciousness. (credit: Mac Shine, Jaan Aru)

“Many theories of the neural basis of consciousness assign a central role to thalamocortical re-entrant processing,” say the neuroscientists. “Currently, such processes are not implemented in large language models (LLMs). The organizational complexity of living systems has no parallel in present-day AI tools.

“The inputs to language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us.

“The evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.”

Citation: Jaan Aru, Matthew Larkum, and James M. Shine. The feasibility of artificial consciousness through the lens of neuroscience. Trends in Neurosciences, October 18, 2023. DOI: https://doi.org/10.1016/j.tins.2023.09.009

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Funding for next-generation telescopes to capture light from the Big Bang and search for gravitational waves announced

The National Science Foundation announced Oct. 26 that it has awarded $3.7 million for final designs of next-generation telescopes to capture the cosmic microwave background—light still traveling across the universe from the earliest moments after the Big Bang.

To capture that light, astrophysicists at the University of Chicago and Lawrence Berkeley National Laboratory plan to build telescopes and infrastructure in both Antarctica and Chile.

The researchers will also search for “primordial” gravitational waves—the physical vibrations from the Big Bang—and will investigate the mystery known as dark matter.

Biggest question

The research goal is to answer the biggest question: did our universe begin with a burst of expansion at the dawn of time (known as inflation), which would have stretched minuscule quantum-mechanical fluctuations into the initial seeds of the structure in the universe today?

The entire project, known as CMB-S4, is proposed to be jointly funded by the National Science Foundation and the U.S. Department of Energy. It is expected to cost on the order of $800 million and to come fully online in the early 2030s. The collaboration currently involves 450 scientists from more than 100 institutions, spanning 20 countries.

“With these telescopes we will be testing our theory of how our entire universe came to be, but also looking at physics that are at the most extreme scales in a way we simply cannot do with particle physics experiments on Earth,” said John Carlstrom, the Subrahmanyan Chandrasekhar Distinguished Service Professor of Astronomy and Astrophysics and Physics, who serves as the project scientist for CMB-S4.

This is a snapshot of the cosmic microwave background, the oldest light in our universe, dating from when the cosmos was just 380,000 years old. The colors of the map represent small temperature fluctuations that ultimately resulted in the galaxies we see today. (Credit: ESA and the Planck Collaboration)

The South Pole advantage

The CMB-S4 project would involve two large telescopes in the mountains of Chile and nine smaller ones in Antarctica. “The South Pole is the only location that allows a telescope to look at one place in the sky continuously, because it’s at the pole where the rest of the Earth spins around,” explained Jeff Zivick, deputy project manager for CMB-S4.

The South Pole telescope (credit: U. of Chicago)

The South Pole Telescope, which has been operating since 2007, is equipped with the most powerful cosmic microwave background (CMB) camera.

CMB-S4 is expected to have nearly 500,000 superconducting detectors, a significant increase over all precursor cosmic microwave background experiments combined. The National Science Foundation portion of the project is led by the University of Chicago; the Department of Energy’s portion is led by Lawrence Berkeley National Laboratory.

Primordial gravitational waves

The project would also look for evidence of “primordial gravitational waves”—the ripples in space-time that would have been created if the universe really did explode into being from a space much smaller than a single subatomic particle. These ripples would interact with the cosmic microwave background, creating a distinct but extremely faint signature.

Primordial gravitational waves would also be evidence to connect the force of gravity with the laws of quantum mechanics, said Jim Strait, a physicist at Lawrence Berkeley National Laboratory and the project director for CMB-S4.

“The mismatch between the two theories—one that applies at the very largest scales in the universe and the other at the very smallest—has been plaguing scientists for decades.”

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Smart glasses with new ‘acoustic touch’ feature may help blind or low-vision persons recognize and reach objects

Wearable smart-glasses technology is becoming popular in the assistive technologies industry. The tech uses computer vision and other sensory information to translate the wearer’s surroundings into computer-synthesized speech.

Acoustic touch

The researchers explored the potential of a new technique known as “acoustic touch.” It is similar to human “echolocation*”—providing a “wearable spatial audio solution for assisting people who are blind in finding objects,” they explain.

Unlike traditional vision-aid systems, this new acoustic touch technique uses smart glasses to “sonify” objects—converting them into distinct auditory icons when they enter the device’s field of view.

This innovative approach employs head scanning and auditory icons to represent objects in the user’s field of view. It leverages head tracking and binaural (a type of spatial audio) rendering technology to offer high-quality auditory cues that assist users in pinpointing the location and identity of objects.

(It’s conceivable that generative AI methods could further increase the quality of audio representation of visual data.)
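As a concrete illustration of binaural rendering of an auditory icon, the sketch below places a short tone at a given azimuth by applying an interaural time difference (a Woodworth-style approximation) plus a simple level difference. The head-radius constant, tone choice, and gain law are assumptions made for this example; the device’s actual renderer is more sophisticated and also incorporates head tracking.

```python
import numpy as np

SAMPLE_RATE = 44_100     # Hz
HEAD_RADIUS = 0.0875     # m, assumed average head radius for the ITD model
SPEED_OF_SOUND = 343.0   # m/s

def auditory_icon(freq_hz: float, azimuth_deg: float, duration_s: float = 0.2) -> np.ndarray:
    """Render a tone as a stereo auditory icon placed at a given azimuth.

    Positive azimuth = object to the wearer's right. Spatialization uses a
    crude interaural time difference plus an interaural level difference.
    """
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * freq_hz * t) * np.hanning(t.size)

    az = np.radians(azimuth_deg)
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (az + np.sin(az))   # seconds (Woodworth)
    delay = int(abs(itd) * SAMPLE_RATE)                      # samples
    far_gain = 10 ** (-6 * abs(np.sin(az)) / 20)             # up to ~6 dB quieter

    near = np.pad(tone, (0, delay))             # ear closer to the object
    far = np.pad(tone, (delay, 0)) * far_gain   # farther ear: delayed and quieter
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right], axis=1)

# An object detected 45 degrees to the wearer's right, signalled with an 880 Hz icon.
stereo = auditory_icon(880.0, azimuth_deg=45.0)
print(stereo.shape)  # (n_samples, 2)
```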

Evaluation study

“We developed a wearable ‘Foveated Audio Device’ to study the efficacy and usability of using acoustic touch to search, memorize, and reach items,” say researchers from the University of Technology Sydney and the University of Sydney, together with Sydney start-up ARIA Research.

“Our evaluation study involved 14 participants—7 blind or low-visioned and 7 blindfolded sighted (as a control group) participants,” they said. “We compared the wearable device to two idealized conditions: a verbal clock face description and a sequential audio presentation through external speakers. We found that the wearable device can effectively aid the recognition and reaching of an object.”

The researchers also observed that the device “does not significantly increase the user’s cognitive workload. These promising results suggest that acoustic touch can provide a wearable and effective method of sensory augmentation.”

* Watch How This Blind Man Uses Echo Location To Ride a Bike

Citation: Zhu, H. Y., Hossain, S. N., Jin, C., Singh, A. K., Duc Nguyen, M. T., Deverell, L., Nguyen, V., Gates, F. S., Fernandez, I. G., Melencio, M. V., Bell, R., & Lin, T. (2023). An investigation into the effectiveness of using acoustic touch to assist people who are blind. PLOS ONE, 18(10), e0290431. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0290431

Funding: This work was supported by the Australian Cooperative Research Centres Projects (CRC-P) Round 11 CRCPXI000007, the ARIA research, the University of Technology Sydney, and the University of Sydney. Received by C.J, V.N and C.L. Website: https://business.gov.au/grants-and-programs/cooperative-research-centres-crc-grants.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter