First spaceship-airplane announced today

Tenacity (credit: Sierra Space)

Sierra Space today announced “Dream Chaser,” which it calls the world’s first combined spaceship and airplane.

Dream Chaser can land at compatible commercial airports and deliver cargo (and, in the future, crew) to and from low-Earth orbit (LEO), according to the company.

Described as “the world’s first runway-capable spaceplane,” Dream Chaser is supported by a contract with NASA to resupply the International Space Station. Construction is now complete, and the vehicle will ship to NASA’s Neil A. Armstrong Test Facility in Ohio for environmental testing in the coming weeks, the company says.

Cargo missions to the ISS

The inaugural Dream Chaser spaceplane will participate in seven initial cargo missions to and from the International Space Station, carrying essential supplies such as food, water and scientific experiments. One of Dream Chaser’s groundbreaking features is its capability to safely return critical cargo to Earth at less than 1.5 g, ensuring the preservation of invaluable payloads, Sierra Space notes.

The launch is planned to occur from NASA’s Kennedy Space Center. Sierra Space’s launch provider, United Launch Alliance (ULA), will carry Dream Chaser into orbit on the second Vulcan Centaur rocket. Tenacity’s maiden resupply mission will culminate with the vehicle’s return, landing at the historic NASA Space Shuttle Landing Facility.

Dream Chaser spaceplane CRS-2 Mission Profile

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Physical neural network learns and remembers ‘on the fly’ like neurons for low-energy machine intelligence

Scientists have developed a low-energy machine intelligence that performs complex, real-world learning and memory tasks, opening a pathway for developing efficient machine intelligence for even more demanding applications.

Published today in Nature Communications, the research is a collaboration between scientists at the University of Sydney and University of California at Los Angeles.

“The findings demonstrate how brain-inspired learning and memory functions using nanowire networks can be harnessed to process dynamic, streaming data,” said lead author Ruomin Zhu, a PhD student from the University of Sydney Nano Institute and School of Physics.

Nanowire networks

The system uses nanowire networks that arrange themselves into patterns mimicking physical neural networks, like those in a brain. The networks can be used to perform specific memory and learning tasks, using simple algorithms that respond to changes in electronic resistance at junctions where the nanowires overlap.

This “resistive memory switching” function is created when electrical inputs encounter changes in conductivity, similar to what happens with synapses in a brain.
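The switching behavior described above can be caricatured in a few lines of code. The toy model below (an illustration, not the published device physics, and with parameter values that are purely assumptions) captures the core idea: junction conductance strengthens under supra-threshold voltage pulses and relaxes back toward a baseline otherwise, giving the junction a short-term “memory” of recent inputs.

```python
# Toy model of a single memristive nanowire junction (illustrative only).
# Conductance strengthens under supra-threshold voltage pulses and decays
# toward a baseline otherwise -- a caricature of resistive memory
# switching, not the actual nanowire device physics.

def step(g, v, v_th=0.5, grow=0.2, decay=0.05, g_max=1.0, g_min=0.01):
    """Advance junction conductance g by one time step under voltage v."""
    if abs(v) > v_th:
        g = g + grow * (g_max - g)      # potentiation toward g_max
    else:
        g = g - decay * (g - g_min)     # relaxation toward g_min
    return g

g = 0.01
trace = []
for t in range(30):
    v = 1.0 if t < 10 else 0.0          # 10 voltage pulses, then silence
    g = step(g, v)
    trace.append(g)
```

After the pulse train the conductance is high; with the input removed it decays gradually, so the junction “remembers” recent stimulation for a while, which is the property the network exploits.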

In this study, researchers used the network to recognize and remember sequences of electrical pulses corresponding to images, inspired by the way the human brain processes information.

Benchmark image recognition

The network was used to perform a benchmark image-recognition task, accessing images in the MNIST database of handwritten digits, a collection of 70,000 small greyscale images used in machine learning. The nanowire neural network displayed a benchmark machine learning capability, scoring 93.4 percent in correctly identifying test images.

The memory task involved recalling sequences of up to eight digits. For both tasks, data was streamed into the network to demonstrate its capacity for online learning and to show how memory enhances that learning.
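The online-learning idea behind both tasks can be sketched with a conventional perceptron trained one sample at a time on synthetic two-class data. This is a generic illustration of streaming (sample-by-sample) learning, not the nanowire readout or the MNIST pipeline the researchers used; the 8-pixel prototype “images” are invented for the example.

```python
import random

random.seed(0)

# Two synthetic 8-pixel prototype "images" (stand-ins for digit classes).
PROTOS = {0: [1, 1, 0, 0, 1, 1, 0, 0],
          1: [0, 0, 1, 1, 0, 0, 1, 1]}

def sample():
    label = random.choice([0, 1])
    x = [p + random.gauss(0, 0.3) for p in PROTOS[label]]
    return x, label

# Online perceptron: weights are updated one sample at a time, so no
# training set ever needs to be held in memory -- the essence of
# learning from streamed data.
w = [0.0] * 8
b = 0.0
lr = 0.1
correct_last_100 = 0
for t in range(500):
    x, y = sample()
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    if t >= 400 and pred == y:
        correct_last_100 += 1
    err = y - pred                     # 0 if correct, else +/-1
    w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    b += lr * err

accuracy = correct_last_100 / 100
```

Because each sample is discarded after its update, memory use stays constant no matter how long the stream runs, which is the property that makes this style of learning attractive for low-energy hardware.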

Out-performing conventional artificial neural networks

The nanowire neural network can learn and remember “on the fly,” sample by sample, extracting data online and thus avoiding heavy memory and energy usage.

“If the data is being streamed continuously, such as it would be from a sensor, for instance, machine learning that relied on conventional artificial neural networks would need to have the ability to adapt in real-time, which they are currently not optimized for,” Zhu said.

Citation: Zhu, R., Lilak, S., Loeffler, A., Lizier, J., Stieg, A., Gimzewski, J., & Kuncic, Z. (2023). Online dynamical learning and sequence memory with neuromorphic nanowire networks. Nature Communications, 14(1), 1-12. https://doi.org/10.1038/s41467-023-42470-5 (open-access)


Non-invasive technology maps brain activity to investigate behavior changes in neurological disease

A research team led by Cleveland Clinic and Oregon Health and Science University (OHSU) has developed a new method for mapping how the parts of the brain “speak” to each other. This is critical to understanding behavior changes in patients with neurological disease, according to the researchers.

“Effects on behavior and personality in Alzheimer’s disease and related disorders are caused by changes in brain function,” says Cleveland Clinic’s Hod Dana, PhD. “If we can understand exactly how the changes occur, we may figure out how to slow down the process or to stop it.”

Mapping brain activity noninvasively

Recording brain activity patterns that underlie behavioral changes is the first step to bridging the gap, say the researchers.

The team recently published results in Nature Communications on using a calcium sensor system called CaMPARI (Calcium-modulated photoactivatable ratiometric integrator) to map brain activity in preclinical models while completing cognitive tasks.

Current technologies are unable to map the whole brain while still resolving single cells. The researchers achieved cortex-wide activity recording without any head fixation, tethering, or attachment of a miniaturized device to the mouse’s head. Multiple cortical regions were recorded while the mouse performed a battery of behavioral and cognitive tests.

How Alzheimer’s-related genes affect learning and memory

The goal is to see how Alzheimer’s-related genes affect the way our neurons signal through our brains in learning and memory. The researchers hope to take what they learn from their results to develop tests and interventions that can improve the quality of life for patients, providing better treatment options.

“We now have the capability to study the relationship between brain activation and cognitive performance at an unprecedented level,” says Jacob Raber, PhD, an OHSU behavioral neuroscientist.

“These are the first steps in developing strategies to reverse those changes and improve cognitive performance in those affected by neurological conditions. The future of behavioral and cognitive neuroscience looks bright.”

Citation: Das, A., Holden, S., Borovicka, J., Icardi, J., Chaklai, A., Patel, D., Patel, R., Kaech Petrie, S., Raber, J., & Dana, H. (2023). Large-scale recording of neuronal activity in freely-moving mice at cellular resolution. Nature Communications, 14(1), 1-12. https://doi.org/10.1038/s41467-023-42083-y (open-access)


Could AI machines become conscious?

Large language models (LLMs) can produce text that leaves the impression that you may be interacting with a human-like, intelligent conscious agent.

But in a paper in the journal Trends in Neurosciences, three neuroscientists—Jaan Aru at University of Tartu in Estonia; Matthew Larkum of Institute of Biology, Humboldt University; and James M. Shine of The University of Sydney—argue that present-day LLMs are “text-centric, whereas the phenomenological umwelt [perceived world] of living organisms is multifaceted and integrated.”

Three ways LLMs differ from human brains

Left: a schematic depicting the basic architecture of a large language model, which can have tens or even more than 100 decoder blocks arranged in a feed-forward fashion. Right: a heuristic map of the thalamocortical system, which generates complex activity patterns thought to underlie consciousness. (credit: Mac Shine, Jaan Aru)

“Many theories of the neural basis of consciousness assign a central role to thalamocortical re-entrant processing,” say the neuroscientists. “Currently, such processes are not implemented in large language models (LLMs). The organizational complexity of living systems has no parallel in present-day AI tools.

“The inputs to language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us.

“The evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.”
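The “feed-forward fashion” of the decoder stack mentioned in the figure caption can be made concrete with a toy sketch. The code below is a deliberately miniature caricature of a decoder-only architecture (tiny dimensions, arbitrary weights, no training): each block is causal self-attention plus a small per-token MLP, and information flows strictly forward through the stack, with no re-entrant loop of the kind the authors attribute to thalamocortical circuits.

```python
import math

# Miniature caricature of a feed-forward decoder stack. Dimensions and
# weights are tiny and arbitrary; this is an illustration of the data
# flow, not a working language model.

D = 4        # embedding width
SEQ = 3      # number of tokens

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def causal_self_attention(x):
    """Single-head attention where token t may only attend to tokens <= t."""
    out = []
    for t in range(len(x)):
        scores = [sum(a * b for a, b in zip(x[t], x[s])) / math.sqrt(D)
                  for s in range(t + 1)]        # causal mask: s <= t
        w = softmax(scores)
        out.append([sum(w[s] * x[s][d] for s in range(t + 1))
                    for d in range(D)])
    return out

def mlp(x):
    """Per-token two-layer toy MLP: ReLU followed by a scalar weight."""
    return [[max(0.0, v) * 0.5 for v in tok] for tok in x]

def decoder_block(x):
    a = causal_self_attention(x)
    x = [[xi + ai for xi, ai in zip(tx, ta)] for tx, ta in zip(x, a)]  # residual
    m = mlp(x)
    return [[xi + mi for xi, mi in zip(tx, tm)] for tx, tm in zip(x, m)]

def model(x, n_blocks=3):
    for _ in range(n_blocks):       # strictly feed-forward: no feedback edges
        x = decoder_block(x)
    return x

tokens = [[0.1, -0.2, 0.3, 0.0],
          [0.0, 0.5, -0.1, 0.2],
          [0.3, 0.1, 0.0, -0.4]]
out = model(tokens)
```

Note that a token’s output depends only on earlier tokens and on the layers below it; nothing ever feeds back from a later block to an earlier one, which is the structural contrast the neuroscientists are drawing.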

Citation: Aru, J., Larkum, M. E., & Shine, J. M. (2023). The feasibility of artificial consciousness through the lens of neuroscience. Trends in Neurosciences. https://doi.org/10.1016/j.tins.2023.09.009


Funding for next-generation telescopes to capture light from the Big Bang and search for gravitational waves announced

The National Science Foundation announced Oct. 26 that it has awarded $3.7 million for final designs of next-generation telescopes to capture the cosmic microwave background: microwave light still traveling across the universe from the earliest moments after the Big Bang.

To capture that light, astrophysicists at the University of Chicago and Lawrence Berkeley National Laboratory plan to build telescopes and infrastructure in both Antarctica and Chile.

The researchers will also search for “primordial” gravitational waves—the physical vibrations from the Big Bang—and will investigate the mystery known as dark matter.

Biggest question

The research goal is to answer the biggest question: did our universe begin with a burst of expansion at the dawn of time (known as inflation), which would have stretched minuscule quantum-mechanical fluctuations into the initial seeds of the structure in the universe today?

The entire project, known as CMB-S4, is proposed to be jointly funded by the National Science Foundation and the U.S. Department of Energy. It is expected to cost on the order of $800 million and to come fully online in the early 2030s. The collaboration currently involves 450 scientists from more than 100 institutions, spanning 20 countries.

“With these telescopes we will be testing our theory of how our entire universe came to be, but also looking at physics that are at the most extreme scales in a way we simply cannot do with particle physics experiments on Earth,” said John Carlstrom, the Subrahmanyan Chandrasekhar Distinguished Service Professor of Astronomy and Astrophysics and Physics, who serves as the project scientist for CMB-S4.

This is a current snapshot of the cosmic microwave background, the oldest light in our universe, from when the cosmos was just 380,000 years old. The colors of the map represent small temperature fluctuations that ultimately resulted in the galaxies we see today. (Credit: ESA and the Planck Collaboration)

The South Pole advantage

The CMB-S4 project would involve two large telescopes in the mountains of Chile and nine smaller ones in Antarctica. “The South Pole is the only location that allows a telescope to look at one place in the sky continuously, because it’s at the pole where the rest of the Earth spins around,” explained Jeff Zivick, deputy project manager for CMB-S4.

The South Pole telescope (credit: U. of Chicago)

The South Pole Telescope, which has been operating since 2007, is equipped with the most powerful cosmic microwave background (CMB) camera.

CMB-S4 is expected to have nearly 500,000 superconducting detectors, a significant increase over all precursor cosmic microwave background experiments combined. The National Science Foundation portion of the project is led by the University of Chicago; the Department of Energy’s portion is led by Lawrence Berkeley National Laboratory.

Primordial gravitational waves

The project would also look for evidence of “primordial gravitational waves”—the ripples in space-time that would have been created if the universe really did explode into being from a space much smaller than a single subatomic particle. These ripples would interact with the cosmic microwave background, creating a distinct but extremely faint signature.

Primordial gravitational waves would also be evidence to connect the force of gravity with the laws of quantum mechanics, said Jim Strait, a physicist at Lawrence Berkeley National Laboratory and the project director for CMB-S4.

“The mismatch between the two theories—one that applies at the very largest scales in the universe and the other at the very smallest—has been plaguing scientists for decades.”


Smart glasses with new ‘acoustic touch’ feature may help blind or low-visioned persons recognize and reach objects

Wearable smart glasses technology is becoming popular in the assistive technologies industry. The tech uses computer vision and other sensory information to translate the wearer’s surroundings into computer-synthesized speech.

Acoustic touch

The researchers explored the potential of a new technique known as “acoustic touch.” It is similar to human “echolocation*”—providing a “wearable spatial audio solution for assisting people who are blind in finding objects,” they explain.

Unlike traditional vision-aid systems, this new acoustic touch technique uses smart glasses to “sonify” objects (convert them into distinct auditory icons) when they enter the device’s field of view.

This innovative approach employs head scanning and auditory icons to represent objects in the user’s field of view. It leverages head tracking and binaural (a type of spatial audio) rendering technology to offer high-quality auditory cues that assist users in pinpointing the location and identity of objects.
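The head-scanning idea above can be sketched in a few lines. The code below is a simplified illustration of the concept only, not the ARIA/Foveated Audio Device implementation: any object inside a narrow field of view around the gaze direction triggers an auditory icon, panned left/right according to its offset. The 15-degree field of view, the object names, and the equal-power pan law are all assumptions made for the example.

```python
import math

# Illustrative sketch of "acoustic touch": as the head sweeps, any object
# inside a narrow field of view triggers an auditory icon, panned
# left/right according to its offset from the gaze direction.
# The field-of-view width and object list are assumptions, not the
# published system's parameters.

FOV_DEG = 15.0

objects = [("cup", -40.0), ("keys", 5.0), ("phone", 60.0)]  # azimuths, degrees

def angular_offset(head_yaw, obj_azimuth):
    """Signed angle from gaze direction to object, wrapped to [-180, 180)."""
    return (obj_azimuth - head_yaw + 180.0) % 360.0 - 180.0

def sonify(head_yaw):
    """Return (name, left_gain, right_gain) for each object inside the FOV."""
    cues = []
    for name, az in objects:
        off = angular_offset(head_yaw, az)
        if abs(off) <= FOV_DEG / 2:
            pan = off / (FOV_DEG / 2)                  # -1 (left) .. +1 (right)
            left = math.cos((pan + 1) * math.pi / 4)   # equal-power pan law
            right = math.sin((pan + 1) * math.pi / 4)
            cues.append((name, round(left, 3), round(right, 3)))
    return cues
```

With the head facing straight ahead (`sonify(0.0)`), only the “keys” at 5° fall inside the ±7.5° window, and their slight rightward offset yields a louder right channel, which is how the user localizes the object by sweeping.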

(It’s conceivable that generative AI methods could further increase the quality of audio representation of visual data.)

Evaluation study

“We developed a wearable ‘Foveated Audio Device’ to study the efficacy and usability of using acoustic touch to search, memorize, and reach items,” say researchers from the University of Technology Sydney and the University of Sydney, together with Sydney start-up ARIA Research.

“Our evaluation study involved 14 participants—7 blind or low-visioned and 7 blindfolded sighted (as a control group) participants,” they said. “We compared the wearable device to two idealized conditions: a verbal clock face description and a sequential audio presentation through external speakers. We found that the wearable device can effectively aid the recognition and reaching of an object.”

The researchers also observed that the device “does not significantly increase the user’s cognitive workload. These promising results suggest that acoustic touch can provide a wearable and effective method of sensory augmentation.”

* Watch How This Blind Man Uses Echo Location To Ride a Bike

Citation: Zhu, H. Y., Hossain, S. N., Jin, C., Singh, A. K., Duc Nguyen, M. T., Deverell, L., Nguyen, V., Gates, F. S., Fernandez, I. G., Melencio, M. V., Bell, R., & Lin, T. (2023). An investigation into the effectiveness of using acoustic touch to assist people who are blind. PLOS ONE, 18(10), e0290431. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0290431

Funding: This work was supported by the Australian Cooperative Research Centres Projects (CRC-P) Round 11 CRCPXI000007, the ARIA research, the University of Technology Sydney, and the University of Sydney. Received by C.J., V.N. and C.L. Website: https://business.gov.au/grants-and-programs/cooperative-research-centres-crc-grants.


Future soft optical fibers could allow for exploring new treatments to block peripheral nerve pain

Engineers at MIT have developed experimental soft, implantable optical fibers that could deliver light to major nerves to explore causes and potential treatments for peripheral nerve disorders in animal models.

(Peripheral nerve pain can occur when nerves outside the brain and spinal cord are damaged, as in sciatica and motor neuron disease, resulting in tingling, numbness, and pain in affected limbs. Peripheral neuropathy is estimated to affect more than 20 million people in the United States.)

“Current devices used to study nerve disorders are made of stiff materials that constrain movement, so that we can’t really study spinal cord injury and recovery if pain is involved,” says Siyuan Rao, assistant professor of biomedical engineering at the University of Massachusetts.

Optogenetics

The study is based on optogenetics, an animal research technique (originally developed at MIT) that genetically modifies neurons in the brain to respond to light. Neuroscientists have applied optogenetics in animals to precisely trace the neural pathways underlying a range of brain disorders, information that has led to targeted therapies for these conditions. 

But peripheral nerves experience constant pushing and pulling from the surrounding muscles and tissues. Rigid silicon devices would constrain an animal’s natural movement and potentially cause tissue damage.  

Testing transparent hydrogel fiber

Their new design is a soft, stretchable, transparent fiber made from hydrogel, a rubbery, biocompatible mix of polymers and water with a Jell-O-like consistency.

The team tested the optical fibers in mice whose nerves were genetically modified to respond to blue light (to excite neural activity) or yellow light (to inhibit sciatic pain). 

“Now, people have a tool to study the diseases related to the peripheral nervous system, in very dynamic, natural, and unconstrained conditions,” said Xinyue Liu, an assistant professor at Michigan State University (MSU).

“We hope to help dissect mechanisms underlying pain in the peripheral nervous system. With time, our technology may help identify novel mechanistic therapies for chronic pain and other debilitating conditions such as nerve degeneration or injury.”

This research was supported, in part, by the National Institutes of Health, the National Science Foundation, the U.S. Army Research Office, the McGovern Institute for Brain Research, the Hock E. Tan and K. Lisa Yang Center for Autism Research, the K. Lisa Yang Brain-Body Center, and the Brain and Behavior Research Foundation.

Details of the team’s new fibers are reported today (Oct. 19, 2023) in a study appearing in Nature Methods.

Citation: Liu, X., Rao, S., Chen, W., Felix, K., Ni, J., Sahasrabudhe, A., Lin, S., Wang, Q., Liu, Y., He, Z., Xu, J., Huang, S., Hong, E., Yau, T., Anikeeva, P., & Zhao, X. (2023). Fatigue-resistant hydrogel optical fibers enable peripheral nerve optogenetics during locomotion. Nature Methods, 1-8. https://doi.org/10.1038/s41592-023-02020-9


Wearable device could read or write data by just bending your finger

Researchers have invented an experimental wearable device using a material that generates power to create and store data by bending a finger—a possible, promising step towards health monitoring and other uses.

Multifunctional devices normally require several layered materials, which demands high-precision nanomaterial fabrication, the researchers note.

The team, led by RMIT University and the University of Melbourne in collaboration with other Australian and international institutions, created the proof-of-concept device using a safe oxide of the low-melting-point metal bismuth.

Senior lead researcher Ali Zavabeti said the invention could be developed to create medical wearables that monitor vital signs and store personal data.

“The innovation was used in our experiments to write, erase and re-write images in nanoscale, so it could feasibly be developed to one day encode bank notes, original art or authentication services,” said Zavabeti, an engineer from RMIT and the University of Melbourne.

The team’s research is published in the journal Advanced Functional Materials.

“We tested natural motion behavior with the device attached to a finger joint, with an average output peak of about 1 volt,” Zavabeti said.

Memory functions

Lead author and PhD student Xiangyang Guo from RMIT said the team can print layers of bismuth oxide in just a few seconds.

“We fundamentally investigated this instant-printing technique for the first time, using low-melting point liquid metals,” said Guo.

The device was able to perform the memory functions of “read,” “write” and “erase” for various images. The device, which was not worn by a user during these memory experiments, wrote and stored a logo and a symbol in a space that could fit 20 times within the width of a human hair.

Other possible uses include sensing and energy harvesting, he said. “The material can act as a semiconductor, meaning it can be used for computation. It is a nanogenerator, meaning it’s energy efficient, with a green-energy supply from environmental vibrations and mechanical movements.”

Guo also said bismuth oxide was likely to cause less skin irritation than silicon, and that it was durable and stretchable, so it can be integrated into wearable technologies.

The researchers plan to adapt their approach to other low-temperature liquid and solid metals and alloys that could be developed for personalized wearables.

The Australian Research Council and the National Computational Infrastructure funded the research. The team included researchers from the University of Toronto, Western Sydney University, University of Sydney, University of New South Wales and Australian National University.

The researchers’ peer-reviewed article, “Multi-Functional Atomically Thin Oxides from Bismuth Liquid Metal,” was published in the journal Advanced Functional Materials.

Citation: Guo, X., Nguyen, C. K., Syed, N., Ravindran, A., Islam, M. A., Filleter, T., Cao, K., Wang, Y., Mazumder, A., Xu, C., Walia, S., Ghasemian, M. B., Kalantar-Zadeh, K., Scholten, S. C., Robertson, I. O., Healey, A. J., Tetienne, P., Lu, T., Liu, Y., . . . Zavabeti, A. Multi-Functional Atomically Thin Oxides from Bismuth Liquid Metal. Advanced Functional Materials, 2307348. https://onlinelibrary.wiley.com/doi/full/10.1002/adfm.202307348 (open access)


When you walk, you intuitively avoid puddles or pavement cracks—now robots are about to catch up with you

There’s a biological mechanism that allows humans and other mammals to navigate complex environments: “central pattern generators” (CPGs), neural circuits that produce rhythmic patterns of control signals for the limbs, using simple environmental cues.
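A CPG is often modeled as a set of coupled oscillators, one per limb, that lock into a fixed gait pattern. The sketch below is a generic Kuramoto-style illustration of that idea, not the Pitt project’s controller; the frequency, coupling strength, and trot gait are assumptions chosen for the example.

```python
import math

# Toy central pattern generator: four coupled phase oscillators, one per
# leg, locked into a trot-like gait (diagonal legs in phase). A generic
# Kuramoto-style sketch, not the funded project's neuromorphic design.

N = 4                          # legs: LF, RF, LH, RH
FREQ = 2.0                     # stride frequency, Hz (illustrative)
K = 8.0                        # coupling strength
# desired phase of each leg relative to leg 0 (trot: diagonals together)
TARGET = [0.0, math.pi, math.pi, 0.0]

def step(phases, dt=0.001):
    new = []
    for i in range(N):
        dphi = 2 * math.pi * FREQ
        for j in range(N):
            # pull each pairwise phase difference toward the gait pattern
            dphi += K * math.sin(phases[j] - phases[i]
                                 - (TARGET[j] - TARGET[i]))
        new.append(phases[i] + dt * dphi)
    return new

phases = [0.3, 0.1, 2.0, 1.0]          # arbitrary starting phases
for _ in range(5000):                   # 5 seconds of simulated time
    phases = step(phases)

# rhythmic drive signal for each leg, e.g. a target joint angle
signals = [math.sin(p) for p in phases]
```

From almost any starting configuration the oscillators settle into the trot pattern, and each leg’s sinusoidal signal can then drive a joint; perturbing a phase (a stumble) is automatically pulled back into rhythm, which is what makes CPGs attractive for robot locomotion.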

University of Pittsburgh engineers have received a $1,606,454 award from the National Science Foundation to lead a two-year project to engineer these patterns and signals in robots’ neural networks.

Fully functional robots with biomimetic sensorimotor control

Neuromorphic engineering—computing inspired by the human brain—will be key to achieving efficient, adaptive sensorimotor control in these robots, says Rajkumar Kubendran, principal investigator and assistant professor of electrical and computer engineering at Pitt.

“We aim to demonstrate a fully functional quadruped or hexapod robot that can learn to move, using principles informed by neuroscience, leading to biomimetic sensorimotor control for energy-efficient locomotion, and using learning algorithms running on bio-realistic neural networks,” Kubendran said.

Critical uses include disaster response

“Agile robots that can explore unknown and treacherous terrains have the potential to enable autonomous navigation for commercial transport, enhance disaster response during floods and earthquakes or to remote and unsafe areas like malfunctioning nuclear plants or space exploration,” he said. 

The project, set to begin in 2024, is part of a larger $45 million initiative by the NSF to invest in the future of semiconductors. 


New material could reconnect severed nerves or stimulate them remotely

Rice University neuroengineer Jacob Robinson and his team have designed a new magnetic-to-electric conversion material that is 120 times faster than similar materials. It can stimulate neurons remotely or bridge the gap in a broken sciatic nerve (tested using a rat model).

In a study published in the journal Nature Materials, the researchers showed that the new material could allow for neurostimulation treatments (such as transcranial magnetic stimulation) with less-invasive procedures. Instead of implanting a neurostimulation device, tiny amounts of the material could simply be injected at the desired site.

Stimulating neural activity 120 times faster

The researchers started with a magnetoelectric material made up of a piezoelectric layer of lead zirconate titanate (which generates electrical current from shape changes) sandwiched between two layers of metallic glass alloys. This material could be rapidly magnetized and demagnetized.

Schematic of neural response for linear magnetic-to-electric conversion (top two conversions) versus nonlinear (bottom). (credit: Josh Chen/Rice University)

The researchers stacked layers of platinum, hafnium oxide and zinc oxide on top of the original magnetoelectric film. This made it 120 times faster at stimulating neural activity, compared to previous magnetic materials, with a layer thinner than 200 nanometers (so in the future it could be injectable).

Proof of concept in neuroprosthetics

The researchers used the material with rats to stimulate peripheral nerves, restore function in a severed nerve, and prove fast electric signal speeds.

According to the researchers, the new metamaterial overcomes many challenges in neurotechnology, and this framework for advanced material design can be applied toward other applications, like sensing and memory in electronics.

The research was supported by the National Science Foundation (2023849) and the National Institutes of Health (U18EB029353).

Citation: Chen, J. C., Bhave, G., Alrashdan, F., Dhuliyawalla, A., Hogan, K. J., Mikos, A. G., & Robinson, J. T. (2023). Self-rectifying magnetoelectric metamaterials for remote neural stimulation and motor function restoration. Nature Materials, 1-8. https://doi.org/10.1038/s41563-023-01680-4
