Machine-learning algorithm identifies drugs that shouldn’t be prescribed together

Using a machine-learning algorithm, researchers can now predict interactions that could interfere with a drug’s effectiveness, according to a study by researchers at MIT, Brigham and Women’s Hospital, and Duke University.

Any drug taken orally must pass through the lining of the digestive tract. Proteins called “transporters,” found on cells that line the GI tract, help with this process. But for many drugs, it’s unknown which of those transporters they use to exit the digestive tract. When two drugs rely on the same transporter, they can interfere with each other’s absorption, which is one reason certain drugs should not be prescribed together.

Identifying transporters

The researchers have developed a way to identify the transporters used by different drugs, using tissue models and machine-learning algorithms, according to Giovanni Traverso, an associate professor of mechanical engineering at MIT. He is also a gastroenterologist at Brigham and Women’s Hospital and the senior author of the study, which appears in Nature Biomedical Engineering.

Learning more about which transporters help drugs pass through the digestive tract could also help drug developers improve the absorbability of new drugs by adding excipients that enhance their interactions with transporters.

Model training

For this study, Traverso and his colleagues adapted a tissue model they had developed in 2020 to measure a given drug’s absorbability. This experimental setup, based on pig intestinal tissue grown in the laboratory, can be used to systematically expose tissue to different drug formulations and measure how well they are absorbed.

The researchers tested 23 commonly used drugs using this system, allowing them to identify transporters used by each of those drugs. Then they trained a machine-learning model on that data, as well as data from several drug databases. The model learned to make predictions of which drugs would interact with which transporters, based on similarities between the chemical structures of the drugs.
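As a rough illustration of the idea (not the study’s actual model, which was trained on tissue-model measurements and drug-database data), a similarity-based predictor can be sketched in a few lines: each drug is represented as a binary structural fingerprint, and a query drug inherits the transporter label of its most chemically similar labeled neighbor. The fingerprints and labels below are toy values, not real chemical data.

```python
# Toy sketch of similarity-based transporter prediction.
# Real pipelines would use chemical fingerprints (e.g. Morgan/ECFP);
# here the bit-vectors are invented for illustration.

def tanimoto(a, b):
    """Tanimoto similarity between two equal-length binary fingerprints."""
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return both / either if either else 0.0

def predict_transporter(query_fp, labeled):
    """Give the query the transporter label of its most similar neighbor."""
    best_label, best_sim = None, -1.0
    for fp, label in labeled:
        sim = tanimoto(query_fp, fp)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label, best_sim

# Toy training set: (fingerprint, uses_transporter_X)
labeled = [
    ([1, 1, 0, 1, 0, 0, 1, 0], True),
    ([1, 1, 1, 1, 0, 0, 1, 0], True),
    ([0, 0, 1, 0, 1, 1, 0, 1], False),
]

label, sim = predict_transporter([1, 1, 0, 1, 0, 0, 1, 1], labeled)
print(label, round(sim, 2))
```

In practice the model also generalizes beyond nearest neighbors, but the underlying signal is the same: structurally similar drugs tend to use the same transporters.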

Two million predictions of potential drug interactions

Using this model, the researchers analyzed a new set of 28 currently used drugs, as well as 1,595 experimental drugs. This screen yielded nearly 2 million predictions of potential drug interactions. Among them was the prediction that doxycycline, an antibiotic, could interact with warfarin, a commonly prescribed blood thinner, as well as with digoxin (used to treat heart failure), levetiracetam (an antiseizure medication), and tacrolimus (an immunosuppressant).

Patient data confirmed the model’s predictions that the absorption of doxycycline is affected by digoxin, levetiracetam, and tacrolimus. Only one of those drugs, tacrolimus, had previously been suspected of interacting with doxycycline.

Helping drug development

This approach could also be applied to drugs now in development. Drug developers could tune the formulation of new drug molecules to prevent interactions with other drugs or improve their absorbability. Vivtex, a biotech company co-founded in 2018 by former MIT postdoc Thomas von Erlach, MIT Institute Professor Robert Langer, and Traverso to develop new oral drug delivery systems, is now pursuing that kind of drug-tuning.

The research was funded in part by the National Institutes of Health, the Department of Mechanical Engineering at MIT, and the Division of Gastroenterology at Brigham and Women’s Hospital.

Citation: Shi, Y., Reker, D., Byrne, J.D. et al. Screening oral drugs for their interactions with the intestinal transportome via porcine tissue explants and machine learning. Nat. Biomed. Eng (2024). https://doi.org/10.1038/s41551-023-01128-9

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Radical new light-wave chip design enables AI computing at speed of light

University of Pennsylvania engineers have developed a new chip that radically accelerates processing by using light waves rather than electricity to perform the complex math essential to training AI.

The silicon-photonic (SiPh) chip has the potential to accelerate the processing speed of computers while also reducing their energy consumption, according to the researchers.

Vector-matrix math at speed of light

The innovation, based on Professor Nader Engheta’s pioneering research in manipulating materials at the nanoscale to perform mathematical computations using light, is a platform for performing vector-matrix multiplication. This is a core mathematical operation in the development and function of neural networks, the computer architecture that powers today’s AI tools.

In a paper in Nature Photonics, Engheta’s group, together with that of Firooz Aflatouni, Associate Professor in Electrical and Systems Engineering, describe the development of the new chip. They explain that instead of using a silicon wafer of uniform height, the chip uses variations in height to control the propagation of light through the chip. That causes light to scatter in specific patterns, allowing the chip to perform mathematical calculations at the speed of light.
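The operation the chip performs optically is the same vector-matrix product that sits at the heart of every neural-network layer. For reference, here is that operation in ordinary software (a minimal plain-Python sketch; the weights and inputs are arbitrary example values):

```python
# A neural-network layer is essentially y = W @ x, followed by a bias and
# a nonlinearity. The photonic chip carries out the W @ x part with light
# rather than arithmetic circuits; below is the same math in software.

def matvec(W, x):
    """Multiply matrix W (a list of rows) by vector x."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def relu(v):
    """Common neural-network nonlinearity applied after the product."""
    return [max(0.0, a) for a in v]

W = [[0.5, -1.0, 2.0],
     [1.5,  0.0, 0.5]]   # example weights
x = [1.0, 2.0, 3.0]      # example input vector

y = relu(matvec(W, x))   # one layer's forward pass
print(y)
```

Training and running a large network repeats this product billions of times, which is why performing it at the speed of light, in parallel, is attractive.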

Potential use in GPUs

The design could potentially be adapted for use in graphics processing units (GPUs), the demand for which has skyrocketed with the widespread interest in developing new AI systems.

In addition to offering faster speeds and lower energy consumption, the chip can perform many computations simultaneously, eliminating the need to store sensitive information in a computer’s working memory and rendering a future computer powered by such technology, the researchers say, virtually unhackable.

This study was conducted at the University of Pennsylvania School of Engineering and Applied Science and supported in part by a grant from the U.S. Air Force Office of Scientific Research’s (AFOSR) Multidisciplinary University Research Initiative (MURI) and a grant from the U.S. Office of Naval Research (ONR).

Citation: Nikkhah, V., Pirmoradi, A., Ashtiani, F. et al. Inverse-designed low-index-contrast structures on a silicon photonics platform for vector–matrix multiplication. Nat. Photon. (2024). https://doi.org/10.1038/s41566-024-01394-2

How can humans protect the Earth from devastating asteroid and comet impacts?

Currently, we depend on early warning by NASA’s Goldstone Solar System Radar. Located in the desert near Barstow, California, it is part of NASA’s Deep Space Network (DSN) (also used to explore other planets).

Next-generation radar on the Green Bank Telescope

However, to expand on these capabilities, the National Radio Astronomy Observatory (NRAO) has developed a new instrument concept called the “next generation RADAR” (ngRADAR) system. It will use the National Science Foundation’s Green Bank Telescope (GBT) and other current and future facilities for planetary defense.

Today (Saturday, February 17) scientists will showcase recent results obtained with ground-based radar systems at the American Association for the Advancement of Science’s annual conference in Denver, Colorado.

The future of space radar

“There are many applications for the future of radar, from substantially advancing our knowledge of the Solar System, to informing future robotic and crewed spaceflight, and characterizing hazardous objects that stray too close to Earth,” says Tony Beasley, NRAO’s director.

“Most recently, the GBT helped confirm the success of NASA’s DART mission (also see Mindplex’s Nuke That Asteroid! Is That A Good Idea?), the first test to see if humans could successfully alter the trajectory of an asteroid,” says NRAO scientist and ngRADAR project director Patrick Taylor.

“With the support of Raytheon Technologies, ngRADAR pilot tests on the GBT—using a low-power transmitter with less output than a standard microwave oven—have produced the highest-resolution images of the Moon ever taken from Earth. Imagine what we could do with a more powerful transmitter,” he said.

The GBT is the world’s largest fully steerable radio telescope. The maneuverability of its 100-meter dish enables it to observe 85 percent of the celestial sphere, allowing it to quickly track objects across its field of view.

MIT researchers remotely map food crops, using machine learning and satellites

Crop maps help scientists and policymakers track global food supplies and estimate how they might shift with climate change and growing populations. It’s estimated that smallholder farms support two-thirds of the world’s rural population and produce 80 percent of the world’s food.

Keeping tabs on what is grown and where is essential to tracking and forecasting food supplies around the world. But only a handful of countries have the resources to create accurate maps.

So MIT engineers used a combination of Google Street View images, machine learning, and satellite data to automatically determine the crops grown throughout a region, without requiring in-person assessments of every single farm.
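A minimal sketch of the general idea, under simplified assumptions (the study’s actual pipeline uses deep learning on real street-view and satellite imagery): crop labels recovered from roadside photos serve as training labels for a classifier over satellite-derived features, which can then predict the crop for fields no road passes. The two-dimensional features below are toy stand-ins for real satellite band or time-series features.

```python
# Toy nearest-centroid classifier standing in for the real model:
# roadside-derived crop labels train it, satellite features drive it.

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(labeled):
    """labeled: list of (features, crop_label) from roadside-visible fields."""
    by_crop = {}
    for feats, crop in labeled:
        by_crop.setdefault(crop, []).append(feats)
    return {crop: centroid(pts) for crop, pts in by_crop.items()}

def predict(model, feats):
    """Assign the crop whose centroid is nearest in feature space."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda crop: dist2(model[crop], feats))

labeled = [([0.8, 0.2], "rice"), ([0.9, 0.3], "rice"),
           ([0.2, 0.9], "cassava"), ([0.1, 0.8], "cassava")]
model = train(labeled)
print(predict(model, [0.85, 0.25]))  # a field far from any road
```

The key point is the division of labor: street-level imagery supplies sparse ground-truth labels, and satellite features let the trained model fill in every field in between.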

First nationwide crop map of Thailand

The researchers used this technique to generate the first nationwide crop map of Thailand’s four major crops—rice, cassava, sugarcane, and maize—with an accuracy of 93 percent.

That’s comparable to on-the-ground mapping efforts in high-income, big-farm countries, says Sherrie Wang, the d’Arbeloff Career Development Assistant Professor in MIT’s Department of Mechanical Engineering.

Mapping India’s millions of smallholder farmers

The researchers are now moving to map crops across India, where roadside images have recently become available from Google Street View and other services. “There are over 150 million smallholder farmers in India,” says Wang.

“India is covered in agriculture, almost wall-to-wall farms, but very small farms, and historically it’s been very difficult to create maps of India because there are very sparse ground labels.”

The mapping method

Wang, along with graduate students from MIT’s Institute for Data, Systems, and Society (IDSS) and the agtech company PEAT GmbH, will present a paper detailing their mapping method later this month at the AAAI Conference on Artificial Intelligence.

“What would be interesting would be to create these maps over time,” Wang says. “Then you could start to see trends, and we can try to relate those things to anything like changes in climate and policies.”

Citation: Soler, J. L., Friedel, T., & Wang, S. (2023). Combining Deep Learning and Street View Imagery to Map Smallholder Crop Types. arXiv https://arxiv.org/abs/2309.05930 (open-access)

Shape-shifting soft machines

Researchers at Lawrence Livermore National Laboratory have developed a new type of soft material that changes shape in response to light. It could enable “soft machines” for fields ranging from robotics to medicine.

Liquid crystals and 3D-printing

The novel liquid crystal elastomer (LCE) material is made by incorporating liquid crystals into the molecular structure of a stretchable material.

By adding gold nanorods to the LCE material, the scientists and engineers created photo-responsive inks and 3D-printed structures that can be made to bend, crawl and move when exposed to a laser that causes localized heating in the material.

The results were recently published online by the journal Matter.

As described in the paper, the LLNL team, along with their collaborators from Harvard University, North Carolina State University and the University of Pennsylvania, used a direct ink-writing printing technique to build a variety of light-responsive objects. These include cylinders that could roll, asymmetric “crawlers” that could go forward, and lattice structures that oscillated.

Hierarchical lattices

By combining shape morphing with photoresponsivity, the new material could change the way people think about machines and materials.

“At LLNL, we’ve focused on developing static materials and architectures for some time,” said principal investigator Caitlyn Krikorian Cook. “We’ve made these complex types of structures like hierarchical lattices, and we’ve even started exploring more responsive materials, like shape memory polymers, that have a one-time shape memory response.”

Soft machines

The researchers said the new material could be used to create a “soft machine” made from these flexible LCE composite materials—capable of responding to external stimuli and even mimicking the movements and behaviors of living organisms.

Soft robots made of the shape morphing material could crawl, swim or fly, and explore environments that are too difficult or dangerous for humans to access, like caves or outer space.

Soft machines could also be used in medical applications, such as implantable devices that can adapt to the body’s movements, or prosthetic limbs that move like natural limbs, and other applications that aren’t possible with machines made from rigid materials, like metal or plastic.

“Rigid robots maybe wouldn’t be ideal for humans to interact with, so we need systems and materials that are more compliant,” said the paper’s lead author Michael Ford, a postdoc at Carnegie Mellon University.

“You start with components that make up our robots, and one of those components is an actuator. That’s where these materials come in; they could potentially be an actuator. It reduces computational complexity; you’re making a material that gets rid of onboard electronics and replacing them with a single material that can do all those things.

“That will allow you to put more computational complexity into another component or drive power to other sensors that you wouldn’t have been able to do with traditional rigid materials.”

Photothermal actuation

Researchers said the movement of the LCE material is driven primarily by a process known as photothermal actuation, which involves converting light energy into thermal energy, resulting in a mechanical response from the material.

Driven by the interaction between light, gold nanorods and the LCE matrix, the process enables the printed structures to exhibit dynamic and reversible movements in response to external stimuli.

In the study, the researchers used a computer vision system, involving cameras and tracking software, to control the movement of a printed cylinder.

The tracking system monitored the position of the rolling cylinder and continuously adjusted the position of the laser to raster the edge of the cylinder. This continuous tracking and adjustment allowed for the cylinder to maintain its rolling motion in a controlled manner.

By leveraging computer vision with the photothermal actuation of the cylinder, the researchers achieved a sophisticated level of manipulation of the soft machine’s movement, showcasing the potential for advanced control systems in the field of soft robotics and soft machines.
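The tracking loop described above can be caricatured in a few lines. This is a toy simulation under invented dynamics (the radius, step size and beam tolerance are made-up numbers, not from the study): with tracking, the laser stays on the cylinder’s trailing edge and the cylinder keeps rolling; without tracking, it rolls out of the fixed beam and stalls.

```python
# Toy closed-loop simulation of laser-driven rolling (invented parameters).
RADIUS = 5.0         # cylinder radius, arbitrary units (assumed)
STEP_PER_HIT = 0.5   # roll distance per laser raster (assumed)
BEAM_TOL = 0.25      # how far the laser may drift off the edge and still heat it

def roll(frames, tracking):
    """Simulate a rolling cylinder; the laser starts on its trailing edge."""
    position, laser = 0.0, -RADIUS
    for _ in range(frames):
        if tracking:
            laser = position - RADIUS        # CV tracker re-aims every frame
        if abs(laser - (position - RADIUS)) < BEAM_TOL:
            position += STEP_PER_HIT         # photothermal kick rolls it on
    return position

print(roll(4, tracking=True))    # tracked: advances every frame
print(roll(4, tracking=False))   # untracked: rolls out of the beam and stalls
```

The difference between the two runs is the whole point of the feedback loop: continuous re-aiming keeps the heat source at the edge where it does useful work.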

The team also showed that responsivity could be controlled so the soft machines could perform useful tasks, such as a moving cylinder carrying a wire.

“[Lead author Ford] did some awesome work in using computer vision to control the locomotion of the printed cylinder and using a rastering laser to force it to move,” said co-author Elaine Lee.

“But once you start to get into much more complex motion—like using various rastering speeds and light intensities on a printed lattice, causing it to move in various different modes.

“Those were actually outside of what our high-performance computing (HPC) simulations were able to predict, because those codes are expecting uniform heating or stimuli on that lattice.

“So, using computer vision and machine learning to learn the actuation speeds, and what doses of light can cause locomotion from that printed architecture, will push us a lot further in understanding how our materials will respond.”

New control systems and computer algorithms also could enable soft machines to move and interact with their environment in a more intelligent and autonomous way, they said.

Sentient materials

Cook said the team is looking at incorporating responses to different types of stimuli beyond heat and light, such as humidity and energy absorption, as well as conditions the material might experience in space.

The team is looking at starting a new Strategic Initiative at the Lab to focus on autonomous materials and “move the needle” toward sentient materials that can sense, respond, be programmed, learn, decide and communicate.

“These liquid crystal elastomers are responsive materials — they’re able to sense a stimulus and respond, and will respond repeatedly every time — but they don’t have a sense of memory or a way to learn the repeated stimuli and respond accordingly. These are really the materials that we’re striving towards, and this might be a five- to 10-year timespan of effort,” Cook said.

Funding for the research came from the Laboratory Directed Research and Development (LDRD) project “Shape Changing of Responsive Elastomer Structures,” and a grant from the National Science Foundation.

Citation: Michael J. Ford et al. February 2, 2024. Movement with light: Photoresponsive shape morphing of printed liquid crystal elastomers. Matter. https://www.cell.com/matter/abstract/S2590

Fusion research experiments yield new high-energy record

The Joint European Torus (JET), one of the world’s largest and most powerful fusion machines, has demonstrated the ability to reliably generate fusion energy and has set a world record in energy output—a significant milestone in the field of fusion science and engineering.

In JET’s final deuterium-tritium experiments (DTE3), high fusion power was consistently produced for 5 seconds, resulting in a ground-breaking record of 69 megajoules, using a mere 0.2 milligrams of fuel.

10 times hotter than the core of the Sun

JET is a “tokamak”—a design that uses powerful magnetic fields to confine a plasma in the shape of a doughnut. Fusion, the process that powers stars like our Sun, promises a clean baseload source of heat and electricity for the long term, using small amounts of fuel that can be sourced worldwide from inexpensive materials.

When a mix of two forms of hydrogen (deuterium and tritium) is heated to form a controlled plasma at extreme temperatures—10 times hotter than the core of the Sun—they fuse together to create helium and release energy that can be harnessed to produce electricity.
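A back-of-the-envelope check shows why those headline figures matter: 69 megajoules from 0.2 milligrams of fuel implies an enormous energy density. (The gasoline figure of roughly 46 MJ/kg is a textbook value used here only for scale.)

```python
# Sanity check of the JET DTE3 figures: 69 MJ from 0.2 mg of D-T fuel.

energy_j = 69e6       # 69 megajoules released
fuel_kg = 0.2e-6      # 0.2 milligrams of deuterium-tritium fuel

specific_energy = energy_j / fuel_kg   # joules per kg of fuel
gasoline = 46e6                        # ~46 MJ/kg, textbook value for scale

print(f"{specific_energy:.2e} J/kg")
print(f"{specific_energy / gasoline:.1e}x gasoline, per kg of fuel")
```

The fuel comes out millions of times more energy-dense than a chemical fuel, which is why fusion needs only trace amounts of inexpensive, globally available raw material.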

Sustainable fusion energy

More than 300 scientists and engineers from EUROfusion—a consortium of researchers across Europe—contributed to these landmark experiments at the UK Atomic Energy Authority (UKAEA) site in Oxford. The results solidify JET’s pivotal role in advancing safe, low-carbon, and sustainable fusion energy, according to the researchers.

More information: https://www.gov.uk/ukaea. Social Media: @UKAEAofficial

South Pole Telescope scientists publish new observations of the afterglow of the Big Bang

For more than five years, scientists at the South Pole Telescope in Antarctica have been observing the sky with an upgraded camera, picking up remnant light from the universe’s early formation—an afterglow of the Big Bang. They have now analyzed an initial batch of data, and published details in the journal Physical Review D.

The telescope at the Amundsen-Scott South Pole Station, operated by the National Science Foundation, received a new camera known as SPT-3G in 2017. Equipped with 16,000 detectors—10 times more than its predecessor—the SPT-3G is central to multi-institutional research led in part by the U.S. Department of Energy’s (DOE) Argonne National Laboratory.

Afterglow of the Big Bang

The goal: measure faint light known as the cosmic microwave background (CMB)—the afterglow of the Big Bang, when the universe burst forth from a single point of energy nearly 14 billion years ago.

“The CMB is a treasure map for cosmologists,” said Zhaodi Pan, the paper’s lead author and a Maria Goeppert Mayer fellow at Argonne. “Its minuscule variations in temperature and polarization provide a unique window into the universe’s infancy.”

Gravitational lensing—warping of the fabric of space-time

The paper in Physical Review D offers the first CMB gravitational lensing measurements from the SPT-3G. Gravitational lensing happens when the universe’s vast web of matter distorts the CMB as it travels across space. Albert Einstein described this warping in the fabric of space-time in his theory of general relativity.

Measurements of that distortion hold clues about the early universe and mysteries like dark matter, an invisible component of the cosmos. “Dark matter is tricky to detect, because it doesn’t interact with light or other forms of electromagnetic radiation,” said Pan. “Currently, we can only observe it through gravitational interactions.”

Ideal location

The dry, stable atmosphere and remote location of the South Pole Telescope create as little interference as possible when hunting for CMB patterns. The team used a dedicated cluster of computers at the Argonne Laboratory Computing Resource Center to run some of the calculations for the research.

This work was funded by the National Science Foundation’s Office of Polar Programs and the DOE Office of Science’s High Energy Physics program.

Citation: Z. Pan et al. Measurement of gravitational lensing of the cosmic microwave background using SPT-3G 2018 data. Phys. Rev. D 108, 122005 (12 December 2023). https://doi.org/10.1103/PhysRevD.108.122005 (open access)

Inexpensive carbon-neutral biofuels finally possible

Introducing a simple renewable chemical to the pretreatment step can finally make next-generation biofuel production both cost-effective and carbon neutral, a new study finds.

The Department of Energy’s Bioenergy Technology Office has awarded researchers a $2 million grant to build a small-scale pilot plant at the University of California, Riverside. It could lead to larger-scale investment in the technology, which offers an alternative to fossil fuels and their contribution to global warming.

The secret: using lignin from plant cell walls

For biofuels to compete with petroleum, biorefinery operations must be designed to better utilize lignin, one of the main components of plant cell walls. Lignin gives plants structural integrity and resilience against microbial attack, but those same properties make it difficult to extract and use from plant matter, also known as biomass.

A billion tons per year of biomass could replace 30% of our petroleum consumption

To overcome the lignin hurdle, UC Riverside Associate Research Professor Charles Cai invented CELF (co-solvent enhanced lignocellulosic fractionation), an innovative biomass pretreatment technology. “CELF uses tetrahydrofuran (THF) to supplement water and dilute acid during biomass pretreatment. It improves overall efficiency and adds lignin extraction capabilities,” Cai said. “Best of all, THF itself can be made from biomass sugars.”

An Energy & Environmental Science paper details the degree to which a CELF biorefinery offers economic and environmental benefits over both petroleum-based fuels and earlier biofuel production methods.

The paper is a collaboration between Cai’s research team at UCR, the Center for Bioenergy Innovation managed by Oak Ridge National Laboratory, and the National Renewable Energy Laboratory, with funding provided by the U.S. Department of Energy’s Office of Science.

Non-edible plant biomass as feedstocks

First-generation biofuel operations use food crops like corn, soy, and sugarcane as raw materials, or feedstocks. Because these feedstocks divert land and water away from food production, using them for biofuels is not ideal. 

Instead, second-generation operations use non-edible plant biomass as feedstocks, such as wood residues from milling operations, sugarcane bagasse, or corn stover, all of which are abundant low-cost byproducts of forestry and agricultural operations. 

According to the Department of Energy, up to a billion tons per year of biomass could be made available for the manufacture of biofuels and bioproducts in the US alone, capable of displacing 30% of our petroleum consumption while also creating new domestic jobs. 

Because a CELF biorefinery can more fully utilize plant matter than earlier second-generation methods, the researchers found that a heavier, denser feedstock like hardwood poplar is preferable over less carbon-dense corn stover for yielding greater economic and environmental benefits. 

A break-even $3.15 per gallon of gasoline equivalent

Using poplar in a CELF biorefinery, the researchers demonstrate that sustainable aviation fuel could be made at a break-even price as low as $3.15 per gallon of gasoline equivalent. The current average cost for a gallon of jet fuel in the U.S. is $5.96. 

The U.S. government issues credits for biofuel production in the form of renewable identification number credits, a subsidy meant to bolster domestic biofuel production. The tier of these credits issued for second-generation biofuels, the D3 tier, is typically traded at $1 per gallon or higher. At this price per credit, the paper demonstrates that one can expect a rate of return of over 20% from the operation. 
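The arithmetic behind those figures is simple enough to spell out (the roughly 20% rate of return itself comes from the paper’s full techno-economic model, not from this sketch):

```python
# The economics above, in numbers: break-even price vs. the market price,
# and the effect of a ~$1/gal D3 renewable-fuel credit.

breakeven = 3.15    # $/gallon gasoline-equivalent, CELF biorefinery with poplar
jet_fuel = 5.96     # $/gallon, current average U.S. jet-fuel price
d3_credit = 1.00    # $/gallon, typical D3 RIN credit trading price

margin_no_credit = jet_fuel - breakeven
margin_with_credit = jet_fuel + d3_credit - breakeven

print(f"margin without credit: ${margin_no_credit:.2f}/gal")
print(f"margin with D3 credit: ${margin_with_credit:.2f}/gal")
```

Even before the credit, the gap between the break-even price and the market price leaves a substantial per-gallon margin; the D3 credit widens it further.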

“Spending a little more for a more carbon-rich feedstock like poplar still yields more economic benefits than a cheaper feedstock like corn stover, because you can make more fuel and chemicals from it,” Cai said.

The paper also illustrates how lignin utilization can positively contribute to overall biorefinery economics while keeping the carbon footprint as low as possible. In older biorefinery models, where biomass is cooked in water and acid, the lignin is mostly unusable for more than its heating value. 

“The older models would elect to burn the lignin to supplement heat and energy for these biorefineries because they could mostly only leverage the sugars in the biomass—a costly proposition that leaves a lot of value off the table,” said Cai. 

Renewable chemicals

In addition to better lignin utilization, the CELF biorefinery model proposes producing renewable chemicals that could serve as building blocks for bioplastics and for food and drink flavoring compounds. These chemicals lock up some of the carbon in the plant biomass, keeping it from being released back into the atmosphere as CO2.

“Adding THF helps reduce the energy cost of pretreatment and helps isolate lignin, so you wouldn’t have to burn it anymore. On top of that, we can make renewable chemicals that help us achieve a near-zero global warming potential,” Cai said. “I think this moves the needle from Gen 2 biofuels to Gen 2+.”

Citation: Bruno Colling Klein, Brent Scheidemantle, Rebecca J. Hanes, Andrew W. Bartling, Nicholas J. Grundl, Robin J. Clark, Mary J. Biddy, Ling Tao, Cong T. Trinh, Adam M. Guss, Charles E. Wyman, Arthur J. Ragauskas, Erin G. Webb, Brian H. Davison and Charles M. Cai. 7 February 2024. Economics and global warming potential of a commercial-scale delignifying biorefinery based on co-solvent enhanced lignocellulosic fractionation to produce alcohols, sustainable aviation fuels, and co-products from biomass. Energy & Environmental Science, Issue 3. https://pubs.rsc.org/en/content/articlelanding/2024/ee/d3ee02532b (open-access)

First 3D-printed brain tissue to grow and function like typical brain tissue

University of Wisconsin–Madison scientists have developed the first 3D-printed brain tissue that can grow and function like typical brain tissue.

“This could be a hugely powerful model to help us understand how brain cells and parts of the brain communicate in humans,” says Su-Chun Zhang, professor of neuroscience and neurology at UW–Madison’s Waisman Center. “It could change the way we look at stem cell biology, neuroscience, and the pathogenesis of many neurological and psychiatric disorders.”

It’s an achievement with important implications for scientists studying the brain and working on treatments for a broad range of neurological and neurodevelopmental disorders, such as Alzheimer’s and Parkinson’s disease.

Previous attempts to print brain tissue have had limited success, according to Zhang and Yuanwei Yan, a scientist in Zhang’s lab. The group describes its new 3D-printing process today in the journal Cell Stem Cell (open-access).

Horizontal printing, grown from induced pluripotent stem cells

Instead of using the traditional 3D-printing approach—stacking layers vertically—the researchers went horizontally. They also situated neurons, grown from induced pluripotent stem cells, in a softer “bio-ink” gel than previous attempts had employed.

“The tissue still has enough structure to hold together but it is soft enough to allow the neurons to grow into each other and start talking to each other,” Zhang says. The cells are laid next to each other like pencils on a tabletop.

“Our tissue stays relatively thin and this makes it easy for the neurons to get enough oxygen and enough nutrients from the growth media,” Yan says.

Forming networks comparable to human brains

The printed cells reach through the medium to form connections inside each printed layer as well as across layers, forming networks comparable to human brains. The neurons communicate, send signals, interact with each other through neurotransmitters, and even form proper networks with support cells that were added to the printed tissue.

“We printed the cerebral cortex and the striatum and what we found was quite striking,” Zhang says. “Even when we printed different cells belonging to different parts of the brain, they were still able to talk to each other in a very special and specific way.”

The printing technique offers precision control over the types and arrangement of cells, which is not found in brain organoids (miniature organs used to study brains, which grow with less organization and control).

“Our lab is very special in that we are able to produce pretty much any type of neurons at any time. Then we can piece them together at almost any time and in whatever way we like,” Zhang says. “Because we can print the tissue by design, we can have a defined system to look at how our human brain network operates. We can look very specifically at how the nerve cells talk to each other under certain conditions because we can print exactly what we want.”

Studying brain diseases

That specificity provides flexibility. The printed brain tissue could be used to study signaling between cells in Down syndrome, interactions between healthy tissue and neighboring tissue affected by Alzheimer’s, testing new drug candidates, or even watching the brain grow.

“In the past, we have often looked at one thing at a time, which means we often miss some critical components. Our brain operates in networks. We want to print brain tissue this way because cells do not operate by themselves. They talk to each other. This is how our brain works and it has to be studied all together like this to truly understand it,” Zhang says.

“Our brain tissue could be used to study almost every major aspect of what many people at the Waisman Center are working on. It can be used to look at the molecular mechanisms underlying brain development, human development, developmental disabilities, neurodegenerative disorders, and more.”

Accessible to other labs

The new printing technique should also be accessible to many labs. It does not require special bio-printing equipment or culturing methods to keep the tissue healthy, and the tissue can be studied in depth with microscopes, standard imaging techniques, and electrodes already common in the field.

The researchers say they would like to explore the potential for specialization by further improving their bio-ink and refining their equipment to allow for specific orientations of cells within the printed tissue.

“Right now, our printer is a benchtop commercialized one,” Yan says. “We can make some specialized improvements to help us print specific types of brain tissue on-demand.”

This study was supported in part by NIH-NINDS (NS096282, NS076352, NS086604), NICHD (HD106197, HD090256), the National Medical Research Council of Singapore (MOH-000212, MOH-000207), Ministry of Education of Singapore (MOE2018-T2-2-103), Aligning Science Across Parkinson’s (ASAP-000301), the Bleser Family Foundation, and the Busta Foundation.

Citation: Yuanwei Yan et al. 3D bioprinting of human neural tissues with functional connectivity. Cell Stem Cell. https://doi.org/10.1016/j.stem.2023.12.009 (open-access)

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Study discovers neurons in the human brain that can predict what you are going to say and help you say it

By using advanced brain recording techniques, a new study demonstrates how neurons in the human brain work together to allow people to think about what words they want to say and then produce them aloud through speech.

These findings, led by researchers from Massachusetts General Hospital (MGH), provide a detailed map of how speech sounds such as consonants and vowels are represented in the brain well before they are even spoken and how they are strung together during language production.

Treatment for speech and language disorders

The work, published in the journal Nature, reveals insights into the brain’s neurons that enable language production, and could lead to improvements in the understanding and treatment of speech and language disorders.

“Although speaking usually seems easy, our brains perform many complex cognitive steps in the production of natural speech—including coming up with the words we want to say, planning the articulatory movements and producing our intended vocalizations,” says senior author Ziv Williams, MD, an associate professor in Neurosurgery at MGH and Harvard Medical School.

The researchers used cutting-edge Neuropixels probes to record the activity of single neurons in the prefrontal cortex. Williams and his colleagues identified cells that are involved in language production and that may underlie the ability to speak. They also found that there are separate groups of neurons in the brain dedicated to speaking and listening.

By recording individual neurons, the researchers found that certain neurons become active before each phoneme is spoken aloud. Other neurons reflected more complex aspects of word construction, such as the specific assembly of phonemes into syllables.

Artificial prosthetics or brain-machine interfaces

With their technology, the scientists can predict what combination of consonants and vowels will be produced before the words are actually spoken. This capability could be leveraged to build artificial prosthetics or brain-machine interfaces capable of producing synthetic speech, which could benefit a range of patients.
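To make the idea of phoneme decoding concrete, here is a minimal sketch of how a decoder might map pre-speech firing rates to an upcoming phoneme. Everything here is an assumption for illustration: the data are synthetic, the phoneme set, neuron count, and nearest-centroid classifier are hypothetical stand-ins, and the study's actual methods are far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: firing rates of 20 recorded neurons in the window
# just before articulation, for three example phoneme classes. All data
# here is synthetic; the real study used Neuropixels recordings.
phonemes = ["b", "a", "k"]
n_neurons, n_trials = 20, 60

# Each phoneme class gets its own mean firing-rate pattern plus noise.
means = rng.uniform(5, 30, size=(len(phonemes), n_neurons))
X = np.vstack([m + rng.normal(0, 2, size=(n_trials, n_neurons)) for m in means])
y = np.repeat(np.arange(len(phonemes)), n_trials)

# Nearest-centroid decoder: classify each trial by the closest class mean.
centroids = np.array([X[y == k].mean(axis=0) for k in range(len(phonemes))])

def decode(rates):
    """Predict the upcoming phoneme from a firing-rate vector."""
    return phonemes[np.argmin(np.linalg.norm(centroids - rates, axis=1))]

accuracy = np.mean([decode(x) == phonemes[label] for x, label in zip(X, y)])
print(f"decoding accuracy on synthetic data: {accuracy:.2f}")
```

A real brain-machine interface would decode continuously from streaming neural data and feed predictions to a speech synthesizer, but the core step is the same: mapping a pattern of neural activity to an intended speech sound before it is produced.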

The researchers hope to expand on their work by studying more complex language processes that will allow them to investigate questions related to how people choose the words that they intend to say and how the brain assembles words into sentences that convey an individual’s thoughts and feelings to others.

This work was supported by the National Institutes of Health.

Citation: Khanna, A.R., Muñoz, W., Kim, Y.J. et al. Single-neuronal elements of speech production in humans. Nature (2024). https://doi.org/10.1038/s41586-023-06982-w
