Using generative AI to create better, more potent medicines

Researchers at The Ohio State University have created an AI framework called G2Retro that automatically generates candidate chemical reactions for synthesizing any given molecule.

G2Retro covers an enormous range of possible chemical reactions and can accurately and quickly determine which reactions might work best.

The study was published May 30 in the journal Communications Chemistry.

The researchers trained G2Retro on a dataset that contains 40,000 chemical reactions collected between 1976 and 2016. The framework “learns” from graph-based representations of given molecules, and uses deep neural networks to generate possible reactant structures that could be used to synthesize them.
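The core retrosynthesis idea — propose ways a target molecule could be split into simpler reactant fragments, then rank the candidates — can be sketched in miniature. The toy below is not G2Retro (which uses deep graph neural networks trained on reaction data); the "molecule," its bonds, and the scoring heuristic are all invented for illustration:

```python
# Toy sketch of retrosynthesis-style candidate generation and ranking.
# This is NOT the G2Retro model; real systems learn from reaction datasets
# with graph neural networks. Here a molecule is just a set of bonds, each
# single-bond disconnection proposes a candidate fragment pair, and a
# stand-in heuristic ranks the candidates.

def connected_fragments(atoms, bonds):
    """Return the connected components of an atom/bond graph."""
    adj = {a: set() for a in atoms}
    for a, b in bonds:
        adj[a].add(b)
        adj[b].add(a)
    seen, parts = set(), []
    for start in atoms:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        parts.append(frozenset(comp))
    return parts

def propose_disconnections(atoms, bonds):
    """Break each bond in turn; keep breaks that split the molecule in two."""
    candidates = []
    for broken in bonds:
        remaining = [b for b in bonds if b != broken]
        frags = connected_fragments(atoms, remaining)
        if len(frags) == 2:
            # Toy score: balanced fragment sizes count as "easier" precursors.
            small, large = sorted(len(f) for f in frags)
            score = small / large
            candidates.append((score, broken, frags))
    return sorted(candidates, reverse=True)

# A toy 6-atom "molecule" (atoms are just labels here, not real chemistry).
atoms = ["C1", "C2", "C3", "C4", "N5", "O6"]
bonds = [("C1", "C2"), ("C2", "C3"), ("C3", "C4"), ("C3", "N5"), ("C4", "O6")]

for score, broken, frags in propose_disconnections(atoms, bonds):
    print(f"break {broken}: score={score:.2f}, fragments={[sorted(f) for f in frags]}")
```

A real system would replace the heuristic score with a learned model and the label graph with an actual molecular graph, but the generate-then-rank loop is the same shape.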

Hundreds of new reaction predictions

Given a molecule, G2Retro can come up with hundreds of new reaction predictions in only a few minutes, said Xia Ning, PhD, lead author of the study, an associate professor of computer science and engineering at Ohio State and an associate professor of biomedical informatics in the College of Medicine.

“G2Retro is able to supply multiple synthesis routes and options, as well as a way to rank different options for each molecule,” said Dr. Ning. “This is not going to replace current lab-based experiments, but it will offer more and better drug options so experiments can be prioritized and focused much faster.”

This research was supported by the National Science Foundation and Ohio State’s President’s Research Excellence Program.

Citation: Chen, Z., Ayinde, O. R., Fuchs, J. R., Sun, H., & Ning, X. (2023). G2Retro as a two-step graph generative models for retrosynthesis prediction. Communications Chemistry, 6(1), 1-19. https://doi.org/10.1038/s42004-023-00897-3 (open-access).

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Ultra-low-cost blood-pressure monitor clips on a smartphone

Engineers at the University of California San Diego have developed a simple, low-cost clip that uses a smartphone’s camera and flash to instantly monitor blood pressure at the user’s fingertip.

The clip works with a custom smartphone app and could cost only 10 cents apiece when manufactured at scale.

“Because of their low cost, these clips could be handed out to anyone who needs them but cannot go to a clinic regularly,” said study senior author Edward Wang, a professor of electrical and computer engineering at UC San Diego and director of the Digital Health Lab. “A blood pressure monitoring clip could be given to you at your checkup, much like how you get a pack of floss and toothbrush at your dental visit.”

The research was published today in Scientific Reports.

When the user presses on the clip, the smartphone’s flash lights up the fingertip. That light is then projected through a pinhole-sized channel to the camera as an image of a red circle. The smartphone app extracts the circle, measures the amount of pressure the user’s fingertip applies, and its algorithm converts this data into systolic and diastolic blood pressure readings (credit: Digital Health Lab / UC San Diego)
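The article doesn’t publish the app’s exact algorithm, but the general oscillometric approach — reading blood pressure off the envelope of pulse amplitudes across applied pressures — can be sketched as follows. The ratios and sample data below are textbook-style illustrations, not the UC San Diego team’s values:

```python
# A generic "oscillometric" sketch of turning pressure + pulsation data into
# blood-pressure estimates. This is NOT the UC San Diego algorithm; it is the
# classic maximum-amplitude method used by many cuff monitors, applied to
# hypothetical (pressure, pulse-amplitude) samples such as a fingertip press
# might yield as the user gradually presses harder.

def estimate_bp(samples, sys_ratio=0.55, dia_ratio=0.85):
    """samples: list of (applied_pressure_mmHg, pulse_amplitude) pairs,
    sorted by increasing pressure.

    Mean arterial pressure (MAP) is taken at the maximum pulse amplitude;
    systolic/diastolic are read where the amplitude envelope crosses fixed
    fractions of that maximum (the ratios are textbook values, not tuned).
    """
    pressures = [p for p, _ in samples]
    amps = [a for _, a in samples]
    i_max = max(range(len(amps)), key=amps.__getitem__)
    map_est = pressures[i_max]

    def crossing(indices, ratio):
        target = ratio * amps[i_max]
        for i in indices:
            if amps[i] >= target:
                return pressures[i]
        return pressures[indices[-1]]

    # Systolic lies on the high-pressure side of the peak, diastolic below it.
    systolic = crossing(range(len(amps) - 1, i_max, -1), sys_ratio)
    diastolic = crossing(range(0, i_max), dia_ratio)
    return systolic, map_est, diastolic

samples = [(60, 0.2), (70, 0.5), (80, 0.9), (90, 1.0),
           (100, 0.7), (110, 0.4), (120, 0.2)]
print(estimate_bp(samples))   # → (100, 90, 80)
```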

Citation: Xuan, Y., Barry, C., De Souza, J., Wen, J. H., Antipa, N., Moore, A. A., & Wang, E. J. (2023). Ultra-low-cost mechanical smartphone attachment for no-calibration blood pressure measurement. Scientific Reports, 13(1), 1-11. https://doi.org/10.1038/s41598-023-34431-1 (open-access)

A wearable brain scanner

Researchers at the University of Nottingham’s School of Physics in the UK have developed a breakthrough method for understanding and diagnosing a range of neurological problems that affect movement, such as Parkinson’s Disease, stroke and concussion.

The device uses small LEGO-brick-sized sensors called optically pumped magnetometers (OPMs). These are incorporated into a lightweight magnetoencephalography (MEG) helmet to measure the magnetic fields generated by cellular activity in the brain. The system requires a special magnetically shielded room containing additional equipment that allows precise control of magnetic fields at a level 50,000 times smaller than the Earth’s magnetic field.
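The shielding requirement quoted above is easy to check, and the idea of actively nulling the residual field with compensation coils can be sketched in a few lines. The controller below is a toy proportional feedback loop with illustrative numbers, not the Nottingham team’s matrix-coil system:

```python
# Back-of-envelope check of the shielding target, plus a toy feedback loop
# of the kind used for active field nulling. Values are illustrative:
# Earth's field is roughly 50 microtesla, so the article's factor of 50,000
# implies a ~1 nanotesla working environment for the OPM sensors.

EARTH_FIELD_T = 50e-6                    # ~50 µT, typical Earth field magnitude
TARGET_T = EARTH_FIELD_T / 50_000
print(f"target residual field: {TARGET_T * 1e9:.1f} nT")   # → 1.0 nT

def null_field(background_t, gain=0.5, steps=40):
    """Toy proportional controller: a coil field cancels the measured field."""
    coil_t = 0.0
    for _ in range(steps):
        residual = background_t + coil_t
        coil_t -= gain * residual        # drive the coil to oppose the sensor reading
    return background_t + coil_t

residual = null_field(EARTH_FIELD_T)
print(f"residual after feedback: {abs(residual) * 1e9:.2e} nT")
```

With each iteration the residual shrinks by the gain factor, which is why even this crude loop reaches well below the 1 nT target.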

The wearable system has been installed in a number of research institutions across the globe. According to the researchers, its applications span a huge range, from basic neuroscience questions, such as how young children learn to walk, to clinical challenges, such as why older people are prone to falling.

Citation: Holmes, N., Rea, M., Hill, R. M., Leggett, J., Edwards, L. J., Hobson, P. J., Boto, E., Tierney, T. M., Rier, L., Rivero, G. R., Shah, V., Osborne, J., Fromhold, T. M., Glover, P., Brookes, M. J., & Bowtell, R. (2023). Enabling ambulatory movement in wearable magnetoencephalography with matrix coil active magnetic shielding. NeuroImage, 274, 120157. https://doi.org/10.1016/j.neuroimage.2023.120157 (open-access)

You are about to get a coded message from ET. Can you decode it?

On May 24, 2023, at 19:00 UTC / 12:00 pm PDT, the European Space Agency’s ExoMars Trace Gas Orbiter (TGO) in orbit around Mars will transmit an encoded message to Earth to simulate receiving a signal from an extraterrestrial intelligence. 

This will trigger the start of “A Sign in Space” — a global theater project by artist Daniela de Paulis in collaboration with the SETI Institute, the European Space Agency, the Green Bank Observatory and INAF, the Italian National Institute for Astrophysics.

“Receiving a message from an extraterrestrial civilization would be a profoundly transformational experience for all humankind,” de Paulis explains. “A Sign in Space offers the unprecedented opportunity to tangibly rehearse and prepare for this scenario through global collaboration, fostering an open-ended search for meaning across all cultures and disciplines.” 

Three radio astronomy observatories will detect the encoded message: the SETI Institute’s Allen Telescope Array (ATA), the Green Bank Telescope (GBT, not to be confused with GPT) at the Green Bank Observatory, and the Medicina Radio Astronomical Station observatory.

The public is invited to help decode this ET message on a form on the project’s website. More on Facebook, Reddit, Instagram, Twitter, and Discord.

New brain-organoid research helps develop treatments for brain diseases

As we reported in March, scientists are planning to create biocomputers powered by lab-grown human brain cells called “organoids.” They could serve as “biological hardware” to achieve “unprecedented advances in computing speed, processing power, data efficiency, and storage capabilities with lower energy needs.”

But now we’re looking at the original purpose of organoids: research into the role of genes in brain development. The goal: develop treatments for serious brain diseases by either “knocking out” or activating individual genes, then drawing conclusions about the role of those genes in brain development. To avoid animal experiments as far as possible, brain organoids are used as an alternative to monkeys.

Stem cells reprogrammed as neurons

Brain organoids are grown in the laboratory from induced pluripotent stem cells. These cells are usually derived from skin or blood cells that are “reprogrammed” so that they regress to stem cells and can then differentiate into any other cell type, such as neurons.

“We are particularly interested in the genetic factors underlying brain development in primates,” said Michael Heide, head of the Junior Research Group Brain Development and Evolution at the German Primate Center (DPZ) and an author of the study, in a statement. “The brain organoids allow us to reproduce these processes in the Petri dish. To do that, however, we need to genetically modify them.”

Faster brain-disease research procedure

Until now, these procedures were sometimes very labor-intensive and took several months. The team of researchers led by Michael Heide has now developed a fast, cost-effective method.

“We use microinjection and electroporation for our method,” said Heide. “In this process, genetic material is injected into the organoids with a very thin cannula and introduced into the cells with the help of a small electrical pulse. It takes only a few minutes, and the brain organoids can be analyzed after a few days.”

“The method is equally suitable for brain organoids from humans, chimpanzees, rhesus macaques and common marmosets,” says Heide. “This allows us to perform comparative studies on physiological and evolutionary brain development in primates and is also an effective tool to simulate genetically caused neurological malformations without having to use monkeys in animal experiments.”

Citation: Tynianskaia, L., Eşiyok, N., Huttner, W. B., & Heide, M. (2023). Targeted microinjection and electroporation of primate cerebral organoids for genetic modification. J. Vis. Exp., (193), e65176. doi:10.3791/65176. https://www.jove.com/t/65176/targeted-microinjection-electroporation-primate-cerebral-organoids (open-access).

It’s 10 PM. Do you know where your DNA is?

Probably not. Signs of human life can be found nearly everywhere, short of isolated islands and remote mountaintops, according to a new University of Florida study, noting that this is both a scientific boon and an ethical dilemma.

The UF researchers collected high-quality “environmental DNA” (eDNA) from footprints made by one of the researchers on an uninhabited island that was otherwise devoid of human DNA. Sequencing the DNA revealed identifiable information about the participant’s genome.

Human DNA as genetic bycatch from pathogen and wildlife eDNA studies (credit: David J. Duffy and https://creativecommons.org/licenses/by/4.0 — no changes made)

“Almost equivalent to if you took a sample from a person”

The DNA was of such high quality that the scientists could identify mutations associated with disease and determine the genetic ancestry of nearby populations. They could even match genetic information to individual participants who had volunteered to have their errant DNA recovered.
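As a rough illustration of why recovered eDNA is identifying, the sketch below aligns a short sequencing read against a reference and flags known variant sites. The sequences, positions, and variant labels are entirely invented; the study’s real pipeline uses genome-scale sequencing and professional alignment tools:

```python
# Illustrative sketch of how recovered eDNA reads can reveal identifiable
# information: align a read to a reference sequence (best ungapped offset by
# exact base matches) and report known variant sites where the read carries
# the alternate base. Toy data only — not the study's pipeline.

REFERENCE = "ACGTTGCAACGTAGCTAGGT"

# Hypothetical known variant sites: position -> (ref_base, alt_base, label)
KNOWN_VARIANTS = {
    4: ("T", "C", "variant-A (illustrative)"),
    13: ("G", "A", "variant-B (illustrative)"),
}

def find_variants(read, reference=REFERENCE, variants=KNOWN_VARIANTS):
    """Locate the read in the reference, then report any known variant
    sites where the read carries the alternate base."""
    best_offset, best_matches = 0, -1
    for offset in range(len(reference) - len(read) + 1):
        matches = sum(r == reference[offset + i] for i, r in enumerate(read))
        if matches > best_matches:
            best_offset, best_matches = offset, matches
    hits = []
    for pos, (ref_base, alt_base, label) in variants.items():
        i = pos - best_offset
        if 0 <= i < len(read) and read[i] == alt_base != ref_base:
            hits.append((pos, label))
    return sorted(hits)

# A read matching the reference at offset 2, but carrying the alternate
# base "C" at reference position 4:
read = "GTCGCAACGTAG"
print(find_variants(read))   # → [(4, 'variant-A (illustrative)')]
```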

“We’ve been consistently surprised throughout this project at how much human DNA we find and the quality of that DNA,” said David J. Duffy, Department of Biology, College of Liberal Arts and Sciences, University of Florida. “In most cases, the quality is almost equivalent to if you took a sample from a person.”

Even in the ocean and rivers

The team found quality human DNA in the ocean and rivers surrounding the Whitney Lab, both near town and far from human settlement, as well as in sand from isolated beaches. Duffy also tested the technique in his native Ireland. Tracing along a river that winds through town on its way to the ocean, Duffy found human DNA everywhere except the remote mountain stream where the river starts, far from civilization.

Because of the ability to potentially identify individuals, the researchers say ethical guardrails are necessary for this kind of research.

… and in the sky*

Citation: Whitmore, L., McCauley, M., Farrell, J. A., Stammnitz, M. R., Koda, S. A., Mashkour, N., Summers, V., Osborne, T., Whilde, J., & Duffy, D. J. (2023). Inadvertent human genomic bycatch and intentional capture raise beneficial applications and ethical concerns with environmental DNA. Nature Ecology & Evolution, 1-16. https://doi.org/10.1038/s41559-023-02056-2 (open-access).

*In a recent paper, Dr. Kimberly Métris, a faculty member at Clemson University and lead investigator, reports that eDNA also extends to the sky. Using a light aircraft with a sampling probe and high-throughput metagenomic sequencing, the researchers discovered a widespread presence of allergens and pathogens, including bacterial eDNA, in the atmosphere, reaching 8,500 feet above the ground in the southeastern US.

Citation: Métris, K. L., & Métris, J. (2023). Aircraft surveys for air eDNA: Probing biodiversity in the sky. PeerJ, 11, e15171. doi:10.7717/peerj.15171. PMID: 37077310; PMCID: PMC10108859. https://peerj.com/articles/15171 (open-access).

Recent human-like robot breakthroughs

Robot researchers have recently achieved two advanced developments at extremes of human-like touch and movement.

At Columbia University, researchers have trained robotic fingers to dexterously manipulate complex objects by touch alone, without dropping them. Their paper has been accepted to the Robotics: Science and Systems 2023 conference.

And at Google DeepMind, CBS 60 Minutes’ Scott Pelley found that robots have taught themselves to play soccer (British: football) — “told only that the object was to score.”

AI ‘semantic decoder’ method can reveal hidden stories in patients’ minds, researchers say

Neuroscientists have developed a new “semantic decoder” AI transformer method to help patients who have lost the ability to speak. The non-surgical system can translate a person’s brain activity — while listening to a story or silently imagining telling a story — into a continuous stream of text.

Current methods require implants and brain surgery, or else are limited to a few words, say the researchers at the University of Texas at Austin.

Semantic reconstruction

The new method is based instead on “semantic reconstruction”: decoding words from brain activity recorded with functional magnetic resonance imaging (fMRI). The fMRI responses (blood-flow and oxygenation signals known as BOLD) associated with specific words were recorded while the subject listened to 16 hours of narrative stories. An encoding AI model was then estimated for each subject to predict brain responses from the semantic features of stimulus words.
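The encoding-model step can be sketched in miniature: fit a regression from semantic features of words to a measured response, then identify a new stimulus by asking which word’s predicted response best matches the observation. Everything below — the two-number “semantic features,” the fake responses, and the tiny ridge solver — is an invented toy, not the study’s voxel-wise model:

```python
# A minimal sketch of the "encoding model" idea: learn to predict a brain
# response from semantic features of stimulus words, then use the model to
# score which candidate word best explains a new response. Toy 2-D features
# for a single "voxel"; the real study fits such models to fMRI BOLD data
# across many voxels using rich learned word features.

def ridge_2d(X, y, lam=0.1):
    """Closed-form ridge regression for 2 features: w = (X'X + lam*I)^-1 X'y."""
    a = sum(x[0] * x[0] for x in X) + lam
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X) + lam
    g0 = sum(x[0] * yi for x, yi in zip(X, y))
    g1 = sum(x[1] * yi for x, yi in zip(X, y))
    det = a * d - b * b
    return ((d * g0 - b * g1) / det, (a * g1 - b * g0) / det)

# Made-up semantic features for a few words, and a fake voxel response that
# (by construction) tracks the first feature more than the second.
features = {"dog": (0.9, 0.1), "house": (0.2, 0.8), "run": (0.7, 0.6)}
X = list(features.values())
y = [0.95, 0.30, 0.85]           # toy BOLD-like responses to those words

w = ridge_2d(X, y)

def predict(word):
    f = features[word]
    return w[0] * f[0] + w[1] * f[1]

# "Decode" a new response by picking the word whose prediction is closest.
observed = 0.33
decoded = min(features, key=lambda wd: abs(predict(wd) - observed))
print(decoded)   # → house
```

The real decoder generates continuous text rather than picking from a word list, but the predict-and-compare loop against an encoding model is the core mechanism.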

The system is not currently practical for use outside of the laboratory because of the time needed on an fMRI machine. But the researchers are looking at using portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).

“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” study leader Jerry Tang, a doctoral student in computer science, said in a statement. “We want to make sure people only use these types of technologies when they want to and that it helps them.”

Citation: Tang, J., LeBel, A., Jain, S., & Huth, A. G. (2023). Semantic reconstruction of continuous language from non-invasive brain recordings. Nature Neuroscience, 26(5), 858-866. https://doi.org/10.1038/s41593-023-01304-9

Also see: J. Tang, Societal implications of brain decoding, Medium.

Researchers predict what a mouse sees by decoding its brain signals

A research team has created AI neural network models that decoded what a mouse saw while watching a movie.

The machine-learning algorithm, CEBRA (pronounced “zebra”), predicted movie frames directly from brain signals after an initial training period.

(Credit: EPFL/Hillary Sancutary/Alain Herzog/Allen Institute/Roddy Grieves)

“This work is just one step towards the theoretically backed algorithms that are needed in neurotechnology to enable high-performance BMIs [brain-machine interfaces],” said principal investigator Mackenzie Mathis, Bertarelli Chair of Integrative Neuroscience at EPFL (École Polytechnique Fédérale de Lausanne), in a statement.

How it works

The researchers used video-decoding data from the Allen Institute in Seattle. The brain signals were obtained either directly, by measuring brain activity via electrode probes inserted into the visual cortex of the mouse’s brain, or optically, in genetically engineered mice whose activated neurons glow green. During the training period, CEBRA learned to map the brain activity to specific video frames, using less than 1% of the neurons in the mouse visual cortex (which contains about 0.5 million neurons).

CEBRA is based on “contrastive learning,” a technique that can be used to infer hidden relationships and structure in the data. It enables researchers to jointly consider neural data and behavioral labels, including measured movements, abstract labels like “reward,” or sensory features such as colors or textures of images.
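The contrastive objective can be illustrated with an InfoNCE-style score on toy embeddings: matched (neural, behavior) pairs should be more similar than mismatched ones, so aligned pairings yield a lower loss than shuffled ones. The 2-D embeddings below are invented; CEBRA itself learns such embeddings with neural networks:

```python
# A minimal sketch of the contrastive-learning objective behind methods like
# CEBRA: embeddings of matched (neural, behavior) pairs should score higher
# than mismatched pairs. This computes an InfoNCE-style loss on toy 2-D
# embeddings — no training loop, no real neural data.

import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(neural, behavior, temperature=0.5):
    """Average -log softmax score of each neural embedding against its
    matched behavior embedding, with the other behaviors as negatives."""
    loss = 0.0
    for i, n in enumerate(neural):
        sims = [dot(n, b) / temperature for b in behavior]
        log_denom = math.log(sum(math.exp(s) for s in sims))
        loss += log_denom - sims[i]   # -log( exp(pos) / sum(exp(all)) )
    return loss / len(neural)

# Aligned toy embeddings: each neural point sits near its matched behavior
# point, so the contrastive loss is low...
aligned_neural   = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
aligned_behavior = [(0.9, 0.1), (0.1, 0.9), (-0.9, 0.1)]

# ...while shuffling the pairing (a mismatch) raises the loss.
shuffled_behavior = [aligned_behavior[1], aligned_behavior[2], aligned_behavior[0]]

print(info_nce(aligned_neural, aligned_behavior) <
      info_nce(aligned_neural, shuffled_behavior))   # → True
```

Minimizing this loss while adjusting the embeddings is what pulls matched neural activity and behavior together in a shared latent space.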

According to the researchers, the broad goal of CEBRA is to uncover structures in complex systems and provide insight into how the brain processes information. It could also serve as a platform for discovering new principles in neuroscience by combining data across animals, and even species, with possible clinical applications.

Citation: Schneider, S., Lee, J.H. & Mathis, M.W. Learnable latent embeddings for joint behavioural and neural analysis. Nature (2023). https://doi.org/10.1038/s41586-023-06031-6 (open access).

Astronomers observe dying star engulfing a planet in real time

Astronomers have observed the first direct evidence of a dying star expanding to engulf one of its planets, seen in an outburst from a star in the Milky Way about 13,000 light-years from Earth. This event — happening in real time over a few months during an observation — likely presages the ultimate fate of Mercury, Venus, and Earth when our Sun begins its death throes in about five billion years.

“These observations provide a new perspective on finding and studying the billions of stars in our Milky Way that have already consumed their planets,” says Ryan Lau, NOIRLab astronomer and co-author on this study, published in the journal Nature.

How it happens

According to the researchers at the US National Science Foundation’s National Optical-Infrared Astronomy Research Laboratory, for most of its life, a Sun-like star fuses hydrogen into helium in its hot, dense core, which allows the star to push back against the crushing weight of its outer layers. When hydrogen in the core runs out, the star begins fusing helium into carbon, and hydrogen fusion migrates to the star’s outer layers, causing them to expand, and changing the Sun-like star into a red giant.

Such a transformation is bad news for any inner-system planets. When the star’s surface eventually expands to engulf one of its planets, their interaction would trigger a spectacular outburst of energy and material. This process would also put the brakes on the planet’s orbital velocity, causing it to plunge into the star.
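The braking mechanism described above can be demonstrated with a toy orbit integration: add a drag term that opposes the planet’s velocity and the orbit decays until the planet reaches the stellar surface. The units, drag strength, and timestep below are arbitrary demonstration values, not the paper’s model:

```python
# Toy illustration of the inspiral mechanism: a drag force that saps orbital
# velocity shrinks the orbit until the planet plunges into the star. Units
# are dimensionless (GM = 1); coefficients are arbitrary, not from the paper.

def inspiral_steps(r=1.0, drag=0.002, dt=0.01, r_star=0.2, max_steps=200_000):
    """Symplectic-Euler integration of a planet under gravity plus a drag
    force opposing velocity; return steps until radius < r_star."""
    x, y = r, 0.0
    vx, vy = 0.0, 1.0 / r ** 0.5           # circular-orbit speed for GM = 1
    for step in range(max_steps):
        d = (x * x + y * y) ** 0.5
        if d < r_star:
            return step                     # engulfed by the stellar surface
        ax = -x / d ** 3 - drag * vx        # gravity + velocity "brakes"
        ay = -y / d ** 3 - drag * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return max_steps                        # never engulfed within the run

# With drag the planet spirals in; without drag the orbit stays stable.
print(inspiral_steps() < inspiral_steps(drag=0.0))   # → True
```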

The researchers used the Gemini South Adaptive Optics Imager (GSAOI) at the International Gemini Observatory, in Cerro Pachón in Chile, operated by the U.S. National Science Foundation (NSF).

Citation: De, K., MacLeod, M., Karambelkar, V. et al. An infrared transient from a star engulfing a planet. Nature 617, 55–60 (2023). https://doi.org/10.1038/s41586-023-05842-x
