New brain-organoid research helps develop treatments for brain diseases

As we reported in March, scientists are planning to create biocomputers powered by lab-grown human brain cells called “organoids.” They could serve as “biological hardware” to achieve “unprecedented advances in computing speed, processing power, data efficiency, and storage capabilities with lower energy needs.”

But now we’re looking at the original purpose of organoids: research focused on understanding the role of genes in brain development. The goal: develop treatments for serious brain diseases by either “knocking out” or activating individual genes, and then drawing conclusions about the role of these genes in brain development. To avoid animal experiments as far as possible, brain organoids are used as an alternative to monkeys.

Stem cells reprogrammed as neurons

Brain organoids are grown in the laboratory from induced pluripotent stem cells. These cells are usually derived from skin or blood cells that are “reprogrammed” so that they regress to stem cells and can then differentiate into any other cell type, such as neurons.

“We are particularly interested in the genetic factors underlying brain development in primates,” explains Michael Heide, head of the Junior Research Group Brain Development and Evolution at DPZ and author of the study. “The brain organoids allow us to reproduce these processes in the Petri dish. To do that, however, we need to genetically modify them,” he explained in a statement.

Faster brain-disease research procedure

Until now, these procedures were sometimes very labor-intensive and took several months. The team of researchers led by Michael Heide has now developed a fast, cost-effective method.

“We use microinjection and electroporation for our method,” said Heide. “In this process, genetic material is injected into the organoids with a very thin cannula and introduced into the cells with the help of a small electrical pulse. It takes only a few minutes, and the brain organoids can be analyzed after a few days.”

“The method is equally suitable for brain organoids from humans, chimpanzees, rhesus macaques and common marmosets,” says Heide. “This allows us to perform comparative studies on physiological and evolutionary brain development in primates and is also an effective tool to simulate genetically caused neurological malformations without having to use monkeys in animal experiments.”

Citation: Tynianskaia, L., Eşiyok, N., Huttner, W. B., & Heide, M. (2023). Targeted Microinjection and Electroporation of Primate Cerebral Organoids for Genetic Modification. J. Vis. Exp. (193), e65176. doi:10.3791/65176. https://www.jove.com/t/65176/targeted-microinjection-electroporation-primate-cerebral-organoids

It’s 10 PM. Do you know where your DNA is?

Probably not. Signs of human life can be found nearly everywhere, short of isolated islands and remote mountaintops, according to a new University of Florida study, which notes that this is both a scientific boon and an ethical dilemma.

The UF researchers collected high-quality “environmental DNA” (eDNA) from footprints made by one of the researchers on an uninhabited island that was otherwise devoid of human DNA. Sequencing the DNA revealed identifiable information about the participant’s genome.

Human DNA as genetic bycatch from pathogen and wildlife eDNA studies (credit: David J. Duffy and https://creativecommons.org/licenses/by/4.0 — no changes made)

“Almost equivalent to if you took a sample from a person”

The DNA was of such high quality that the scientists could identify mutations associated with disease and determine the genetic ancestry of nearby populations. They could even match genetic information to individual participants who had volunteered to have their errant DNA recovered.

“We’ve been consistently surprised throughout this project at how much human DNA we find and the quality of that DNA,” said David J. Duffy, Department of Biology, College of Liberal Arts and Sciences, University of Florida. “In most cases, the quality is almost equivalent to if you took a sample from a person.”

Even in the ocean and rivers

The team found quality human DNA in the ocean and rivers surrounding the Whitney Lab, both near town and far from human settlement, as well as in sand from isolated beaches. Duffy also tested the technique in his native Ireland. Tracing along a river that winds through town on its way to the ocean, Duffy found human DNA everywhere except the remote mountain stream where the river starts, far from civilization.

Because of the ability to potentially identify individuals, the researchers say ethical guardrails are necessary for this kind of research.

… and in the sky*

Citation: Whitmore, L., McCauley, M., Farrell, J. A., Stammnitz, M. R., Koda, S. A., Mashkour, N., Summers, V., Osborne, T., Whilde, J., & Duffy, D. J. (2023). Inadvertent human genomic bycatch and intentional capture raise beneficial applications and ethical concerns with environmental DNA. Nature Ecology & Evolution, 1-16. https://doi.org/10.1038/s41559-023-02056-2 (open-access).

*In a recent paper, Dr. Kimberly Métris, a faculty member at Clemson University and lead investigator, reports that eDNA also extends to the sky. Using a light aircraft with a sampling probe and high-throughput metagenomic sequencing, the researchers discovered a widespread presence of allergens and pathogens, including bacterial eDNA, in the atmosphere, reaching 8,500 feet above the ground in the southeastern US.

Citation: Métris KL, Métris J. Aircraft surveys for air eDNA: probing biodiversity in the sky. PeerJ. 2023 Apr 14;11:e15171. doi: 10.7717/peerj.15171. PMID: 37077310; PMCID: PMC10108859.  https://peerj.com/articles/15171 (open-access).

Recent human-like robot breakthroughs

Robot researchers have recently achieved two advanced developments at extremes of human-like touch and movement.

At Columbia University, researchers have trained robotic fingers to dexterously manipulate complex objects by touch alone, without dropping them. Their paper has been accepted to the Robotics: Science and Systems 2023 conference.

And at Google DeepMind, CBS 60 Minutes’ Scott Pelley found that robots have taught themselves to play soccer (known in Britain as football) — “told only that the object was to score.”

AI ‘semantic decoder’ method can reveal hidden stories in patients’ minds, researchers say

Neuroscientists have developed a new “semantic decoder” AI transformer method to help patients who have lost the ability to speak. The non-surgical system can translate a person’s brain activity — while listening to a story or silently imagining telling a story — into a continuous stream of text.

Current methods require implants and brain surgery, or else are limited to a few words, say the researchers at the University of Texas at Austin.

Semantic reconstruction

The new method is based instead on “semantic reconstruction”: decoding words from brain activity recorded in an MRI machine using functional magnetic resonance imaging (fMRI). The fMRI responses (blood-oxygen-level-dependent, or BOLD, signals reflecting blood flow and oxygenation) associated with specific words were recorded while the subject listened to 16 hours of narrative stories. An encoding AI model was then estimated for each subject to predict brain responses from semantic features of the stimulus words.
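
For readers who like to see the idea in code, here is a minimal sketch of an encoding model of this general kind (our own illustration, not the authors’ code; the array names and sizes are made up): a regularized linear regression learns to predict each voxel’s BOLD response from semantic features of the stimulus words, and decoding then amounts to searching for word sequences whose predicted responses best match the recorded ones.

```python
# Minimal sketch of an fMRI encoding model (illustrative only; not the authors' code).
# Assumes `word_features` (n_timepoints x n_semantic_features) extracted from the
# stimulus words, and `bold` (n_timepoints x n_voxels) recorded fMRI responses.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
word_features = rng.normal(size=(1000, 300))   # hypothetical semantic embeddings
bold = rng.normal(size=(1000, 500))            # hypothetical voxel responses

# Fit one regularized linear model per voxel: brain response ~ semantic features.
encoder = Ridge(alpha=10.0).fit(word_features, bold)

# Decoding then works generatively: propose candidate word sequences, predict the
# BOLD responses they should evoke, and keep the candidates that best match the
# actually recorded responses.
def score_candidate(candidate_features, recorded_bold):
    predicted = encoder.predict(candidate_features)
    return -np.mean((predicted - recorded_bold) ** 2)
```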

The system is not currently practical for use outside of the laboratory because of the time needed on an fMRI machine. But the researchers are looking at using portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).

“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” study leader Jerry Tang, a doctoral student in computer science, said in a statement. “We want to make sure people only use these types of technologies when they want to and that it helps them.”

Citation: Tang, J., LeBel, A., Jain, S., & Huth, A. G. (2023). Semantic reconstruction of continuous language from non-invasive brain recordings. Nature Neuroscience, 26(5), 858-866. https://doi.org/10.1038/s41593-023-01304-9

Also see: J. Tang, Societal implications of brain decoding, Medium.

Researchers predict what a mouse sees by decoding its brain signals

A research team has created AI neural network models that decoded what a mouse saw while watching a movie.

The machine-learning algorithm, CEBRA (pronounced “zebra”), predicted movie frames directly from brain signals, after an initial mouse training period.

(Credit: EPFL/Hillary Sanctuary/Alain Herzog/Allen Institute/Roddy Grieves)

“This work is just one step towards the theoretically backed algorithms that are needed in neurotechnology to enable high-performance BMIs [brain-machine interfaces],” said principal investigator Mackenzie Mathis, Bertarelli Chair of Integrative Neuroscience at EPFL (École Polytechnique Fédérale de Lausanne), in a statement.

How it works

The researchers used video-decoding data from the Allen Institute in Seattle. The brain signals were obtained directly by measuring brain activity via electrode probes inserted into the visual cortex of the mouse’s brain, or optically from mice genetically engineered so that activated neurons glow green. During the training period, CEBRA learned to map the brain activity to specific video frames, using less than 1% of the neurons in the mouse visual cortex (which consists of about 0.5 million neurons).

CEBRA is based on “contrastive learning,” a technique that can be used to infer hidden relationships and structure in the data. It enables researchers to jointly consider neural data and behavioral labels, including measured movements, abstract labels like “reward,” or sensory features such as colors or textures of images.
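
To make “contrastive learning” a little more concrete, here is a generic InfoNCE-style objective (a toy sketch of our own, not CEBRA’s actual implementation): embeddings of neural activity and of the simultaneously recorded behavior or stimulus are pulled together when they come from the same moment and pushed apart otherwise.

```python
# Generic contrastive (InfoNCE-style) objective; illustrative only, not CEBRA's code.
import numpy as np

def infonce_loss(neural_emb, behavior_emb, temperature=0.1):
    """neural_emb, behavior_emb: (batch, dim) embeddings of paired samples.
    Row i of each array comes from the same time point (a positive pair);
    all other rows serve as negatives."""
    # Normalize so similarity is cosine similarity.
    n = neural_emb / np.linalg.norm(neural_emb, axis=1, keepdims=True)
    b = behavior_emb / np.linalg.norm(behavior_emb, axis=1, keepdims=True)
    logits = n @ b.T / temperature              # (batch, batch) similarity matrix
    # Cross-entropy with the diagonal (matched pairs) as the correct class.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
loss = infonce_loss(rng.normal(size=(32, 8)), rng.normal(size=(32, 8)))
print(loss)  # training would adjust the encoder networks to minimize this
```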

According to the researchers, the broad goal of CEBRA is to uncover structures in complex systems and provide insight into how the brain processes information. It could also serve as a platform for discovering new principles in neuroscience by combining data across animals, and even species, with possible clinical applications.

Citation: Schneider, S., Lee, J.H. & Mathis, M.W. Learnable latent embeddings for joint behavioural and neural analysis. Nature (2023). https://doi.org/10.1038/s41586-023-06031-6 (open access).

Astronomers observe dying star engulfing a planet in real time

Astronomers have observed the first direct evidence of a dying star expanding to engulf one of its planets, seen in an outburst from a star in the Milky Way about 13,000 light-years from Earth. This event — happening in real time over a few months during an observation — likely presages the ultimate fate of Mercury, Venus, and Earth when our Sun begins its death throes in about five billion years.

“These observations provide a new perspective on finding and studying the billions of stars in our Milky Way that have already consumed their planets,” says Ryan Lau, NOIRLab astronomer and co-author on this study, published in the journal Nature.

How it happens

According to the researchers at the US National Science Foundation’s National Optical-Infrared Astronomy Research Laboratory, for most of its life, a Sun-like star fuses hydrogen into helium in its hot, dense core, which allows the star to push back against the crushing weight of its outer layers. When hydrogen in the core runs out, the star begins fusing helium into carbon, and hydrogen fusion migrates to the star’s outer layers, causing them to expand, and changing the Sun-like star into a red giant.

Such a transformation is bad news for any inner-system planets. When the star’s surface eventually expands to engulf one of its planets, their interaction triggers a spectacular outburst of energy and material. The process also puts the brakes on the planet’s orbital velocity, causing it to plunge into the star.

The researchers used the Gemini South Adaptive Optics Imager (GSAOI) at the International Gemini Observatory, in Cerro Pachón in Chile, operated by the U.S. National Science Foundation (NSF).

Citation: De, K., MacLeod, M., Karambelkar, V. et al. An infrared transient from a star engulfing a planet. Nature 617, 55–60 (2023). https://doi.org/10.1038/s41586-023-05842-x

Could ET civilizations detect us by the radiation from our cell-phone transmission towers?

A research team at the University of Mauritius in Africa, the University of Manchester in England, and the SETI Institute in Mountain View, California, decided to find out.

Ramiro Saide at the Allen Telescope Array in Hat Creek, California (credit: Ramiro Saide)

In previous SETI studies, scientists have calculated the radio-frequency power from television towers. In this new study, the team instead focused on higher-power radio leakage to space generated by mobile (cell-phone) towers on the Earth as it rotates on its axis.

They calculated the peak power (~4 gigawatts, the equivalent of about 12 million photovoltaic panels, according to Energy.gov) generated by these towers and what power levels could be received by three different viewing points in our galaxy: Barnard’s star, HD 95735, and Alpha Centauri.

The study found that any technically advanced civilizations — within 10 light-years of the Earth and receiving systems with a sensitivity similar to the Green Bank Telescope (GBT) [not to be confused with Chat GPT] — could not currently detect radio leakage from Earth. 
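
A back-of-the-envelope calculation suggests why. The sketch below is our own simplified arithmetic (the paper models antenna beam patterns and the Earth’s rotation in far more detail): spreading ~4 GW isotropically over interstellar distances leaves only a vanishingly small flux at the three stars considered.

```python
# Rough inverse-square estimate of mobile-tower leakage received at nearby stars.
# Simplified assumption: the ~4 GW peak power radiates isotropically; the actual
# study models tower beam patterns and the Earth's rotation. Distances approximate.
import math

POWER_W = 4e9                       # ~4 gigawatts total peak power
LY = 9.461e15                       # metres per light-year

for name, dist_ly in [("Barnard's star", 5.96), ("HD 95735", 8.3), ("Alpha Centauri", 4.37)]:
    r = dist_ly * LY
    flux = POWER_W / (4 * math.pi * r ** 2)   # watts per square metre at the star
    print(f"{name:16s} {flux:.2e} W/m^2")
```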

More powerful future Earth radio-frequency radiation

However, the team plans to expand their research. They estimate that by the end of this decade, there will be more powerful civilian and military radars, new digital broadcast systems, and more powerful Wi-Fi networks and individual mobile devices.

There will also be added satellite constellations, with the [questionable, problematic] planned launch of more than 100,000 satellites in low Earth orbit and higher, such as Elon Musk’s Starlink system, they estimate.

Potential risks

The idea here is to increase our visibility to aliens, with the implicit speculation that aliens are “advanced and altruistic,” says David Brin, PhD, in “Shouting At the Cosmos”: “and yet [the aliens] are choosing to remain silent… should we not consider following their example and doing likewise? At least for a little while? Is it possible that they are silent because they know something we don’t know?”

Brin also notes in Mindplex Podcast: Episode 1, The History And Future Of AGI: “I think that there’s reason to believe that if we are in contact with aliens, that we’re a reality show. I wouldn’t be at all surprised to find out many decades from now that they were purposely provoking us to have material for something like Keeping Up with the Cro-Magnons or Oh, Those Humans.”

Chat GPT on risks with alien cultures

Interestingly, GPT-4 agrees with Brin’s warning, offering (after three anodyne pros — discovery, collaboration and understanding) this response to a “Should we communicate with alien cultures in the universe?” prompt:

  • Potential risks: Communication with alien cultures may pose risks to humanity. If they are technologically advanced, they might pose a threat to our existence, either directly or by introducing advanced technologies that could destabilize our society.
  • Cultural contamination: Interactions with alien cultures could lead to the loss of unique human cultural aspects, as well as the potential for the spread of harmful ideologies or practices.
  • Miscommunication: The potential for miscommunication is high, given the likelihood that alien languages and communication methods may be vastly different from our own. Miscommunication could lead to misunderstandings, conflicts, or other negative consequences. …

“In summary,” GPT-4 adds, “the decision to communicate with alien cultures should be made carefully, considering both the potential benefits and risks. It is crucial to develop strategies and guidelines for such communication to minimize potential dangers while maximizing the opportunities for discovery and collaboration.”

Your comments?

Citation: Ramiro C. Saide, M. A. Garrett, N. Heeralall-Issur, Simulation of the Earth’s radio-leakage from mobile towers as seen from selected nearby stellar systems, Monthly Notices of the Royal Astronomical Society, Volume 522, Issue 2, June 2023, Pages 2393–2402, https://doi.org/10.1093/mnras/stad378 (open-access)

Turning Back the Clock: Genetic Engineers Rewire Cells for an 82% Increase in Lifespan

A team of University of California San Diego (UCSD) researchers has developed a biosynthetic genetic “clock” that significantly extends cellular lifespan.

As described on April 27, 2023 in the journal Science, the researchers are using synthetic biology to engineer a solution that keeps cells from reaching their normal levels of deterioration associated with aging. Cells of yeast, plants, animals, and humans all contain gene regulatory circuits that are responsible for many physiological functions, including aging.

Rewiring gene circuits for an 82% increase in lifespan

The new synthetic-biology achievement has the potential to reconfigure scientific approaches to delaying aging. The researchers genetically rewired the gene regulatory circuit that controls cell aging, turning it into an oscillator. This oscillator periodically switches the cell between two detrimental aged states, preventing prolonged commitment to either one and thus slowing cell degeneration.
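
To give a feel for how a gene circuit can oscillate rather than settle into one state, here is a toy simulation of a classic three-gene “repressilator” (our own illustration; it is not the circuit engineered in this study): each gene represses the next, and the resulting negative feedback keeps regulator levels cycling up and down instead of locking high or low.

```python
# Toy synthetic gene oscillator (a classic three-gene "repressilator"), included
# only to illustrate how a negative-feedback circuit can keep switching states
# instead of settling into one; it is NOT the circuit engineered in the study.
import numpy as np
from scipy.integrate import odeint

ALPHA, ALPHA0, BETA, N = 216.0, 0.216, 5.0, 2.0  # commonly used dimensionless parameters

def repressilator(y, t):
    m1, m2, m3, p1, p2, p3 = y
    dm1 = -m1 + ALPHA / (1 + p3 ** N) + ALPHA0   # gene 1 repressed by protein 3
    dm2 = -m2 + ALPHA / (1 + p1 ** N) + ALPHA0   # gene 2 repressed by protein 1
    dm3 = -m3 + ALPHA / (1 + p2 ** N) + ALPHA0   # gene 3 repressed by protein 2
    dp1, dp2, dp3 = BETA * (m1 - p1), BETA * (m2 - p2), BETA * (m3 - p3)
    return [dm1, dm2, dm3, dp1, dp2, dp3]

t = np.linspace(0, 100, 2000)
traj = odeint(repressilator, [1, 0, 0, 2, 1, 3], t)
# Protein 1 rises and falls periodically rather than committing to one state.
print(traj[::200, 3].round(1))
```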

Cell aging oscillations: A movie of an engineered cell that ages with oscillating abundance of a master aging regulator (credit: Hao Lab, UC San Diego)

The team used yeast cells in their study, achieving an 82% increase in lifespan compared to control cells. Instead of traditional chemical and genetic attempts to force cells into artificial states of “youth,” the new research provides evidence that slowing the ticks of the aging clock is possible by actively preventing cells from committing to a pre-destined path of decline and death.

“Our work represents a proof-of-concept example, demonstrating the successful application of synthetic biology to reprogram the cellular aging process,” the authors wrote, “and may lay the foundation for designing synthetic gene circuits to effectively promote longevity in more complex organisms.”

The team is currently expanding its research to the aging of diverse human cell types, including stem cells and neurons.

Citation: “Engineering longevity—design of a synthetic gene oscillator to slow cellular aging” by Zhen Zhou, Yuting Liu, Yushen Feng, Stephen Klepin, Lev S. Tsimring, Lorraine Pillus, Jeff Hasty and Nan Hao, 27 April 2023, Science. DOI: 10.1126/science.add7631

Unlocking the power of exascale supercomputers

Leading research organizations and computer manufacturers in the U.S. are collaborating on construction of some of the world’s fastest supercomputers. These exascale systems can perform more than a billion billion (a quintillion, or 10¹⁸) operations per second — about the number of neurons in ten million human brains.

Exascale is about 1,000 times faster and more powerful than the fastest supercomputers today, which solve problems at the lower petascale (more than one quadrillion, or 10¹⁵, operations per second). The new exascale machines will better enable scientists and engineers to answer difficult questions about the universe, advanced healthcare, national security and more, according to the U.S. Department of Energy’s (DOE) Exascale Computing Project (ECP).
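
For a sense of scale (simple arithmetic of our own, not an ECP benchmark), the snippet below compares how long one second of exascale work would take on smaller systems:

```python
# Rough scale comparison; illustrative arithmetic only.
EXA, PETA, TERA = 1e18, 1e15, 1e12    # operations per second

ops = EXA * 1.0                        # work done by an exascale machine in 1 second
print(f"exascale / petascale speedup: {EXA / PETA:,.0f}x")
print(f"same work on a petascale system: {ops / PETA:,.0f} s")
print(f"same work on a 1-teraflop laptop: {ops / TERA / 86400:,.0f} days")
```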

Supercomputer uses in deep learning

Meanwhile, ECP developers are also building the applications and software that will run on these supercomputers; they recently published a paper (open-access) highlighting their progress in using supercomputers for deep learning.

“The environment will really allow individual researchers to scale up their use of DOE supercomputers on deep learning in a way that’s never been done before,” said Rick Stevens, Argonne associate laboratory director for Computing, Environment and Life Sciences.

DOE’s Argonne National Laboratory, future home to the Aurora exascale system, is a key partner in the ECP. Its researchers are involved in developing applications and co-designing the software needed to enable applications to run efficiently.

Simulating “virtual universes” with ExaSky

One exciting application is simulation of “virtual universes” on demand and at high fidelities to investigate how the universe evolved from its early beginnings. Example: an ECP project known as ExaSky, using cosmological simulation codes.

Researchers are also adding capabilities within their codes that didn’t exist before. “We’re able to include atomic physics, gas dynamics and astrophysical effects in our simulations, making them significantly more realistic,” said Salman Habib, director of Argonne’s Computational Science division.

Online data analysis and reduction

Researchers are also co-designing the software needed to efficiently manage the data they create. Today, HPC applications already output huge amounts of data, far too much to efficiently store and analyze in its raw form. So data needs to be reduced or compressed.

One efficient solution to this is to analyze data at the same time simulations are running, a process known as online data analysis or in situ analysis.

An ECP center known as the Co-Design Center for Online Data Analysis and Reduction (CODAR) is developing both online data analysis methods, as well as data reduction and compression techniques for exascale applications. CODAR works closely with a variety of application teams to develop data compression methods, which store the same information but use less space, and reduction methods, which remove data that is not relevant.

Among the solutions the CODAR team has developed is Cheetah, a system that enables researchers to compare their co-design approaches. Another is Z-checker, a system that lets users evaluate the quality of a compression method from multiple perspectives.
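
As a cartoon of what in situ reduction and quality checking involve, here is a generic sketch of our own (the real CODAR tools such as Cheetah and Z-checker are far more sophisticated): quantize a simulation field to fewer bits while the run is in progress, then report how much error the compression introduced.

```python
# Minimal sketch of lossy in situ data reduction plus a quality check.
# Generic illustration; not the actual CODAR/Cheetah/Z-checker implementation.
import numpy as np

def quantize(field, bits=8):
    """Lossily compress a floating-point field to `bits`-bit integer codes."""
    lo, hi = field.min(), field.max()
    levels = 2 ** bits - 1
    codes = np.round((field - lo) / (hi - lo) * levels).astype(np.uint8)
    return codes, lo, hi

def dequantize(codes, lo, hi, bits=8):
    return lo + codes.astype(np.float64) / (2 ** bits - 1) * (hi - lo)

rng = np.random.default_rng(0)
field = rng.normal(size=(256, 256))          # stand-in for one simulation timestep
codes, lo, hi = quantize(field)
restored = dequantize(codes, lo, hi)

# Report quality metrics, in the spirit of evaluating a compressor from several angles.
max_err = np.max(np.abs(field - restored))
rmse = np.sqrt(np.mean((field - restored) ** 2))
print(f"compression ratio ~{field.dtype.itemsize / codes.dtype.itemsize:.0f}x, "
      f"max error {max_err:.4f}, RMSE {rmse:.4f}")
```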

Deep learning and precision medicine for cancer treatment

Exascale computing also has important applications in healthcare, and the DOE, National Cancer Institute (NCI) and the National Institutes of Health (NIH) are taking advantage of it to understand cancer and the key drivers impacting outcomes. The Exascale Deep Learning Enabled Precision Medicine for Cancer project is developing a framework called CANDLE (CANcer Distributed Learning Environment) to address key research challenges in cancer and other critical healthcare areas.

CANDLE uses neural networks to find patterns in large datasets. It is being developed for three pilot projects geared toward understanding key protein interactions, predicting drug response and automating the extraction of patient information to inform treatment strategies.

Scaling up deep neural networks

These problems exist at different scales — molecular, patient and population levels — but all are supported by the same scalable deep learning environment in CANDLE. The CANDLE software suite includes a collection of deep neural networks that capture and represent the three problems, a library of code adapted for exascale-level computing, and a component that orchestrates how work will be distributed across the computing system.

Applications such as these are just the tip of the iceberg. Once these systems come online, the potential for new capabilities will be endless.

Citations (open-access): “Exascale applications: skin in the game,” Philosophical Transactions of the Royal Society A; and Wozniak, Justin M., et al. (2018). “CANDLE/Supervisor: A Workflow Framework for Machine Learning Applied to Cancer Research.” BMC Bioinformatics 19 (18): 491. https://doi.org/10.1186/s12859-018-2508-4.

Organizations: The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. Laboratory partners involved in ExaSky include Argonne, Los Alamos and Lawrence Berkeley National Laboratories. Collaborators working on CANDLE include Argonne, Lawrence Livermore, Los Alamos and Oak Ridge National Laboratories, NCI and the NIH.

Astronomers solve the 60-year mystery of quasars, the most powerful objects in the Universe

Scientists have unlocked one of the biggest mysteries of quasars — the brightest, most powerful objects in the Universe — discovering that they are ignited by galaxies colliding.

Quasars can shine as brightly as a trillion stars packed into a volume the size of our Solar System, but until now it has remained a mystery what could trigger such powerful activity.

Using deep imaging observations from the Isaac Newton Telescope in La Palma, Canary Islands, scientists at the universities of Sheffield and Hertfordshire observed 48 galaxies that host quasars and compared them to more than 100 non-quasar galaxies.

The future of our own Milky Way galaxy

When two galaxies collide, gravitational forces push huge amounts of gas towards the supermassive black hole at the centre of the remnant galaxy system that results from the collision. Just before the gas is consumed by the black hole, it releases extraordinary amounts of energy in the form of radiation, resulting in a quasar. The ignition of a quasar can have dramatic consequences for entire galaxies: it can drive the rest of the gas out of the galaxy, which prevents it from forming new stars for billions of years into the future.

“Quasars are one of the most extreme phenomena in the Universe, and what we see is likely to represent the future of our own Milky Way galaxy when it collides with the Andromeda galaxy in about five billion years,” said Professor Clive Tadhunter, from the University of Sheffield’s Department of Physics and Astronomy.

Beacons to the history (and future) of the universe

“Quasars are important to astrophysicists because, due to their brightness, they stand out at large distances and therefore act as beacons to the earliest epochs in the history of the Universe,” said Dr. Jonny Pierce, Post-Doctoral Research Fellow at the University of Hertfordshire.

“It’s an area that scientists around the world are keen to learn more about. One of the main scientific motivations for NASA’s James Webb Space Telescope was to study the earliest galaxies in the Universe, and Webb is capable of detecting light from even the most distant quasars, emitted nearly 13 billion years ago. Quasars play a key role in our understanding of the history of the Universe, and possibly also the future of the Milky Way.”
