Major cities on the U.S. Atlantic coast are sinking

In some cases, they’ve sunk as much as 5 millimeters per year—a decline at the ocean’s edge that outpaces global sea level rise, confirms new research from Virginia Tech and the U.S. Geological Survey.

Particularly hard-hit population centers such as New York City, Long Island, Baltimore, and Virginia Beach and Norfolk are seeing areas of rapid “subsidence,” or sinking land, alongside more slowly sinking or relatively stable ground, increasing the risk to roadways, runways, building foundations, rail lines, and pipelines, according to a study published in PNAS Nexus.

“Continuous unmitigated subsidence on the U.S. East Coast should cause concern,” say researchers at Virginia Tech’s Earth Observation and Innovation Lab. “This is particularly in areas with a high population and property density and a historical complacency toward infrastructure maintenance.”

The research draws on a vast collection of data points measured by space-based radar satellites, using this highly accurate information to build digital terrain maps that show exactly where sinking landscapes pose risks to vital infrastructure.

How deep?

These groundbreaking new maps show that a large area of the East Coast is sinking at least 2 mm per year, with several areas sinking more than 5 mm per year, more than the current 4 mm per year global rate of sea level rise.
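
To see why those numbers matter for coastal assets, a rough back-of-the-envelope combination of the two rates is enough. Below is a minimal sketch using only the round figures quoted above; real projections are site-specific.

```python
# Back-of-the-envelope: relative sea-level rise at a subsiding site is roughly
# the global sea-level rise plus the local land subsidence.
GLOBAL_SEA_LEVEL_RISE_MM_PER_YR = 4.0  # round figure quoted above

def relative_rise_mm(subsidence_mm_per_yr: float, years: float) -> float:
    """Water rising globally and land sinking locally add up for a coastal asset."""
    return (GLOBAL_SEA_LEVEL_RISE_MM_PER_YR + subsidence_mm_per_yr) * years

for rate in (2.0, 5.0):  # mm/yr of subsidence reported along parts of the coast
    print(f"{rate} mm/yr subsidence -> ~{relative_rise_mm(rate, 30):.0f} mm over 30 years")
```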

Subsidence affects more than 2 million people and 800,000 properties along the East Coast, including several major cities, and critical infrastructure such as roads, railways, airports, and levees is exposed to differing subsidence rates, the researchers found.

The new findings appear in “Slowly but surely: Exposure of communities and infrastructure to subsidence on the US east coast” in the open-access journal PNAS Nexus.

How much has sunk?

Virginia Tech and U.S. Geological Survey (USGS) scientists measured how much the land along the East Coast has sunk and which areas, populations, and critical infrastructure within 100 km of the coast are at risk of land subsidence. Subsidence can undermine building foundations; damage roads, gas, and water lines; cause building collapse; and exacerbate coastal flooding—especially when paired with sea level rise caused by climate change.

Citation: Ohenhen, L. O., Shirzaei, M., & Barnard, P. L. (2023). Slowly but surely: Exposure of communities and infrastructure to subsidence on the US east coast. PNAS Nexus, 3(1). https://doi.org/10.1093/pnasnexus/pgad426

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

New AI tool detects cancer quicker, with sharper focus

Researchers have developed a new AI tool that interprets medical images with unprecedented clarity and could speed up disease diagnosis and image interpretation.

The tool, called iStar (Inferring Super-Resolution Tissue Architecture), was developed by researchers at the Perelman School of Medicine at the University of Pennsylvania.

The researchers say the iStar tool would allow doctors and researchers to see cancer cells that might otherwise have been virtually invisible. It could also determine whether safe margins were achieved in cancer surgeries and automatically provide annotations for microscopic images, paving the way for disease diagnosis at the molecular level.

A paper on the method was published today in Nature Biotechnology.

Deciding when immunotherapy is needed

Mingyao Li, PhD, a professor of Biostatistics and Digital Pathology, said iStar can automatically detect critical anti-tumor immune formations called “tertiary lymphoid structures,” which are associated with a patient’s likely survival and favorable response to immunotherapy. This means, Li said, that iStar could be a powerful tool for determining which patients would benefit most from immunotherapy.

Li and her colleagues trained a machine-learning tool called the “Hierarchical Vision Transformer” on standard tissue images. It breaks images down in stages, and the resulting pieces are used to predict gene activities, often at near-single-cell resolution.
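
The overall shape of such a pipeline, splitting a slide into coarse tiles and finer patches within them and then predicting per-patch gene activity from the extracted features, can be sketched roughly as follows. This is an illustrative simplification, not the authors' implementation: the patch sizes, the stand-in feature extractor, and the linear read-out below are placeholders for the paper's hierarchical vision transformer and prediction head.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(region: np.ndarray) -> np.ndarray:
    """Stand-in for a learned image encoder: per-channel mean and std."""
    return np.concatenate([region.mean(axis=(0, 1)), region.std(axis=(0, 1))])

def hierarchical_features(image: np.ndarray, coarse: int = 64, fine: int = 16) -> np.ndarray:
    """Encode the image in two stages: coarse context tiles, then fine patches
    inside each tile; each fine patch keeps its tile's features as context."""
    feats = []
    h, w, _ = image.shape
    for y0 in range(0, h, coarse):
        for x0 in range(0, w, coarse):
            tile = image[y0:y0 + coarse, x0:x0 + coarse]
            tile_feat = extract_features(tile)
            for y1 in range(0, tile.shape[0], fine):
                for x1 in range(0, tile.shape[1], fine):
                    patch = tile[y1:y1 + fine, x1:x1 + fine]
                    feats.append(np.concatenate([tile_feat, extract_features(patch)]))
    return np.array(feats)

# Toy "tissue image"; a linear read-out stands in for the learned prediction head.
image = rng.random((128, 128, 3))
X = hierarchical_features(image)        # (num_fine_patches, feature_dim)
W = rng.normal(size=(X.shape[1], 3))    # would be learned from spatial transcriptomics data
gene_activity = X @ W                   # predicted activity of 3 genes per fine patch
print(gene_activity.shape)              # (64, 3): 64 fine patches x 3 genes
```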

213 times faster than other AI tools

To test the efficacy of the tool, Li and her colleagues evaluated iStar on many different types of cancer tissue, including breast, prostate, kidney, and colorectal cancers, mixed with healthy tissues. iStar was able to automatically detect tumor and cancer cells that were hard to identify just by eye. iStar was also 213 times faster than other AI tools.

This research was funded by the National Institutes of Health (R01GM125301 and R01HG013185).

Citation: Zhang, D., Schroeder, A., Yan, H. et al. Inferring super-resolution tissue architecture by integrating spatial transcriptomics with histology. Nat Biotechnol (2024). https://doi.org/10.1038/s41587-023-02019-9

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

A carbon-lite atmosphere could be a sign of water and life on a terrestrial planet

Scientists at MIT, the University of Birmingham, and elsewhere say that astronomers’ best chance of finding liquid water, and even life on other planets, is to look for the absence, rather than the presence, of carbon dioxide in their atmospheres. 

This new signature is within the sights of NASA’s James Webb Space Telescope (JWST). The team says this new signature of relatively depleted carbon dioxide is the only sign of habitability that is detectable now.

“Now we have a way to find out if there’s liquid water on another planet. And it’s something we can get to in the next few years,” said Julien de Wit, assistant professor of planetary sciences at MIT.

The team’s findings will appear in Nature Astronomy.

Liquid water

Astronomers have so far detected more than 5,200 worlds beyond our solar system. But there’s been no way to directly confirm whether a planet is indeed habitable, meaning that liquid water exists on its surface. 

Venus, Earth, and Mars share similarities: all three are rocky and inhabit a relatively temperate region with respect to the sun. But Earth is the only planet among the three that currently hosts liquid water, and it has significantly less carbon dioxide in its atmosphere. 

“We assume that these planets were created in a similar fashion, and if we see one planet with much less carbon now, it must have gone somewhere,” says lead author Amaury Triaud of the University of Birmingham. “The only process that could remove that much carbon from an atmosphere is a strong water cycle involving oceans of liquid water.”

The team reasoned that if a similar depletion of carbon dioxide were detected in a far-off planet, relative to its neighbors, this would be a reliable signal of liquid oceans and life on its surface.

A roadmap to life

Once astronomers determine that multiple planets in a system host atmospheres, they can move on to measure their carbon dioxide content, to see whether one planet has significantly less than the others. If so, the planet is likely habitable, meaning that it hosts significant bodies of liquid water on its surface. 

To see whether life might actually exist, the team proposes that astronomers look for another feature in a planet’s atmosphere: ozone. Lifeforms emit oxygen, which reacts with the sun’s photons to transform into ozone—a molecule that is far easier to detect than oxygen itself. 

The researchers say that if a planet’s atmosphere shows signs of both ozone and depleted carbon dioxide, it is likely a habitable, and possibly inhabited, world, a determination that could come within the next few years.
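
Written out as a decision rule, the proposed strategy looks roughly like the sketch below; the depletion threshold and the input format are illustrative assumptions, not values from the paper.

```python
def assess_planet(co2_fraction: float, system_median_co2: float,
                  ozone_detected: bool, depletion_factor: float = 10.0) -> str:
    """Classify a rocky planet using the carbon-depletion signature described above.

    co2_fraction:       atmospheric CO2 abundance inferred for this planet
    system_median_co2:  typical CO2 abundance among its rocky neighbors
    depletion_factor:   how much lower the CO2 must be to count as 'depleted'
                        (illustrative threshold, not a value from the study)
    """
    depleted = co2_fraction < system_median_co2 / depletion_factor
    if depleted and ozone_detected:
        return "likely habitable and possibly inhabited"
    if depleted:
        return "likely habitable (liquid-water oceans plausible)"
    return "no carbon-depletion signature"

# Example: one planet in a system shows far less CO2 than its neighbors, plus ozone.
print(assess_planet(co2_fraction=0.005, system_median_co2=0.9, ozone_detected=True))
```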

Citation: Triaud, A. H., De Wit, J., Klein, F., Turbet, M., Rackham, B. V., Niraula, P., Glidden, A., Jagoutz, O. E., Peč, M., Petkowski, J. J., Seager, S., & Selsis, F. (2023). Atmospheric carbon depletion as a tracer of water oceans and biomass on temperate terrestrial exoplanets. Nature Astronomy, 1-13. https://doi.org/10.1038/s41550-023-02157-9

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

New AI model could predict your lifespan, job, income and more

Researchers have created an AI model that uses sequences of life events—such as health history, education, job, and income—to predict everything from a person’s personality to their own mortality—for an entire country.

Built using transformer models, which power large language models (LLMs) like ChatGPT, the new tool, life2vec, is trained on a data set pulled from the entire population of Denmark—6 million people. The data set was made available to the researchers by the Danish government.
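
Conceptually, the model treats registry records the way a language model treats words: each life event becomes a token in a chronological sequence. A minimal sketch of that encoding step follows; the event codes and records are invented for illustration, and life2vec's real vocabulary, event categories, and transformer are far richer.

```python
# Minimal sketch: turning life events into tokens a sequence model could consume.
# The event codes below are invented for illustration.
life_events = [
    ("2008", "education:high_school_diploma"),
    ("2012", "education:bachelor_degree"),
    ("2013", "job:junior_engineer"),
    ("2016", "health:fracture_diagnosis"),
    ("2019", "income:quartile_3"),
]

# Build a vocabulary of event codes, then map the chronologically ordered events
# to integer token IDs -- the usual preprocessing before training a transformer.
vocab = {code: i + 1 for i, code in enumerate(sorted({code for _, code in life_events}))}
tokens = [vocab[code] for _, code in sorted(life_events)]
print(tokens)  # [2, 1, 5, 3, 4]: one integer per life event, in time order
```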

The tool is capable of predicting the future, with an accuracy that exceeds state-of-the-art models, says Tina Eliassi-Rad, professor of computer science and the inaugural President Joseph E. Aoun Professor at Northeastern University in Boston.

Unique human-centered model

“Even though we’re using prediction to evaluate how good these models are, the tool shouldn’t be used for prediction on real people,” says Eliassi-Rad, a co-author of the paper, which was recently published in Nature Computational Science.

“It is a prediction model based on a specific data set of a specific population. These tools allow you to see into your society in a different way: the policies, rules and regulations. You can think of it as a scan of what is happening on the ground.”

By involving social scientists in the process of building this tool, the team hopes it brings a human-centered approach to AI development, one that doesn’t lose sight of the humans amid the massive data set their tool has been trained on. 

Confidential training data

A massive data set was used to train the life2vec model. The data is held by Statistics Denmark, the central authority on Danish statistics, and is tightly controlled because it includes a detailed registry of every Danish citizen. Even so, it can be accessed by some members of the public, including researchers.

The researchers hope the model can kickstart a public conversation about the power of these tools and how they should (and shouldn’t) be used.

Citation: Savcisens, G., Eliassi-Rad, T., et al. Using sequences of life-events to predict human lives. Nature Computational Science (2023). Also available: open-access draft arXiv version

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Listening to binaural beats stimulates cognitive performance

New research conducted at The University of Texas at Dallas investigates the impact of binaural beats (BB) on cognitive performance in language tasks. A binaural beat is the sound perceived when two tones with slightly different frequencies are played separately, one to each ear.
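
For example, an 18-Hz beat can be produced by playing a 200-Hz tone in one ear and a 218-Hz tone in the other; the brain perceives the 18-Hz difference. A minimal sketch of generating such a stereo signal is below; the 200-Hz carrier is an arbitrary illustrative choice, not necessarily the one used in the study.

```python
import numpy as np

def binaural_beat(beat_hz: float, carrier_hz: float = 200.0,
                  seconds: float = 2.0, sample_rate: int = 44100) -> np.ndarray:
    """Return a stereo signal whose left/right tones differ by `beat_hz`."""
    t = np.arange(int(seconds * sample_rate)) / sample_rate
    left = np.sin(2 * np.pi * carrier_hz * t)                # e.g. 200 Hz
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)   # e.g. 218 Hz
    return np.stack([left, right], axis=1)                   # shape: (samples, 2)

beta_bb = binaural_beat(18.0)    # 18-Hz beta-band beat, as in the study's beta condition
gamma_bb = binaural_beat(40.0)   # 40-Hz gamma-band beat
print(beta_bb.shape)             # (88200, 2)
```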

There is growing interest in using BB as a non-invasive neuromodulation technique to enhance cognitive performance, according to the researchers.

The experiment

Researchers in the Speech, Language and Music (SLAM) Lab at the Center for BrainHealth investigated how BB stimulation at beta and gamma frequencies affects listeners. The research was recently published in Cerebral Cortex.

In the experiment, electroencephalography (EEG) signals were recorded from 60 participants. Each participant was randomly assigned to one of three listening conditions (18-Hz beta BB, 40-Hz gamma BB, or a pure-tone baseline), with the sounds embedded in music, followed by a language comprehension task.

18-Hz binaural beats showed best performance

Results showed that participants exposed to 18-Hz (beta-frequency) binaural beats had significantly higher accuracy and faster response times during the language comprehension task, particularly for complex sentences. They also had increased neural entrainment.

“This is the first neuroimaging study that elucidates the effect of BB on language processing at the neural level,” said lead investigator Dr. Yune Lee, Assistant Professor at the School of Behavioral and Brain Sciences and Director of the Speech, Language, and Music (SLAM) Laboratory.

Higher accuracy and faster language response times

“Minimal exposure to beta-frequency binaural beats resulted in significantly higher accuracy and faster language response times compared to control sounds,” Lee said. “These results have implications for the potential use of BB in treating developmental language disorders.”

Citation: Kim, J., Kim, H., Kovar, J., & Lee, Y. S. Neural consequences of binaural beat stimulation on auditory sentence comprehension: An EEG study. Cerebral Cortex. https://doi.org/10.1093/cercor/bhad459

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

New energy-efficient brain-like transistor mimics human intelligence

Taking inspiration from the human brain, researchers have developed a new synaptic transistor capable of higher-level thinking.

The researchers at Northwestern University, Boston College and MIT have developed a device that processes and also stores information, like the human brain. This transistor goes beyond simple machine-learning tasks. It can categorize data and perform associative learning.

Previous transistor devices of this kind cannot function outside cryogenic temperatures (-310 degrees Fahrenheit or lower). The new device works at room temperature. It also operates at fast speeds, consumes very little energy and retains stored information even when power is removed, making it ideal for real-world applications.

The study was published today (Dec. 20) in the journal Nature.

Co-locating memory and information processing

“The brain has a fundamentally different architecture than a digital computer,” said Northwestern’s Mark C. Hersam, who co-led the research. “In a digital computer, data move back and forth between a microprocessor and memory, consuming a lot of energy and creating a bottleneck when attempting to perform multiple tasks at the same time.

In the brain, by contrast, “memory and information processing are co-located and fully integrated, resulting in orders of magnitude higher energy efficiency,” he said. “Our synaptic transistor similarly achieves concurrent memory and information processing functionality to more faithfully mimic the brain.”

High power consumption

Recent advances in artificial intelligence (AI) have motivated researchers to develop computers that operate more like the human brain. Conventional digital computing systems have separate processing and storage units, causing data-intensive tasks to devour large amounts of energy. 

Currently, the memory resistor, or “memristor,” is the most well-developed technology that can perform combined processing and memory function. But memristors still suffer from energy-costly switching.

“For several decades, the paradigm in electronics has been to build everything out of transistors and use the same silicon architecture,” Hersam said. “Significant progress has been made by simply packing more and more transistors into integrated circuits.”

But this comes “at the cost of high power consumption, especially in the current era of big data, where digital computing is on track to overwhelm the grid. We have to rethink computing hardware, especially for AI and machine-learning tasks.”

Based on moiré patterns

To rethink this paradigm, Hersam and his team explored new advances in the physics of moiré patterns, a type of geometrical design that arises when two patterns are layered on top of one another. When two-dimensional materials are stacked, new properties emerge that do not exist in one layer alone. And when those layers are twisted to form a moiré pattern, unprecedented tunability of electronic properties becomes possible.

For the new device, the researchers combined two different types of atomically thin materials: bilayer graphene and hexagonal boron nitride. When stacked and purposefully twisted, the materials formed a moiré pattern. By rotating one layer relative to the other, the researchers could achieve different electronic properties in each graphene layer, even though they are separated by only atomic-scale dimensions. With the right choice of twist, researchers harnessed moiré physics for neuromorphic functionality at room temperature.

“With twist as a new design parameter, the number of permutations is vast,” Hersam said. “Graphene and hexagonal boron nitride are very similar structurally but just different enough that you get exceptionally strong moiré effects.”

Higher-level “associative learning”

To test the transistor, Hersam and his team trained it to recognize similar—but not identical—patterns. Just earlier this month, Hersam introduced a new nanoelectronic device capable of analyzing and categorizing data in an energy-efficient manner, but his new synaptic transistor takes machine learning and AI one leap further.

“If AI is meant to mimic human thought, one of the lowest-level tasks would be to classify data, which is simply sorting into bins,” Hersam said. “Our goal is to advance AI technology in the direction of higher-level thinking. Real-world conditions are often more complicated than current AI algorithms can handle, so we tested our new devices under more complicated conditions to verify their advanced capabilities.”

First the researchers showed the device one pattern: 000 (three zeros in a row). Then, they asked the AI to identify similar patterns, such as 111 or 101. “If we trained it to detect 000 and then gave it 111 and 101, it knows 111 is more similar to 000 than 101,” Hersam explained. “000 and 111 are not exactly the same, but both are three digits in a row. Recognizing that similarity is a higher-level form of cognition, known as associative learning.”
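
In software terms, the distinction is between matching patterns digit by digit and matching them on a higher-level feature such as "all digits identical." The toy sketch below illustrates that idea; it is an analogy for the behavior described, not a model of the transistor's physics.

```python
def uniformity(pattern: str) -> float:
    """Higher-level feature: fraction of digits equal to the most common digit."""
    return max(pattern.count(d) for d in set(pattern)) / len(pattern)

def associative_similarity(a: str, b: str) -> float:
    """Compare patterns by structure (uniformity) rather than digit by digit."""
    return 1.0 - abs(uniformity(a) - uniformity(b))

trained = "000"
for candidate in ("111", "101"):
    print(candidate, associative_similarity(trained, candidate))
# "111" scores 1.0 (also three identical digits); "101" scores lower (~0.67),
# even though it shares more digit positions with "000" than "111" does.
```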

In experiments, the new synaptic transistor successfully recognized similar patterns, displaying its associative memory. Even when the researchers threw curveballs, like giving it incomplete patterns, it still successfully demonstrated associative learning.

“Current AI can be easy to confuse, which can cause major problems in certain contexts,” Hersam said. “Imagine if you are using a self-driving vehicle, and the weather conditions deteriorate. The vehicle might not be able to interpret the more complicated sensor data as well as a human driver could. But even when we gave our transistor imperfect input, it could still identify the correct response.”

Hersam co-led the research with Qiong Ma of Boston College and Pablo Jarillo-Herrero of MIT. He is the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern’s McCormick School of Engineering and chair of the department of materials science and engineering, director of the Materials Research Science and Engineering Center and member of the International Institute for Nanotechnology.

The study, “Moiré synaptic transistor with room-temperature neuromorphic functionality,” was primarily supported by the National Science Foundation.

Citation: Yan, X., Zheng, Z., Sangwan, V.K. et al. Dec. 12, 2023. Moiré synaptic transistor with room-temperature neuromorphic functionality. Nature 624, 551–556 (2023). https://doi.org/10.1038/s41586-023-06791-1

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Nuke that asteroid! Is that a good idea?

Situation: An asteroid is heading toward Earth. Scientists calculate that deflecting it with a conventional kinetic impactor, to defend the planet against a catastrophic impact, will not work. Why? Limits on how much mass can be launched into space.

Solution: Researchers at Lawrence Livermore National Laboratory (LLNL) say they have developed a modeling tool for assessing the potential use of a nuclear device. The research, published today (Dec. 19, 2023) in Planetary Science Journal, introduces a novel approach to simulating the energy deposition from a nuclear device on an asteroid’s surface.

This new tool improves our understanding of how the radiation from a nuclear deflection interacts with an asteroid’s surface. It also opens the door to new research on the shockwave dynamics affecting the asteroid’s interior.

This model will allow researchers to build upon the insights gained from NASA’s Double Asteroid Redirection Test (DART) mission in September 2022, when a spacecraft was deliberately crashed into an asteroid to alter its trajectory.

[DART animated infographic]

“Nuclear devices have the highest ratio of energy density per unit of mass of any human technology, making them an invaluable tool in mitigating asteroid threats,” said LLNL physicist Mary Burkey, who led the research.

A long-range decision: to smash or deflect?

“If we have enough warning time, we could potentially launch a nuclear device, sending it millions of miles away to an asteroid that is headed toward Earth,” Burkey said. “We would then detonate the device and either deflect the asteroid, keeping it intact but providing a controlled push away from Earth, or we could disrupt the asteroid, breaking it up into small, fast-moving fragments that would also miss the planet.”

Accurate predictions for the effectiveness of nuclear deflection missions rely on sophisticated multiphysics simulations, Burkey said, explaining that LLNL simulation models cover a wide range of physical factors, which makes them complex and computationally demanding.

Asteroid x-rays

The paper also introduces an efficient and accurate library of X-ray energy deposition functions. High-fidelity simulations tracked photons penetrating surfaces of asteroid-like materials such as rock, iron, and ice, while accounting for more complex processes, such as reradiation.

The model also considers a diverse set of initial conditions, including different porosities, source spectra, radiation fluences, source durations, and angles of incidence. This comprehensive approach makes the model applicable to a wide range of potential asteroid scenarios. 
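
In practice, a library like this lets a simulation look up deposited energy from precomputed tables instead of re-running full radiation-transport physics each time. The sketch below shows the lookup idea only; the numbers and the single depth parameter are invented for illustration, whereas the actual library spans materials, porosities, spectra, fluences, durations, and angles.

```python
import numpy as np

# Toy stand-in for a precomputed energy-deposition lookup: deposited energy
# fraction as a function of depth for two materials. Values are invented.
depth_cm = np.array([0.0, 0.5, 1.0, 2.0, 5.0])
deposition = {
    "rock": np.array([1.0, 0.55, 0.30, 0.10, 0.01]),
    "ice":  np.array([1.0, 0.70, 0.45, 0.20, 0.03]),
}

def deposited_fraction(material: str, depth: float) -> float:
    """Interpolate the tabulated curve -- the kind of fast lookup a larger
    simulation would use instead of recomputing radiation transport."""
    return float(np.interp(depth, depth_cm, deposition[material]))

print(deposited_fraction("rock", 1.5))  # ~0.20, halfway between the 1 cm and 2 cm entries
```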

High-fidelity simulation modeling

Should a real planetary defense emergency arise, high-fidelity simulation modeling will be critical in providing decision-makers with actionable, risk-informed information that could prevent asteroid impact, protect essential infrastructure and save lives, explained Megan Bruck Syal, LLNL’s planetary defense project lead.

“While the probability of a large asteroid impact during our lifetime is low, the potential consequences could be devastating,” Bruck Syal said.

Citation: Mary T. Burkey, Robert A. Managan, Nicholas A. Gentile, Megan Bruck Syal, Kirsten M. Howley and Joseph V. Wasem. December 19, 2023. Planetary Science Journal, Vol. 4, Number 12. https://iopscience.iop.org/article/10.3847/PSJ/ad0838 (open access).

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

AI’s memory-forming mechanism found to be strikingly similar to the brain’s

Researchers at the Institute for Basic Science (IBS) in South Korea have discovered a striking similarity between the memory processing of AI transformer models and that of the hippocampus in the human brain.

The finding provides a novel perspective on memory consolidation, the process that transforms short-term memories into long-term ones, in AI systems.

Hippocampus role in memory

The team focused on memory consolidation through the NMDA receptor in the hippocampus, which facilitates learning and memory formation. When a brain chemical called glutamate is present, the nerve cell undergoes excitation, but a magnesium ion acts as a small gatekeeper blocking the door. When this ionic gatekeeper steps aside, substances are allowed to flow into the cell. This is the process that allows the brain to create and keep memories.

Low-cost, high-performance AI systems

The team discovered that large language models, such as ChatGPT, seem to use a gatekeeping process similar to the brain’s NMDA receptor.

In the animal brain, a low magnesium level is known to weaken memory function. The researchers found that long-term memory in the transformer can be improved by mimicking the NMDA receptor. Similar to how changing magnesium levels affect memory strength, tweaking the transformer’s parameters to reflect the gating action of the NMDA receptor led to enhanced memory in the AI model.
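
One way to picture such a tweak is as a gating nonlinearity that blocks weak activations and passes strong ones, loosely analogous to the magnesium block on the NMDA receptor. The sketch below is that loose analogy only, with illustrative parameters; it is not the formulation used in the paper.

```python
import numpy as np

def nmda_like_gate(x: np.ndarray, alpha: float = 1.0, beta: float = 5.0) -> np.ndarray:
    """Toy NMDA-style nonlinearity: a sigmoid 'magnesium gate' that only opens
    once activations are strong enough, multiplied back onto the input.
    The parameters alpha and beta are illustrative choices."""
    gate = 1.0 / (1.0 + np.exp(-beta * (x - alpha)))  # closed for weak x, open for strong x
    return x * gate

x = np.linspace(-2, 3, 6)
print(np.round(nmda_like_gate(x), 3))
# Weak inputs are almost fully blocked; strong inputs pass nearly unchanged,
# loosely mirroring how the Mg2+ block gates the NMDA receptor.
```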

CHA Meeyoung, a data scientist on the team, notes that “the human brain is remarkable in how it operates with minimal energy, unlike the large AI models that need immense resources. Our work opens up new possibilities for low-cost, high-performance AI systems that learn and remember information like humans.”

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Deep neural networks mimic human hearing

A new study from MIT has found that computational models derived from machine learning are moving closer to mimicking the structure and function of the human auditory system.

This could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces, the researchers suggest.

In this largest study yet of deep neural networks trained to perform auditory tasks, the MIT team showed that most of these models share properties seen in the human brain when people are listening to the same sounds.

Mimicking the human auditory system

When trained on auditory inputs, including background noise, the models closely mimic the activation patterns of the human auditory cortex, matching those seen in fMRI scans of people listening to the same sounds.
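
A common way to quantify this kind of match is to fit a linear mapping from a model's internal activations to measured voxel responses and test how well it predicts held-out data. The sketch below shows that generic recipe on random placeholder data; it is not the paper's exact analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: model-layer activations and fMRI voxel responses for the same sounds.
# (Random placeholders; the actual study uses real model features and scans.)
n_sounds, n_features, n_voxels = 100, 50, 20
model_activations = rng.normal(size=(n_sounds, n_features))
voxel_responses = rng.normal(size=(n_sounds, n_voxels))

# Fit a linear map from model features to voxel responses on half the sounds,
# then score how well it predicts held-out responses.
train, test = slice(0, 50), slice(50, 100)
weights, *_ = np.linalg.lstsq(model_activations[train], voxel_responses[train], rcond=None)
predicted = model_activations[test] @ weights

# Per-voxel correlation between predicted and measured held-out responses.
score = np.mean([np.corrcoef(predicted[:, v], voxel_responses[test][:, v])[0, 1]
                 for v in range(n_voxels)])
print(f"mean held-out prediction correlation: {score:.2f}")  # ~0 for random data
```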

“This is the most comprehensive comparison of these kinds of models to the auditory system so far,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study, which appears in PLOS Biology.

Auditory models

For this study, the researchers analyzed nine publicly available deep neural network models that had been trained to perform auditory tasks. They also created 14 models of their own, based on two different architectures.

Most of these models were trained to perform a single task, such as recognizing words, identifying the speaker, recognizing environmental sounds, or identifying musical genre, while two of them were trained to perform multiple tasks.

The researchers found that models trained on different tasks were better at replicating different aspects of audition. For example, models trained on a speech-related task more closely matched activity in speech-selective brain areas.

Better models

McDermott’s lab now plans to develop models that are even more successful at reproducing human brain responses. Such models could help scientists learn more about how the brain may be organized, and could also be used to help develop better hearing aids, cochlear implants, and brain-machine interfaces.

“A goal of our field is to end up with a computer model that can predict brain responses and behavior. We think that if we are successful in reaching that goal, it will open a lot of doors,” McDermott says.

The research was funded by the National Institutes of Health, an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, an MIT Friends of McGovern Institute Fellowship, a fellowship from the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, and a Department of Energy Computational Science Graduate Fellowship.

Citation: Greta Tuckute, Jenelle Feather, Dana Boebinger, Josh H. McDermott. December 13, 2023, PLOS Biology, https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3002366 (open-access)

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Scientists unveil complete cell map of a whole mammalian brain

An NIH-funded international team of researchers has created a complete cell atlas of a whole mammalian brain, a first. A map for the mouse brain, it describes the type, location, and molecular information of more than 32 million cells and provides information on connectivity between these cells.

The mouse is the most commonly used vertebrate experimental model in neuroscience research, and this cellular map paves the way for a greater understanding of the human brain. The cell atlas also lays the foundation for developing a new generation of precision therapeutics for people with mental and neurological disorders of the brain.

The findings were funded by the National Institutes of Health’s Brain Research Through Advancing Innovative Neurotechnologies® Initiative, or The BRAIN Initiative®, and appear in a collection of 10 papers published in Nature.

“The mouse atlas has brought the intricate network of mammalian brain cells into unprecedented focus, giving researchers the details needed to understand human brain function and diseases,” said Joshua A. Gordon, M.D., Ph.D., Director of the National Institute of Mental Health, part of the National Institutes of Health.

Unprecedented map of cellular organization and diversity

The cell atlas describes the types of cells in each region of the mouse brain and their organization within those regions. It also provides a detailed catalog of the cell’s transcriptome—the complete set of gene readouts in a cell, which contains instructions for making proteins and other cellular products.

The atlas also characterizes the cell epigenome—chemical modifications to a cell’s DNA and chromosomes that alter the way the cell’s genetic information is expressed—detailing thousands of epigenomic cell types and millions of candidate genetic regulation elements for different brain cell types.

A blueprint of transmitted chemical signals

The atlas also provides an accounting of the neurotransmitters and neuropeptides used by different cells and the relationship among cell types within the brain. This information can be used as a detailed blueprint for how chemical signals are initiated and transmitted in different parts of the brain. These signals are the basis for how brain circuits operate and how the brain functions overall.

“This product is a testament to the power of this unprecedented, cross-cutting collaboration and paves our path for more precision brain treatments,” said John Ngai, Ph.D., Director of the NIH BRAIN Initiative.

Of the 10 studies included in this collection, seven are funded through the NIH BRAIN Initiative Cell Census Network (BICCN), and two are funded through the larger NIH BRAIN Initiative.

The core aim of the BICCN, a groundbreaking, cross-collaborative effort to understand the brain’s cellular makeup, is to develop a comprehensive inventory of the cells in the brain—where they are, how they develop, how they work together, and how they regulate their activity—to better understand how brain disorders develop, progress, and are best treated.

Next big step: completing the cell maps of the human brain and the nonhuman primate brain

“By leveraging the unique nature of its multi-disciplinary and international collaboration, the BICCN was able to accomplish what no other team of scientists has been able to before,” said Ngai. “Now we are ready to take the next big step—completing the cell maps of the human brain and the nonhuman primate brain.”

The BRAIN Initiative Cell Atlas Network (BICAN) is the next stage in the NIH BRAIN Initiative’s effort to understand the cells and cellular functions of the mammalian brain.

Aim: revolutionize neuroscience research

According to NIH, BICAN is a transformative project that, together with two other large-scale projects—the BRAIN Initiative Connectivity Across Scales and the Armamentarium for Precision Brain Cell Access—aims to revolutionize neuroscience research by illuminating foundational principles governing the circuit basis of behavior and informing new approaches to treating human brain disorders.

Citation: Yao, Z., van Velthoven, C. T. J., Kunst, M., Zhang, M., McMillen, D., Lee, C., Jung, W., Goldy, J., Abdelhak, A., Aitken, M., Baker, K., Baker, P., Barkan, E., Bertagnolli, D., Bhandiwad, A., Bielstein, C., Bishwakarma, P., Campos, J., Carey, D., … Zeng, H. (2023). A high-resolution transcriptomic and spatial atlas of cell types in the whole mouse brain. Nature. https://www.nature.com/articles/s41586-023-06812-z (open-access)

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter