MIT researchers remotely map food crops, using machine learning and satellites

Crop maps help scientists and policymakers track global food supplies and estimate how they might shift with climate change and growing populations. It’s estimated that smallholder farms support two-thirds of the world’s rural population and produce 80 percent of the world’s food.

Keeping tabs on what is grown and where is essential to tracking and forecasting food supplies around the world. But only a handful of countries have the resources to create accurate maps.

So MIT engineers used a combination of Google Street View images, machine learning, and satellite data to automatically determine the crops grown throughout a region, without requiring in-person assessments of every single farm.

First nationwide crop map of Thailand

The researchers used this technique to generate the first nationwide crop map of Thailand’s four major crops—rice, cassava, sugarcane, and maize—with an accuracy of 93 percent.

That’s comparable to on-the-ground mapping efforts in high-income, big-farm countries, says Sherrie Wang, the d’Arbeloff Career Development Assistant Professor in MIT’s Department of Mechanical Engineering.
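For readers who want to see how the two-stage idea fits together, here is a minimal sketch: crop labels harvested from roadside photos train a second model that sees only satellite features, so it can classify fields far from any road. The stand-in labeling step, the random-forest choice, and the feature sizes below are illustrative assumptions, not the team’s actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

CROPS = ["rice", "cassava", "sugarcane", "maize"]
rng = np.random.default_rng(0)

# Stage 1 (stand-in): in the real pipeline, a deep network classifies the
# crop visible in each geotagged street-view photo; here we fake those
# labels so the sketch runs end to end.
n_fields = 5000
street_view_labels = rng.integers(0, len(CROPS), size=n_fields)

# Stage 2: train a classifier that sees only satellite features for the
# same fields (e.g., seasonal spectral indices), so it can predict crops
# for fields far from any road.
n_bands = 12                                   # hypothetical feature size
satellite_features = rng.random((n_fields, n_bands))

X_train, X_test, y_train, y_test = train_test_split(
    satellite_features, street_view_labels, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Random toy data sits near chance; the paper reports 93% on real features.
print(f"held-out accuracy: {clf.score(X_test, y_test):.2%}")
```

On real data, per-pixel predictions across the country would then be stitched into a map like the Thailand one described above.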

Mapping India’s millions of smallholder farmers

The researchers are now moving to map crops across India, where roadside images have recently become available from Google Street View and other services. “There are over 150 million smallholder farmers in India,” says Wang.

“India is covered in agriculture, almost wall-to-wall farms, but very small farms, and historically it’s been very difficult to create maps of India because there are very sparse ground labels.”

The mapping method

Wang, along with graduate students in MIT’s Institute for Data, Systems, and Society (IDSS) and the agtech company PEAT GmbH, will present a paper detailing their mapping method later this month at the AAAI Conference on Artificial Intelligence.

“What would be interesting would be to create these maps over time,” Wang says. “Then you could start to see trends, and we can try to relate those things to anything like changes in climate and policies.”

Citation: Soler, J. L., Friedel, T., & Wang, S. (2023). Combining Deep Learning and Street View Imagery to Map Smallholder Crop Types. arXiv. https://arxiv.org/abs/2309.05930 (open access)


Shape-shifting soft machines

Researchers at Lawrence Livermore National Laboratory have developed a new type of soft material that can change shape in response to light. It could enable “soft machines” for a variety of fields, from robotics to medicine.

Liquid crystals and 3D-printing

The novel liquid crystal elastomer (LCE) material is made by incorporating liquid crystals into the molecular structure of a stretchable material.

By adding gold nanorods to the LCE material, the scientists and engineers created photo-responsive inks and 3D-printed structures that could be made to bend, crawl and move when exposed to a laser that causes localized heating in the material.

The results were recently published online by the journal Matter.

As described in the paper, the LLNL team, along with their collaborators from Harvard University, North Carolina State University and the University of Pennsylvania, used a direct ink-writing printing technique to build a variety of light-responsive objects. These include cylinders that could roll, asymmetric “crawlers” that could move forward, and lattice structures that oscillated.

Hierarchical lattices

By combining shape morphing with photoresponsivity, the new material could change the way people think about machines and materials.

“At LLNL, we’ve focused on developing static materials and architectures for some time,” said principal investigator Caitlyn Krikorian Cook. “We’ve made these complex types of structures like hierarchical lattices, and we’ve even started exploring more responsive materials, like shape memory polymers, that have a one-time shape memory response.”

Soft machines

The researchers said the new material could be used to create a “soft machine” made from these flexible LCE composite materials—capable of responding to external stimuli and even mimicking the movements and behaviors of living organisms.

Soft robots made of the shape morphing material could crawl, swim or fly, and explore environments that are too difficult or dangerous for humans to access, like caves or outer space.

Soft machines could also be used in medical applications, such as implantable devices that can adapt to the body’s movements, or prosthetic limbs that move like natural limbs, and other applications that aren’t possible with machines made from rigid materials, like metal or plastic.

“Rigid robots maybe wouldn’t be ideal for humans to interact with, so we need systems and materials that are more compliant,” said the paper’s lead author Michael Ford, a postdoc at Carnegie Mellon University.

“You start with components that make up our robots, and one of those components is an actuator. That’s where these materials come in; they could potentially be an actuator. It reduces computational complexity; you’re making a material that gets rid of onboard electronics and replaces them with a single material that can do all those things.

“That will allow you to put more computational complexity into another component or drive power to other sensors that you wouldn’t have been able to do with traditional rigid materials.”

Photothermal actuation

Researchers said the movement of the LCE material is driven primarily by a process known as photothermal actuation, which involves converting light energy into thermal energy, resulting in a mechanical response from the material.

Driven by the interaction between light, gold nanorods and the LCE matrix, the process enables the printed structures to exhibit dynamic and reversible movements in response to external stimuli.
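A toy model makes the mechanism easier to picture: laser power heats the nanorod-doped elastomer with a first-order thermal lag, and contraction kicks in once the material warms past its nematic-to-isotropic transition. All constants below (time constant, transition temperature, maximum strain) are illustrative stand-ins, not the paper’s measured values.

```python
import numpy as np

def simulate(laser_on, dt=0.01, tau=2.0, t_ambient=25.0,
             t_rise=40.0, t_transition=60.0, max_strain=0.3):
    """Return temperature and contraction strain over time.

    laser_on: boolean array, one entry per time step.
    tau: thermal time constant (s); t_rise: steady-state heating (deg C).
    """
    temps, strains = [], []
    T = t_ambient
    for on in laser_on:
        target = t_ambient + (t_rise if on else 0.0)
        T += (target - T) * dt / tau          # relax toward target temp
        # strain ramps in once T exceeds the transition temperature
        s = max_strain * np.clip((T - t_transition) / 10.0, 0.0, 1.0)
        temps.append(T)
        strains.append(s)
    return np.array(temps), np.array(strains)

# 10 s laser pulse followed by 10 s of cooling: a reversible shape change
steps = np.arange(0, 20, 0.01)
temps, strains = simulate(steps < 10.0)
print(f"peak strain {strains.max():.2f}, final strain {strains[-1]:.2f}")
```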

In the study, researchers used a computer vision system, involving cameras and a tracking software, to control the movement of a printed cylinder.

The tracking system monitored the position of the rolling cylinder and continuously adjusted the position of the laser to raster the edge of the cylinder. This continuous tracking and adjustment allowed the cylinder to maintain its rolling motion in a controlled manner.

By leveraging computer vision with the photothermal actuation of the cylinder, the researchers achieved a sophisticated level of manipulation of the soft machine’s movement, showcasing the potential for advanced control systems in the field of soft robotics and soft machines.
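Conceptually, that feedback loop is simple, even though the hardware is not. The sketch below shows its structure with hypothetical Camera and Laser stand-ins; the LLNL team’s actual vision and laser-steering code is not described in this article, so every interface here is an assumption.

```python
import time
from dataclasses import dataclass

@dataclass
class Detection:
    x: float          # cylinder centroid (pixels)
    radius: float

class Camera:                      # stand-in for the vision system
    def detect_cylinder(self) -> Detection:
        return Detection(x=320.0, radius=25.0)

class Laser:                       # stand-in for the steerable laser
    def raster_at(self, x: float) -> None:
        print(f"rastering laser at x={x:.1f}px")

def control_loop(camera: Camera, laser: Laser, n_steps: int = 5) -> None:
    """Track the cylinder and keep the laser rastering its trailing edge."""
    for _ in range(n_steps):
        det = camera.detect_cylinder()
        # heating the edge behind the centroid biases the roll direction
        laser.raster_at(det.x - det.radius)
        time.sleep(0.05)           # ~20 Hz update, an assumed rate

control_loop(Camera(), Laser())
```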

The team also showed that responsivity could be controlled so the soft machines could perform useful tasks, such as a moving cylinder carrying a wire.

“[Lead author Ford] did some awesome work in using computer vision to control the locomotion of the printed cylinder and using a rastering laser to force it to move,” said co-author Elaine Lee.

“But once you start to get into much more complex motion—like using various rastering speeds and light intensities on a printed lattice, causing it to move in various different modes—those were actually outside of what our high performance computing (HPC) simulations were able to predict, because those codes are expecting a uniform heating or stimuli on that lattice.

“So, using computer vision and machine learning to learn the actuation speeds, and what doses of light can cause locomotion from that printed architecture, will push us a lot further in understanding how our materials will respond.”

New control systems and computer algorithms also could enable soft machines to move and interact with their environment in a more intelligent and autonomous way, they said.

Sentient materials

Cook said the team is looking at incorporating responses to other types of stimuli beyond heat and light, such as humidity and energy absorption, as well as conditions the material might experience in space.

The team is looking at starting a new Strategic Initiative at the Lab to focus on autonomous materials and “move the needle” towards sentient materials that can sense, respond, be programmed, learn, decide and communicate.

“These liquid crystal elastomers are responsive materials — they’re able to sense a stimulus and respond, and will respond repeatedly every time — but they don’t have a sense of memory or a way to learn from repeated stimuli and respond accordingly,” Cook said. “These are really the materials that we’re striving towards, and this might be a five- to 10-year timespan of effort.”

Funding for the research came from the Laboratory Directed Research and Development (LDRD) project “Shape Changing of Responsive Elastomer Structures,” and a grant from the National Science Foundation.

Citation: Michael J. Ford et al. Movement with light: Photoresponsive shape morphing of printed liquid crystal elastomers. Matter, February 2, 2024. https://www.cell.com/matter/abstract/S2590


Fusion research experiments yield new high-energy record

The Joint European Torus (JET), one of the world’s largest and most powerful fusion machines, has demonstrated the ability to reliably generate fusion energy and has set a world record in energy output—a significant milestone in the field of fusion science and engineering.

In JET’s final deuterium-tritium experiments (DTE3), high fusion power was consistently produced for 5 seconds, resulting in a ground-breaking record of 69 megajoules, using a mere 0.2 milligrams of fuel.
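Some back-of-envelope arithmetic, computed here from the figures above rather than reported by UKAEA, shows what those numbers imply:

```python
# Derived quantities only: 69 MJ over 5 s from 0.2 mg of deuterium-tritium.
energy_j = 69e6          # 69 megajoules
duration_s = 5.0
fuel_kg = 0.2e-6         # 0.2 milligrams

avg_power_w = energy_j / duration_s            # sustained fusion power
specific_energy = energy_j / fuel_kg           # J per kg of fuel
gasoline_j_per_kg = 46e6                       # typical chemical fuel

print(f"average fusion power: {avg_power_w/1e6:.1f} MW")
print(f"energy per kg of fuel: {specific_energy:.2e} J/kg "
      f"(~{specific_energy/gasoline_j_per_kg:,.0f}x gasoline)")
```

That works out to roughly 13.8 MW of average fusion power, and millions of times the energy density of a chemical fuel per kilogram consumed.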

10 times hotter than the core of the Sun

JET is a “tokamak”—a design that uses powerful magnetic fields to confine a plasma in the shape of a doughnut. Fusion, the process that powers stars like our Sun, promises a clean baseload source of heat and electricity for the long term, using small amounts of fuel that can be sourced worldwide from inexpensive materials.

When a mix of two forms of hydrogen (deuterium and tritium) is heated to form a controlled plasma at extreme temperatures—10 times hotter than the core of the Sun—they fuse together to create helium and release energy that can be harnessed to produce electricity.

Sustainable fusion energy

More than 300 scientists and engineers from EUROfusion—a consortium of researchers across Europe—contributed to these landmark experiments at the UK Atomic Energy Authority (UKAEA) site in Oxford. The results solidify JET’s pivotal role in advancing safe, low-carbon, and sustainable fusion energy, according to the researchers.

More information: https://www.gov.uk/ukaea. Social Media: @UKAEAofficial


South Pole Telescope scientists publish new observations of the afterglow of the Big Bang

For more than five years, scientists at the South Pole Telescope in Antarctica have been observing the sky with an upgraded camera, picking up remnant light from the universe’s early formation—an afterglow of the Big Bang. They have now analyzed an initial batch of data, and published details in the journal Physical Review D.

The telescope at the Amundsen-Scott South Pole Station, operated by the National Science Foundation, received a new camera known as SPT-3G in 2017. Equipped with 16,000 detectors—10 times more than its predecessor—the SPT-3G is central to multi-institutional research led in part by the U.S. Department of Energy’s (DOE) Argonne National Laboratory.

Afterglow of the Big Bang

The goal: measure faint light known as the cosmic microwave background (CMB)—the afterglow of the Big Bang, when the universe burst forth from a single point of energy nearly 14 billion years ago.

“The CMB is a treasure map for cosmologists,” said Zhaodi Pan, the paper’s lead author and a Maria Goeppert Mayer fellow at Argonne. “Its minuscule variations in temperature and polarization provide a unique window into the universe’s infancy.”

Gravitational lensing—warping of the fabric of space-time

The paper in Physical Review D offers the first CMB gravitational lensing measurements from the SPT-3G. Gravitational lensing happens when the universe’s vast web of matter distorts the CMB as it travels across space. Albert Einstein described this warping in the fabric of space-time in his theory of general relativity.

Measurements of that distortion hold clues about the early universe and mysteries like dark matter, an invisible component of the cosmos. “Dark matter is tricky to detect, because it doesn’t interact with light or other forms of electromagnetic radiation,” said Pan. “Currently, we can only observe it through gravitational interactions.”
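To make the geometry concrete, here is a toy flat-sky illustration of the forward effect that lensing analyses like this one must invert: the observed temperature map is the primordial map remapped by the gradient of a lensing potential, T_obs(x) = T(x + grad phi). The fields below are random stand-ins, not SPT-3G data.

```python
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(1)
n = 256
T_unlensed = rng.standard_normal((n, n))        # toy CMB temperature map

# toy lensing potential: white noise smoothed to keep only large scales
kx = np.fft.fftfreq(n)[None, :]
ky = np.fft.fftfreq(n)[:, None]
phi = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n)))
                           * np.exp(-(kx**2 + ky**2) * 5e3)))
phi *= 2.0 / phi.std()                          # deflections of a few pixels

dy, dx = np.gradient(phi)                       # deflection field = grad(phi)
yy, xx = np.mgrid[0:n, 0:n].astype(float)
T_lensed = map_coordinates(T_unlensed, [yy + dy, xx + dx],
                           order=1, mode="wrap")
print("mean |lensed - unlensed|:", np.abs(T_lensed - T_unlensed).mean())
```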

Ideal location

The dry, stable atmosphere and remote location of the South Pole Telescope minimize interference when hunting for CMB patterns. The team used a dedicated cluster of computers at the Argonne Laboratory Computing Resource Center to run some of the calculations for the research.

This work was funded by the National Science Foundation’s Office of Polar Programs and the DOE Office of Science’s High Energy Physics program.

Citation: Z. Pan et al. Measurement of gravitational lensing of the cosmic microwave background using SPT-3G 2018 data. Phys. Rev. D 108, 122005, 12 December 2023. https://doi.org/10.1103/PhysRevD.108.122005 (open access)


Inexpensive carbon-neutral biofuels finally possible

Introducing a simple renewable chemical to the pretreatment step can finally make next-generation biofuel production both cost-effective and carbon neutral, a new study finds.

The Department of Energy’s Bioenergy Technologies Office has awarded researchers a $2 million grant to build a small-scale pilot plant at the University of California, Riverside. It could lead to larger-scale investment in the technology at a time when burning fossil fuels continues to drive global warming.

The secret: using lignin from plant cell walls

For biofuels to compete with petroleum, biorefinery operations must be designed to better utilize lignin, one of the main components of plant cell walls. Lignin gives plants structural integrity and resilience against microbial attacks. However, these same properties make lignin difficult to extract from plant matter, also known as biomass, and put to use.

A billion tons per year of biomass could replace 30% of our petroleum consumption

To overcome the lignin hurdle, UC Riverside Associate Research Professor Charles Cai invented CELF (co-solvent enhanced lignocellulosic fractionation), an innovative biomass pretreatment technology. “CELF uses tetrahydrofuran (THF) to supplement water and dilute acid during biomass pretreatment. It improves overall efficiency and adds lignin extraction capabilities,” Cai said. “Best of all, THF itself can be made from biomass sugars.”

An Energy & Environmental Science paper details the degree to which a CELF biorefinery offers economic and environmental benefits over both petroleum-based fuels and earlier biofuel production methods.

The paper is a collaboration between Cai’s research team at UCR, the Center for Bioenergy Innovation managed by Oak Ridge National Laboratory, and the National Renewable Energy Laboratory, with funding provided by the U.S. Department of Energy’s Office of Science.

Non-edible plant biomass as feedstocks

First-generation biofuel operations use food crops like corn, soy, and sugarcane as raw materials, or feedstocks. Because these feedstocks divert land and water away from food production, using them for biofuels is not ideal. 

Instead, second-generation operations use non-edible plant biomass as feedstocks, such as wood residues from milling operations, sugarcane bagasse, or corn stover, all of which are abundant low-cost byproducts of forestry and agricultural operations. 

According to the Department of Energy, up to a billion tons per year of biomass could be made available for the manufacture of biofuels and bioproducts in the US alone, capable of displacing 30% of our petroleum consumption while also creating new domestic jobs. 

Because a CELF biorefinery can more fully utilize plant matter than earlier second-generation methods, the researchers found that a heavier, denser feedstock like hardwood poplar is preferable over less carbon-dense corn stover for yielding greater economic and environmental benefits. 

A break-even $3.15 per gallon of gasoline equivalent

Using poplar in a CELF biorefinery, the researchers demonstrate that sustainable aviation fuel could be made at a break-even price as low as $3.15 per gallon of gasoline equivalent. The current average cost for a gallon of jet fuel in the U.S. is $5.96. 

The U.S. government issues credits for biofuel production in the form of renewable identification number credits, a subsidy meant to bolster domestic biofuel production. The tier of these credits issued for second-generation biofuels, the D3 tier, is typically traded at $1 per gallon or higher. At this price per credit, the paper demonstrates that one can expect a rate of return of over 20% from the operation. 
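A rough illustration, using the headline figures above as toy inputs rather than the study’s actual cash-flow model, shows how the credit shifts the economics:

```python
# Toy numbers around the paper's headline figures, not the study's model.
breakeven_per_gge = 3.15   # break-even price, $/gallon gasoline-equivalent
jet_fuel_price = 5.96      # current average U.S. jet fuel, $/gallon
d3_credit = 1.00           # D3 renewable identification number credit, $/gal

effective_cost = breakeven_per_gge - d3_credit   # producer's net break-even
margin_vs_jet = jet_fuel_price - effective_cost
print(f"net break-even with credit: ${effective_cost:.2f}/gal")
print(f"margin vs. average jet fuel: ${margin_vs_jet:.2f}/gal")
```

Even before the credit, the break-even price sits well under the average jet-fuel price; the credit widens that margin, which is what drives the projected rate of return above 20%.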

“Spending a little more for a more carbon-rich feedstock like poplar still yields more economic benefits than a cheaper feedstock like corn stover, because you can make more fuel and chemicals from it,” Cai said.

The paper also illustrates how lignin utilization can positively contribute to overall biorefinery economics while keeping the carbon footprint as low as possible. In older biorefinery models, where biomass is cooked in water and acid, the lignin is mostly unusable for more than its heating value. 

“The older models would elect to burn the lignin to supplement heat and energy for these biorefineries because they could mostly only leverage the sugars in the biomass—a costly proposition that leaves a lot of value off the table,” said Cai. 

Renewable chemicals

In addition to better lignin utilization, the CELF biorefinery model proposes to produce renewable chemicals. These chemicals could be used as building blocks for bioplastics and for food and drink flavoring compounds. They also lock up some of the carbon in the plant biomass so it is not released back into the atmosphere as CO2.

“Adding THF helps reduce the energy cost of pretreatment and helps isolate lignin, so you wouldn’t have to burn it anymore. On top of that, we can make renewable chemicals that help us achieve a near-zero global warming potential,” Cai said. “I think this moves the needle from Gen 2 biofuels to Gen 2+.”

Citation: Bruno Colling Klein, Brent Scheidemantle, Rebecca J. Hanes, Andrew W. Bartling, Nicholas J. Grundl, Robin J. Clark, Mary J. Biddy, Ling Tao, Cong T. Trinh, Adam M. Guss, Charles E. Wyman, Arthur J. Ragauskas, Erin G. Webb, Brian H. Davison and Charles M. Cai. Economics and global warming potential of a commercial-scale delignifying biorefinery based on co-solvent enhanced lignocellulosic fractionation to produce alcohols, sustainable aviation fuels, and co-products from biomass. Energy & Environmental Science, Issue 3, 07 February 2024. https://pubs.rsc.org/en/content/articlelanding/2024/ee/d3ee02532b (open access)


First 3D-printed brain tissue to grow and function like typical brain tissue

University of Wisconsin–Madison scientists have developed the first 3D-printed brain tissue that can grow and function like typical brain tissue.

“This could be a hugely powerful model to help us understand how brain cells and parts of the brain communicate in humans,” says Su-Chun Zhang, professor of neuroscience and neurology at UW–Madison’s Waisman Center. “It could change the way we look at stem cell biology, neuroscience, and the pathogenesis of many neurological and psychiatric disorders.

“It’s an achievement with important implications for scientists studying the brain and working on treatments for a broad range of neurological and neurodevelopmental disorders, such as Alzheimer’s and Parkinson’s disease.”

Previous attempts to print brain tissue have had limited success, according to Zhang and Yuanwei Yan, a scientist in Zhang’s lab. The group described its new 3D-printing process today in the journal Cell Stem Cell (open access).

Horizontal printing, grown from induced pluripotent stem cells

Instead of using the traditional 3D-printing approach—stacking layers vertically—the researchers went horizontally. They also situated neurons, grown from induced pluripotent stem cells, in a softer “bio-ink” gel than previous attempts had employed.

“The tissue still has enough structure to hold together but it is soft enough to allow the neurons to grow into each other and start talking to each other,” Zhang says. The cells are laid next to each other like pencils on a tabletop.

“Our tissue stays relatively thin and this makes it easy for the neurons to get enough oxygen and enough nutrients from the growth media,” Yan says.

Forming networks comparable to human brains

The printed cells reach through the medium to form connections inside each printed layer as well as across layers, forming networks comparable to those in human brains. The neurons communicate, send signals, interact with each other through neurotransmitters, and even form proper networks with support cells that were added to the printed tissue.

“We printed the cerebral cortex and the striatum and what we found was quite striking,” Zhang says. “Even when we printed different cells belonging to different parts of the brain, they were still able to talk to each other in a very special and specific way.”

The printing technique offers precision control over the types and arrangement of cells, which is not found in brain organoids (miniature organs used to study brains, which grow with less organization and control).

“Our lab is very special in that we are able to produce pretty much any type of neurons at any time. Then we can piece them together at almost any time and in whatever way we like,” Zhang says. “Because we can print the tissue by design, we can have a defined system to look at how our human brain network operates. We can look very specifically at how the nerve cells talk to each other under certain conditions because we can print exactly what we want.”

Studying brain diseases

That specificity provides flexibility. The printed brain tissue could be used to study signaling between cells in Down syndrome, to examine interactions between healthy tissue and neighboring tissue affected by Alzheimer’s, to test new drug candidates, or even to watch the brain grow.

“In the past, we have often looked at one thing at a time, which means we often miss some critical components. Our brain operates in networks. We want to print brain tissue this way because cells do not operate by themselves. They talk to each other. This is how our brain works and it has to be studied all together like this to truly understand it,” Zhang says.

“Our brain tissue could be used to study almost every major aspect of what many people at the Waisman Center are working on. It can be used to look at the molecular mechanisms underlying brain development, human development, developmental disabilities, neurodegenerative disorders, and more.”

Accessible to other labs

The new printing technique should also be accessible to many labs. It does not require special bio-printing equipment or culturing methods to keep the tissue healthy, and can be studied in depth with microscopes, standard imaging techniques and electrodes already common in the field.

The researchers say they would like to explore the potential of specialization, further improving their bio-ink and refining their equipment to allow for specific orientations of cells within their printed tissue.

“Right now, our printer is a benchtop commercialized one,” Yan says. “We can make some specialized improvements to help us print specific types of brain tissue on-demand.”

This study was supported in part by NIH-NINDS (NS096282, NS076352, NS086604), NICHD (HD106197, HD090256), the National Medical Research Council of Singapore (MOH-000212, MOH-000207), Ministry of Education of Singapore (MOE2018-T2-2-103), Aligning Science Across Parkinson’s (ASAP-000301), the Bleser Family Foundation, and the Busta Foundation.

Citation: Yuanwei Yan et al. 3D bioprinting of human neural tissues with functional connectivity. Cell Stem Cell. https://doi.org/10.1016/j.stem.2023.12.009 (open-access)


Study discovers neurons in the human brain that can predict what you are going to say and help you say it

By using advanced brain recording techniques, a new study demonstrates how neurons in the human brain work together to allow people to think about what words they want to say and then produce them aloud through speech.

These findings, led by researchers from Massachusetts General Hospital (MGH), provide a detailed map of how speech sounds such as consonants and vowels are represented in the brain well before they are even spoken and how they are strung together during language production.

Treatment for speech and language disorders

The work, published in the journal Nature, reveals insights into the brain’s neurons that enable language production, and could lead to improvements in the understanding and treatment of speech and language disorders.

“Although speaking usually seems easy, our brains perform many complex cognitive steps in the production of natural speech—including coming up with the words we want to say, planning the articulatory movements and producing our intended vocalizations,” says senior author Ziv Williams, MD, an associate professor in Neurosurgery at MGH and Harvard Medical School.

The researchers used a cutting-edge technology called Neuropixels, using probes to record the activities of single neurons in the prefrontal cortex. Williams and his colleagues identified cells that are involved in language production and that may underlie the ability to speak. They also found that there are separate groups of neurons in the brain dedicated to speaking and listening.

By recording from individual neurons, the researchers found that certain cells represent specific speech sounds, or phonemes, becoming active before each phoneme is spoken aloud. Other neurons reflected more complex aspects of word construction, such as the specific assembly of phonemes into syllables.

Artificial prosthetics or brain-machine interfaces

With their technology, the scientists can predict what combination of consonants and vowels will be produced before the words are actually spoken. This capability could be leveraged to build artificial prosthetics or brain-machine interfaces capable of producing synthetic speech, which could benefit a range of patients.
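As a rough sketch of what such decoding involves, the toy example below trains a classifier to predict an upcoming phoneme from pre-speech spike counts. The synthetic data, neuron count and model choice are illustrative assumptions; the study’s Neuropixels recordings and analyses are far richer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_neurons, n_phonemes = 600, 120, 8

# synthetic tuning: each phoneme gets its own mean firing pattern,
# standing in for phoneme-selective cells in the prefrontal cortex
tuning = rng.normal(0, 1, (n_phonemes, n_neurons))
labels = rng.integers(0, n_phonemes, n_trials)
spike_counts = rng.poisson(np.exp(tuning[labels] * 0.5 + 1.0))

X_train, X_test, y_train, y_test = train_test_split(
    spike_counts, labels, test_size=0.25, random_state=0)
decoder = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(f"phoneme decoding accuracy: {decoder.score(X_test, y_test):.2%} "
      f"(chance = {1/n_phonemes:.2%})")
```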

The researchers hope to expand on their work by studying more complex language processes that will allow them to investigate questions related to how people choose the words that they intend to say and how the brain assembles words into sentences that convey an individual’s thoughts and feelings to others.

This work was supported by the National Institutes of Health.

Citation: Khanna, A.R., Muñoz, W., Kim, Y.J. et al. Single-neuronal elements of speech production in humans. Nature (2024). https://doi.org/10.1038/s41586-023-06982-w


New skin patch could help you control a robotic exoskeleton

A wearable, stretchy patch could help people with mobility issues move robotic arms or legs or could assist doctors in diagnosing neurological illnesses.

The size of a Band-Aid, the patch sticks to your skin and picks up tiny electric signals from human muscles. In lab experiments, researchers at the Korea Advanced Institute of Science and Technology (KAIST) and the University of Colorado Boulder showed that humans could use these devices to efficiently operate robotic exoskeletons (machines that mimic, and even enhance, the power of human muscles and bones). They described the new “microneedle adhesive patch” (SNAP) this month in the journal Science Advances (open access).

Microneedles

SNAP is an array of about 144 “microneedles” made of silicon coated with gold, each less than a hundredth of an inch long and hard to see with the naked eye. The microneedles penetrate only the top layer of skin and aren’t long enough to reach the body’s pain sensors.

Every time you bend your arm, twist your back or even twitch a finger, currents run along your muscle fibers. Doctors typically monitor these electromyography (EMG) signals using gel electrodes that stick onto your skin. The problem: the gel dries up over time and the electrodes often slide around, resulting in poor data. 

A better EMG sensor

The researchers set out to design an EMG sensor that could function almost like a part of your body. The team’s SNAP devices are self-contained machines made of a stretchy polymer base. They incorporate stretchable serpentine wires fabricated out of ultrathin metal.

To test the devices, the researchers ran a series of experiments in their lab in which they asked people to take on an everyday task: lifting a heavy weight from the floor. In this case, the humans had a little help. They strapped on a machine that looks a bit like a knapsack and provides a robotic boost for the lower back.

Less muscle power needed

Some of the subjects also wore SNAP devices just above their glute muscles. When the patches detected that the subjects were flexing their muscles during lifting, the devices sent a wireless signal to the robotic backpacks to begin moving. Humans wearing the patches, the team reported, used an average of 18% less muscle power while lifting than subjects who were using the robotic exoskeleton on its own.
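The trigger logic can be sketched simply: rectify and smooth the EMG stream into an envelope, then signal the exoskeleton when the envelope crosses an activation threshold. The threshold, sampling rate and the send_assist_command stub below are assumptions for illustration, not the SNAP firmware.

```python
import numpy as np

def emg_envelope(signal: np.ndarray, window: int = 50) -> np.ndarray:
    """Moving average of the rectified signal (a simple linear envelope)."""
    rectified = np.abs(signal)
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def send_assist_command() -> None:           # stand-in for the wireless link
    print("exoskeleton: assist ON")

# synthetic 2-second stream at 1 kHz: quiet baseline, then a strong flex
rng = np.random.default_rng(0)
emg = rng.normal(0, 0.05, 2000)
emg[1000:1600] += rng.normal(0, 0.5, 600)    # muscle activation burst

envelope = emg_envelope(emg)
THRESHOLD = 0.2                              # assumed activation level
if np.any(envelope > THRESHOLD):
    onset_ms = int(np.argmax(envelope > THRESHOLD))
    send_assist_command()
    print(f"flex detected at ~{onset_ms} ms")
```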

The researchers still have a lot of work to do before their patches make it into the real world. For a start, they need to test the tools with other kinds of exoskeleton machines.

Citation: Kim, H., Lee, J., Heo, U., Jayashankar, D. K., Agno, C., Kim, Y., Kim, C. Y., Oh, Y., Byun, H., Choi, B., Jeong, H., Yeo, H., Li, Z., Park, S., Xiao, J., Kim, J., & Jeong, W. (2024). Skin preparation–free, stretchable microneedle adhesive patches for reliable electrophysiological sensing and exoskeleton robot control. Science Advances. https://www.science.org/doi/10.1126/sciadv.adk5260 (open access)


Musk announces first neural implant in a patient

“The first human received an implant from @Neuralink yesterday and is recovering well. Initial results show promising neuron spike detection,” Elon Musk said Monday on X.

So how significant is this news?

“Whilst there are many companies working on exciting products, there are only a few other companies who have implanted their devices in humans, so Neuralink has joined a rather small group,” said Anne Vanhoestenberghe, Professor of Active Implantable Medical Devices at King’s College London, writing in “Expert reaction to Elon Musk reporting Neuralink has implanted wireless brain chip in a human,” published by the Science Media Centre on Jan. 30.

“I expect Neuralink will want to give the participant time to recover before they start training their system with the participant,” Vanhoestenberghe said. “We know Elon Musk is very adept at generating publicity for his company, so we may expect announcements as soon as they begin testing, although true success in my mind should be evaluated in the long-term, by how stable the interface is over time, and how much it benefits the participant.”

Remote control by just thinking

“This study involves placing a small, cosmetically invisible implant in a part of the brain that plans movements,” Neuralink’s website reads. “The device is designed to interpret a person’s neural activity, so they can operate a computer or smartphone by simply intending to move—no wires or physical movement are required.

“It enables control of your phone or computer, and through them almost any device, just by thinking. Initial users will be those who have lost the use of their limbs. Imagine if Stephen Hawking could communicate faster than a speed typist or auctioneer. That is the goal.”

Telepathy

Musk nicknamed the implant “Telepathy,” leading to some interesting speculations on X. Musk may have been inspired by an idea presented at TED 2014 by Ray Kurzweil: “We’ll have nanobots that… connect our neocortex to a synthetic neocortex in the cloud… Our thinking will be a … biological and non-biological hybrid.”

Kurzweil’s imaginative idea was explored in a 2019 paper, Human Brain/Cloud Interface (188,575 views), which focused on possible future “neuralnanorobotics” technologies. (“Full disclosure: I was co-lead author of that paper.”—Amara Angelica.) 

Meanwhile, Neuralink says its “first clinical trial is open to recruitment … for people with limited or no use of both hands due to a cervical spinal cord injury or to amyotrophic lateral sclerosis, a neurological disorder that affects nerve cells.”


Should your doctor use AI?

In an article published today in the Annals of Internal Medicine, authors at Stanford University suggest that large language models (LLMs, like ChatGPT) could help with a range of tasks in medicine.

These include administrative tasks, such as summarizing medical notes and aiding documentation; tasks related to augmenting knowledge, like answering diagnostic questions and questions about medical management; and tasks related to education.

Pitfalls

However, the authors also warn of potential pitfalls, including a lack of HIPAA adherence, inherent biases, lack of personalization, and possible ethical concerns related to text generation.

The authors also suggest checks and balances: for example, always having a human being in the loop, and using AI tools to augment work tasks, rather than replace them. In addition, the authors highlight active research areas in the field that promise to improve LLMs’ usability in health care contexts.
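As a minimal sketch of that human-in-the-loop pattern, the example below has an LLM draft a note summary that is only filed after a clinician signs off. The OpenAI client is one possible backend, the model name is an assumption, and running anything like this on real patient data would require HIPAA-compliant infrastructure that the sketch does not provide.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_summary(note: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",               # assumed model name
        messages=[
            {"role": "system",
             "content": "Summarize this clinical note in three sentences. "
                        "Do not add information not present in the note."},
            {"role": "user", "content": note},
        ],
    )
    return resp.choices[0].message.content

def file_summary(note: str) -> str | None:
    draft = draft_summary(note)
    print("DRAFT (not yet filed):\n", draft)
    # the clinician, not the model, makes the final call
    approved = input("Clinician sign-off? [y/N] ").strip().lower() == "y"
    return draft if approved else None
```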

Citation: Jesutofunmi A. Omiye, MD, MS, Haiwen Gui, BS, Shawheen J. Rezaei, MPhil, James Zou, PhD, and Roxana Daneshjou, MD, PhD. Large Language Models in Medicine: The Potentials and Pitfalls. Annals of Internal Medicine. https://doi.org/10.7326/M23-2772
