AI model finds cancer clues at lightning speed

Researchers at the University of Gothenburg have developed an AI model that’s faster and better at detecting cancer than the current semi-manual method.

Glycans (structures of sugar molecules in cells) can be measured by mass spectrometry to detect cancer. However, the data must be carefully analyzed by one of the few experts in the world, a process that can take days for a single sample.

Detecting cancer in seconds

Researchers at the University of Gothenburg have now developed an AI model named “Candycrunch” that performs the same analysis in just a few seconds per test, as reported in the journal Nature Methods.

“The AI model was trained using a database of more than 500,000 examples of different fragmentations and associated structures of sugar molecules,” said Daniel Bojar, Associate Senior Lecturer in Bioinformatics at the University of Gothenburg, in a statement. The model identified the correct structure in 90 percent of cases.
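As a rough illustration of the underlying idea (not the Candycrunch architecture itself, which is a deep neural network trained on hundreds of thousands of spectra), glycan identification can be framed as matching a fragment spectrum against known fragmentation patterns. The sketch below uses invented peak values: it bins a spectrum into a fixed-length intensity vector and looks up the closest reference by cosine similarity.

```python
# Hypothetical sketch: glycan identification as spectrum matching.
# All peak values and structure names below are invented for illustration.
import numpy as np

def bin_spectrum(peaks, n_bins=100, max_mz=2000.0):
    """Convert a list of (m/z, intensity) peaks into a fixed-length vector."""
    vec = np.zeros(n_bins)
    for mz, intensity in peaks:
        idx = min(int(mz / max_mz * n_bins), n_bins - 1)
        vec[idx] += intensity
    return vec

# Two toy "fragmentation patterns" standing in for known glycan structures.
library = {
    "structure_A": bin_spectrum([(204.1, 1.0), (366.1, 0.8)]),
    "structure_B": bin_spectrum([(292.1, 1.0), (657.2, 0.5)]),
}

def identify(peaks):
    """Nearest-neighbor lookup against the reference library (cosine similarity)."""
    query = bin_spectrum(peaks)
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(library, key=lambda name: cosine(query, library[name]))

print(identify([(204.1, 0.9), (366.1, 0.7)]))  # → structure_A
```

A deep model like Candycrunch goes further by predicting structures it has never seen, rather than looking them up, which is what makes the speedup over expert manual analysis possible.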

Detects biomarkers missed by human analyses

That means the AI model could soon reach the same levels of accuracy as the sequencing of other biological sequences, such as DNA, RNA or proteins. Because it is both fast and accurate, the model could also accelerate the discovery of glycan-based biomarkers for cancer diagnosis and prognosis.

The Candycrunch model is also able to identify low concentrations of biomarkers, which are often missed by human analyses.

Citation: Urban, J., Jin, C., Thomsson, K. A., Karlsson, N. G., Ives, C. M., Fadda, E., & Bojar, D. (2024). Predicting glycan structure from tandem mass spectrometry via deep learning. Nature Methods, 1-10. https://doi.org/10.1038/s41592-024-02314-6 (open access)

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Soft, stretchy wearable electrode simulates touch sensations for VR, other uses

A team of researchers led by the University of California San Diego has developed a soft, stretchy electronic device capable of simulating the feeling of pressure or vibration when worn on the skin, for use in virtual reality, medical prosthetics and wearable technology.

Worn like a sticker on the fingertip or forearm

The idea is to create a wearable system that can deliver haptic (feeling) touch sensations (pressure or vibration) using electrical signals, but without causing pain for the wearer. (Existing technologies that recreate a sense of touch through electrical stimulation often induce pain due to the use of rigid metal electrodes, and do not conform well to the skin.)

This device, reported in Science Robotics, consists of a soft, stretchable electrode (made from a conductive polymer based on PEDOT:PSS and PPEGMEA) attached to a newly designed silicone patch that can be worn like a sticker on the fingertip or forearm. The electrode, which is in contact with the skin and connected to an external power source via wires, delivers the sensation as either pressure or vibration.
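The study's title mentions "psychophysical thresholding": the standard family of procedures for finding a stimulation level each wearer can feel but that stays below pain. As a generic illustration (not the authors' actual protocol), the sketch below runs a simple up-down staircase against a simulated subject with a hidden sensory threshold.

```python
# Hypothetical sketch of a psychophysical up-down staircase procedure.
# The "subject" is simulated by a callback with a fixed hidden threshold.
def staircase(responds, start=1.0, step=0.5, reversals_needed=6):
    """Decrease the level after 'felt', increase after 'not felt';
    halve the step at each direction reversal, then average the
    levels at which reversals occurred to estimate the threshold."""
    level, last, reversal_levels = start, None, []
    while len(reversal_levels) < reversals_needed:
        felt = responds(level)
        direction = -1 if felt else +1
        if last is not None and direction != last:
            reversal_levels.append(level)
            step /= 2  # refine the step after each reversal
        last = direction
        level = max(level + direction * step, 0.0)
    return sum(reversal_levels) / len(reversal_levels)

# Simulated wearer with a hidden threshold of 0.62 (arbitrary units).
est = staircase(lambda v: v >= 0.62)
print(round(est, 2))
```

The staircase converges near the hidden threshold; a real device would then set its stimulation amplitude between this sensory threshold and the wearer's pain threshold.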

The work was supported by the National Science Foundation Disability and Rehabilitation Engineering program.

Citation: Blau, R., et al. (2024). Conductive block copolymer elastomers and psychophysical thresholding for accurate haptic effects. Science Robotics. DOI: 10.1126/scirobotics.adk3925

First realistic robot faces

University of Tokyo researchers have found a way to bind engineered skin tissue to the complex forms of humanoid robots, taking inspiration from human skin ligaments.

In addition to a more lifelike appearance, potential benefits to robotic platforms include increased mobility, self-healing abilities and embedded sensing capabilities, said Professor Shoji Takeuchi of the University of Tokyo in a statement. The research could also be useful in the cosmetics industry and for training plastic surgeons.

Binding skin to complex structures

Takeuchi is a pioneer in the field of biohybrid robotics, where biology and mechanical engineering meet. So far, his Biohybrid Systems Laboratory has created mini robots that walk using biological muscle tissue, 3D-printed lab-grown meat, and engineered skin that can heal.

“By mimicking human skin-ligament structures and by using specially made V-shaped perforations in solid materials, we found a way to bind the skin to complex structures. The natural flexibility of the skin and the strong method of adhesion mean the skin can move with the mechanical components of the robot without tearing or peeling away,” Takeuchi said.

Any shape of the surface can have living skin applied to it—think of the possibilities

2D facial robot with living-skin smile created by activating anchors (credit ©2024 Takeuchi et al. CC-BY-ND)

Previous methods to attach skin tissue to solid surfaces involved things like mini anchors or hooks, but these limited the kinds of surfaces that could receive skin coatings and could cause damage during motion. By carefully engineering small perforations instead, essentially any surface shape can have skin applied to it.

The trick: a special collagen gel used for adhesion. The gel is naturally viscous, making it difficult to feed into the minuscule perforations. But using plasma treatment, a common technique for plastic adhesion, the team coaxed the collagen into the fine structures of the perforations while also holding the skin close to the surface in question.

Imagine super-self-healing robots with humanlike dexterity and realistic skin

“Manipulating soft, wet biological tissues during the development process is much harder than people outside the field might think,” said Takeuchi. “For instance, if sterility is not maintained, bacteria can enter and the tissue will die. However, now that we can do this, living skin can bring a range of new abilities to robots.

“Self-healing is a big deal. Some chemical-based materials can be made to heal themselves, but they require triggers such as heat, pressure or other signals, and they also do not proliferate like cells. Biological skin repairs minor lacerations as ours does, and nerves and other skin organs can be added for use in sensing and so on.”

Major implications for cosmetic and surgical procedures

“A face-on-a-chip could be useful in research into skin aging, cosmetics, surgical procedures, plastic surgery and more. And if sensors can be embedded, robots may be endowed with better environmental awareness and improved interactive capabilities,” Takeuchi said.

“We believe that creating thicker and more realistic skin can be achieved by incorporating sweat glands, sebaceous glands, pores, blood vessels, fat and nerves. And creating humanlike expressions by integrating sophisticated actuators, or muscles, inside the robot.

“Creating robots that can heal themselves, sense their environment more accurately and perform tasks with humanlike dexterity is incredibly motivating.”

Citation: Kawai, M., Nie, M., Oda, H., & Takeuchi, S. (2024). Perforation-type anchors inspired by skin ligament for the robotic face covered with living skin. Cell Reports Physical Science. https://www.cell.com/cell-reports-physical-science/fulltext/S2666-3864(24)00335-7 (open access)

Ketamine slow-release at-home tablet reduces symptoms of severe depression

A new tablet form of ketamine has shown promise in treating severe depression, offering a take-at-home alternative to existing clinic-based treatments, which can be expensive and inconvenient for some patients.

Professor Paul Glue of the University of Otago and colleagues from other research institutions in Australia and New Zealand ran a randomized controlled trial testing the effectiveness of ketamine tablets to treat depression compared with placebo. The results were published today (June 24) in Nature Medicine.

No tripping required

Unlike injectable and nasal-spray alternatives, the new slow-release tablet form can be taken safely at home without medical supervision, and with negligible side effects. The researchers say the tablet also challenges a common account of how ketamine helps people overcome depression: the psychedelic-assisted therapy model, which holds that profoundly altering brain-circuit function gives patients new insights that help them break out of entrenched ways of thinking.

The new drug requires further research and is not yet approved by the FDA in the US or the TGA in Australia. (We hope our down-under readers keep us advised.)

Citation: Glue, P., Loo, C., Fam, J., Lane, H., Young, A. H., & Surman, P. (2024). Extended-release ketamine tablets for treatment-resistant depression: A randomized placebo-controlled phase 2 trial. Nature Medicine, 1-6. https://doi.org/10.1038/s41591-024-03063-x

Opening the AI black box

AI can sift through hundreds of thousands of genome data points to identify potential new therapeutic targets. But scientists aren’t sure how today’s AI models come to their conclusions in the first place.

Enter SQUID (Surrogate Quantitative Interpretability for Deepnets), a computational tool created by Cold Spring Harbor Laboratory (CSHL) scientists. It’s designed to help interpret how AI models analyze the genome. Compared with other analysis tools, SQUID is more consistent, reduces background noise, and can lead to more accurate predictions about the effects of genetic mutations, said the scientists in a statement.

How it works better

The key, CSHL Assistant Professor Peter Koo says, lies in SQUID’s specialized training. “What we did with SQUID was leverage decades of quantitative genetics knowledge to help us understand what these deep neural networks are learning,” explains Koo. 

“The tools that people use to try to understand these models have been largely coming from other fields like computer vision or natural language processing. While they can be useful, they’re not optimal for genomics.”

100,000 variant DNA sequences

SQUID computational pipeline (credit: Koo and Kinney labs/ Cold Spring Harbor Laboratory)

SQUID first generates a library of more than 100,000 variant DNA sequences. It then analyzes the library of mutations and their effects using a program called MAVE-NN (Multiplex Assays of Variant Effects Neural Network).

This tool allows scientists to perform thousands of virtual experiments simultaneously. In effect, they can “fish out” the algorithms behind a given AI’s most accurate predictions. Their computational “catch” could set the stage for experiments that are more grounded in reality. 
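The surrogate-modeling idea behind this pipeline can be sketched in miniature. The toy version below (synthetic sequences, a stand-in scoring function in place of a real genomic deep net, and an ordinary least-squares fit instead of MAVE-NN) shows the workflow: mutate a sequence many times, score every variant with the black-box model, then fit a simple additive surrogate whose coefficients read off as per-position, per-base effects.

```python
# Hypothetical sketch of surrogate modeling for interpretability.
# The "deep net" here is a stand-in function, not a real genomic model.
import random
import numpy as np

random.seed(0)
BASES = "ACGT"
wild_type = "ACGTACGTAC"

def black_box_score(seq):
    # Stand-in for a deep neural network: rewards 'G' at position 3.
    return 1.0 if seq[3] == "G" else 0.0

def mutate(seq, rate=0.1):
    return "".join(random.choice(BASES) if random.random() < rate else b
                   for b in seq)

# 1) Generate a library of variants (SQUID uses >100,000; we use fewer).
library = [mutate(wild_type) for _ in range(5000)]

# 2) One-hot encode each variant and score it with the black-box model.
def one_hot(seq):
    return np.array([[b == base for base in BASES] for b in seq], float).ravel()

X = np.array([one_hot(s) for s in library])
y = np.array([black_box_score(s) for s in library])

# 3) Fit an additive (linear) surrogate by least squares; its weights are
#    directly interpretable as per-position, per-base effect estimates.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
effects = w.reshape(len(wild_type), len(BASES))

# The largest learned effect should point at position 3, base G.
pos, base = np.unravel_index(np.argmax(effects), effects.shape)
print(pos, BASES[base])  # → 3 G
```

The point of the surrogate is that, unlike the black box, the fitted additive model is small enough to inspect directly, which is what makes the deep net's behavior interpretable.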

Citation: Seitz, E. E., McCandlish, D. M., Kinney, J. B., & Koo, P. K. (2024). Interpreting cis-regulatory mechanisms from genomic deep neural networks using surrogate models. Nature Machine Intelligence, 1-13. https://doi.org/10.1038/s42256-024-00851-5 (open access)

AI model may allow doctors to detect cancer from DNA

Using AI to detect and diagnose cancer in patients may soon allow for earlier treatment, say investigators at Cambridge University and Imperial College London.

The key is in our DNA

Genetic information is encoded in DNA by patterns of the four bases (A, T, G and C) that make up its structure. However, environmental changes outside the cell can cause some DNA bases to be modified by adding a methyl group (this process is called “DNA methylation”).

Each individual cell possesses millions of these DNA methylation marks. Researchers have observed changes to these marks in early cancer development, suggesting they could assist in early diagnosis of cancer. Currently, it's difficult to determine which DNA bases are methylated in cancers, and to what extent, compared to healthy tissue.

So investigators trained an AI model, using a combination of machine and deep learning, to look at the DNA methylation patterns. It identified 13 different cancer types (including breast, liver, lung, and prostate cancers) from non-cancerous tissue with 98.2% accuracy. They found that the model reinforces and enhances understanding of the underlying processes contributing to cancer.
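A minimal sketch of the idea, using entirely synthetic data: each sample is a vector of methylation "beta values" (the fraction methylated at a CpG site, between 0 and 1), and a deliberately simple model separates tumour from healthy tissue using the most differentially methylated sites. The real study applies machine and deep learning over far larger feature sets and 13 cancer types.

```python
# Hypothetical sketch: tissue classification from DNA methylation levels.
# Cohorts, site counts, and effect sizes are all invented.
import numpy as np

rng = np.random.default_rng(0)
n_sites = 20

# Synthetic cohorts: tumour samples are hypermethylated at the first 5 sites.
healthy = rng.beta(2, 8, size=(100, n_sites))   # mostly low beta values
tumour = rng.beta(2, 8, size=(100, n_sites))
tumour[:, :5] = rng.beta(8, 2, size=(100, 5))   # high beta at marker sites

X = np.vstack([healthy, tumour])
y = np.array([0] * 100 + [1] * 100)             # 0 = healthy, 1 = tumour

# A toy "model": average beta value over the sites whose mean methylation
# differs most between the two cohorts.
diff = np.abs(healthy.mean(axis=0) - tumour.mean(axis=0))
marker_sites = np.argsort(diff)[-5:]

def predict(sample, threshold=0.5):
    return int(sample[marker_sites].mean() > threshold)

accuracy = np.mean([predict(x) == label for x, label in zip(X, y)])
print(f"accuracy: {accuracy:.2f}")
```

Even this crude differential-methylation rule separates the synthetic cohorts cleanly; the study's contribution is doing so accurately, and interpretably, on real biopsy data across many cancer types.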

Additional training and testing

However, they say this model only relies on tissue samples (not DNA fragments in the blood), so it would need additional training and testing on a more diverse collection of biopsy samples to be ready for clinical use.

Identifying these unusual methylation patterns may allow healthcare providers to detect cancer early. “Computational methods such as this model, through better training on more varied data and rigorous testing in the clinic, will eventually help doctors with early detection and screening of cancers,” said the paper’s lead author, Shamith Samarajiwa, in a statement.

Citation: Newsham, I., Sendera, M., Jammula, S. G., & Samarajiwa, S. A. (2024). Early detection and diagnosis of cancer with interpretable machine learning to uncover cancer-specific DNA methylation patterns. Biology Methods and Protocols, 9(1). https://doi.org/10.1093/biomethods/bpae028

FDA approves clinical trial to test nanoscale sensors for recording brain activity

The US Food and Drug Administration has approved a clinical trial to test the effectiveness of a new electronic grid that records brain activity during surgery. Developed by engineers at the University of California San Diego, the device has nanoscale sensors that record electrical signals directly from the surface of the human brain in record-breaking detail.

The grid’s breakthrough high resolution could provide better guidance for planning and performing surgeries to remove brain tumors and treat drug-resistant epilepsy and could improve neurosurgeons’ ability to minimize damage to healthy brain tissue, the researchers say. During epilepsy surgery, the novel grid could also improve the ability to precisely identify the regions of the brain where epileptic seizures originate, allowing for safe, effective treatment.

Ultra-thin brain sensor array

The new brain sensor array, known as the platinum nanorod grid (PtNRGrid), features a densely packed grid of a record-breaking 1,024 embedded electrocorticography (ECoG) sensors. The device rests on the surface of the brain and is flexible and approximately 6 microns thick, so it can both adhere and conform to the surface of the brain, bending as the brain moves while providing high-quality, high-resolution recordings of brain activity.

In contrast, the ECoG grids most commonly used in surgeries today typically have between 16 and 64 sensors, are far stiffer, at more than 500 microns thick, and do not conform to the curved surface of the brain.

The PtNRGrid was invented by Shadi Dayeh, a Professor in the Department of Electrical and Computer Engineering at the University of California San Diego and members of his team. The team developed the PtNRGrid technology in collaboration with neurosurgeons and medical researchers from UC San Diego, Massachusetts General Hospital (MGH) and Oregon Health & Science University (OHSU).

Currently, Dayeh’s research group holds the world record for recording brain activity from a single cortical grid with 2,048 channels on the surface of the human brain, published in Science Translational Medicine in 2022.

The clinical trial is designed to demonstrate the effectiveness of the PtNRGrid device to map both normal and pathological brain activity. Surgeons will implant the PtNRGrid in 20 patients, then measure and compare the grid’s performance with the present state-of-the-art. The PtNRGrid will be deployed in surgeries to remove brain tumors and to remove tissue that causes epileptic seizures.

“Our goal is to provide a new atlas for understanding and treating neurological disorders, working with a network of highly experienced clinical collaborators at UC San Diego, MGH, and OHSU,” Dayeh said in a statement.

Record-breaking density

Pending the success of this staged trial, the team will transition to the next crucial step: making the PtNRGrid available for commercial use at scale. Demonstrating that ECoG grids with sensors in the thousands of channels record brain activity with high fidelity also opens new opportunities in neuroscience for uncovering a deeper understanding of how the human brain functions. Basic science advances, in turn, could lead to improved treatments grounded in an enhanced understanding of brain function.

Dayeh’s work toward the FDA approval is supported by an NIH BRAIN Initiative award.

AI model identifies tennis-player affective states

Researchers at Karlsruhe Institute of Technology and the University of Duisburg-Essen have trained an AI model to accurately identify affective states from the body language of tennis players during games.

Their study, published in the journal Knowledge-Based Systems, demonstrates that AI can assess body language and emotions with an accuracy similar to that of humans.

Accuracy comparable to that of human observers

“Our model can identify affective states with an accuracy of up to 68.9 percent, which is comparable and sometimes even superior to assessments made by both human observers and earlier automated methods,” said Professor Darko Jekauc of the Karlsruhe Institute of Technology’s Institute of Sports and Sports Science in a statement.

“The reason [for this accuracy] could be that negative emotions are easier to identify because they’re expressed in more obvious ways,” said Jekauc. “Psychological theories suggest that people are evolutionarily better adapted to perceive negative emotional expressions, for example, because defusing conflict situations quickly is essential to social cohesion.”

Body language clues

The researchers recorded video sequences of 15 tennis players in a specific setting, focusing on the body language displayed when a point was won or lost. The videos showed players with cues including lowered head, arms raised in exultation, hanging racket, or differences in walking speed; these cues could be used to identify the players’ affective states. 

After being fed this data, the AI learned to associate the body language signals with different affective reactions and to determine whether a point had been won (positive body language) or lost (negative body language). “Training in natural contexts is a significant advance for the identification of real emotional states, and it makes predictions possible in real scenarios,” said Jekauc.
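The actual system is a convolutional network operating on video. As a toy stand-in for the learning step, the sketch below trains a logistic-regression classifier on a few invented pose features (head tilt, arm-raise height, walking speed) to separate won from lost points; all numbers are synthetic.

```python
# Hypothetical sketch: classifying won/lost points from body-language cues.
# Features and distributions are invented; the real model is a CNN on video.
import numpy as np

rng = np.random.default_rng(1)

# Per-clip features: [head_tilt_down, arm_raise_height, walking_speed].
# Won points: head up, arms raised, brisk walk; lost points: the opposite.
won = rng.normal([0.1, 0.8, 1.2], 0.15, size=(200, 3))
lost = rng.normal([0.7, 0.1, 0.7], 0.15, size=(200, 3))
X = np.vstack([won, lost])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = won, 0 = lost

# Logistic regression trained with plain gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))      # predicted probability of "won"
    w -= 0.5 * (X.T @ (p - y)) / len(y)     # gradient step on the weights
    b -= 0.5 * np.mean(p - y)               # gradient step on the bias

pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(float)
print("training accuracy:", np.mean(pred == y))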

Uses of reliable emotion recognition

The researchers envision a number of sports applications for reliable emotion recognition, such as improving training methods, team dynamics and performance, and preventing burnout. Other fields, including healthcare, education, customer service and automotive safety, could also benefit from reliable early detection of emotional states.

Citation: Darko Jekauc, Diana Burkart, Julian Fritsch, Marc Hesenius, Ole Meyer, Saquib Sarfraz, Rainer Stiefelhagen. Recognizing affective states from the expressive behavior of tennis players using convolutional neural networks. Knowledge-Based Systems, Vol. 295, 2024. DOI: 10.1016/j.knosys.2024.111856 (open access)

Holographic acoustic device precisely targets diseased neurons in brain

Researchers at Washington University in St. Louis have developed a noninvasive technology to treat human brain diseases, such as Parkinson’s disease, that simultaneously involve damage in various regions of the brain. The holographic acoustic device, combined with genetic engineering, precisely targets affected neurons in selected cell types at multiple diseased brain regions.

Hong Chen, associate professor of biomedical engineering in the McKelvey School of Engineering and of neurosurgery in the School of Medicine, and her team created the noninvasive “AhSonogenetics” device to alter genetically selected neurons in the brains of mice. The results of the study were published June 17 in the journal Proceedings of the National Academy of Sciences.

Multiple technologies

AhSonogenetics combines several of Chen’s group’s recent advances. In 2021, she and her team launched Sonogenetics, a method that uses focused ultrasound to deliver a viral construct containing ultrasound-sensitive ion channels to genetically selected neurons in the brain. They use non-invasive low-intensity focused ultrasound to deliver a small burst of warmth, which opens ion channels and activates the neurons. Chen’s team was the first to show that sonogenetics could modulate the behavior of freely moving mice.

In 2022, the team designed and 3D-printed a flexible and versatile tool known as an Airy beam-enabled binary acoustic metasurface, which allowed them to manipulate ultrasound beams. Chen is currently developing Sonogenetics 2.0, which combines the advantages of ultrasound and genetic engineering to modulate defined neurons noninvasively and precisely in the brains of humans and animals. AhSonogenetics brings these advances together as a potential method for intervening in neurodegenerative diseases.

Sonogenetics gives researchers a way to precisely control neurons in the brain, while Airy-beam technology allows them to bend or steer the sound waves to generate arbitrary beam patterns inside the brain at high spatial resolution.

Treating Parkinson’s disease

Chen’s team tested the technique on a mouse model of Parkinson’s disease. With AhSonogenetics, they were able to stimulate two brain regions simultaneously in a single mouse, eliminating the need for multiple implants or interventions. This stimulation alleviated Parkinson’s-related motor deficits in the mouse model, including slow movements, difficulty walking and freezing behaviors.

The device, which costs roughly $50 to make, can be tailored in size to fit various brain sizes, expanding its potential applications. The design file for the Airy-beam holographic transducer is available on GitHub: https://github.com/ChenUltrasoundLabWUSTL/AiryBeam_Lens_Design. Funding was provided by the National Institutes of Health.

Citation: Hu Z, Yang Y, Gong Y, Chukwu C, Ye D, Yue Y, Yuan J, Kravitz AV, Chen H. Airy-beam holographic sonogenetics for advancing neuromodulation precision and flexibility. Proceedings of the National Academy of Sciences, June 17, 2024. DOI: 10.1073/pnas.2402200121 (open access)

Amber-like polymer allows for long-term storage of DNA and digital files

MIT researchers have developed a glassy, amber-like polymer that can be used for long-term storage of DNA, including entire human genomes or digital files such as photos. 

“The rapid decline in DNA sequencing costs has fueled the demand for nucleic acid collection to unravel genomic information, develop treatments for genetic diseases, and track emerging biological threats,” the researchers say.

Most current methods for storing DNA require expensive freezing temperatures and are not feasible in many parts of the world. The new polymer can store DNA at room temperature while protecting the molecules from damage caused by heat or water. 

“Freezing DNA is the number one way to preserve it, but it’s very expensive, and it’s not scalable,” says James Banal, a former MIT postdoc. “I think our new preservation method is going to be a technology that may drive the future of storing digital information on DNA.”

The researchers showed that they could use this polymer to store DNA sequences and that the DNA can be easily removed from the polymer without damaging it.

Banal and Jeremiah Johnson, the A. Thomas Geurtin Professor of Chemistry at MIT, are the senior authors of the study, published in the Journal of the American Chemical Society.

Capturing DNA

DNA offers a way to store digital information at very high density: a coffee mug full of DNA could, in principle, store all of the world’s data. DNA is also very stable and relatively easy to synthesize and sequence.
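The coffee-mug claim is easy to sanity-check with a back-of-envelope calculation, assuming the theoretical maximum of 2 bits per nucleotide, an average nucleotide molar mass of about 330 g/mol, a dry-DNA density of roughly 1.7 g/cm³, and a 300 mL mug:

```python
# Back-of-envelope estimate of DNA's data density, under stated assumptions.
AVOGADRO = 6.022e23      # molecules per mole
GRAMS_PER_NT = 330.0     # approximate molar mass of one nucleotide (g/mol)
BITS_PER_NT = 2.0        # 4 bases → 2 bits per nucleotide (theoretical max)

nt_per_gram = AVOGADRO / GRAMS_PER_NT
bytes_per_gram = nt_per_gram * BITS_PER_NT / 8

MUG_ML = 300.0           # a typical coffee mug, in millilitres
DNA_DENSITY = 1.7        # g/cm^3, approximate density of dry DNA
mug_capacity_zb = bytes_per_gram * MUG_ML * DNA_DENSITY / 1e21  # zettabytes

print(f"{bytes_per_gram:.2e} bytes per gram, ~{mug_capacity_zb:.0f} ZB per mug")
```

The result, hundreds of exabytes per gram and a couple of hundred zettabytes per mug, is indeed on the order of common estimates of all the world's stored data, though real DNA storage systems add redundancy and so achieve far less than this theoretical ceiling.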

The researchers made their storage medium from styrene and a cross-linker, which together form an amber-like thermoset polymer called cross-linked polystyrene. The thermoset is also very hydrophobic, so it prevents moisture from getting in and damaging the DNA.

“Inspired by the millennia-long preservation of fossilized biological specimens in calcified minerals or glassy amber, we present Thermoset-REinforced Xeropreservation (T-REX): a method for storing DNA in deconstructable glassy polymer networks,” say the researchers.

Storing information

Using these polymers, the researchers showed that they could encapsulate DNA of varying length, from tens of nucleotides up to an entire human genome (roughly three billion base pairs). After storing the DNA and then removing it, the researchers sequenced it and found that no errors had been introduced, which is a critical feature of any digital data storage system.

The researchers also showed that the thermoset polymer can protect DNA from temperatures up to 75 degrees Celsius (167 degrees Fahrenheit). They are now working on ways to streamline the process of making the polymers and forming them into capsules for long-term storage.

Storing genomes

The earliest application they envision is storing genomes for personalized medicine, and they also anticipate that these stored genomes could undergo further analysis as better technology is developed in the future. 

The research was funded by the National Science Foundation.

Citation: Prince, E., Cheng, H. F., Banal, J. L., & Johnson, J. A. (2024). Reversible nucleic acid storage in deconstructable glassy polymer networks. Journal of the American Chemical Society. DOI: 10.1021/jacs.4c01925
