How to drink water from the air

Source, a Scottsdale, Arizona company, has developed an innovative “Hydropanel” device that distills drinking water directly from water vapor in the atmosphere, powered only by sunlight.

After distillation, the water is mineralized for ideal pH and TDS (total dissolved solids), resulting in “safe, premium-quality drinking water,” the company says. Mineralizing the water with essential magnesium and calcium also increases absorption by the body and refines alkalinity and taste.

“Each Hydropanel can produce up to three liters of drinking water a day, about the average daily intake for one person,” according to Cody Friesen, an Associate Professor at the School for Engineering of Matter, Transport and Energy at Arizona State University.
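
For a rough sense of scale, here is a back-of-the-envelope calculation of how many panels a household would need. It assumes the up-to-three-liters figure holds, which real-world sunlight and humidity may not allow:

```python
# Back-of-the-envelope sizing for a Hydropanel array.
# Assumes the "up to 3 liters/day" figure quoted in the article;
# real-world yield varies with sunlight and humidity.
import math

LITERS_PER_PANEL_PER_DAY = 3.0   # upper bound quoted by the company
LITERS_PER_PERSON_PER_DAY = 3.0  # approximate daily drinking-water intake

def panels_needed(people: int, yield_fraction: float = 1.0) -> int:
    """Panels required to cover a household's drinking water.

    yield_fraction scales the ideal output down for cloudy or dry days.
    """
    demand = people * LITERS_PER_PERSON_PER_DAY
    supply_per_panel = LITERS_PER_PANEL_PER_DAY * yield_fraction
    return math.ceil(demand / supply_per_panel)

print(panels_needed(4))                      # ideal conditions -> 4 panels
print(panels_needed(4, yield_fraction=0.5))  # half yield -> 8 panels
```

At half the ideal yield, a family of four would need eight panels rather than four.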

Other larger solar-powered devices for producing water are also commercially available.

The company also plans to offer a can version.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Ants selectively amputate infected limbs of wounded nestmates

Scientists have found that Florida carpenter ants selectively treat the wounded limbs of fellow nestmates by either wound cleaning or amputation to aid in recovery—based on the injury.

“When we’re talking about amputation behavior, this is literally the only case in which a sophisticated and systematic amputation of an individual by another member of its species occurs in the animal kingdom,” says first author Erik Frank (@ETF1989), a behavioral ecologist at the University of Würzburg, in a study published July 2 in the journal Current Biology.

Assessing the type of injury to choose treatment

In a paper published in 2023, it was discovered that a different group of ants, Megaponera analis, use a special gland to inoculate injuries with antimicrobial compounds meant to quell possible infections. What makes Florida carpenter ants (Camponotus floridanus) stand out is that they have no such gland; they appear to use only mechanical means to treat their nestmates.

The researchers found that this mechanical care takes one of two routes: cleaning the wound with just their mouthparts, or cleaning followed by full amputation of the leg. To select a route, the ants appear to assess the type of injury and adjust the treatment accordingly.

“The fact that the ants are able to diagnose a wound, see if it’s infected or sterile, and treat it accordingly over long periods of time by other individuals—the only medical system that can rival that would be the human one,” Frank says.

So how are these ants capable of such precise care?

“When you look at the videos where you have the ant presenting the injured leg and letting the other one bite off completely voluntarily, and then present the newly made wound so another one can finish the cleaning process—this level of innate cooperation to me is quite striking,” Frank says.

Citation: Frank et al. July 02, 2024. Wound-dependent leg amputations to combat infections in an ant society. Current Biology. https://www.cell.com/current-biology/fulltext/S0960-9822(24)00805-4 (open access)

AI model finds cancer clues at lightning speed

Researchers at the University of Gothenburg have developed an AI model that’s faster and better at detecting cancer than the current semi-manual method.

Glycans (structures of sugar molecules in cells) can be measured by mass spectrometry to detect cancer. However, the data must be carefully analyzed by one of the few experts in the world—a process that can take days per sample.

Detecting cancer in seconds

Researchers at the University of Gothenburg have now developed an AI model named “Candycrunch” that does the analysis in just a few seconds per sample, as reported in the journal Nature Methods.

“The AI model was trained using a database of more than 500,000 examples of different fragmentations and associated structures of sugar molecules, and it identifies the correct structure in 90 percent of cases,” said Daniel Bojar, Associate Senior Lecturer in Bioinformatics at the University of Gothenburg, in a statement.
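
The published model is a deep neural network trained on fragmentation spectra. As a rough sketch of the input-output relationship only (not the actual Candycrunch method), here is a toy matcher that compares a measured spectrum against a small reference library by cosine similarity; the spectra and structure names are invented for illustration:

```python
# Toy illustration of spectrum -> structure lookup. The actual Candycrunch
# model is a deep neural network; this cosine-similarity matcher is only a
# stand-in to show the shape of the problem. Spectra and glycan names are
# made up for the example.
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse m/z -> intensity spectra."""
    dot = sum(a[mz] * b.get(mz, 0.0) for mz in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Tiny reference library: known structure -> fragmentation spectrum.
LIBRARY = {
    "Hex3HexNAc2 (core)": {204.1: 0.9, 366.1: 0.7, 528.2: 0.3},
    "Neu5Ac-Hex-HexNAc":  {274.1: 0.8, 292.1: 0.6, 657.2: 0.4},
}

def predict_structure(spectrum: dict) -> str:
    """Return the library structure whose spectrum best matches."""
    return max(LIBRARY, key=lambda name: cosine(spectrum, LIBRARY[name]))

measured = {204.1: 0.85, 366.1: 0.65, 528.2: 0.25}
print(predict_structure(measured))  # -> "Hex3HexNAc2 (core)"
```

A deep model replaces this brittle lookup with learned generalization, which is what lets it handle fragmentations it has never seen verbatim.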

Detects biomarkers missed by human analyses

That means the AI model could soon reach the same levels of accuracy as the sequencing of other biological molecules, such as DNA, RNA or proteins. In addition, the AI model is fast and accurate in its answers, so it can accelerate the discovery of glycan-based biomarkers for both diagnosis and prognosis of cancer.

The Candycrunch model is also able to identify low concentrations of biomarkers, which are often missed by human analyses.

Citation: Urban, J., Jin, C., Thomsson, K. A., Karlsson, N. G., Ives, C. M., Fadda, E., & Bojar, D. (2024). Predicting glycan structure from tandem mass spectrometry via deep learning. Nature Methods, 1-10. https://doi.org/10.1038/s41592-024-02314-6 (open access)

Soft, stretchy wearable electrode simulates touch sensations for VR, other uses

A team of researchers led by the University of California San Diego has developed a soft, stretchy electronic device capable of simulating the feeling of pressure or vibration when worn on the skin, for use in virtual reality, medical prosthetics and wearable technology.

Wearable on a sticker on the fingertip or forearm

The idea is to create a wearable system that can deliver haptic (feeling) touch sensations (pressure or vibration) using electrical signals, but without causing pain for the wearer. (Existing technologies that recreate a sense of touch through electrical stimulation often induce pain due to the use of rigid metal electrodes, and do not conform well to the skin.)

This device, reported in Science Robotics, consists of a soft, stretchable electrode (based on the polymers PEDOT:PSS and PPEGMEA) attached to a newly designed silicone patch that can be worn like a sticker on the fingertip or forearm, delivering either pressure or vibration sensations. The electrode, in contact with the skin, is connected to an external power source via wires.
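
The citation below mentions “psychophysical thresholding,” i.e., calibrating stimulation so that it is felt but not painful. A generic sketch of a textbook 1-up/1-down staircase procedure (not the authors’ actual calibration protocol, and with a simulated wearer) illustrates the idea:

```python
# Generic 1-up/1-down staircase for finding a sensation threshold.
# This is a textbook psychophysics procedure, not the authors' actual
# calibration protocol; the wearer's responses here are simulated.
def staircase(respond, start=1.0, step=0.1, reversals_needed=6):
    """Adjust stimulus amplitude until enough response reversals occur.

    respond(level) -> True if the stimulus was felt at `level`.
    Returns the mean level over the reversal points (threshold estimate).
    """
    level, last_felt = start, None
    reversal_levels = []
    while len(reversal_levels) < reversals_needed:
        felt = respond(level)
        if last_felt is not None and felt != last_felt:
            reversal_levels.append(level)
        last_felt = felt
        # felt -> step down; not felt -> step up
        level += -step if felt else step
    return sum(reversal_levels) / len(reversal_levels)

# Simulated wearer with a true threshold of 0.55 (no response noise).
threshold = staircase(lambda lvl: lvl >= 0.55)
print(round(threshold, 2))  # converges near 0.55
```

In a real calibration, `respond` would come from the wearer pressing a button, and a second staircase could bound the pain threshold from above.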

The work was supported by the National Science Foundation Disability and Rehabilitation Engineering program.

Citation: Blau, R., et al. (2024). Conductive block copolymer elastomers and psychophysical thresholding for accurate haptic effects. Science Robotics. https://doi.org/10.1126/scirobotics.adk3925

First realistic robot faces

University of Tokyo researchers have found a way to bind engineered skin tissue to the complex forms of humanoid robots, taking inspiration from human skin ligaments.

In addition to a more lifelike appearance, potential benefits to robotic platforms include increased mobility, self-healing abilities, and embedded sensing capabilities, according to Professor Shoji Takeuchi of the University of Tokyo in a statement. Their research could also be useful in the cosmetics industry and in helping train plastic surgeons.

Binding skin to complex structures

Takeuchi is a pioneer in the field of biohybrid robotics, where biology and mechanical engineering meet. So far, his Biohybrid Systems Laboratory has created mini robots that walk using biological muscle tissue, 3D-printed lab-grown meat, and engineered skin that can heal.

“By mimicking human skin-ligament structures and by using specially made V-shaped perforations in solid materials, we found a way to bind the skin to complex structures. The natural flexibility of the skin and the strong method of adhesion mean the skin can move with the mechanical components of the robot without tearing or peeling away,” Takeuchi said.

Any surface shape can have living skin applied to it—think of the possibilities

2D facial robot with living-skin smile created by activating anchors (credit: ©2024 Takeuchi et al. CC-BY-ND)

Previous methods to attach skin tissue to solid surfaces involved things like mini anchors or hooks, but these limited the kinds of surfaces that could receive skin coatings and could cause damage during motion. By carefully engineering small perforations instead, essentially any surface shape can have skin applied to it.

The trick: use a special collagen gel for adhesion. The gel is naturally viscous, so it is difficult to feed into the minuscule perforations. But using a common technique for plastic adhesion called plasma treatment, the team coaxed the collagen into the fine structures of the perforations while also holding the skin close to the surface.

Imagine super-self-healing robots with humanlike dexterity and realistic skin

“Manipulating soft, wet biological tissues during the development process is much harder than people outside the field might think,” said Takeuchi. “For instance, if sterility is not maintained, bacteria can enter and the tissue will die. However, now that we can do this, living skin can bring a range of new abilities to robots.

“Self-healing is a big deal. Some chemical-based materials can be made to heal themselves, but they require triggers such as heat, pressure or other signals, and they also do not proliferate like cells. Biological skin repairs minor lacerations as ours does, and nerves and other skin organs can be added for use in sensing and so on.”

Major implications for cosmetic and surgical procedures

“A face-on-a-chip could be useful in research into skin aging, cosmetics, surgical procedures, plastic surgery and more. And if sensors can be embedded, robots may be endowed with better environmental awareness and improved interactive capabilities,” Takeuchi said.

“We believe that creating thicker and more realistic skin can be achieved by incorporating sweat glands, sebaceous glands, pores, blood vessels, fat and nerves. And creating humanlike expressions by integrating sophisticated actuators, or muscles, inside the robot.

“Creating robots that can heal themselves, sense their environment more accurately and perform tasks with humanlike dexterity is incredibly motivating.”

Citation: M. Kawai, M. Nie, H. Oda, S. Takeuchi. “Perforation-type anchors inspired by skin ligament for the robotic face covered with living skin,” Cell Reports Physical Science. https://www.cell.com/cell-reports-physical-science/fulltext/S2666-3864(24)00335-7 (open access)

Ketamine slow-release at-home tablet reduces symptoms of severe depression

A new tablet form of ketamine has shown promise in treating severe depression, offering a take-at-home alternative to existing clinic-based treatments, which can be expensive and inconvenient for some patients.

Professor Paul Glue of the University of Otago and colleagues from other research institutions in Australia and New Zealand ran a randomized controlled trial testing the effectiveness of ketamine tablets to treat depression compared with placebo. The results were published today (June 24) in Nature Medicine.

No tripping required

Unlike injectable and nasal-spray alternatives, the new slow-release tablet can be taken safely at home without medical supervision and with negligible side effects. The researchers say the tablet also challenges a common belief about how ketamine helps people overcome depression: the psychedelic-assisted therapy model, which holds that profoundly changing your brain-circuit functioning gives you new insights that help you break out of your way of thinking.

The new drug requires further research and is not yet approved by the FDA in the US or the TGA in Australia. (We hope our down-under readers keep us advised.)

Citation: Glue, P., Loo, C., Fam, J., Lane, H., Young, A. H., & Surman, P. (2024). Extended-release ketamine tablets for treatment-resistant depression: A randomized placebo-controlled phase 2 trial. Nature Medicine, 1-6. https://doi.org/10.1038/s41591-024-03063-x

Opening the AI black box

AI can sift through hundreds of thousands of genome data points to identify potential new therapeutic targets. But scientists aren’t sure how today’s AI models come to their conclusions in the first place.

Enter SQUID (Surrogate Quantitative Interpretability for Deepnets), a computational tool created by Cold Spring Harbor Laboratory (CSHL) scientists. It’s designed to help interpret how AI models analyze the genome. Compared with other analysis tools, SQUID is more consistent, reduces background noise, and can lead to more accurate predictions about the effects of genetic mutations, said the scientists in a statement.

How it works better

The key, CSHL Assistant Professor Peter Koo says, lies in SQUID’s specialized training. “What we did with SQUID was leverage decades of quantitative genetics knowledge to help us understand what these deep neural networks are learning,” explains Koo. 

“The tools that people use to try to understand these models have been largely coming from other fields like computer vision or natural language processing. While they can be useful, they’re not optimal for genomics.”

100,000 variant DNA sequences

SQUID computational pipeline (credit: Koo and Kinney labs/ Cold Spring Harbor Laboratory)

SQUID first generates a library of more than 100,000 variant DNA sequences. It then analyzes the library of mutations and their effects using a program called MAVE-NN (Multiplex Assays of Variant Effects Neural Network).

This tool allows scientists to perform thousands of virtual experiments simultaneously. In effect, they can “fish out” the algorithms behind a given AI’s most accurate predictions. Their computational “catch” could set the stage for experiments that are more grounded in reality. 
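
The general recipe, probing a black-box sequence model with a large mutation library and then summarizing the scores with something simpler and interpretable, can be sketched as follows. The “deep net” here is a toy scorer and the summary is a plain per-position effect table, not SQUID or MAVE-NN themselves:

```python
# Sketch of in-silico mutagenesis plus a simple interpretable summary:
# the general idea behind surrogate tools like SQUID. The "black box"
# here is a toy scorer, not a real genomic deep net, and the summary is
# an averaged per-(position, base) effect table, not MAVE-NN's learned
# surrogate model.
import random

BASES = "ACGT"

def black_box(seq: str) -> float:
    """Toy stand-in for a deep net: rewards a TATA motif at position 3."""
    return 1.0 if seq[3:7] == "TATA" else 0.1 * seq.count("G")

def mutation_effects(wild_type: str, n_variants: int = 5000, seed: int = 0):
    """Score random single-base variants against the wild type and average
    the effect of each (position, base) substitution."""
    rng = random.Random(seed)
    wt_score = black_box(wild_type)
    totals, counts = {}, {}
    for _ in range(n_variants):
        pos = rng.randrange(len(wild_type))
        base = rng.choice(BASES)
        variant = wild_type[:pos] + base + wild_type[pos + 1:]
        key = (pos, base)
        totals[key] = totals.get(key, 0.0) + (black_box(variant) - wt_score)
        counts[key] = counts.get(key, 0) + 1
    return {k: totals[k] / counts[k] for k in totals}

wt = "GGGTATAGGG"
effects = mutation_effects(wt)
# Breaking the TATA motif should show a strongly negative average effect.
print(effects[(3, "C")])
```

The effect table makes the black box's learned rule (here, the planted TATA motif) directly readable, which is the point of fitting an interpretable surrogate.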

Citation: Seitz, E. E., McCandlish, D. M., Kinney, J. B., & Koo, P. K. (2024). Interpreting cis-regulatory mechanisms from genomic deep neural networks using surrogate models. Nature Machine Intelligence, 1-13. https://doi.org/10.1038/s42256-024-00851-5 (open access)

AI model may allow doctors to detect cancer from DNA

Using AI to detect and diagnose cancer in patients may soon allow for earlier treatment, say investigators at Cambridge University and Imperial College London.

The key is in our DNA

Genetic information is encoded in DNA by patterns of the four bases (A, T, G and C) that make up its structure. However, environmental changes outside the cell can cause some DNA bases to be modified by adding a methyl group (this process is called “DNA methylation”).

Each individual cell possesses millions of these DNA methylation marks. Researchers have observed changes to these marks in early cancer development, so the marks could assist in early diagnosis of cancer. Currently, it's difficult to examine which bases in DNA are methylated in cancers, and to what extent, compared to healthy tissue.

So investigators trained an AI model, using a combination of machine and deep learning, to look at the DNA methylation patterns. It identified 13 different cancer types (including breast, liver, lung, and prostate cancers) from non-cancerous tissue with 98.2% accuracy. They found that the model reinforces and enhances understanding of the underlying processes contributing to cancer.
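
As a drastically simplified illustration of the idea (not the published model, which combines machine and deep learning over far richer data), a nearest-centroid rule over a handful of invented beta values shows how methylation profiles can separate tissue classes:

```python
# Toy nearest-centroid classifier over DNA methylation "beta values"
# (fraction methylated, 0..1, at a handful of CpG sites). The published
# model uses machine and deep learning over millions of marks; the
# numbers and class names here are invented purely to show the idea.
def distance(a, b):
    """Euclidean distance between two methylation profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Mean beta-value profiles per class at 4 hypothetical CpG sites.
CENTROIDS = {
    "healthy":      [0.10, 0.80, 0.15, 0.70],
    "lung_cancer":  [0.60, 0.30, 0.55, 0.20],
    "liver_cancer": [0.85, 0.25, 0.10, 0.15],
}

def classify(profile):
    """Assign a sample to the class with the nearest centroid."""
    return min(CENTROIDS, key=lambda c: distance(profile, CENTROIDS[c]))

sample = [0.58, 0.28, 0.50, 0.25]  # shifted methylation at sites 1 and 3
print(classify(sample))  # -> "lung_cancer"
```

A real classifier must also be interpretable enough to point back at which methylation changes drove the call, which is what the authors emphasize.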

Additional training and testing

However, they say this model only relies on tissue samples (not DNA fragments in the blood), so it would need additional training and testing on a more diverse collection of biopsy samples to be ready for clinical use.

Identifying these unusual methylation patterns may allow healthcare providers to detect cancer early. “Computational methods such as this model, through better training on more varied data and rigorous testing in the clinic, will eventually help doctors with early detection and screening of cancers,” said the paper’s lead author, Shamith Samarajiwa, in a statement.

Citation: Newsham, I., Sendera, M., Jammula, S. G., & Samarajiwa, S. A. (2024). Early detection and diagnosis of cancer with interpretable machine learning to uncover cancer-specific DNA methylation patterns. Biology Methods and Protocols, 9(1). https://doi.org/10.1093/biomethods/bpae028

FDA approves clinical trial to test nanoscale sensors for recording brain activity

The Food and Drug Administration has approved a clinical trial to test the effectiveness of a new electronic grid that records brain activity during surgery. Developed by engineers at the University of California San Diego, the device has nanoscale sensors that record electrical signals directly from the surface of the human brain in record-breaking detail.

The grid’s breakthrough high resolution could provide better guidance for planning and performing surgeries to remove brain tumors and treat drug-resistant epilepsy and could improve neurosurgeons’ ability to minimize damage to healthy brain tissue, the researchers say. During epilepsy surgery, the novel grid could also improve the ability to precisely identify the regions of the brain where epileptic seizures originate, allowing for safe, effective treatment.

Ultra-thin brain sensor array

The new brain sensor array, known as the “platinum nanorod grid” (PtNRGrid), features a densely packed grid of a record-breaking 1,024 embedded electrocorticography (ECoG) sensors. The device rests on the surface of the brain and is approximately 6 microns thick and flexible, so it can both adhere and conform to the surface of the brain, bending as the brain moves while providing high-quality, high-resolution recordings of brain activity.

In contrast, the ECoG grids most commonly used in surgeries today typically have between 16 and 64 sensors, are rigid and more than 500 microns thick, and do not conform to the curved surface of the brain.

The PtNRGrid was invented by Shadi Dayeh, a Professor in the Department of Electrical and Computer Engineering at the University of California San Diego and members of his team. The team developed the PtNRGrid technology in collaboration with neurosurgeons and medical researchers from UC San Diego, Massachusetts General Hospital (MGH) and Oregon Health & Science University (OHSU).

Currently, Dayeh’s research group holds the world record for recording brain activity from a single cortical grid with 2,048 channels on the surface of the human brain, published in Science Translational Medicine in 2022.

The clinical trial is designed to demonstrate the effectiveness of the PtNRGrid device to map both normal and pathological brain activity. Surgeons will implant the PtNRGrid in 20 patients, then measure and compare the grid’s performance with the present state-of-the-art. The PtNRGrid will be deployed in surgeries to remove brain tumors and to remove tissue that causes epileptic seizures.

“Our goal is to provide a new atlas for understanding and treating neurological disorders, working with a network of highly experienced clinical collaborators at UC San Diego, MGH, and OHSU,” Dayeh said in a statement.

Record-breaking density

Pending the success of this staged trial, the team will transition to the next crucial step: making the PtNRGrid available for commercial use at scale. Demonstrating that ECoG grids with sensors in the thousands of channels record brain activity with high fidelity also opens new opportunities in neuroscience for uncovering a deeper understanding of how the human brain functions. Basic science advances, in turn, could lead to improved treatments grounded in an enhanced understanding of brain function.

Dayeh’s work toward the FDA approval is supported by an NIH BRAIN Initiative® award.

AI model identifies tennis-player affective states

Researchers at Karlsruhe Institute of Technology and the University of Duisburg-Essen have trained an AI model to accurately identify affective states from the body language of tennis players during games.

Their study, published in the journal Knowledge-Based Systems, demonstrates that AI can assess body language and emotions with an accuracy similar to that of humans.

Accuracy comparable to that of human observers

“Our model can identify affective states with an accuracy of up to 68.9 percent, which is comparable and sometimes even superior to assessments made by both human observers and earlier automated methods,” said Professor Darko Jekauc of Karlsruhe Institute of Technology’s Institute of Sports and Sports Science in a statement.

“The reason [for this accuracy] could be that negative emotions are easier to identify because they’re expressed in more obvious ways,” said Jekauc. “Psychological theories suggest that people are evolutionarily better adapted to perceive negative emotional expressions, for example, because defusing conflict situations quickly is essential to social cohesion.”

Body language clues

The researchers recorded video sequences of 15 tennis players in a specific setting, focusing on the body language displayed when a point was won or lost. The videos showed players with cues including lowered head, arms raised in exultation, hanging racket, or differences in walking speed; these cues could be used to identify the players’ affective states. 

After being fed this data, the AI learned to associate the body language signals with different affective reactions and to determine whether a point had been won (positive body language) or lost (negative body language). “Training in natural contexts is a significant advance for the identification of real emotional states, and it makes predictions possible in real scenarios,” said Jekauc.
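
As a minimal stand-in for the study’s convolutional neural network (which works on raw video), a tiny logistic-regression classifier over two invented body-language cues shows how such signals can be mapped to won or lost points:

```python
# Minimal stand-in for the study's CNN: logistic regression over two
# hand-crafted body-language cues (normalized head height, arm elevation),
# trained by gradient descent on log-loss. The data and features are
# invented; the actual model learns directly from video.
import math

# (head_height, arm_raise) -> 1 = point won, 0 = point lost
DATA = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.85, 0.7), 1),
    ((0.3, 0.1), 0), ((0.2, 0.2), 0), ((0.35, 0.15), 0),
]

def predict_prob(w, b, x):
    """Probability that the point was won, given cue vector x."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=500):
    """Stochastic gradient descent on the logistic log-loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            err = predict_prob(w, b, x) - y  # gradient of log-loss wrt z
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

w, b = train(DATA)
print(predict_prob(w, b, (0.9, 0.85)) > 0.5)  # raised posture -> won
print(predict_prob(w, b, (0.25, 0.1)) > 0.5)  # lowered posture -> lost
```

The CNN in the paper effectively learns such cue extraction and weighting end to end, rather than relying on hand-picked features.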

Uses of reliable emotion recognition

The researchers envision a number of sports applications for reliable emotion recognition, such as improving training methods, team dynamics and performance, and preventing burnout. Other fields, including healthcare, education, customer service and automotive safety, could also benefit from reliable early detection of emotional states.

Citation: Darko Jekauc, Diana Burkart, Julian Fritsch, Marc Hesenius, Ole Meyer, Saquib Sarfraz, Rainer Stiefelhagen. Recognizing affective states from the expressive behavior of tennis players using convolutional neural networks. Knowledge-Based Systems, Vol. 295, 2024. DOI: 10.1016/j.knosys.2024.111856 (open access)
