Mars environment found conducive to the emergence of life

A hexagonal fossil pattern in sedimentary rocks analysed by Curiosity on the 3,154th day of its journey through Gale Crater on Mars (credit: © NASA/JPL-Caltech/MSSS/IRAP/Rapin et al./Nature)

The surface of Mars, unlike the Earth’s, is not constantly renewed by plate tectonics, resulting in huge areas of terrain with fossil rivers and lakes, dating back billions of years (credit: NASA/JPL-CALTECH/MSSS/IRAP/RAPIN ET AL./NATURE)

The emergence of primitive life forms, as hypothesized by scientists, initially requires environmental conditions favorable to the spontaneous organization of simple molecules into complex organic compounds. A research team from the Institut de Recherche en Astrophysique et Planétologie, working with US and Canadian colleagues, has now found evidence that early Mars offered just such conditions.

First fossil evidence of dry/wet seasons

Using the Mastcam [1] and ChemCam [2] instruments on Curiosity, they found deposits of salts forming a hexagonal pattern in sedimentary layers dating from 3.8 to 3.6 billion years ago — the first fossil evidence of a sustained, cyclical, regular Martian climate with dry and wet seasons.

Independent laboratory experiments have shown that this kind of environment provides ideal conditions for forming the complex organic compounds that are precursors and constituents of life, such as RNA.

Large-scale images from orbit next

These new observations should now enable scientists to take a fresh look at the large-scale images obtained from orbit, which have already identified numerous terrains with a similar composition. They now know where to look for traces of the natural processes that gave rise to life, of which no vestiges remain on Earth.

Footnotes

[1] https://mars.nasa.gov/msl/spacecraft/instruments/mastcam/

[2] ChemCam was built by a French-US consortium under the responsibility of the Institut de Recherche en Astrophysique et Planétologie (CNRS/Université de Toulouse III – Paul Sabatier/CNES) and the Los Alamos National Lab (United States). In France, the design of the instrument was funded by the French space agency CNES, the CNRS, the French Alternative Energies and Atomic Energy Commission CEA, and a number of universities. Mars Science Laboratory is a NASA mission run by the Jet Propulsion Laboratory (USA), which designed and operates the Curiosity rover.

Citation: Rapin, W., Dromart, G., Clark, B. C., Schieber, J., Kite, E. S., Kah, L. C., Thompson, L. M., Gasnault, O., Lasue, J., Meslin, P., Gasda, P. J., & Lanza, N. L. (2023). Sustained wet–dry cycling on early Mars. Nature, 620(7973), 299–302. https://doi.org/10.1038/s41586-023-06220-3

Original source: Mars: new evidence of an environment conducive to the emergence of life, Aug. 9, 2023, https://www.cnrs.fr/en/mars-new-evidence-environment-conducive-emergence-life

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

DARPA and NASA Aim to Test Nuclear Rocket by 2026 

The engine would drastically shorten trips to the Moon and Mars

“If you want to fly to Mars, you have to pick your departure date carefully … the ideal launch windows only come around every 26 months, and the planets need to be in alignment,” notes IEEE Spectrum.     
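
The 26-month cadence is the synodic period of Earth and Mars, the time the faster inner planet takes to lap the slower outer one. It can be checked in a few lines (the orbital periods below are standard sidereal values, not figures from the article):

```python
# The ~26-month cadence of Mars launch windows is the Earth-Mars synodic
# period: how long the faster inner planet takes to lap the slower outer one.

EARTH_YEAR = 365.256  # Earth's sidereal orbital period, in days
MARS_YEAR = 686.980   # Mars's sidereal orbital period, in days

# Relative angular rate: 1/T_syn = 1/T_inner - 1/T_outer
synodic_days = 1 / (1 / EARTH_YEAR - 1 / MARS_YEAR)
synodic_months = synodic_days / 30.44  # average calendar month, in days

print(f"synodic period: {synodic_days:.0f} days = {synodic_months:.1f} months")
# → synodic period: 780 days = 25.6 months
```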

The problem: chemical rockets are “limited by the fuel and oxygen they can carry.” The alternative is a nuclear thermal rocket, which “NASA and DARPA want to build, first as a prototype, then as a moon rocket, and finally as an interplanetary vehicle.”

“The old program put weapons-grade uranium-235 in the reactor, which is also off the agenda now. Instead, the design specifies much less enriched U-235. ‘It is safe to work around; it is safe to be around; it doesn’t need the protection measures that need to be in place for plutonium,’ says Anthony Calomino, a materials and structural scientist at NASA.”

Project DRACO

On July 26, the agencies disclosed details of a project to do just that, nicknamed DRACO (Demonstration Rocket for Agile Cislunar Operations).

The plan: test the prototype in space, beginning in late 2026. If it works out, “the next step will be to build a lunar rocket … but the real payoff could come when the order is given to go to Mars.” (Note: “military dividends may flow.”)

Project Orion

The nuclear pulse propulsion concept is based on an idea from physicist and mathematician Stanislaw Ulam, according to Project Orion: The Atomic Spaceship 1957 to 1965, a book by George Dyson, the son of Freeman Dyson, one of the leaders of the project.

In a TED talk (Feb. 19, 2008), George Dyson tells the amazing story of Project Orion, a massive, nuclear-powered spacecraft that could have taken us to Saturn in five years. With a priceless insider’s perspective and a cache of documents, photos and film, Dyson brings this dusty Atomic Age dream to vivid life.

An animation of today’s rocket, produced by Lockheed Martin, can be seen here. More at National Defense.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

We’re closer to engineering working blood vessels

Researchers at the University of Melbourne and Monash University have developed a fast, inexpensive and scalable tissue engineering method for creating the complex geometries of blood vessels.

Illness and dysfunction in blood vessels can result in life-threatening disorders such as heart attacks, strokes and aneurysms, making cardiovascular disease the number one killer globally, according to University of Melbourne associate professor Daniel Heath. “Current methods are slow and require specialized and expensive equipment like bioreactors and a supply of engineered vessels.”

Tissue-engineered blood vessels

To overcome these limitations, the researchers investigated developing “tissue-engineered” blood vessels made from human cells and tissues. These newly created vessels have the potential to treat cardiovascular illness, as well as to provide a built-in blood supply for larger engineered tissues.

“We are now able to rapidly and cheaply manufacture blood vessels using living tissue that has appropriate mechanical properties and mimics the cellular orientation of the innermost layer of blood vessels,” University of Melbourne Professor O’Connor said.

“While [these] are not yet ready for bypass surgery, the findings mark a significant advancement in the field of tissue engineering.”

Read more in Pursuit: “We’re closer to engineering blood vessels.”

Citation: ACS Appl. Mater. Interfaces 2023, 15, 29, 34631–34641. July 13, 2023. Spontaneous Orthogonal Alignment of Smooth Muscle Cells and Endothelial Cells Captures Native Blood Vessel Morphology in Tissue-Engineered Vascular Grafts, https://doi.org/10.1021/acsami.3c08511

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

How to measure your brain activity using an EEG sensor mounted on a VR headset

Researchers at The University of Texas at Austin have installed a noninvasive electroencephalogram (EEG) sensor on a Meta Quest 2 VR headset, allowing for measuring brain electrical activity during immersive VR interactions, or for other purposes.

The sensor could be used, for example, by people with anxiety to detect their reactions, or to measure the stressors of aviators in a flight simulator.

New comfortable EEG electrode design

The new electrode design overcomes problems with currently available electrodes, which are rigid and comb-shaped; hair blocks them from making contact with the scalp, creating an uncomfortable experience for the user, according to Hongbian Li, a research associate in the Cockrell School of Engineering’s Department of Aerospace Engineering and Engineering Mechanics.

Hongbian Li, a research associate in professor Nanshu Lu’s lab (credit: The University of Texas at Austin/ Cockrell School of Engineering)

The new design uses spongy electrodes made of soft, conductive materials, arranged across the top strap and forehead pad. An EEG recording device is attached to the back of the headset.

Detecting human-robot interactions

This technology will play a role in another major research project at UT Austin: A new robot delivery network that will also serve as the largest study to date of human-robot interactions. The VR headsets will be used by people either traveling with robots or in a remote “observatory.” They will be able to watch along from the robot’s perspective, and the device will also measure the mental load of this observation for long periods.

“If you can see through the eyes of the robot, it paints a clearer picture of how people are reacting to it and lets operators monitor their safety in case of potential accidents,” said Luis Sentis, a professor in the Department of Aerospace Engineering and Engineering Mechanics who is co-leading the robot delivery project and a co-author on the VR EEG paper.

The researchers have filed preliminary patent paperwork for the EEG and are open to partnering with VR companies to create a built-in version of the technology.

Citation: Li H, Shin H, Zhang M, Yu A, Huh H, Kwon G, Riveira N, Kim S, Gangopadahyay S, Peng J, Li Z, Rao Y, Sentis L, Millán JdR, Lu N. Hair-compatible sponge electrodes integrated on VR headset for electroencephalography. Soft Sci 2023;3:22. https://dx.doi.org/10.20517/ss.2023.11 (open-access; PDF also available).

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Connecting neural and symbolic AI networks

The U.S. Office of Naval Research has awarded Michigan State University researchers a $1.8 million grant to make artificial intelligence more reliable and transparent.

This could also enable people to entrust AI systems with more advanced jobs that rely on understanding language and visual information, including education, navigation and multimodal question-answering systems. 

Connecting symbolic AI with deep neural networks

Led by Parisa Kordjamshidi, an assistant professor in the Department of Computer Science and Engineering, the team is working to connect symbolic AI with current deep neural networks to create a neuro-symbolic framework. This combined approach could create systems that have a vast wealth of learning from data along with explicit reasoning capabilities. Dan Roth, a professor at the University of Pennsylvania, is also a co-investigator.

“Every day, these AI models impress us, but we’re still not sure how trustworthy and reliable they are,” said Kordjamshidi. “Even when they provide the right answer, they might be right for the wrong reasons. We need to know what is their line of reasoning. That’s not very clear right now, and that’s the challenge.”

The researchers also aim to help AI systems better process a range of inputs — text, images and video — to make human interactions with computer systems more powerful and seamless.
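
As a toy illustration of the idea (not MSU's actual framework), a neuro-symbolic system can be sketched as a learned scorer whose soft outputs are post-processed by hard symbolic constraints; every token, label, and score below is invented for the example:

```python
# Toy neuro-symbolic pipeline: a "neural" scorer produces soft label scores,
# and a symbolic constraint corrects them. All tokens, labels, and scores
# here are invented for illustration; this is not the MSU framework.

def neural_scores(tokens):
    """Stand-in for a learned model: soft label scores per token."""
    lexicon = {
        "ball":  {"TRAJECTOR": 0.9, "LANDMARK": 0.1},
        "on":    {"INDICATOR": 1.0},
        "table": {"TRAJECTOR": 0.6, "LANDMARK": 0.4},  # the net is unsure here
    }
    return {t: lexicon[t] for t in tokens}

def symbolic_decode(scores):
    """Symbolic rule: a spatial relation needs a LANDMARK, so if none is
    predicted, relabel the least-confident TRAJECTOR."""
    labels = {t: max(s, key=s.get) for t, s in scores.items()}
    if "LANDMARK" not in labels.values():
        trajectors = [t for t in labels if labels[t] == "TRAJECTOR"]
        weakest = min(
            trajectors,
            key=lambda t: scores[t]["TRAJECTOR"] - scores[t].get("LANDMARK", 0.0),
        )
        labels[weakest] = "LANDMARK"
    return labels

labels = symbolic_decode(neural_scores(["ball", "on", "table"]))
# The constraint reassigns "table" (the lowest-margin TRAJECTOR) to LANDMARK,
# giving an answer the purely neural scorer would have gotten wrong.
```

The point of the sketch is the division of labor: the data-driven component supplies graded evidence, while the symbolic component enforces constraints that must hold regardless of the scores.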

Dr. Ben Goertzel: A comment on neural-symbolic AI

“Neural-symbolic AI approaches have been the subject of deep research and practical experimentation and commercial deployment for a couple decades now, and it’s great to see them finally start to get more of the attention, prominence and funding they deserve. 

“When I built my first neural-symbolic AI system in Webmind Inc. in 1997, it was fairly cutting-edge, though not totally unprecedented. By now, there is a vast reserve of experience in such systems to draw on in doing neural-symbolic engineering, and I have no doubt this is one of the ingredients that will be valuable in integrating LLMs into broader AGI architectures during the coming months and years.” (Also see: https://people.cs.ksu.edu/~hitzler/nesy.)

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Implanted microchips re-link brain to arms and spinal cord, restoring movement to paralyzed man, an amazing first

Keith Thomas was paralyzed from the chest down after a diving accident. Now a neural bypass has restored movement and sensations in his hand, with lasting gains in his arm and wrist, including outside of the laboratory.

“This is the first time the brain, body and spinal cord have been linked together electronically in a paralyzed human to restore lasting movement and sensation,” said Chad Bouton, professor at the Institute of Bioelectronic Medicine at the Feinstein Institutes in New York.

Mapping brain areas for arm movement and sensation of touch in hand

Bioelectronic engineers, AI programmers, and surgeons at Feinstein Institutes spent months mapping Thomas’ brain, using fMRIs to help pinpoint the areas responsible for both arm movement and the sensation of touch in his hand. Armed with that information, surgeons performed a grueling 15-hour surgery at North Shore University Hospital, during parts of which the study participant was awake and giving surgeons real-time feedback.

Keith Thomas shakes his sister’s hand for the first time in years (credit: The Feinstein Institutes for Medical Research)

As they probed portions of the surface of his brain, in a delicate procedure known as a “double neuro bypass,” Thomas would tell them what sensations he was feeling in his hands. A surgeon, Ashesh Mehta, MD, PhD, inserted two chips in the area responsible for movement and three more in the part of the brain responsible for touch and feeling in the fingers.

The chips send and interpret signals between his brain, damaged spinal cord, and hands, allowing him some movement.

“This type of thought-driven therapy is a game-changer,” said Mehta. “Our goal is to use this technology to give people living with paralysis the ability to live fuller, more independent lives.”

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Fiber-infused ink enables 3D-printed artificial-heart muscles to beat

Scientists from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have invented artificial heart tissue that beats in coordination, like a human heart.

In a paper published in Nature Materials, the researchers report they have developed a new hydrogel ink infused with gelatin fibers and applied with “rotary jet-spinning.” This method allows muscle cells (printed in the shape of a ventricle) to align and beat in coordination, as in a human heart chamber.

Defeating heart disease with new implantable tissues

Initially, the 3D-printed cardiac tissues could help evaluate which treatments might work best in individual patients. But the future plan is to fabricate actual implantable tissues that can heal or replace faulty or diseased structures inside a patient’s heart, says Suji Choi, research associate at SEAS and first author on the paper.

Rotary Jet-Spinning (RJS) works like a cotton candy machine (credit: Harvard John A. Paulson School Of Engineering And Applied Sciences)

Citation: Choi, S., Lee, K. Y., Kim, S. L., MacQueen, L. A., Chang, H., Zimmerman, J. F., Jin, Q., Peters, M. M., Ardoña, H. A., Liu, X., Heiler, A., Gabardi, R., Richardson, C., Pu, W. T., Bausch, A. R., & Parker, K. K. (2023). Fibre-infused gel scaffolds guide cardiomyocyte alignment in 3D-printed ventricles. Nature Materials, 22(8), 1039-1046. https://doi.org/10.1038/s41563-023-01611-3

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Computer system based on light could jumpstart power of ChatGPT-type machine-learning programs

MIT-led team also demonstrates giant improvement in energy efficiency over today’s systems

ChatGPT has made headlines around the world with its ability to write well-done essays, emails, and computer code based on a few questions from a user.

Now an MIT-led team reports a system that could lead to machine-learning programs several orders of magnitude more powerful than the one behind ChatGPT. Plus, the system they developed could use several orders of magnitude less energy than the state-of-the-art supercomputers behind the machine-learning models of today.

In the July 17 issue of Nature Photonics, the researchers report the first experimental demonstration of the new system, which does its computations based on the movement of light rather than electrons, using hundreds of micron-scale lasers. With the new system, the team reports a greater than 100-fold improvement in energy efficiency and a 25-fold improvement in compute density, a measure of the power of a system, over state-of-the-art digital computers for machine learning.

In the paper, the team also cites “substantially several more orders of magnitude for future improvement.” As a result, the authors continue, the technique “opens an avenue to large-scale optoelectronic processors to accelerate machine-learning tasks from data centers to decentralized edge devices.” In other words, cell phones and other small devices could become capable of running programs that can currently only be computed at large data centers.

Further, because the components of the system can be created using fabrication processes already in use today, “we expect that it could be scaled for commercial use in a few years. For example, the laser arrays involved are widely used in cell-phone face ID and data communication,” says Zaijun Chen, first author, who conducted the work while a postdoctoral associate at MIT in the Research Laboratory of Electronics and is now an assistant professor at the University of Southern California.

Says Dirk Englund, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and leader of the work, “ChatGPT is limited in its size by the power of today’s supercomputers. It’s just not economically viable to train models that are much bigger. Our new technology could make it possible to leapfrog to machine-learning models that otherwise would not be reachable in the near future.”

He continues, “We don’t know what capabilities the next-generation ChatGPT will have if it is 100 times more powerful, but that’s the regime of discovery that this kind of technology can allow.” Englund is also leader of MIT’s Quantum Photonics Laboratory and is affiliated with the Research Laboratory of Electronics (RLE) and the Materials Research Laboratory.

A Drumbeat of Progress

The current work is the latest achievement in a drumbeat of progress over the last few years by Englund and many of the same colleagues. For example, in 2019 an Englund team reported the theoretical work that led to the current demonstration. The first author of that paper, Ryan Hamerly, now of RLE and NTT Research Inc, is also an author of the current paper.

Additional coauthors of the current Nature Photonics paper are Alexander Sludds, Ronald Davis, Ian Christen, Liane Bernstein, and Lamia Ateshian, all of RLE; and Tobias Heuser, Niels Heermeier, James A. Lott, and Stephan Reitzenstein of Technische Universität Berlin.

Deep neural networks (DNNs) like the one behind ChatGPT are based on huge machine-learning models that simulate how the brain processes information. However, the digital technologies behind today’s DNNs are reaching their limits even as the field of machine learning is growing. Further, they require huge amounts of energy and are largely confined to large data centers. That is motivating the development of new computing paradigms.

The Advantages of Light

Using light rather than electrons to run DNN computations has the potential to break through the current bottlenecks. Computations using optics, for example, have the potential to use far less energy than those based on electronics. Further, with optics, “you can have much larger bandwidths,” or compute densities, says Chen. Light can transfer much more information over a much smaller area.

But current optical neural networks (ONNs) have significant challenges. For example, they use a great deal of energy because they are inefficient at converting incoming data based on electrical energy into light. Further, the components involved are bulky and take up significant space. And while ONNs are quite good at linear calculations like adding, they are not great at nonlinear calculations like multiplication and “if” statements.

In the current work the researchers introduce a compact architecture that, for the first time, solves all of these challenges and two more simultaneously. That architecture is based on state-of-the-art arrays of vertical-cavity surface-emitting lasers (VCSELs), a relatively new technology used in applications including LiDAR remote sensing and laser printing. The particular VCSELs reported in the Nature Photonics paper were developed by the Reitzenstein group at Technische Universität Berlin. “This was a collaborative project that would not have been possible without them,” Hamerly says.

Logan Wright is an assistant professor at Yale University who was not involved in the current research. Comments Wright, “The work by Zaijun Chen et al. is inspiring, encouraging me and likely many other researchers in this area that systems based on modulated VCSEL arrays could be a viable route to large-scale, high-speed optical neural networks. Of course, the state-of-the-art here is still far from the scale and cost that would be necessary for practically useful devices, but I am optimistic about what can be realized in the next few years, especially given the potential these systems have to accelerate the very large-scale, very expensive AI systems like those used in popular textual ‘GPT’ systems like ChatGPT.”

Chen, Hamerly, and Englund have filed for a patent on the work, which was sponsored by the Army Research Office, NTT Research, the National Defense Science and Engineering Graduate Fellowship Program, the National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, and the Volkswagen Foundation.

Citation: Chen, Z., Sludds, A., Davis, R., Christen, I., Bernstein, L., Ateshian, L., Heuser, T., Heermeier, N., Lott, J. A., Reitzenstein, S., Hamerly, R., & Englund, D. (2023). Deep learning with coherent VCSEL neural networks. Nature Photonics, 1-8. https://doi.org/10.1038/s41566-023-01233-w

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

MIT’s PhotoGuard alters photos to stop AI systems from tinkering with them

A new tool called PhotoGuard, created by researchers at MIT, works like a protective shield by altering photos in tiny ways that are invisible to the human eye but prevent them from being manipulated, reports MIT Technology Review.

“If someone tries to use an editing app based on a generative AI model such as Stable Diffusion to manipulate an image that has been ‘immunized’ by PhotoGuard, the result will look unrealistic or warped.”

PhotoGuard’s second, more sophisticated method, a diffusion attack, “disrupts the way the AI models generate images, essentially by encoding them with secret signals that alter how they’re processed by the model.”
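
The underlying idea in both cases is adversarial perturbation: tiny, budget-limited pixel changes chosen to push an image's internal representation away from where the model expects it. A minimal sketch of that idea (with a stand-in linear "encoder", not PhotoGuard's actual code) looks like this:

```python
import numpy as np

# Toy sketch of image "immunization" via adversarial perturbation, in the
# spirit of PhotoGuard but not its actual implementation: small, budget-
# limited pixel changes are chosen so a stand-in linear "encoder" maps the
# protected image far from its original embedding.

def toy_encoder(img, W):
    """Stand-in for a generative model's image encoder."""
    return W @ img.ravel()

def immunize(img, W, eps=0.03, steps=40, lr=0.01, seed=1):
    """Sign-gradient ascent on embedding distance, with an L-infinity
    budget `eps` so the change stays visually imperceptible."""
    rng = np.random.default_rng(seed)
    target = toy_encoder(img, W)
    # Random start inside the budget so the first gradient is nonzero.
    x = np.clip(img + rng.uniform(-eps, eps, img.shape), 0.0, 1.0)
    for _ in range(steps):
        # Gradient of ||W x - target||^2 with respect to x.
        grad = 2 * (W.T @ (toy_encoder(x, W) - target)).reshape(img.shape)
        x = x + lr * np.sign(grad)            # ascent step
        x = np.clip(x, img - eps, img + eps)  # enforce the perturbation budget
        x = np.clip(x, 0.0, 1.0)              # keep valid pixel values
    return x

rng = np.random.default_rng(0)
img = rng.random((8, 8))              # stand-in 8x8 grayscale image
W = rng.standard_normal((4, 64))      # stand-in encoder weights
protected = immunize(img, W)
# `protected` differs from `img` by at most eps per pixel, yet its
# embedding has been pushed well away from the original's.
```

A real immunization would backpropagate through an actual diffusion model's encoder rather than a random linear map, but the projection-and-ascent loop is the same shape.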

For now, PhotoGuard works reliably only on Stable Diffusion, so users’ old images may still be available for misuse.

“In theory, people could apply this protective shield to their images before they upload them online, says Aleksander Madry, a professor at MIT who contributed to the research. But a more effective approach would be for tech companies to add it to images that people upload into their platforms automatically.”

Citation: Hadi Salman et al. Raising the Cost of Malicious AI-Powered Image Editing. arXiv:2302.06588

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

OpenAI’s “insane secret image generator … outperforms previous image generators”

“OpenAI has been privately testing a new iteration of its AI imaging model … and early samples leaked by YouTuber MattVidPro AI show it outperforming previous image generators,” Jose Antonio Lanz reveals in Decrypt today.

“Likely an upgrade of DALL-E 2 … the model showed sharp images with lighting and reflections that mimic real photos … and recreated detailed paintings down to visible brush strokes.”

“Extremely exciting—this blows anything we’ve seen before out of the water, it’s insane,” Matt said in a preview he posted to YouTube. “As models continue improving, the line between artificial and real blurs even further.” 

Image created by Decrypt using AI (credit: Decrypt)

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter