Fiber-infused ink enables 3D-printed artificial-heart muscles to beat

Scientists from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have invented artificial heart tissue that beats in coordination, like a human heart.

In a paper published in Nature Materials, the researchers report they have developed a new hydrogel ink infused with gelatin fibers and applied with “rotary jet-spinning.” This method allows muscle cells (printed in the shape of a ventricle) to align and beat in coordination, as in a human heart chamber.

Defeating heart disease with new implantable tissues

Initially, the 3D-printed cardiac tissues could help evaluate which treatments might work best in individual patients. But the future plan is to fabricate actual implantable tissues that can heal or replace faulty or diseased structures inside a patient’s heart, says Suji Choi, research associate at SEAS and first author on the paper.

Rotary Jet-Spinning (RJS) works like a cotton candy machine (credit: Harvard John A. Paulson School Of Engineering And Applied Sciences)

Citation: Choi, S., Lee, K. Y., Kim, S. L., MacQueen, L. A., Chang, H., Zimmerman, J. F., Jin, Q., Peters, M. M., Ardoña, H. A., Liu, X., Heiler, A., Gabardi, R., Richardson, C., Pu, W. T., Bausch, A. R., & Parker, K. K. (2023). Fibre-infused gel scaffolds guide cardiomyocyte alignment in 3D-printed ventricles. Nature Materials, 22(8), 1039-1046. https://doi.org/10.1038/s41563-023-01611-3

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Computer system based on light could jumpstart power of ChatGPT-type machine-learning programs

MIT-led team also demonstrates giant improvement in energy efficiency over today’s systems

ChatGPT has made headlines around the world with its ability to write well-done essays, emails, and computer code based on a few questions from a user.

Now an MIT-led team reports a system that could lead to machine-learning programs several orders of magnitude more powerful than the one behind ChatGPT. Plus, the system they developed could use several orders of magnitude less energy than the state-of-the-art supercomputers behind the machine-learning models of today.

In the July 17 issue of Nature Photonics, the researchers report the first experimental demonstration of the new system, which does its computations based on the movement of light rather than electrons, using hundreds of micron-scale lasers. With the new system, the team reports a greater than 100-fold improvement in energy efficiency and a 25-fold improvement in compute density, a measure of the power of a system, over state-of-the-art digital computers for machine learning.

In the paper, the team also cites “substantially several more orders of magnitude for future improvement.” As a result, the authors continue, the technique “opens an avenue to large-scale optoelectronic processors to accelerate machine-learning tasks from data centers to decentralized edge devices.” In other words, cell phones and other small devices could become capable of running programs that can currently only be computed at large data centers.

Further, because the components of the system can be created using fabrication processes already in use today, “we expect that it could be scaled for commercial use in a few years. For example, the laser arrays involved are widely used in cell-phone face ID and data communication,” says Zaijun Chen, first author, who conducted the work while a postdoctoral associate at MIT in the Research Laboratory of Electronics and is now an assistant professor at the University of Southern California.

Says Dirk Englund, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and leader of the work, “ChatGPT is limited in its size by the power of today’s supercomputers. It’s just not economically viable to train models that are much bigger. Our new technology could make it possible to leapfrog to machine-learning models that otherwise would not be reachable in the near future.”

He continues, “We don’t know what capabilities the next-generation ChatGPT will have if it is 100 times more powerful, but that’s the regime of discovery that this kind of technology can allow.” Englund is also leader of MIT’s Quantum Photonics Laboratory and is affiliated with the Research Laboratory of Electronics (RLE) and the Materials Research Laboratory.

A Drumbeat of Progress

The current work is the latest achievement in a drumbeat of progress over the last few years by Englund and many of the same colleagues. For example, in 2019 an Englund team reported the theoretical work that led to the current demonstration. The first author of that paper, Ryan Hamerly, now of RLE and NTT Research, Inc., is also an author of the current paper.

Additional coauthors of the current Nature Photonics paper are Alexander Sludds, Ronald Davis, Ian Christen, Liane Bernstein, and Lamia Ateshian, all of RLE; and Tobias Heuser, Niels Heermeier, James A. Lott, and Stephan Reitzenstein of Technische Universität Berlin.

Deep neural networks (DNNs) like the one behind ChatGPT are based on huge machine-learning models that simulate how the brain processes information. However, the digital technologies behind today’s DNNs are reaching their limits even as the field of machine learning is growing. Further, they require huge amounts of energy and are largely confined to large data centers. That is motivating the development of new computing paradigms.

The Advantages of Light

Using light rather than electrons to run DNN computations has the potential to break through the current bottlenecks. Computations using optics, for example, have the potential to use far less energy than those based on electronics. Further, with optics, “you can have much larger bandwidths,” or compute densities, says Chen. Light can transfer much more information over a much smaller area.

But current optical neural networks (ONNs) have significant challenges. For example, they use a great deal of energy because they are inefficient at converting incoming data based on electrical energy into light. Further, the components involved are bulky and take up significant space. And while ONNs are quite good at linear calculations like adding, they are not great at nonlinear calculations like multiplication and “if” statements.
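
The linear/nonlinear split described above is the crux of ONN design. As a toy illustration (plain Python, not from the paper): the matrix-vector product is the kind of operation optics performs naturally, for instance as laser intensities passing through modulators, while the nonlinearity is typically applied electronically after the light is detected. All names and numbers here are illustrative.

```python
# Toy sketch of the linear/nonlinear division of labor in an optical
# neural network. The matrix-vector product below stands in for the part
# optics handles cheaply; the ReLU stands in for the nonlinear step that
# is hard to do in the optical domain.

def optical_linear(weights, signal):
    """Matrix-vector product: the 'cheap' operation in the optical domain."""
    return [sum(w * x for w, x in zip(row, signal)) for row in weights]

def electronic_relu(values):
    """Nonlinearity, applied after converting light back to electrons."""
    return [v if v > 0 else 0.0 for v in values]

def onn_layer(weights, signal):
    return electronic_relu(optical_linear(weights, signal))

W = [[0.5, -1.0], [2.0, 1.0]]
x = [1.0, 1.0]
print(onn_layer(W, x))  # [0.0, 3.0] -- negative pre-activation clipped
```

The point of the sketch: everything inside `optical_linear` scales with the size of the weight matrix, which is exactly where optics promises its bandwidth and energy advantages.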

In the current work the researchers introduce a compact architecture that, for the first time, solves all of these challenges and two more simultaneously. That architecture is based on state-of-the-art arrays of vertical-cavity surface-emitting lasers (VCSELs), a relatively new technology used in applications including LiDAR remote sensing and laser printing. The particular VCSELs reported in the Nature Photonics paper were developed by the Reitzenstein group at Technische Universität Berlin. “This was a collaborative project that would not have been possible without them,” Hamerly says.

Logan Wright is an assistant professor at Yale University who was not involved in the current research. Comments Wright, “The work by Zaijun Chen et al. is inspiring, encouraging me and likely many other researchers in this area that systems based on modulated VCSEL arrays could be a viable route to large-scale, high-speed optical neural networks. Of course, the state-of-the-art here is still far from the scale and cost that would be necessary for practically useful devices, but I am optimistic about what can be realized in the next few years, especially given the potential these systems have to accelerate the very large-scale, very expensive AI systems like those used in popular textual ‘GPT’ systems like ChatGPT.”

Chen, Hamerly, and Englund have filed for a patent on the work, which was sponsored by the Army Research Office, NTT Research, the National Defense Science and Engineering Graduate Fellowship Program, the National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, and the Volkswagen Foundation.

Citation: Chen, Z., Sludds, A., Davis, R., Christen, I., Bernstein, L., Ateshian, L., Heuser, T., Heermeier, N., Lott, J. A., Reitzenstein, S., Hamerly, R., & Englund, D. (2023). Deep learning with coherent VCSEL neural networks. Nature Photonics, 1-8. https://doi.org/10.1038/s41566-023-01233-w

MIT’s PhotoGuard alters photos to stop AI systems from tinkering with them

A new tool called PhotoGuard, created by researchers at MIT, works like a protective shield by altering photos in tiny ways that are invisible to the human eye but prevent them from being manipulated, reports MIT Technology Review.

“If someone tries to use an editing app based on a generative AI model such as Stable Diffusion to manipulate an image that has been ‘immunized’ by PhotoGuard, the result will look unrealistic or warped.”

PhotoGuard uses two methods. The simpler one, an “encoder attack,” perturbs an image so that the AI model’s encoder misreads it. The second, a diffusion attack, “disrupts the way the AI models generate images, essentially by encoding them with secret signals that alter how they’re processed by the model.”
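
The underlying idea in both attacks is a tiny pixel-level change that produces a large shift in the representation the generative model actually works with. The sketch below is a deliberately simplified stand-in, not PhotoGuard's method: it uses a made-up linear "encoder" with one highly sensitive direction, so an imperceptible pixel perturbation throws the latent representation far off.

```python
# Toy illustration of an "immunizing" perturbation (NOT PhotoGuard's code;
# PhotoGuard targets Stable Diffusion's real encoder/diffusion model).
# The stand-in "encoder" below is deliberately sensitive in one direction,
# so a pixel change far below visible thresholds still moves the latent
# representation -- the quantity generative editors operate on -- by a lot.

SENSITIVITY = 500.0   # made-up gain of the encoder's sensitive direction

def encode(px):
    """Stand-in encoder: [sensitive feature, robust feature]."""
    return [SENSITIVITY * (px[0] - px[1]), px[0] + px[1]]

def immunize(px, latent_shift):
    """Shift the sensitive latent coordinate by latent_shift using a
    pixel perturbation of size latent_shift / SENSITIVITY."""
    d = latent_shift / (2 * SENSITIVITY)
    return [px[0] + d, px[1] - d]

image = [0.50, 0.20]
protected = immunize(image, latent_shift=10.0)

pixel_change = max(abs(a - b) for a, b in zip(image, protected))
latent_change = abs(encode(protected)[0] - encode(image)[0])
print("pixel change:", round(pixel_change, 6))    # tiny: 0.01
print("latent change:", round(latent_change, 6))  # large: 10.0
```

Real encoders are nonlinear, so PhotoGuard finds such sensitive directions by gradient descent rather than in closed form; the asymmetry between pixel space and latent space is what makes the protection invisible to the eye yet disruptive to the model.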

For now, PhotoGuard works reliably only on Stable Diffusion, so users’ old images may still be available for misuse.

“In theory, people could apply this protective shield to their images before they upload them online, says Aleksander Madry, a professor at MIT who contributed to the research. But a more effective approach would be for tech companies to add it to images that people upload into their platforms automatically.”

Citation: Hadi Salman et al. Raising the Cost of Malicious AI-Powered Image Editing. arXiv:2302.06588

OpenAI’s “insane secret image generator … outperforms previous image generators”

“OpenAI has been privately testing a new iteration of its AI imaging model … and early samples leaked by YouTuber MattVidPro AI show it outperforming previous image generators,” Jose Antonio Lanz reveals in Decrypt today.

“Likely an upgrade of DALL-E 2 … the model showed sharp images with lighting and reflections that mimic real photos … and recreated detailed paintings down to visible brush strokes.

“Extremely exciting—this blows anything we’ve seen before out of the water, it’s insane,” Matt said in a preview he posted to YouTube. “As models continue improving, the line between artificial and real blurs even further.” 

Image created by Decrypt using AI (credit: Decrypt)

How to create programmable bioelectronic nanowires modeled on human-based proteins

What if you could design proteins, like those in your own cells, to function like conductive, biodegradable electronic wires that are compatible with electronic components, like transistors?

A University of Bristol-led study, published today in Proceedings of the National Academy of Sciences (PNAS), explains how that could work. The protein-chain-based “nanowires” for conducting electrons could be compatible with conventional electronic components made from copper or iron, as well as with biological machinery.

Broad range of applications

Ultimately, these nanoscale designer wires could serve a wide range of applications: as biosensors for diagnosing disease and detecting environmental pollutants, and as catalysts, such as artificial photosynthetic proteins that capture solar energy for green industrial biotechnology.

“While our designs take inspiration from the protein-based electronic circuits necessary for all life on Earth, they are free from much of the complexity and instability that can prevent the exploitation of their natural equivalents,” explained lead author Ross Anderson, Professor of Biological Chemistry at the University of Bristol. “We can also build to order these minute electronic components, specifying their properties in a way that is not possible with natural proteins.”

The multidisciplinary team used advanced computational tools to design simple building blocks that could be combined into longer, wire-like protein chains for conducting electrons.

The Circuits of Life

The researchers were also able to visualize the structures of these wires using protein X-ray crystallography and electron cryo-microscopy (cryo-EM) techniques, which allow structures to be viewed in the finest detail. Pushing the technical boundaries of cryo-EM, they obtained images of the smallest individual proteins ever studied with this technique.

These minuscule wires, which are one thousandth of the width of the finest human hair, were made completely of natural amino acids and heme molecules (found in proteins such as hemoglobin, which transports oxygen in red blood cells). Harmless bacteria were used for the manufacture, eliminating the need for potentially complex and environmentally damaging procedures commonly used in the production of synthetic molecules.

New electronic circuits

The team combined electron-transfer measurements, biomolecular simulation, structural biology, and spectroscopy to gain insight into how electrons flow through natural biological molecules—a fundamental process that underpins cellular respiration and photosynthesis.
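
As a generic illustration of why such engineered pathways matter (this is textbook random-walk reasoning, not the paper's model): an electron hopping diffusively between neighboring redox sites needs, on average, a number of hops that grows with the square of the chain length, which is why long-range biological electron transfer relies on chains of closely spaced sites like the heme wires described here.

```python
# Generic illustration (not the paper's model): an electron hopping at
# random between neighboring sites of a chain -- reflecting at site 0,
# absorbed at site N -- takes on average N**2 hops to cross. The exact
# result follows from the first-step recurrences E[0] = 1 + E[1] and
# E[i] = 1 + (E[i-1] + E[i+1]) / 2, which give E[i] - E[i+1] = 2*i + 1.

def expected_hops(n_sites):
    """Exact expected number of hops from site 0 to absorbing site n_sites."""
    return sum(2 * i + 1 for i in range(n_sites))  # telescopes to n_sites**2

for n in (5, 10, 20):
    print(n, expected_hops(n))  # 5 25 / 10 100 / 20 400
```

Doubling the chain length quadruples the expected transit time, so a designed wire is not just a convenience but a requirement for efficient transport over many sites.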

This invention could form the foundation of new electrical circuits for creating tailor-made catalysts for green industrial biotechnology and artificial photosynthetic proteins for capturing solar energy. Further advances are expected as the project, which began last year, progresses, presenting “significant opportunities to help meet the transition to net zero and more sustainable industrial processes,” the researchers say.

This breakthrough was possible thanks to a £4.9 million grant from the Biotechnology and Biological Sciences Research Council (BBSRC), the UK’s largest bioscience funder. It resulted in a five-year project entitled “The Circuits of Life,” involving the Universities of Bristol, Portsmouth, East Anglia, and University College London.

Citation: Hutchins, G. H., Noble, C. E., Bunzel, H. A., Williams, C., Dubiel, P., Yadav, S. K., Molinaro, P. M., Barringer, R., Blackburn, H., Hardy, B. J., Parnell, A. E., Landau, C., Race, P. R., Oliver, T. A., Koder, R. L., Crump, M. P., Schaffitzel, C., Oliveira, A. S., Mulholland, A. J., . . . Anderson, J. L. (2023). An expandable, modular de novo protein platform for precision redox engineering. Proceedings of the National Academy of Sciences, 120(31), e2306046120. https://doi.org/10.1073/pnas.2306046120 (open-access)

AI tool tracks detailed evolution of viral pandemics and could have predicted COVID-19 variants before WHO

In a paper in Patterns on July 21, 2023, Scripps Research scientists demonstrated an AI system for tracking future viral pandemics by using data on recorded SARS-CoV-2 (the virus that causes COVID-19) variants and COVID-19 mortality rates.

The scientists showed that the system could have predicted the emergence of new SARS-CoV-2 “variants of concern” (VOCs) months ahead of the official designations by the World Health Organization (WHO).

COVID early warning “anomaly detector” of “variant dark matter”

Co-first author Salvatore Loguercio, PhD, a staff scientist in the Scripps Research Balch lab at the time of the study, and his team showed that they could use this SARS-CoV-2 tracking system as an early warning “anomaly detector” for gene variants associated with significant changes in viral spread and mortality rates.
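
The paper's detector is built on Gaussian-process modeling; as a much cruder sketch of the same general idea, a rolling z-score flagger can mark the moment a mortality-like series departs from its recent baseline. All numbers below are synthetic, not real COVID-19 data.

```python
# Crude sketch of an early-warning "anomaly detector" (the paper itself
# uses Gaussian-process modeling; this trailing z-score version only
# illustrates the general idea). The series below is synthetic.

def flag_anomalies(series, window=4, threshold=3.0):
    """Flag indices where the value deviates from the trailing-window mean
    by more than `threshold` trailing standard deviations."""
    flags = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mean = sum(past) / window
        var = sum((x - mean) ** 2 for x in past) / window
        std = var ** 0.5 or 1e-9          # guard against zero variance
        if abs(series[i] - mean) / std > threshold:
            flags.append(i)
    return flags

# Synthetic weekly mortality-like series with a sudden jump at index 8,
# standing in for the arrival of a more dangerous variant.
weekly = [10, 11, 9, 10, 10, 11, 10, 10, 30, 29, 31]
print(flag_anomalies(weekly))  # [8] -- the jump week
```

A real system must also handle reporting noise, lag, and tens of thousands of variant lineages at once, which is where the probabilistic machinery of the published method earns its keep.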

“One of the big lessons of this work is that it is important to take into account not just a few prominent variants, but also the tens of thousands of other undesignated variants, which we call the ‘variant dark matter,’” says study senior author William Balch, PhD, professor in the Department of Molecular Medicine at Scripps Research.

Developing treatments and vaccines using AI

The researchers also envision using their approach to better understand virus biology and thereby enhance the development of treatments and vaccines. Currently, they are using their AI system to uncover key details of how different SARS-CoV-2 proteins worked together in the evolution of the pandemic.

Citation: Ben C. Calverley et al. Understanding the host-pathogen evolutionary balance through Gaussian process modeling of SARS-CoV-2. Patterns. July 21, 2023. DOI: https://doi.org/10.1016/j.patter.2023.100800 (open-access)

Growing human brain cells onto silicon chips to transform machine learning

A team led by the Monash University Turner Institute for Brain and Mental Health has been awarded almost $600,000 (AUD) by the Australian National Intelligence and Security Discovery Research Grants Program for research in growing human brain cells onto silicon chips. The research goal is to “develop new continual learning capabilities to transform machine learning,” according to an announcement.

DishBrain system

The new research program—led by Associate Professor Adeel Razi from the Turner Institute for Brain and Mental Health in collaboration with Melbourne start-up Cortical Labs—involves growing about 800,000 living brain cells in a dish (the “DishBrain system”).

These cells will be “taught” to perform goal-directed tasks and help researchers understand the various biological mechanisms that underlie lifelong continual learning.

Merging AI and synthetic biology

According to Razi, the research program’s work (using lab-grown brain cells embedded onto silicon chips) “merges the fields of artificial intelligence and synthetic biology to create programmable biological computing platforms.”

However, “this new technology capability in future may eventually surpass the performance of existing, purely silicon-based hardware …” with “significant implications across multiple fields such as planning, robotics, advanced automation, brain-machine interfaces, and drug discovery, giving Australia a significant strategic advantage.”

This “continual lifelong learning” means machines can acquire new skills without compromising old ones, adapt to changes, and apply previously learned knowledge to new tasks—all while conserving limited resources such as computing power, memory and energy. Current AI cannot do this and suffers from “catastrophic forgetting,” Razi said.
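
Catastrophic forgetting is easy to reproduce in miniature (a toy perceptron, nothing to do with the DishBrain system itself): train on one task, then on a second, and the shared weights that encoded the first task are simply overwritten.

```python
# Minimal illustration of "catastrophic forgetting": a toy perceptron
# trained on task A, then on task B, loses its task-A skill because the
# same two weights are rewritten. Continual learning means avoiding this.

def predict(weights, x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] > 0 else -1

def train(weights, data, epochs=50, lr=0.1):
    for _ in range(epochs):
        for x, label in data:
            if predict(weights, x) != label:      # perceptron update rule
                weights[0] += lr * label * x[0]
                weights[1] += lr * label * x[1]
    return weights

def accuracy(weights, data):
    return sum(predict(weights, x) == label for x, label in data) / len(data)

# Two conflicting tasks on the same 2D inputs (axes excluded to avoid ties):
grid = [(i / 2, j / 2) for i in range(-4, 5) for j in range(-4, 5)
        if i != 0 and j != 0]
task_a = [(p, 1 if p[0] > 0 else -1) for p in grid]   # task A: sign of x
task_b = [(p, 1 if p[1] > 0 else -1) for p in grid]   # task B: sign of y

w = train([0.0, 0.0], task_a)
acc_a_before = accuracy(w, task_a)      # near-perfect on task A
train(w, task_b)
acc_b = accuracy(w, task_b)             # near-perfect on task B...
acc_a_after = accuracy(w, task_a)       # ...but task A collapses to ~50%
print(acc_a_before, acc_b, acc_a_after)
```

Once the weights satisfy task B perfectly, the decision boundary is dominated by the y-coordinate, so task-A predictions are right only on the half of the grid where the two labels happen to agree.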

The goal is to “develop better AI machines that replicate the learning capacity of these biological neural networks. This will help us scale up the hardware and methods capacity to the point where they become a viable replacement for in silico computing,” Razi said.

A dystopian vision?

“Just like AGI in general, a BCI [brain-computer interface] can be a boon or a harmful disaster, depending on how it is done,” said Dr. Paul Werbos in an email. Werbos is best known for his 1974 dissertation that first described the process of training artificial neural networks through backpropagation of errors; and as a former program director at the National Science Foundation.

“Bio connections could create huge leapfrogs in neural network hardware, but the brute-force approach is not likely to be anywhere near as powerful as the best current electronic/photonic advances now in process, or as bio-inspired projects better grounded in new information about how brains actually work. 

“This project reminds me a lot of Stapledon’s dystopian vision in Last and First Men. Those approaches in the BCI slides are so ill-grounded that they do pose threats to our very existence. But in most cases, it just wastes money that could have been better spent elsewhere.

“In my view, the NSF/EFRI COPN activity we funded described how to do it right, to benefit both science and technology. If Australia had learned from what we learned, after huge effort and wide-ranging planning conversations, they would be much safer, less wasteful, and far more useful.”

Robots explore large complex worlds on their own

Carnegie Mellon University’s Autonomous Exploration Research Team has developed a suite of robotic systems and planners that enable robots to explore more quickly, probe the darkest corners of unknown environments, and create more accurate and detailed maps. The systems allow robots to do all this autonomously, finding their way without human intervention.

The group combined a 3D scanning lidar sensor, forward-looking camera and inertial measurement unit sensors with an exploration algorithm to enable the robot to know where it is, where it has been and where it should go next.

“You can set it in any environment, like a department store or residential building after a disaster, and off it goes,” said Ji Zhang, a systems scientist in the Robotics Institute. “It builds the map in real-time, and while it explores, it figures out where it wants to go next. You can see everything on the map.”

The team has worked on exploration systems for more than three years, using a modified motorized wheelchair and drones for much of its testing. They’ve explored and mapped several underground mines, a parking garage, and several other indoor and outdoor locations on the CMU campus. And the system’s computers and sensors can be attached to nearly any robotic platform, transforming it into an explorer.

These robots can explore in three modes

(1) A person can control the robot’s movements and direction while autonomous systems keep it from crashing into walls, ceilings or other objects. (2) A person can select a point on a map and the robot will navigate to that point. (3) The robot sets off on its own, investigates the entire space and creates a map.
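
Mode (3) is essentially the classic frontier-based exploration loop, which can be sketched on a toy occupancy grid (illustrative only; the CMU planners are far more sophisticated, reasoning over 3D lidar maps rather than flat grids):

```python
# Sketch of fully autonomous exploration as a frontier-based loop.
# '.' cells are free, '#' cells are walls; the robot starts knowing
# nothing, senses its surroundings, and repeatedly drives to the nearest
# "frontier": a known-free cell adjacent to unknown space.

from collections import deque

WORLD = ["....#...",
         "..#.#.#.",
         "..#...#.",
         "....#..."]
ROWS, COLS = len(WORLD), len(WORLD[0])

def neighbors(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < ROWS and 0 <= c + dc < COLS:
            yield r + dr, c + dc

def explore(start):
    known = {}                               # (r, c) -> '.' or '#'
    pos = start
    while True:
        known[pos] = WORLD[pos[0]][pos[1]]   # "sense" the current cell...
        for n in neighbors(*pos):            # ...and its neighbors
            known[n] = WORLD[n[0]][n[1]]
        # BFS through known-free cells to the nearest frontier
        queue, seen = deque([(pos, [])]), {pos}
        path = None
        while queue:
            cell, trail = queue.popleft()
            if any(n not in known for n in neighbors(*cell)):
                path = trail + [cell]        # frontier found
                break
            for n in neighbors(*cell):
                if n not in seen and known.get(n) == '.':
                    seen.add(n)
                    queue.append((n, trail + [n]))
        if path is None:                     # no frontier left: map complete
            return known
        pos = path[-1]                       # drive to the frontier cell

result = explore((0, 0))
print(len(result), "cells mapped")           # all 32 cells become known
```

Each iteration senses at least one new cell, so the loop terminates once every reachable free cell, and every wall bordering one, is on the map; that is the grid-world analogue of "it figures out where it wants to go next."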

The new systems can also work in low-light, treacherous conditions where communication is spotty, like caves, tunnels and abandoned structures.

The group’s most recent work appeared in Science Robotics: “Representation Granularity Enables Time-Efficient Autonomous Exploration in Large, Complex Worlds.” More information: the group’s website.

Citation: Cao, C., Zhu, H., Ren, Z., Choset, H., & Zhang, J. (2023). Representation granularity enables time-efficient autonomous exploration in large, complex worlds. Science Robotics. https://doi.org/10.1126/scirobotics.adf0970 (open-access)

Metal cracks, then heals itself

Scientists for the first time have witnessed pieces of metal crack, then fuse back together without any human intervention, overturning fundamental scientific theories in the process.

If the newly discovered phenomenon can be harnessed, it could usher in an engineering revolution—one in which self-healing engines, bridges and airplanes could reverse damage caused by wear and tear, making them safer and longer-lasting, according to the research team from Sandia National Laboratories and Texas A&M University.

“This was absolutely stunning to watch first-hand,” said Sandia materials scientist Brad Boyce. “What we have confirmed is that metals have their own intrinsic, natural ability to heal themselves, at least in the case of fatigue damage at the nanoscale,” he said.

Fatigue damage is one way machines wear out and eventually break. Repeated stress or motion causes microscopic cracks to form. Over time, these cracks grow and spread until—snap! The whole device breaks, or in scientific lingo, fails. The fissure Boyce and his team saw disappear was one of these tiny but consequential fractures, measured in nanometers.
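
A standard engineering model of this growth (textbook material, not from the Sandia paper) is Paris' law, da/dN = C(ΔK)^m, where ΔK = Δσ√(πa) is the stress-intensity range for a crack of length a under cyclic stress Δσ. Integrating it numerically gives the number of load cycles before the crack reaches a critical size; the constants below are merely illustrative, roughly steel-like placeholders.

```python
# Paris-law fatigue model (textbook material, not from the Sandia paper):
# crack growth per cycle da/dN = C * dK**m, with stress-intensity range
# dK = stress_range * sqrt(pi * a). A simple Euler integration counts the
# cycles for a crack to grow from a0 to a critical length. The constants
# C and m are illustrative placeholders, units consistent with MPa*sqrt(m).

import math

def cycles_to_failure(a0, a_crit, stress_range, C=1e-11, m=3.0,
                      cycle_step=1000):
    """Count loading cycles until crack length grows from a0 to a_crit.

    a0, a_crit in meters; stress_range in MPa.
    """
    a, cycles = a0, 0
    while a < a_crit:
        dK = stress_range * math.sqrt(math.pi * a)
        a += C * dK ** m * cycle_step   # Euler step over cycle_step cycles
        cycles += cycle_step
    return cycles

# A 1 mm crack under 100 MPa cyclic stress growing to 10 mm:
print(cycles_to_failure(1e-3, 1e-2, 100.0))  # on the order of 8e5 cycles
```

Because the growth rate rises with crack length, most of the lifetime is spent while the crack is still tiny, which is exactly why nanoscale self-healing of the kind observed at Sandia would be so valuable.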

Unexpected discovery

In 2013, Michael Demkowicz—then an assistant professor at MIT’s department of materials science and engineering, now a full professor at Texas A&M—published a new theory, based on findings in computer simulations: under certain conditions metal should be able to weld shut cracks formed by wear and tear.

The discovery that his theory was true came inadvertently at the Center for Integrated Nanotechnologies, a Department of Energy user facility jointly operated by Sandia and Los Alamos national laboratories.

Sandia National Laboratories researcher Ryan Schoell uses a specialized transmission electron microscope technique to study fatigue cracks at the nanoscale. (Credit: Craig Fritz, Sandia National Laboratories)

Khalid Hattar, now an associate professor at the University of Tennessee, Knoxville, and Chris Barr, who now works for the Department of Energy’s Office of Nuclear Energy, were running the experiment at Sandia when the discovery was made. They only meant to evaluate how cracks formed and spread through a nanoscale piece of platinum using a specialized electron microscope technique they had developed to repeatedly pull on the ends of the metal 200 times per second.

Computer model confirms theory

Surprisingly, about 40 minutes into the experiment, the damage reversed course. One end of the crack fused back together as if it was retracing its steps, leaving no trace of the former injury. Over time, the crack regrew along a different direction.

“I was very glad to hear it, of course,” Demkowicz said. The professor then recreated the experiment on a computer model, substantiating that the phenomenon witnessed at Sandia was the same one he had theorized years earlier.

“The extent to which these findings are generalizable will likely become a subject of extensive research,” Boyce said. “We show this happening in nanocrystalline metals in vacuum. But we don’t know if this can also be induced in conventional metals in air.”

Their work was supported by the Department of Energy’s Office of Science, Basic Energy Sciences; the National Nuclear Security Administration and the National Science Foundation.

Citation: Barr, C. M., Duong, T., Bufford, D. C., Milne, Z., Molkeri, A., Heckman, N. M., Adams, D. P., Srivastava, A., Hattar, K., Demkowicz, M. J., & Boyce, B. L. (2023). Autonomous healing of fatigue cracks via cold welding. Nature, 1-5. https://doi.org/10.1038/s41586-023-06223-0

Will AI end civilization (or save it)?

Researchers at Lero, the Science Foundation Ireland Research Centre for Software, and University College Cork are seeking help determining what the public believes and knows about AI, and about software more generally.

Psychologist Dr. Sarah Robinson, a senior postdoctoral researcher with Lero, is asking members of the public to take part in a ten-minute anonymized online survey to establish peoples’ hopes and fears for AI, and software in general.

What does the public think?

“As the experts debate, little attention is given to what the public thinks—and the debate is raging,” says Robinson. “Some AI experts express concern that others prioritize imagined apocalyptic scenarios over immediate concerns (such as racist and sexist biases being programmed into machines). As software impacts all our lives, the public is a key stakeholder in deciding what being responsible for software should mean.

“So, that’s why we want to find out what the public is thinking. For example, human rights abuses are happening through AI and facial recognition software.

Biased data

“Research by my Lero colleague Dr. Abeba Birhane and others found that data used to train some AI is contaminated with racist and misogynist language. As AI becomes widespread, the use of biased data may lead to harm and further marginalisation for already marginalised groups.

“While there is a lot in the media about AI, especially ChatGPT, and what kind of world it is creating, there is less information about how the public perceives the software all around us, from social media to streaming services and beyond. We are interested in understanding the public’s point of view—what concerns the public have, what are their priorities in terms of making software responsible and ethical, and the thoughts and ideas they have to make this a reality,” said Robinson.

Short survey

Participants will be asked for their views and possible concerns on a range of topics, with the aim of clarifying public opinion on critical issues. Lero is asking members of the public to donate 10 minutes of their time for this short survey.
