The Tong test for AGI

Researchers in China have developed an AGI test platform for evaluating and benchmarking AGI models.

Funded by the National Key Research and Development Program of China, the Tong* test architecture consists of infrastructure, “dynamic embodied physical and social interactions” (DEPSI), and evaluation tools, the researchers report in the journal Engineering.

Reasonable, but not AGI in general

“The Tong test looks fairly reasonable,” says AGI pioneer Dr. Ben Goertzel, CEO of SingularityNET, “but it’s plainly a test of what I’d call ‘human-capable AGI,’ not AGI in general.

“This sort of test can be valuable, but I don’t think it’s really meaningful to have a single litmus test for ‘general AGI’ nor for ‘human-capable AGI.’ These are multidimensional things that can be measured in a lot of different ways. And different in-progress AGI systems are going to perform differently on different dimensions.   

Useful evaluation tools

“However, it’s useful to have some standardized environments and tests for evaluating in-progress AGI systems, as long as one interprets these as useful tools to play with, rather than as definitive measurement devices,” says Goertzel. 

“General intelligence is not something highly precisely defined, like temperature or pressure, which can be accurately measured if one just builds the right instrument (and I say this being very aware of how hard it is to build an accurate calorimeter…!).”

* “Tong” corresponds to the pronunciation of the Chinese character for “general,” as in “artificial general intelligence”

Citation: Peng, Y., Han, J., Zhang, Z., Fan, L., Liu, T., Qi, S., Feng, X., Ma, Y., Wang, Y., & Zhu, S. (2023). The Tong Test: Evaluating Artificial General Intelligence Through Dynamic Embodied Physical and Social Interactions. Engineering. https://www.sciencedirect.com/science/article/pii/S209580992300293X (open access)

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Finger-worn device can ‘translate’ text messages to braille on-the-fly

Engineers and chemists at Lawrence Livermore National Laboratory (LLNL) and Meta have developed a new kind of 3D-printed material that can replicate some characteristics of biological tissue.

The researchers say the advancement could impact the future of “augmented humanity,” where technology enhances human performance by closely interfacing engineered devices with our anatomy. 

Emulating the range of bio materials

As described in a paper in the journal Matter, the method patterns smooth gradients in stiffness to approximate gradients found in biology, such as where bone meets muscle. This overcomes “mechanical mismatch” (natural tissues are soft, while electronic devices are usually made of rigid materials).

“For engineers, it’s very hard to get a softer material combined with a stiffer material such as is common in nature,” explained lead author and LLNL engineer Sijia Huang. “Engineers make a part that is stiff and another part that is soft, and then manually assemble them together, so we have a very sharp interface that compromises the mechanical property.”

The new technique works by manipulating the intensity of light applied to a photopolymer resin, using a layer-by-layer technique that can rapidly produce parts by projecting light into a liquid resin. Lower light intensity results in a softer material; higher intensity, a stiffer one.
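The intensity-to-stiffness relationship can be pictured with a toy calculation (the linear mapping, function name, and modulus values below are our illustrative assumptions, not numbers from the paper):

```python
def stiffness_from_intensity(intensity, e_soft=0.1, e_stiff=10.0):
    """Toy monotone map from normalized light dose (0..1) to elastic modulus
    (arbitrary units): more light cures the resin further, so it gets stiffer."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be normalized to [0, 1]")
    return e_soft + (e_stiff - e_soft) * intensity

# Sweeping intensity across ten positions yields a smooth stiffness
# gradient -- soft at one end, stiff at the other, with no sharp interface.
gradient = [round(stiffness_from_intensity(i / 9), 2) for i in range(10)]
print(gradient)
```

The point of the sketch is the absence of a step change: every adjacent pair of positions differs only slightly, which is what eliminates the sharp soft/stiff interface Huang describes.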

Wearable braille display

To demonstrate the potential, engineers at Meta used the material to 3D-print an inexpensive braille display that could be worn on a single finger and connected to a smartphone and an air pump.

When text is transmitted via the phone, sections of the wearable fill with air, causing it to deform and create braille letters, enabling a sightless person to “read” the text through the device.

The material is stretchable to around 200 times its original length, and as its gradient transitions from soft to stiffer material, its toughness increases by 10 times. Huang said the material could be tailored for energy-absorbing materials, soft robotics and wearable electronic devices.

Citation: Sijia Huang et al. (June 20, 2023) One-pot ternary sequential reactions for photopatterned gradient multimaterials. Matter. DOI: https://doi.org/10.1016/j.matt.2023.05.040 (open-access)


Does a brain in a dish have moral rights?

Inventors of DishBrain—a collection of 800,000 living brain cells in a dish that learned to play Pong—are exploring ethical applications of biocomputing with an international team of ethicists and medical researchers.

In a paper published in Biotechnology Advances, the research team maps such a framework.

“Combining biological neural systems with silicon substrates to produce intelligence-like behavior has significant promise, but we need to proceed with the bigger picture in mind to ensure sustainable progress,” says lead author Dr. Brett Kagan, Chief Scientific Officer of biotech start-up Cortical Labs.

“We haven’t adequately addressed the moral issues of what is even considered ‘conscious’ in the context of today’s technology,” says Dr. Julian Savulescu, author and Uehiro Chair in Practical Ethics at the University of Oxford. “There are still many ways of describing consciousness or intelligence, each raising different implications for how we think about biologically based intelligent systems.”

But can they suffer?

The paper cites English philosopher Jeremy Bentham, who argued that “the question is not, ‘can they reason?’ or ‘can they talk?’ but, ‘can they suffer?’”

“From that perspective, even if new biologically based computers show human-like intelligence, it does not necessarily follow that they have moral status,” says co-author Dr. Tamra Lysaght, Director of Research at the Centre for Biomedical Ethics, National University of Singapore.

“Our paper doesn’t attempt to definitively answer the full suite of moral questions posed by bio-computers, but it provides a starting framework to ensure that the technology can continue to be researched and applied responsibly,” says Lysaght.

Understanding diseases

The paper further highlights the ethical challenges and opportunities offered by DishBrain’s potential to greatly accelerate our understanding of diseases such as epilepsy and dementia.

“Current cell lines used in medical research predominately have European-type genetic ancestry, potentially making it harder to identify genetic-linked side effects,” says co-author Dr. Christopher Gyngell, Research Fellow in biomedical ethics from the Murdoch Children’s Research Institute and The University of Melbourne.

“In future models of drug screening, we have the chance to make them more representative of real-world patients by using more diverse cell lines, and that means potentially faster and better drug development.”

The researchers point out that it is worth working through these moral issues, as the potential impact of bio-computing is significant.

Biocomputing’s high energy efficiency

“Silicon-based computing is massively energy-hungry, with a supercomputer consuming millions of watts of energy. By contrast, the human brain uses as little as 20 watts of energy—biological intelligences will show similar energy efficiency,” says Kagan.

“As it stands, the IT industry is a massive contributor to carbon emissions. If even a relatively small number of processing tasks could be done with biocomputers, there is a compelling environmental reason to explore these alternatives.”

Citation: Kagan, B. J., Gyngell, C., Lysaght, T., Cole, V. M., Sawai, T., & Savulescu, J. (2023). The technology, opportunities, and challenges of Synthetic Biological Intelligence. Biotechnology Advances, 68, 108233. https://doi.org/10.1016/j.biotechadv.2023.108233.


Possible new chemical combinations for the origin of life on distant planets

A team led by scientists at the University of Wisconsin-Madison has written a “cookbook” with hundreds of recipes that have the potential to give rise to life.

“The origin of life really is a something-from-nothing process,” says Betül Kaçar, a NASA-supported astrobiologist and UW–Madison professor of bacteriology. “But that something can’t happen just once. Life comes down to chemistry and conditions that can generate a self-reproducing pattern of reactions.”

Autocatalytic reactions

Chemical reactions that produce molecules that encourage the same reaction to happen again and again are called “autocatalytic reactions.” In a new study published in the Journal of the American Chemical Society, Zhen Peng, a postdoctoral researcher in the Kaçar laboratory, and collaborators compiled 270 combinations of molecules. These involve atoms from all groups and series across the periodic table that have the potential for sustained autocatalysis.

Comproportionation reactions

But to be autocatalytic, the outcome of the reaction also needs to provide starting materials for the reaction to occur again, allowing the output to become a new input.

The researchers focused their search on “comproportionation reactions,” where two compounds that include the same element (but with different numbers of electrons, or reactive states) combine to create a new compound in which the element is in the middle of the starting reactive states.

Comproportionation reactions result in multiple copies of some of the molecules involved, providing materials for the next steps in autocatalysis.

“We will never definitively know what exactly happened on this planet to generate life,” Kaçar says. “But, in a test tube, we can create multiple planetary conditions to understand how the dynamics to sustain life can evolve in the first place.”

Kaçar also leads a NASA-supported consortium called MUSE (Metal Utilization & Selection Across Eons).

Citation: Zhen Peng, Zachary R. Adam, Albert C. Fahrenbach, and Betül Kaçar. 18-Sep-2023. Assessment of Stoichiometric Autocatalysis across Element Groups. Journal of the American Chemical Society. DOI: 10.1021/jacs.3c07041 (open access)


Making AI smarter: a multisensory, integrated artificial neuron

Sensors in robots and other devices typically run separately. However, “allowing sensors to communicate directly with each other can be more efficient in terms of energy and speed,” emulating the human brain, says Saptarshi Das, associate professor of engineering science and mechanics at Penn State.

Penn State researchers applied this synergistic approach in developing the first multisensory integrated artificial neuron, which they announced today (Sept. 15) in the open-access journal Nature Communications.

Creating a tactile-light neuron

The team focused on integrating a tactile sensor and a visual sensor. This combination allows the output of one sensor to modify the other, with the help of visual memory. (For example, a flash of light could enhance the chance of successful movement through a dark room.)

The researchers fabricated the multisensory neuron by connecting a tactile sensor to a phototransistor, based on a monolayer of molybdenum disulfide, a compound that exhibits unique electrical and optical characteristics, useful for detecting light and supporting transistors.

The sensor generates electrical spikes in a manner reminiscent of neurons processing information, allowing it to integrate both visual and tactile cues.

Visual and tactile stimuli are integrated in this bio-inspired artificial neuron (credit: Tyler Henderson/Penn State)

Simulations

To simulate touch, the tactile sensor used the triboelectric effect, in which two layers slide against one another to produce electricity (meaning the touch stimuli were encoded into electrical impulses). To simulate visual input, the researchers shined a light into a monolayer molybdenum disulfide photo memtransistor (which can remember visual input), similar to how a person can hold onto the general layout of a room after a quick flash illuminates it.

They found that the sensory response of the neuron—simulated as electrical output—increased when both visual and tactile signals were weak.
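This “stronger together when each cue is weak” behavior resembles the inverse-effectiveness principle in biological multisensory integration, and a toy saturating-response model reproduces it (our illustrative sketch, not the device’s actual physics or the authors’ model):

```python
def response(stimulus, k=1.0):
    """Toy sigmoid response: expansive for weak inputs, saturating for strong."""
    return stimulus**2 / (stimulus**2 + k**2)

def enhancement(visual, tactile):
    """Combined-cue response relative to the sum of the two unisensory responses."""
    return response(visual + tactile) / (response(visual) + response(tactile))

print(enhancement(0.2, 0.2))  # > 1: super-additive when both cues are weak
print(enhancement(2.0, 2.0))  # < 1: sub-additive when both cues are strong
```

The expansive low end of the sigmoid makes two weak cues worth more together than apart, while saturation caps the benefit for strong cues.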

Das explained that an artificial multisensory neuron system could enhance sensor technology’s efficiency, paving the way for more eco-friendly AI uses. As a result, robots, drones and self-driving vehicles could navigate their environment more effectively while using less energy.

Combining sensors to mimic how our brains actually work

“Biology enables small organisms to thrive in environments with limited resources, minimizing energy consumption in the process,” said Das, who is also affiliated with the Materials Research Institute.

“The requirements for different sensors are based on the context. In a dark forest, you’d rely more on listening than seeing, but we don’t make decisions based on just one sense,” he noted. “We have a complete sense of our surroundings, and our decision-making is based on the integration of what we’re seeing, hearing, touching, smelling, etc.

“The senses evolved together in biology, but separately in AI. In this work, we’re looking to combine sensors and mimic how our brains actually work.”

The Army Research Office and the National Science Foundation supported this work.

Citation: Sadaf, M. U., Sakib, N. U., Pannone, A., Ravichandran, H., & Das, S. (2023). A bio-inspired visuotactile neuron for multisensory integration. Nature Communications, 14(1), 1-12. https://doi.org/10.1038/s41467-023-40686-z (open-access)



Your brain’s amygdala may release cannabinoid molecules during stress, researchers find

When you’re under stress, your brain’s amygdala (a key emotional brain center) may release cannabinoid molecules*, activating the same brain receptors that THC** does, Northwestern Medicine researchers have discovered.

These endogenous (body’s own) feel-good cannabinoid molecules dampen incoming stress alarms from the hippocampus, a memory and emotion center in the brain, the researchers report in a study with mice, published Sept. 12 in the journal Cell Reports. 

Overcoming stress disorders: new drug development

Stress heightens risk for many psychiatric disorders. This finding opens a new avenue for developing drugs to treat psychiatric conditions such as generalized anxiety, major depression and post-traumatic stress disorder (PTSD), the researchers suggest.

“Understanding how the brain adapts to stress at the molecular, cellular and circuit level could provide critical insight into how stress is translated into mood disorders and may reveal novel therapeutic targets for the treatment of stress-related disorders,” said study author Dr. Sachi Patel, chair of psychiatry and behavioral sciences at Northwestern University Feinberg School of Medicine and a Northwestern Medicine psychiatrist.

“The endocannabinoid system is one of the leading signaling systems that has been identified as a prominent drug-development candidate for stress-related psychiatric disorders,” Patel said.

Removing cannabinoid receptor decreases pleasure and ability to cope

When these scientists removed the target of these cannabinoids, the cannabinoid receptor type 1, it resulted in poorer ability to cope with stress and motivational deficits in the mice. Specifically, when the receptor target of these endogenous cannabinoids was removed at hippocampal-amygdala synapses, mice adopted more passive and immobile responses to stress and had a lower preference to drink a sweetened sucrose water after stress exposure.

This finding may relate to anhedonia (decrease in pleasure), often experienced by patients with stress-related disorders such as depression and PTSD.

“Determining whether increasing the levels of endogenous cannabinoids can be used as potential therapeutics for stress-related disorders is a next logical step from this study and our previous work,” said Patel. “Also, there are ongoing clinical trials in this area that may be able to answer this question in the near future.” 

* Cannabinoids are a class of biological compounds that bind to cannabinoid receptors. They are most frequently sourced from and associated with the plants of the Cannabis genus, including Cannabis sativa, Cannabis indica, and Cannabis ruderalis.

** THC (tetrahydrocannabinol) is the substance that’s primarily responsible for the effects of marijuana on a person’s mental state.

Citation: Kondev, V., Najeed, M., Yasmin, F., Morgan, A., Loomba, N., Johnson, K., … & Patel, S. Sept.12, 2023. Endocannabinoid Release at Ventral Hippocampal-Amygdala Synapses Regulates Stress-Induced Behavioral Adaptation. Cell Reports. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=433


‘Lightning’ system combines light and electrons to unlock faster, greener computing for machine-learning models

The growing demand for high-performance computers that can support increasingly complex (and expensive) AI models has led engineers to explore new methods for expanding the computational capabilities of their machines.

Now Manya Ghobadi, an associate professor at MIT’s Department of Electrical Engineering and Computer Science (EECS) and a CSAIL member, and her colleagues have developed a solution: a system (dubbed “Lightning”) that connects photons (produced by lasers) to the electronic components (transistors and wires) of computers—creating hybrid photonic-electronic, reconfigurable network interface cards (“SmartNICs”).

First photonic-computing prototype to serve real-time, machine-learning inference requests

The new system allows deep neural networks (machine-learning models that imitate how brains process information) to complete inference tasks, like image recognition and the text generation behind chatbots such as ChatGPT.

Machine-learning services completing inference-based tasks, like ChatGPT and BERT, currently require heavy computing resources and are expensive—some estimates show that ChatGPT requires $3 million or more per month to run.

Such services are also environmentally detrimental. Lightning instead uses photons, which move faster than electrons do in wires while generating less heat.

The team is currently (Sept. 10–14) presenting their findings at the Association for Computing Machinery’s Special Interest Group on Data Communication (SIGCOMM).


How ecology and AI could collaborate

A paper published September 11 in the open-access journal Proceedings of the National Academy of Sciences argues for a synergy between AI and ecology—one that could strengthen AI and also help to solve complex global challenges, such as disease outbreaks, loss of biodiversity, and climate-change impacts.

Dealing with complex systems

The paper argues that there are many more possibilities for applying AI in ecology, such as in “synthesizing big data and finding missing links in complex systems,” said co-author Shannon LaDeau, a disease ecologist at Cary Institute.

Ecologists are also using AI to search for patterns in large data sets and to make more accurate predictions, such as whether new viruses might be capable of infecting humans and which animals are most likely to harbor those viruses.

Smarter AI

Inspired by ecological systems, a more robust AI might include feedback loops, redundant pathways, and decision-making frameworks, the researchers suggest. These flexibility upgrades could also contribute to a more “general intelligence” for AIs that could enable reasoning and connection-making beyond the specific data that the algorithm was trained on.

Ecology could also help the AI-driven large language models that power chatbots block “hallucinations” (when an AI generates false information). Because ecology examines complex systems at multiple levels and in holistic ways, it’s good at capturing emergent properties and can help reveal the mechanisms behind such behaviors, the researchers suggest.

The research was funded by the National Science Foundation.

Citation: Barbara A. Han et al. September 11, 2023. A synergistic future for AI and ecology. Proceedings of the National Academy of Sciences. https://doi.org/10.1073/pnas.2220283120 (open access)


Was this written by a human or AI? You probably can’t tell.

Even linguistics experts are largely unable to spot the difference between writing created by an AI or human, say University of South Florida researchers.

Research just published in the journal Research Methods in Applied Linguistics revealed that experts from the world’s top linguistics journals could correctly identify AI- versus human-generated abstracts only 39 percent of the time.

“We thought if anybody is going to be able to identify human-produced writing, it should be people in linguistics who’ve spent their careers studying patterns in language and other aspects of human communication,” said Matthew Kessler, a scholar in USF’s Department of World Languages, working with Elliott Casal, assistant professor of applied linguistics at The University of Memphis.

Linguists mostly clueless

So Kessler and associates tasked 72 experts in linguistics with reviewing a variety of research abstracts to determine whether they were written by AI or humans. Each expert was asked to examine four writing samples. None correctly identified all four; 13 percent got them all wrong.
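For context, pure guessing gives a chance baseline that is easy to compute (our illustration, not an analysis from the paper): if each of the four judgments were a 50/50 coin flip, all-correct and all-wrong would each occur about 6 percent of the time.

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Chance baseline for four coin-flip judgments:
print(binom_pmf(4, 4))  # all four correct: 0.0625
print(binom_pmf(0, 4))  # all four wrong:   0.0625
```

Against that baseline, the observed 0 percent all-correct and 13 percent all-wrong suggest the experts performed no better than (and on some samples worse than) chance.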

Based on this, Kessler and Casal concluded ChatGPT can write short genres just as well as most humans—if not better in some cases: AI typically does not make grammatical errors.

Longer texts: probably generated by humans

“But for longer texts, AI has been known to hallucinate and make up content, making it easier to identify that it was generated by AI,” Kessler said.

Kessler hopes this study will lead to a bigger conversation to establish the necessary ethics and guidelines surrounding the use of AI in research and education.

Citation: Casal, J. E., & Kessler, M. (2023). Can linguists distinguish between ChatGPT/AI and human writing?: A study of research ethics and academic publishing. Research Methods in Applied Linguistics, 2(3), 100068. https://doi.org/10.1016/j.rmal.2023.100068 (open-access)


Global cooperation urgently needed to govern risks of advanced AI, warns report

A new report by The Millennium Project warns that advanced artificial intelligence systems could emerge sooner than expected, posing unprecedented risks unless prudent governance frameworks are rapidly put in place.

The report, focused on international governance issues of the transition from artificial narrow intelligence to artificial general intelligence (AGI), is a distillation of interviews and collected insights from 55 global AI experts on 22 questions.

The experts were asked how to regulate AGI—AI that can handle novel situations as well as, or better than, humans. AGI could trigger an “intelligence explosion,” creating AI that surpasses human abilities, the report states.

New kind of flexible governance

“Lack of governance could lead to catastrophic outcomes, including existential threats to humanity if such systems are misaligned with human values and interests,” the report says. And “no existing governance models are adequately prepared to manage the risks and opportunities posed by artificial general intelligence (AGI).”

The report calls for rapid development of a new kind of flexible governance that can match and anticipate the pace of AI change and can provide the necessary safeguards while not stifling the promises and advance of AI.

“AGI is closer than any time before—the next advance could surpass human intelligence,” says Dr. Ilya Sutskever, co-founder of OpenAI, in the report.

“This might even happen within just a few years,” Dr. Ben Goertzel, author of AGI Revolution added. “It is more about who controls the development and use of AGI than a list of ethics.

“Alignment with human values is critical but challenging. I think we can create AGI by assembling components that already exist,” Goertzel suggested. “For example, connecting a large language model, a symbolic reasoning system, and an evolutionary learning system with the right combination architecture (such as our OpenCog Hyperon system running on SingularityNET).

“In this way, I believe, we can get an AGI with roughly human-level capability. This might even happen just a few years from now. If so, it would be capable of rewriting and improving itself—which may then lead to super-human-level AGI.”

Racing for AI supremacy

Because the benefits of AGI are so great in medicine, education, management, and productivity, corporations are racing to be first. 

AGI will increase political power, so governments are also racing to be first. 

International cooperation is essential but threatened by competitive tensions among nations and corporations.

The shared risks may compel collaboration, but overcoming distrust poses enormous challenges:

Extraordinary enforcement powers may be needed for governance to be trusted and effective globally, potentially including military capabilities.

Although controversial, proposals to limit research and development may be needed to allow time to design and implement management solutions.

The window for developing effective governance is short, demanding unprecedented collaboration.

Urgent action

The Millennium Project is calling for urgent action to create AGI governance and alignment at national and international levels before advanced AI exceeds humanity’s ability to control it safely. With stakes potentially including human extinction, the report warns we can ill afford delay in mobilizing global cooperation.

“If we don’t get a UN Convention on AGI and a UN AGI Agency to enforce rules, guardrails, auditing, and verification right, various forms of artificial superintelligence could emerge beyond our control and not to our liking,” says Jerome Glenn, CEO of The Millennium Project. “Management of AGI could be the most difficult problem humanity has ever faced.”

This work was supported by the Dubai Future Foundation and the Future of Life Institute. The Millennium Project is an international participatory think tank with 70 global nodes and three regional networks. Established in 1996, it has published more than 60 futures research projects based on international judgments.

Full disclosure: I participated with the Millennium Project in this study—Amara Angelica
