The Scary Future According to AGI Safety Researcher Eliezer Yudkowsky

David Wood, Chair of London Futurists:

There’s been a significant development in the global conversation about finding a pathway to safe AGI.

“In this video, released last night, pioneering AGI safety researcher Eliezer Yudkowsky explains why he has taken a sabbatical from MIRI (which he co-founded) and now expects to ‘go down fighting with dignity’ in his quest to prevent a misconfigured AGI from killing everyone.”

I’ll warn you in advance that it’s 90 minutes of existential despair. It’s not for the faint-hearted.

[Eliezer starts speaking at ~15 minutes]

Dr. Ben Goertzel responds:

My view on this is summarized in this tweet I just wrote:

“We should build a neural-symbolic-evolutionary AGI with rich self-reflective and compassionate capability, educate it well, work with it on benefit projects, put it under decentralized control, and have some of us fuse with it. Working on it. No guarantees but odds seem good!”

For a longer quote, I would say: “Eliezer is a very serious thinker on these matters and was the core source of most of the ideas in Nick Bostrom’s influential book Superintelligence. But ever since I met him, and first debated these issues with him, back in 2000, I have felt he had a somewhat narrow view of humanity and the universe in general.

There are currents of love and wisdom in our world that he is not considering and seems to be mostly unaware of, and that we can tap into by creating self-reflective, compassionate AGIs and doing good, loving works together with them. There are no guarantees to be found regarding the future of AGI, but taking big risks is how humanity diverged from our apelike ancestors; it’s what we do. Despondency is unwarranted and unproductive. We need to focus on optimistically maximizing the odds of a wildly beneficial Singularity together.

A few things I agree with Eliezer on: AGI may very likely be just a few years off. This fact is not taken nearly seriously enough by society today. And yes, the odds of existential disaster are not zero (but I don’t agree with him that they are high).”

_______________________________________________________________________________

Wood continues:

Please do not think Eliezer is merely repeating centuries-old (or decades-old) worries.
Famously, he also started out as an optimist about the prospects for human flourishing in partnership with AI. But he gradually realised the scale of the difficulties we are facing.

To be absolutely clear, the risks from AGI have NOTHING TO DO WITH POSSIBLE AI SENTIENCE.

If you’re not familiar with the arguments about that, kindly do yourself a HUGE favor and take the time to bring yourself up to speed.

Moreover, sometimes FEAR IS APPROPRIATE. Without an element of fear, minds would have failed to concentrate sufficiently regarding the threats posed by Covid-19, and the impact of that pathogen upon human civilization would have been even worse. Yes, fear sometimes plays into the hands of people who would manipulate us. But, equally, complacency and self-assurance and glib pseudo-philosophical platitudes can also play into the hands of people who would manipulate us.

This is not the time for naive optimism, nor for panicked pessimism. It is the time for clear-eyed realism.

If you don’t have the patience to listen to Eliezer’s explanations (and I concede that his personal style of communication is sometimes awkward), you can find a simpler, more straightforward explanation of the risks in a number of chapters of my book, The Singularity Principles.

— David W. Wood, Chair of London Futurists, Author of The Singularity Principles


Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

How Artificial Intelligence can help conserve biodiversity

The International Telecommunication Union (ITU) hosted a webinar to discuss the potential of artificial intelligence (AI) in supporting the detection and attribution of biodiversity change, as well as its role in implementing a biodiversity conservation framework. The AI for Good Discovery Series on AI for Biodiversity, co-organized with the Convention on Biological Diversity (CBD), seeks to bring together the AI and biodiversity communities in order to protect the natural world.

CBD Acting Executive Secretary David Cooper emphasized the importance of AI in identifying species, identifying drivers of biodiversity loss, and mapping suitable areas for ecosystem restoration. He stressed, however, that human intelligence remains essential to ensure that AI does not disregard local knowledge or violate the rights of stakeholders and rights holders, including Indigenous Peoples and local communities.

McGill University’s Andrew Gonzalez proposed a framework for the detection and attribution of biodiversity change, as well as the potential of AI in tracking progress and linking action on drivers to biodiversity outcomes. He emphasized the importance of theory-guided data science, which would combine big and small data to guide AI applications.

Bottlenecks in advancing AI in the detection and attribution system were discussed, including AI literacy, data availability, and the disconnect between the human, the analytical tool, and the inferences made. Participants endorsed public-private partnerships to leverage tools for biodiversity change, peer-to-peer data sharing, and combining causal inference frameworks in machine learning with traditional and theoretical knowledge.

AI has the potential to play a significant role in biodiversity conservation; however, it must integrate human intelligence, combine big and small data, and address existing bottlenecks in AI advancement.

Source.

Interesting story? Please click on the 👍 button below!


Experts urge the UK to invest in a ‘BritGPT’ to remain competitive in the AI race

According to experts who testified before the UK’s science and technology committee, developing a BritGPT is critical for the UK to remain competitive and secure in the field of artificial intelligence. AI is a rapidly evolving field critical to a wide range of applications, including healthcare and cybersecurity. Without investing in large language models, the UK will lag behind AI powerhouses such as China and the United States. While computing power is expensive, a national investment in AI would pay dividends.

Creating a BritGPT will not only ensure that the UK remains a major player in AI research, but it will also create jobs and drive the economy. The United Kingdom has some of the best AI researchers, but they need a methodical approach to ensure they have access to the necessary computing power. The development of a BritGPT will require significant investment, but the UK must act quickly to avoid falling behind in the global race for AI.

Source.


AI and Human Creativity: Can They Work Together?

By accelerating the pace of research and development, artificial intelligence (AI) is revolutionizing various industries and driving innovation. However, as AI becomes more intelligent, it has the potential to inhibit human creativity, raising the question of whether AI and human creativity can coexist. A recent Gottlieb Duttweiler Institute study investigated this issue and discovered that AI can catalyze human creativity in three ways. First, by identifying patterns in large data sets, AI can assist humans in developing new hypotheses that they may have overlooked previously. Second, AI can provide context by sifting through and analyzing massive amounts of data from various sources. Finally, AI can guide experimentation by predicting experiment outcomes based on existing data.

However, the study also noted that AI has limitations when it comes to assisting human creativity. It cannot replicate the personal experiences or interactions that inspire human creativity. Nor can AI support skills essential to idea development, such as learning new skills, relaxing, reflecting, and daydreaming. Finally, those who can mindfully steer intelligent machines while remaining in control are likely to have the most successful ideas.

The study demonstrates that AI and human creativity can coexist effectively if humans make a concerted effort to use AI to free up time for creative activities. However, in order to ensure that AI supports human creativity, it is critical to understand its limitations and collaborate to achieve the best results.

Source.


How Cloud Computing Will Take ChatGPT to New Heights

ChatGPT, OpenAI’s AI-powered chatbot, has made waves with its natural language processing capabilities. Cloud computing is emerging as a key factor in making ChatGPT more accessible and powerful as businesses look for new ways to integrate it into their workflows. OpenAI has been developing and running ChatGPT on Microsoft’s Azure cloud computing platform, and Microsoft now plans to offer the chatbot as part of its cloud services. This change is expected to open up new opportunities, such as automated code debugging, personalized promotional messages, answering customer questions, and summarizing important documents.

ChatGPT distinguishes itself from other AI language models by storing and drawing on previous conversations, making it well suited to cloud-based deployment. Other cloud computing providers, such as Google and Amazon, are testing ChatGPT and will most likely develop their own chatbots. As the cloud computing industry expands, ChatGPT integration is expected to drive even more use cases for cloud-based tools, helping businesses get the most out of their cloud infrastructure.

Source.


MOSS, China’s first ChatGPT competitor, crashes on launch

The conversational language model known as MOSS, China’s first competitor to ChatGPT, launched recently but crashed under the weight of its instant popularity. MOSS, created by Fudan University’s NLP Lab, is intended to perform a variety of natural language tasks such as question answering, summarizing text, generating text, and coding. Although it quickly gained popularity on China’s social media networks and was dubbed the “first Chinese rival to ChatGPT” by state media, MOSS’s servers were unable to handle the high volume of traffic, resulting in a crash.

The development team apologized, citing a lack of computing resources and engineering experience. MOSS’s website has since been updated to acknowledge its limitations, most notably its inability to understand and generate text well in languages other than English. Although it will be difficult to compete with ChatGPT, the team is actively working to improve the model’s Chinese language skills. Meanwhile, Baidu is developing Ernie Bot, which will be able to learn and adapt to changing demand. Google is also competing with Bard, while Microsoft has muzzled Bing after discovering that long conversations can confuse the AI. The race to dominate conversational language models is still on, but computing resources are critical to success.

Source.


What Will Be the Next Big Tech Wave? Lawmakers are concerned that Generative AI will be a repeat of social media.

As generative AI continues to grow and advance, lawmakers face a new challenge. Many are concerned about the lack of consumer protections and standards surrounding AI products and systems, which could expose users to a variety of risks. Fearing that lawmakers will be as ill-equipped to regulate this new technology as they were with social media, some have called for an effective regulatory regime that would force tech companies to prioritize consumer protections.

Several bills related to specific applications of AI have been introduced in Congress, but none offers comprehensive consumer protections and development standards for AI products and systems. Some argue that the development and deployment of AI must be done in a way that is safe, ethical, and respectful of all Americans’ rights and privacy.

Concerns have also been raised about the potential impact of generative AI, which has the potential to transform the tech industry by creating an infinite amount of customized and personalized content out of thin air. While this may open up new opportunities for users, there are concerns about privacy and security, as well as content owners claiming harm from AI companies using their content to train models without permission. To avoid repeating the mistakes made with social media, lawmakers must be proactive in regulating generative AI.


Source.


Babies Beat Bots

In a new study, researchers conducted a series of experiments with 11-month-old infants, and compared their responses to those of state-of-the-art learning-driven neural-network models.

Infants are fascinated by other people and observe their actions. They attribute goals to others and expect others to pursue goals rationally and efficiently. By contrast, “commonsense AI” models driven by machine-learning algorithms predict specific actions directly.

Infant understanding vs. algorithms

In the research, infants watched a series of Zoom videos of simple animated shapes moving around the screen, similar to a video game. The shapes’ actions simulated human behavior and decision-making. The researchers then compared the infants’ responses to those of state-of-the-art learning-driven neural-network models shown the same videos.

Results showed that infants recognize human-like motivations even in the simplified actions of animated shapes, while the algorithm-based models showed no such evidence of understanding the motivations underlying the actions. Perhaps like the notorious Bing chatbot saying “I love you!”

Citation: Stojnić, G., Gandhi, K., Yasuda, S., Lake, B. M., & Dillon, M. R. (2023). Commonsense psychology in human infants and machines. Cognition, 235, 105406. https://www.sciencedirect.com/science/article/pii/S0010027723000409


Why Language and Thought Shouldn’t be Equated: A Cognitive Perspective on Large Language Models

University of Texas at Austin and Massachusetts Institute of Technology (MIT) researchers have proposed a new way of thinking about large language models (LLMs). In their paper “Dissociating language and thought in large language models: a cognitive perspective,” they argue that to understand the capabilities and limitations of LLMs, we must distinguish between “formal” and “functional” linguistic competence. Formal linguistic competence covers the capacities required to produce and comprehend a given language, whereas functional linguistic competence concerns using language to do things in the world.

In their paper, the researchers investigate two common fallacies in language and thought. The first fallacy is that an entity that is proficient in language is also proficient in thought. The second fallacy is that a language model is not a good model of human language if it cannot fully capture the richness and sophistication of human thought. These fallacies, according to the researchers, stem from the same misconception of equating language and thought.

LLMs excel at formal linguistic competence, which encompasses linguistic rules as well as statistical regularities that rules cannot capture. They still have a long way to go, however, in functional linguistic competence. To achieve near-human performance, LLMs require unrealistic amounts of data, and they lack the pre-existing biases that help humans learn quickly from sparse and noisy input. A promising direction for future research is inductive biases that could help LLMs learn faster from less data, along with architectures that can capture those biases.

Source.


How to turn plastic waste into profitable nanomaterials

It would also be cool if the supply-chain and manufacturing-scale issues with nanotubes could be solved, bringing us into the next age of materials: space habitats, invulnerable cars, etc.

Plastic waste (credit: Organization for Economic Cooperation and Development)

The amount of plastic waste produced globally has doubled over the past two decades, and plastic production is expected to triple by 2050. Most of this waste ends up in landfills, is incinerated, or is otherwise mismanaged, according to the Organization for Economic Cooperation and Development.

When you put a soda bottle or takeout container into the recycling bin, you have no way of knowing whether it will be converted into something new and useful. Some estimates suggest only 5% of plastic is actually recycled.

Turning plastic into nanotubes and hybrid nanomaterials

Now Kevin Wyss, a Rice University graduate student and lead author of a study published in Advanced Materials, describes how he and colleagues in the lab of chemist James Tour developed a solution: using their flash Joule heating technique to turn plastic into valuable carbon nanotubes and hybrid nanomaterials.

By controlling the reaction parameters, a hybrid carbon nanomaterial consisting of nanotubes with graphene-sheet bits attached to their ends was obtained from mixed waste plastics. When used in composites, these hybrids performed better than either graphene or carbon nanotubes alone. (Credit: Tour lab/Rice University)

“We were able to make a hybrid carbon nanomaterial that outperformed both graphene and commercially available carbon nanotubes,” Wyss said.

Graphene, carbon nanotubes and other carbon-based nanomaterials are generally strong and chemically robust, have low density and lots of surface area, and possess conductivity and broadband electromagnetic absorption abilities. This makes them useful in a variety of industrial, medical and electronics applications such as composites, coatings, sensors, electrochemical energy storage and more.

The structure of the new hybrid carbon nanomaterial is responsible for its enhanced performance. “Let’s say I was trying to pull a string of yarn out of a sweater,” Wyss said. “If the string is straight and smooth, it can sometimes come out quite easily and ruin the weave. It’s the same with the carbon nanotubes; having these masses of graphene attached at the ends makes them much harder to remove, thereby strengthening the composite.”

Flash Joule heating can be used to make carbon nanotubes and carbon nanofibers from mixed waste plastics, a method that is 90% more efficient than existing production processes. The nanotube diameter can be controlled by changing the power or catalyst used. (Credit: Tour lab/Rice University)

“Upcycling” for an economic incentive

“Recycling plastic costs more than just producing new plastic,” Wyss said. “There’s very little economic incentive to recycle plastic. That’s why we turned to upcycling, or turning low-value waste materials into something with a higher monetary or use value. If we can turn waste plastic into something more valuable, then people can make money off of being responsible in how they deal with discarded plastics.”

A life-cycle analysis of the production process also revealed that flash Joule heating was considerably more energy-efficient and environmentally friendly than existing nanotube production processes. “Compared to commercial methods for carbon nanotube production that are being used right now, ours uses about 90% less energy and generates 90%-94% less carbon dioxide,” Wyss said.

The plastic, which does not need to be sorted or washed as in traditional recycling, is “flashed” at temperatures over 3,100 kelvins (about 5,120 degrees Fahrenheit). “All we do is grind the material into small, confetti-sized pieces, add a bit of iron and mix in a small amount of a different carbon — say, charcoal — for conductivity,” Wyss said.

Machine learning

In Advanced Materials, the researchers describe how machine-learning models that adapt to process variables are helping them optimize procedures and push the work forward. “Machine-learning algorithms will be critical to making the flash process rapid and scalable without negatively affecting the graphene product’s properties,” said James Tour, the T. T. and W. F. Chao Professor of Chemistry and a professor of materials science and nanoengineering in Rice’s George R. Brown School of Engineering.

Citation: Kevin Wyss, John Li, Paul Advincula, Ksenia Bets, Weiyin Chen, Lucas Eddy, Karla Silva, Jacob Beckham, Jinhang Chen, Wei Meng, Bing Deng, Satish Nagarajaiah, Boris Yakobson, and James Tour. Waste Plastic into Hybrid Carbon Nanomaterials. Advanced Materials. https://doi.org/10.1002/adma.202209621
