Microsoft is using ChatGPT to teach robots to follow commands in plain English, but what could go wrong?

Microsoft is collaborating with OpenAI, the creators of ChatGPT, to teach robots to obey simple commands in plain English, and the implications of this development are enormous. The collaboration’s goal is to make it easier for people to interact with robots without having to learn complex programming languages or details about robotic systems.

Microsoft’s new set of design principles enables ChatGPT to account for physical laws and to take robot instructions in plain English. The framework begins by defining a set of high-level tasks the robot can perform, then writes a prompt that ChatGPT translates into commands the robot can execute, and finally runs a simulation of the robot carrying out those instructions.
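To make the pipeline concrete, here is a minimal sketch of what prompting ChatGPT against a small robot function library could look like. The function names (move_to, grab_object, release_object), the model choice, and the prompt wording are illustrative assumptions rather than Microsoft’s actual API; the OpenAI Python client is used only as one way to call ChatGPT programmatically.

```python
# A minimal sketch of the prompt-to-robot pipeline described above.
# The robot API (move_to, grab_object, release_object) is a hypothetical
# placeholder; Microsoft's framework defines its own function library and
# simulator integration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ROBOT_PROMPT = """
You control a robot arm through these Python functions only:
  move_to(x, y, z)     # move the gripper to a position in metres
  grab_object(name)    # close the gripper around a named object
  release_object()     # open the gripper
Write Python code that places the apple inside the bowl.
Return only code.
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": ROBOT_PROMPT}],
)
generated_code = response.choices[0].message.content

# In practice the generated code is reviewed by a human and exercised in a
# simulator before it ever reaches real hardware.
print(generated_code)
```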

While this is an exciting development, it also raises concerns, since it edges toward robots that can effectively direct their own actions, with potentially unintended consequences. Before any significant deployment, it is critical that the technology be thoroughly tested and that any potential risks be addressed. Nonetheless, this advancement has the potential to transform how people interact with robots, making it easier to develop and deploy new capabilities.

Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

How Hugging Face and Amazon Web Services are democratizing AI with their expanded partnership

Hugging Face and Amazon Web Services (AWS) have expanded their collaboration in order to make artificial intelligence (AI) more open and accessible to all. The partnership’s goal is to speed up the availability of next-generation machine learning models by making them more accessible to the machine learning community and assisting developers in achieving the best performance at the lowest cost.

Machine learning is becoming mainstream, and its impact across every sector of the economy is becoming clear. As a result, it is critical that every developer can access and evaluate the most recent models. Hugging Face’s collaboration with AWS paves the way for that future by making it faster and easier to build, train, and deploy state-of-the-art machine learning models in the cloud using purpose-built tools.
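For developers, the partnership mostly shows up as tooling. Below is a minimal sketch of deploying a Hugging Face Hub model to a real-time endpoint with the sagemaker Python SDK’s Hugging Face integration; the model ID, instance type, and container versions are illustrative assumptions and must match what your AWS account and the available Hugging Face Deep Learning Containers support.

```python
# A minimal sketch: deploy a Hugging Face Hub model as a SageMaker endpoint.
# Model ID, container versions, and instance type are illustrative and must
# match an available Hugging Face Deep Learning Container in your region.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

hub_config = {
    "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
    "HF_TASK": "text-classification",
}

model = HuggingFaceModel(
    env=hub_config,
    role=role,
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)

print(predictor.predict({"inputs": "Democratizing AI sounds great."}))

# Tear down the endpoint when finished to avoid ongoing charges.
predictor.delete_endpoint()
```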

Hugging Face and AWS’s collaboration will have significant ramifications for the AI industry, and it is expected to revolutionize the development and deployment of machine learning models. With more accessible and affordable AI, the possibilities for innovation are limitless. This collaboration is an important step toward democratizing AI and making it more inclusive, accessible, and affordable to all.

Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

The Scary Future According to AGI Safety Researcher Eliezer Yudkowsky

David Wood, Chair of London Futurists:

There’s been a significant development in the global conversation about finding a pathway to safe AGI.

“In this video, released last night, pioneering AGI safety researcher Eliezer Yudkowsky explains why he has taken a sabbatical from MIRI (which he co-founded) and now expects to ‘go down fighting with dignity’ in his quest to prevent a misconfigured AGI from killing everyone.”

I’ll warn you in advance that it’s 90 minutes of existential despair. It’s not for the faint-hearted.

[Eliezer starts speaking at ~15 minutes]

Dr. Ben Goertzel responds:

My view on this is summarized in this tweet I just wrote:

“We should build a neural-symbolic-evolutionary AGI with rich self-reflective and compassionate capability, educate it well, work with it on benefit projects, put it under decentralized control, and have some of us fuse with it. Working on it. No guarantees but odds seem good!”

For a longer quote, I would say: “Eliezer is a very serious thinker on these matters and was the core source of most of the ideas in Nick Bostrom’s influential book Superintelligence. But ever since I met him, and first debated these issues with him, back in 2000, I have felt he had a somewhat narrow view of humanity and the universe in general.

There are currents of love and wisdom in our world that he is not considering and seems to be mostly unaware of, and that we can tap into by creating self-reflective, compassionate AGIs and doing good, loving works together with them. There are no guarantees to be found regarding the future of AGI, but taking big risks is how humanity diverged from our apelike ancestors; it’s what we do. Despondence is unwarranted and unproductive. We need to focus on optimistically maximizing the odds of a wildly beneficial Singularity together.

A few things I agree with Eliezer on: AGI may very likely be just a few years off… This fact is not taken nearly seriously enough by society today. And yes, the odds of existential disaster are not zero (but I don’t agree with him that they are high).”

_______________________________________________________________________________

Wood continues:

Please do not think Eliezer is merely repeating centuries-old (or decades-old) worries.
Famously, he started out as an optimist about the prospects for human flourishing in partnership with AI. But he gradually realised the scale of the difficulties we are facing.

To be absolutely clear, the risks from AGI have NOTHING TO DO WITH POSSIBLE AI SENTIENCE.

If you’re not familiar with the arguments about that, kindly do yourself a HUGE favor and take the time to bring yourself up to speed.

Moreover, sometimes FEAR IS APPROPRIATE. Without an element of fear, minds would have failed to concentrate sufficiently regarding the threats posed by Covid-19, and the impact of that pathogen upon human civilization would have been even worse. Yes, fear sometimes plays into the hands of people who would manipulate us. But, equally, complacency and self-assurance and glib pseudo-philosophical platitudes can also play into the hands of people who would manipulate us.

This is not the time for naive optimism. Nor for panicked pessimism. This is the time for clear-eyed realism.

If you don’t have the patience to listen to Eliezer’s explanations (and I concede that his personal style of communication is sometimes awkward), you can find a simpler, more straightforward explanation of the risks in a number of chapters of my book, The Singularity Principles.

— David W. Wood, Chair of London Futurists, Author of The Singularity Principles


Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

How Artificial Intelligence can help conserve biodiversity

The International Telecommunication Union (ITU) hosted a webinar to discuss the potential of artificial intelligence (AI) in supporting the detection and attribution of biodiversity change, as well as its role in implementing a biodiversity conservation framework. The AI for Good Discovery Series on AI for Biodiversity, co-organized with the Convention on Biological Diversity (CBD), seeks to bring together the AI and biodiversity communities in order to protect the natural world.

CBD Acting Executive Secretary David Cooper emphasized AI’s value in identifying species and drivers of biodiversity loss, and in mapping suitable areas for ecosystem restoration. He stressed, however, that human intelligence remains essential to ensure that AI does not disregard local knowledge or violate the rights of stakeholders and rights holders, including Indigenous Peoples and local communities.

McGill University’s Andrew Gonzalez proposed a framework for the detection and attribution of biodiversity change, as well as the potential of AI in tracking progress and linking action on drivers to biodiversity outcomes. He emphasized the importance of theory-guided data science, which would combine big and small data to guide AI applications.

Bottlenecks in advancing AI in the detection and attribution system were discussed, including AI literacy, data availability, and the disconnect between the human, the analytical tool, and the inferences made. Participants endorsed public-private partnerships to leverage tools for biodiversity change, peer-to-peer data sharing, and combining causal inference frameworks in machine learning with traditional and theoretical knowledge.

AI has the potential to play a significant role in biodiversity conservation; however, it must integrate human intelligence, combine big and small data, and address existing bottlenecks in AI advancement.

Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Experts urge the UK to invest in a ‘BritGPT’ to remain competitive in the AI race

According to experts who testified before the UK’s science and technology committee, developing a BritGPT is critical for the UK to remain competitive and secure in the field of artificial intelligence. AI is a rapidly evolving field that underpins a wide range of applications, including healthcare and cybersecurity. Without investing in large language models, the UK will lag behind countries such as China and the United States, home to the tech behemoths currently leading the field. While computing power is expensive, a national investment in AI will pay dividends.

Creating a BritGPT will not only ensure that the UK remains a major player in AI research, but it will also create jobs and drive the economy. The United Kingdom has some of the best AI researchers, but they need a methodical approach to ensure they have access to the necessary computing power. The development of a BritGPT will require significant investment, but the UK must act quickly to avoid falling behind in the global race for AI.

Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

AI and Human Creativity: Can They Work Together?

By accelerating the pace of research and development, artificial intelligence (AI) is revolutionizing various industries and driving innovation. However, as AI becomes more intelligent, it also has the potential to inhibit human creativity, raising the question of whether AI and human creativity can coexist. A recent Gottlieb Duttweiler Institute study investigated this question and found that AI can catalyze human creativity in three ways. First, by identifying patterns in large data sets, AI can help humans develop hypotheses they might otherwise have overlooked. Second, AI can provide context by sifting through and analyzing massive amounts of data from various sources. Finally, AI can guide experimentation by predicting experiment outcomes based on existing data.
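As a toy illustration of the first mechanism, the sketch below runs an off-the-shelf clustering algorithm over a synthetic table of observations; the study does not prescribe any particular algorithm, and the data and cluster count here are arbitrary assumptions.

```python
# A toy sketch of AI-assisted pattern finding: cluster a large table of
# observations so a human can ask "why do these group together?"
# The data is synthetic and the number of clusters is an arbitrary choice.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
# Pretend these are 10,000 experiments described by 8 measured variables.
observations = rng.normal(size=(10_000, 8))

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(observations)

# The cluster centres are not conclusions; they are starting points for the
# human researcher to form and test new hypotheses.
print(kmeans.cluster_centers_.round(2))
```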

However, the study also noted that AI has limits when it comes to assisting human creativity. It cannot replicate the personal experiences or interactions that inspire human creativity. Nor can it support activities essential to idea development, such as learning new skills, relaxing, reflecting, and daydreaming. Finally, those who can mindfully steer intelligent machines while remaining in control are likely to have the most successful ideas.

The study demonstrates that AI and human creativity can coexist effectively if humans make a concerted effort to use AI to free up time for creative activities. However, in order to ensure that AI supports human creativity, it is critical to understand its limitations and collaborate to achieve the best results.

Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

How Cloud Computing Will Take ChatGPT to New Heights

ChatGPT, OpenAI’s AI-powered chatbot, has made waves with its natural language processing capabilities. Cloud computing is emerging as a key factor in making ChatGPT more accessible and powerful as businesses look for new ways to integrate it into their workflows. OpenAI has been developing and running ChatGPT on Microsoft’s Azure cloud computing platform, and Microsoft now plans to offer the chatbot as part of its cloud services. This change is expected to open up new opportunities, such as automated code debugging, personalized promotional messages, answering customer questions, and summarizing important documents.

ChatGPT distinguishes itself from other AI language models by storing and drawing on previous conversations, making it well suited to cloud-hosted conversational applications. Other cloud providers, such as Google and Amazon, are testing ChatGPT and will most likely develop their own chatbots. As the cloud computing industry expands, ChatGPT integration is expected to drive even more use cases for cloud-based tools, helping businesses get the most out of their cloud infrastructure.
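As a rough illustration of both points, the sketch below calls a ChatGPT deployment hosted on Azure and carries the conversation history between turns. The endpoint, key, deployment name, and API version are placeholders for values from your own Azure OpenAI resource, not an excerpt from any vendor documentation.

```python
# A minimal sketch of a multi-turn ChatGPT call against an Azure-hosted
# deployment. Endpoint, key, deployment name, and API version are
# placeholders for your own Azure OpenAI resource.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",
)

history = [{"role": "system", "content": "You summarize business documents."}]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="my-chatgpt-deployment",  # your Azure deployment name
        messages=history,               # passing the full history keeps context
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Summarize our Q3 cloud-spend report in two sentences."))
print(ask("Now rewrite that summary as a short customer email."))
```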

Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

MOSS, China’s first ChatGPT competitor, crashes on launch

The conversational language model known as MOSS, China’s first competitor to ChatGPT, launched recently but crashed under the weight of its instant popularity. MOSS, created by Fudan University’s NLP Lab, is intended to perform a variety of natural language tasks such as question answering, summarizing text, generating text, and coding. Although it quickly gained popularity on China’s social media networks and was dubbed the “first Chinese rival to ChatGPT” by state media, MOSS’s servers were unable to handle the high volume of traffic, resulting in a crash.

The development team apologized, citing a lack of computing resources and engineering experience. MOSS’s website has since been updated to acknowledge its limitations, most notably its inability to understand and generate text in languages other than English. Although it will be difficult to compete with ChatGPT, the team is actively working to improve the model’s Chinese language skills. Meanwhile, Baidu is developing Ernie Bot, which will be able to learn and adapt to changing demand. Google is also competing with Bard, while Microsoft has muzzled Bing after discovering that long conversations can confuse the AI. The race to dominate conversational language models is still on, but computing resources are critical to success.

Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

What Will Be the Next Big Tech Wave? Lawmakers worry that generative AI will repeat the mistakes of social media

As generative AI continues to grow and advance, lawmakers face a new challenge. Many people are concerned about the lack of consumer protections and standards surrounding AI products and systems, which could expose users to a variety of risks. Concerns have been raised that lawmakers will be ill-equipped to regulate this new technology, as they were with social media, so some have called for the creation of an effective regulatory regime that will force tech companies to prioritize consumer protections.

Several bills addressing specific applications of AI have been introduced in Congress, but no comprehensive bill offering broad consumer protections and development standards for AI products and systems has yet been put forward. Some argue that the development and deployment of AI must be done in a way that is safe, ethical, and respectful of all Americans’ rights and privacy.

Concerns have also been raised about the broader impact of generative AI, which could transform the tech industry by producing a seemingly unlimited stream of customized, personalized content on demand. While this may open up new opportunities for users, it raises privacy and security questions, and content owners are already claiming harm from AI companies using their material to train models without permission. To avoid repeating the mistakes made with social media, lawmakers must be proactive in regulating generative AI.


Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Babies Beat Bots

In a new study, researchers conducted a series of experiments with 11-month-old infants, and compared their responses to those of state-of-the-art learning-driven neural-network models.

Infants are fascinated by other people and observe their actions. They attribute goals to others and expect others to pursue goals rationally and efficiently. Conversely, “commonsense AI” — driven by machine-learning algorithms — predicts specific actions directly.

Infant understanding vs. algorithms

In the research, infants watched a series of videos, presented over Zoom, showing simple animated shapes moving around the screen, similar to a video game. The shapes’ actions simulated human behavior and decision-making. The researchers then compared the infants’ responses with the predictions produced by state-of-the-art learning-driven neural-network models given the same videos.

Results showed that infants recognize human-like motivations even in the simplified actions of animated shapes, while the algorithm-based models showed no such evidence of understanding the motivations underlying the actions. Perhaps like the notorious Bing chatbot saying “I love you!”

Citation: Stojnić, G., Gandhi, K., Yasuda, S., Lake, B. M., & Dillon, M. R. (2023). Commonsense psychology in human infants and machines. Cognition, 235, 105406. https://www.sciencedirect.com/science/article/pii/S0010027723000409

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter