
Researchers argue that LLMs don’t understand the world

Nov. 07, 2024.
2 min. read.

In a new study researchers argue that, despite their impressive performance at certain tasks, LLMs don't really understand the world.

About the Writer

Giulio Prisco


Giulio Prisco is Senior Editor at Mindplex. He is a science and technology writer mainly interested in fundamental science and space, cybernetics and AI, IT, VR, bio/nano, crypto technologies.

Large language models (LLMs) such as GPT-4 are good at tasks like writing poetry or generating computer code. After training, these models predict the next word in a sequence. This ability can make it seem as though the models understand general truths about the world. However, a recent study questions this idea.

The researchers, from MIT and the universities of Chicago, Cornell, and Harvard, describe the methods and results of the study in a paper posted on arXiv.

The researchers tested a transformer-based LLM that is trained on large amounts of text data to predict what comes next in a sequence. They found that the model could give near-perfect driving directions in New York City, yet it had not formed an accurate map of the city: when streets were closed or detours were added, its performance dropped sharply.

The importance of coherent world models

The maps the model generated contained nonexistent streets and implausible connections. This showed that while the model could give good directions, it had not truly captured the city’s layout.

This finding is important because it shows that LLMs can perform tasks well without understanding the world they describe. The researchers evaluated the model on two problems: navigating a city and playing the board game Othello. The model could predict valid Othello moves without recovering the game’s rules, just as it could give directions without forming a coherent map of New York.

To test whether a model truly understands the world, the researchers created two new metrics. The first, called “sequence distinction,” checks whether the model can recognize that two different situations are different. The second, “sequence compression,” checks whether the model understands that two identical situations allow the same possible next steps. Surprisingly, the researchers found that transformers which made choices randomly formed more accurate world models, perhaps because random choices expose the model to a wider variety of possible next steps during training.
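To make the two metrics concrete, here is a minimal, hypothetical Python sketch. It is not the paper’s implementation: the toy street grid (`WORLD`), the stand-in `model_valid_moves` function, and the pairwise comparison of accepted next moves are all illustrative assumptions. The idea is simply that sequence compression rewards a model for treating two routes that end in the same place identically, while sequence distinction rewards it for treating routes that end in different places differently.

```python
# A minimal sketch (not the paper's implementation) of the two metrics,
# assuming a toy deterministic world (a tiny street grid treated as a
# finite automaton) and a model that returns the set of next moves it
# considers valid after a given sequence of moves.

from itertools import combinations

# Hypothetical ground-truth world: state -> {move: next_state}
WORLD = {
    "A": {"east": "B", "south": "C"},
    "B": {"south": "D"},
    "C": {"east": "D"},
    "D": {},
}
START = "A"

def true_state(moves):
    """Follow a move sequence through the ground-truth world."""
    state = START
    for m in moves:
        state = WORLD[state][m]
    return state

def model_valid_moves(moves):
    """Stand-in for the LLM: which next moves does it accept after `moves`?
    A real test would query the trained model; here we fake a flawed model
    that forgets the move from C to D."""
    state = true_state(moves)
    accepted = set(WORLD[state])
    if state == "C":
        accepted.discard("east")   # deliberate error for illustration
    return accepted

def sequence_compression(prefixes):
    """Fraction of prefix pairs reaching the SAME true state for which the
    model also predicts the same set of valid next moves."""
    pairs = [(a, b) for a, b in combinations(prefixes, 2)
             if true_state(a) == true_state(b)]
    if not pairs:
        return 1.0
    ok = sum(model_valid_moves(a) == model_valid_moves(b) for a, b in pairs)
    return ok / len(pairs)

def sequence_distinction(prefixes):
    """Fraction of prefix pairs reaching DIFFERENT true states for which the
    model's predicted next moves also differ."""
    pairs = [(a, b) for a, b in combinations(prefixes, 2)
             if true_state(a) != true_state(b)]
    if not pairs:
        return 1.0
    ok = sum(model_valid_moves(a) != model_valid_moves(b) for a, b in pairs)
    return ok / len(pairs)

if __name__ == "__main__":
    prefixes = [(), ("east",), ("south",), ("east", "south"), ("south", "east")]
    print("compression:", sequence_compression(prefixes))
    print("distinction:", sequence_distinction(prefixes))
```

In this toy example, the deliberately flawed model scores perfectly on compression but loses points on distinction, because it cannot tell two genuinely different intersections apart.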

These results show that LLMs can perform tasks accurately without really understanding them. The researchers argue that if we want LLMs to solve complex problems or make new discoveries, we need to rethink how we measure their understanding of the world.

The question of “whether LLMs are learning coherent world models is very important if we want to use these techniques to make new discoveries,” says research leader Ashesh Rambachan in an MIT press release.

