In a world where LLMs can generate ideas a dime a dozen, can humans expect to retain their role in society?
In an era of rapid technological advances, Large Language Models (LLMs) are seen as a frontier pushing the boundaries, especially in the realm of human creativity. LLMs like GPT-4 can generate text that is not only coherent but also creative. This has led some to speculate that such models could soon surpass human ingenuity in generating groundbreaking ideas. Research has found that the average person generates fewer creative ideas than LLMs, but the best ideas still tend to come from rare, highly creative humans.
This leads us to one crucial question: “Can AI agents discern what makes an idea good?” Even though they are capable of generating ideas, they cannot (at this time) properly evaluate the quality of those ideas.
Before we get ahead of ourselves, let’s take a short dive into how AI agents work.
The Mechanics of AI and Language Models
Nobody doubts that artificial intelligence is a game-changer. If you are reading this article, you are at the forefront of exploring this technology’s effects, and have probably played with it first-hand.
AI has revolutionized many different industries since the 1950s, when machine learning was first conceptualized. But let’s narrow our focus to the superstars of today, Large Language Models (LLMs) like GPT-4 and LLaMA. What is it about them that sets them apart?
Most people don’t have the time to learn how to properly work with and prompt LLMs. ChatGPT, in particular, opened the floodgates by providing a pre-prompted model that is highly effective at understanding the context of a request and giving its best shot at producing the desired output. GPT was available before, through the OpenAI dashboard and API, but ChatGPT is what made it accessible and thus popular.
Certainly, it has its own issues and quirks, but fundamentally it does a good enough job that it can actually save you time in your work. Whether you consider that ethical is up to you.
So, how do they actually work?
Well, if you’ve been living under a rock, give it a spin at https://chat.openai.com. The free GPT-3.5 version is good enough to demonstrate its capabilities.
These models have been trained on vast sets of text data and they are able to combine, regurgitate, and generate outputs in response to instructions (prompts), sometimes in completely unique ways.
So they can churn out content that appears original, but the kicker is that they don’t understand the value or meaning of the longer pieces they themselves generate. You need a human for that. They produce highly legible, contextual outputs by relying on their training: analyzing large sets of text and identifying patterns among words. They then use this information to predict what comes next. That prediction step is randomized, often controlled by a ‘seed’ number, which makes each response unique and varied. This is why prompting can feel hit or miss: something that worked one day may not work the next.
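To make the prediction-plus-seed mechanism concrete, here is a toy sketch in Python. The vocabulary and probabilities below are invented for illustration; a real LLM predicts over tens of thousands of tokens using a neural network, not a lookup table, but the core loop – pick the next word by weighted chance, with a seed fixing the randomness – is the same idea.

```python
import random

# Toy next-word table standing in for a trained model.
# All words and probabilities here are invented for illustration.
NEXT_WORD_PROBS = {
    "the":  [("cat", 0.5), ("dog", 0.3), ("idea", 0.2)],
    "cat":  [("sat", 0.6), ("ran", 0.4)],
    "dog":  [("barked", 0.7), ("slept", 0.3)],
    "idea": [("spread", 1.0)],
}

def generate(prompt: str, length: int, seed: int) -> str:
    """Repeatedly predict the next word; the seed fixes the randomness."""
    rng = random.Random(seed)  # same seed -> same "creative" output
    words = [prompt]
    for _ in range(length):
        choices = NEXT_WORD_PROBS.get(words[-1])
        if not choices:  # no learned continuation: stop generating
            break
        candidates, weights = zip(*choices)
        # Weighted random pick: frequent training patterns win more often
        words.append(rng.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the", 4, seed=7))
```

Run it twice with the same seed and you get the same sentence; change the seed and the “creativity” changes – which is exactly why the same prompt can give you different answers on different days.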
The Bottom Line:
Are LLMs a valuable tool for the modern creative worker?
Absolutely.
Are they a replacement for human creativity?
Not by a long shot.
What Makes an Idea Good?
AI models share some similarities with human brains: both rely on prediction mechanisms to generate outputs. The difference is that models predict the next word in a sentence, while humans predict future outcomes based on the entire flow of an idea. We don’t just blindly generate ideas for the sake of it, although that too happens when we are bored.
Most often, we already have some goal behind our ideas, whether that is to make money, to deal with a specific problem or situation, or to decide which outfit and perfume will present us in the best light. These are all goal-oriented endeavors.
Models, on the other hand, are prompt-driven – prediction is their only goal. They do their best to fulfill the criteria of the prompt as they understand it, predicting which words are most likely to be the correct answer.
Fundamentally, defining a ‘good idea’ is incredibly difficult. At the moment, only humans decide which ideas are good for which situation – and we’re not really great at doing that. We’ve all embraced an idea only to get a terrible result, and, vice versa, misjudged ideas that ended up having great outcomes.
So it cannot be the outcome alone that decides the goodness of an idea. Other factors play a role, and they are all context-dependent. If you are an artist, originality will dictate what counts as a good idea. If you are a mother, the safety and wellbeing of your children will play a major role.
Some good ideas are so well established that they have a brand reputation of their own. For example:
• Going to the dentist regularly
• Not spending all the money you have, and investing some of what you save
• Keeping enough food at home to avoid constant trips to the supermarket (or to survive the winter)
• Not going outside naked
The conclusion I draw here is that ideas are only as good as the context in which they were made. Evaluating any one of them requires a deep understanding of the physical, emotional, and mental state of the person who made it, as well as their worldview, knowledge, and desires.
In other words, only you (and sometimes your psychiatrist) could know what a good idea is.
In your experience, what has made an idea good or bad? Is it its impact, its uniqueness, or something else entirely? Share some stories in the comments.
So when we talk about AI generating ideas, it’s not enough to ask if those ideas are new or unique. We must also ask if those ideas are impactful, relevant, and emotionally resonant. Because that’s where AI currently falls short. It simply can’t evaluate these aspects; it just confabulates based on what it’s been trained to do.
AI and Human Creativity: A Symbiotic Relationship
Hardly anyone argues that LLMs are useless, and it would be shortsighted to dismiss them as mere tools with no utility. In fact, they are great and can do a better job than the average person at some tasks – but they still need at least an average person guiding them to get the job done. So collaboration is in order.
AI can act as a brainstorming partner, throwing out hundreds of ideas a minute. This ‘idea shotgun’ approach can be invaluable for overcoming creative blocks or for quickly generating multiple solutions to a problem.
Take the short film Sunspring, written by an AI but directed and performed by humans. The AI provided the raw narrative – used without edits – but it was the human touch of direction and performance that turned it into something watchable. That was seven years ago, with far weaker models; the LLMs we have today, like ChatGPT, would produce a much better script. Yet even then, the crew managed to turn it into a compelling story.
Consider musicians who use AI to explore new scales, filmmakers who use it for script suggestions, or designers who employ AI to create myriad design prototypes. They’re not using AI to replace their own creativity, but to augment it.
Here’s the key: the human mind filters these AI-generated ideas, selects the most promising ones, refines them, and brings them to life. In other words, humans provide the ‘why’ and ‘how’ that AI currently lacks. Fundamentally, human creativity is simply priceless.
Why use AI in your creative work?
• Speed: AI can rapidly generate ideas
• Diversity: It can cover a broad spectrum of topics
• Insight: the one caveat – AI lacks the depth of human intuition, so that part stays with you
• Skill Enhancement: Write better or create unique art (even if you are not an ‘artist’)
Personally, I’m not worried. I see AI as an extension of my creativity. AI can be a powerful ally in our creative endeavors, serving not as a replacement but as an enhancement to human creativity.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.