What is the price of human creativity?

In a world where LLMs can generate ideas a dime a dozen, can humans expect to retain their role in society?

Credit: Zoran Spirkovski

In an era of rapid technological advances, Large Language Models (LLMs) are seen as a frontier pushing boundaries, especially in the realm of human creativity. LLMs like GPT-4 can generate text that is not only coherent but also creative. This has led some to speculate that such models could soon surpass human ingenuity in generating groundbreaking ideas. Research has found that the average person generates fewer creative ideas than LLMs, but the best ideas still come from rare, highly creative humans.

This leads us to one crucial question: “Can AI agents discern what makes an idea good?” Even though they are capable of generating ideas, they cannot (at this time) properly judge the quality of those ideas.

Before we get ahead of ourselves, let’s take a short dive into how AI agents work.

The Mechanics of AI and Language Models

Credit: Zoran Spirkovski

Nobody doubts that artificial intelligence is a game-changer. If you are reading this article, you are on the forefront of the explorers of the effects of this technology, and have probably played with it first-hand. 

AI has revolutionized many industries since the 1950s, when machine learning was first conceptualized. But let’s narrow our focus to the superstars of today: Large Language Models (LLMs) like GPT-4 and LLaMA. What is it about them that sets them apart?

Most people don’t have the time to learn how to work with and prompt LLMs properly. ChatGPT, in particular, opened the floodgates by providing a pre-prompted model that is highly effective at understanding the context of a request and giving its best shot at producing the desired output. GPT was available before, through the OpenAI dashboard and API, but ChatGPT is what made it accessible, and thus popular.

Certainly, it has its own issues and quirks, but fundamentally it does a good enough job that it can actually save you time in your work. Whether you consider that ethical is up to you.

So, how do they actually work?

Credit: Zoran Spirkovski (Enhanced by Tesfu Assefa)

Well, if you’ve been living under a rock, give it a spin at https://chat.openai.com. The free GPT-3.5 version is good enough to demonstrate its capabilities.

These models have been trained on vast sets of text data, and they can combine, regurgitate, and generate outputs in response to instructions (prompts), sometimes in completely unique ways.

So they can churn out content that appears original, but the kicker is that they don’t understand the value or meaning of the longer texts they themselves generate. You need a human for that. They produce highly legible, contextual outputs by relying on their training: analyzing large sets of text and identifying patterns among words, then using that information to predict what comes next. Models also use so-called ‘seeds’: random numbers that vary the sampling, making each response unique. This is why prompting can feel hit-or-miss, with prompts that worked one day failing the next.
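To make that concrete, here is a toy sketch of seeded next-word prediction. This is not a real LLM — the candidate words and their scores are invented for illustration — but it shows the two ideas above: the model assigns probabilities to possible next words, and a random seed decides which one gets sampled, which is why different runs can give different continuations.

```python
import random

# Hypothetical learned scores: how likely each word is to follow "the cat".
# A real LLM computes such a distribution over its whole vocabulary.
next_word_probs = {"sat": 0.5, "slept": 0.3, "meowed": 0.2}

def sample_next_word(probs, seed):
    """Pick the next word by weighted random choice; the seed fixes the outcome."""
    rng = random.Random(seed)  # same seed -> same word; new seed -> possibly different
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Two seeds can continue the same prompt differently:
print(sample_next_word(next_word_probs, seed=1))
print(sample_next_word(next_word_probs, seed=2))
```

Repeat a seed and you get the identical "unique" response back — which is exactly the hit-or-miss behavior described above, seen from the inside.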

The Bottom Line:

Are LLMs a valuable tool for the modern creative worker?

Absolutely.

Are they a replacement for human creativity?

Not by a long shot.

What Makes an Idea Good?

Credit: Zoran Spirkovski

AI models share some similarities with human brains: both rely on prediction mechanisms to generate outputs. The difference is that models predict the next word in a sentence, while humans try to predict future outcomes based on the entire flow of an idea. We don’t just blindly generate ideas for the sake of it, although that too happens when we are bored.

Most often, we already have a goal behind our ideas, whether that’s making money, dealing with a specific problem or situation, or deciding what outfit and perfume will present us in the best light. These are all goal-oriented endeavors.

Models, on the other hand, are prompt-driven – prediction is their only goal. So they do their best to fulfill the criteria of the prompt as they understand it, predicting which words are most likely to be the correct answer.

Fundamentally, defining what a ‘good idea’ is turns out to be incredibly difficult. At the moment, only humans decide which ideas are good for which situation – and we’re not really great at doing that. We’ve all embraced an idea only to get a terrible result, and, vice versa, dismissed ideas that ended up having great outcomes.

So it cannot be the outcome alone that decides the goodness of the idea. Other factors play a role, and are all context-dependent. If you are an artist, originality will dictate what is a good idea. If you are a mother, the safety and wellbeing of your children will play a major role.

Some good ideas are established. They have a brand reputation, so to speak. For example:

•  Going to the dentist regularly
•  Not spending all the money you have, and investing some of what you save
•  Keeping enough food at home to avoid constant trips to the supermarket (or to survive the winter)
•  Not going outside naked

The conclusion I draw here is that ideas are only as good as the context in which they were made. Evaluating any one of them requires a deep understanding and awareness of the physical, emotional, and mental state of the person who made them, as well as their worldview, knowledge, and desires.

In other words, only you (and sometimes your psychiatrist) could know what a good idea is. 

In your experience, what has made an idea good or bad? Is it its impact, its uniqueness, or something else entirely? Share some stories in the comments.

So when we talk about AI generating ideas, it’s not enough to ask if those ideas are new or unique. We must also ask if those ideas are impactful, relevant, and emotionally resonant. Because that’s where AI currently falls short. It simply can’t evaluate these aspects; it just confabulates based on what it’s been trained to do.

AI and Human Creativity: A Symbiotic Relationship

Credit: Zoran Spirkovski

It would be shortsighted to dismiss AI and LLMs as gimmicks with zero utility – which is precisely why you rarely see anybody argue that LLMs are useless. In fact, they are great, and at some tasks can do a better job than an average person – but they still need at least an average person guiding them to get the job done. So collaboration is in order.

AI can act as a brainstorming partner, throwing out hundreds of ideas a minute. This ‘idea shotgun’ approach can be invaluable for overcoming creative blocks or for quickly generating multiple solutions to a problem.

Take the instance of the short film Sunspring, written by an AI but directed and performed by humans. The AI provided the raw narrative, and although the crew didn’t edit the script at all, it was their human touch that turned it into something watchable. That was seven years ago; the LLMs we have today, like ChatGPT, would write a much better script. Yet somehow the crew managed to turn even that output into a compelling story.

Consider musicians who use AI to explore new scales, filmmakers who use it for script suggestions, or designers who employ AI to create myriad design prototypes. They’re not using AI to replace their own creativity, but to augment it.

Here’s the key: the human mind filters these AI-generated ideas, selects the most promising ones, refines them, and brings them to life. In other words, humans provide the ‘why’ and ‘how’ that AI currently lacks. Fundamentally, human creativity is simply priceless.

Why use AI in your creative work?

•  Speed: AI can rapidly generate ideas
•  Diversity: It can cover a broad spectrum of topics
•  Insight: But it lacks the depth of human intuition
•  Skill Enhancement: Write better or create unique art (even if you are not an ‘artist’)

Personally, I’m not worried. I see AI as an extension of my creativity. AI can be a powerful ally in our creative endeavors, serving not as a replacement but as an enhancement to human creativity.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Become the Artist You Know You Can Be, Even If You Never Learned How to Be One

Have you ever found yourself daydreaming, your mind bursting with colors and scenes so vivid they could be real? It’s like there’s an entire world inside you, just waiting to be shared – but when you take pencil to paper, the realization sets in that you can’t even get close to what you want to create.

This was me. Then along came Stable Diffusion, Midjourney, and DALL-E, opening up a side of myself I wasn’t previously able to touch.

Join me here as we explore the world of Diffusion models, in particular Stable Diffusion: revolutionary software capable of turning dreamers into artists with entire worlds to share.

The Rise of Generative AI in Creative Spaces

Stable Diffusion came into play in August 2022. Since then, the explosion in creativity, creation, and dedication from AI artists, coders, and enthusiasts has been enormous. The open-source project has grown into a huge community contributing to a tool capable of generating high-quality images and videos with generative AI.

The best part? It’s free, and you can access and run it with relative ease. There are hardware requirements, but if you are a gamer or someone else with a powerful GPU, you may already have everything you need to get started. You can also explore Stable Diffusion online for free at websites such as Clipdrop.co.

However, Stable Diffusion was not the first time generative AI entered the creative space. Earlier than that, many artists were using various forms of generative AI to enhance, guide, or expand their creative expression. Here are a few popular examples:

1. Scott Eaton is an amazing sculptor and early generative-AI art pioneer who combines generative models with 3D printing and metal casting to produce fascinating sculptures. Here is a video of Scott sharing his process back in 2019: https://www.youtube.com/watch?v=TN7Ydx9ygPo&t

2. Alexander Reben is an MIT-trained artist and roboticist, exploring the fusion of humanity and technology. Using generative AI, he crafts art that challenges our relationship with machines, and has garnered global recognition for his groundbreaking installations and innovations.

3. Sofia Crespo merges biology and machine learning, highlighting the symbiosis between organic life and technology. Her standout piece, ‘Neural Zoo‘, challenges our understanding of reality, blending natural textures with the depth of AI computation.

All of these artists (and many more) incorporated machine learning in art before it was cool. They’ve helped pioneer the technology, invested time, energy, and funds to make it possible to create the applications that are available today.

Fortunately, we don’t have to repeat their process. We can dive straight into creation.

How does Stable Diffusion work?

Stable Diffusion operates as a diffusion-based model adept at transforming noisy inputs into clear, cohesive images. During training, these models are introduced to noisy versions of dataset images, and tasked with restoring them to their original clarity. As a result, they become proficient in reproducing and uniquely combining images. With the aid of prompts, area selection, and other interactive tools, you can guide the model’s outputs in ways that are intuitive and straightforward.
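As a rough numerical illustration of that noising-and-denoising idea — a toy sketch only; the real model uses a trained neural network operating on image latents, not the crude averaging shown here — consider a tiny "image" of four pixel values:

```python
import random

def add_noise(pixels, noise_level, seed=0):
    """Forward process: corrupt a clean image by adding random Gaussian noise."""
    rng = random.Random(seed)
    return [p + rng.gauss(0, noise_level) for p in pixels]

clean = [0.0, 0.5, 1.0, 0.5]           # a tiny "image" of pixel intensities
noisy = add_noise(clean, noise_level=0.3)  # what the model sees during training

# A real diffusion model learns the reverse step: predicting the noise so it
# can be subtracted out, gradually turning static into a picture. As a crude
# stand-in, averaging many independently-noised copies cancels the noise:
copies = [add_noise(clean, 0.3, seed=s) for s in range(200)]
denoised = [sum(col) / len(col) for col in zip(*copies)]
# `denoised` lands very close to `clean` -- noise averages out toward zero.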

The best way to learn is to get hands-on experience, run generations, and compare results. So let’s skip talking about samplers, models, CFG scores, denoising strength, seeds, and other parameters, and get our hands dirty.

Credit: Tesfu Assefa

My personal experience

My personal experience with generative AI started with Midjourney, which was a revolutionary application of the technology. However, when Stable Diffusion was released, I was struck by its rapidly growing capabilities. It gave me the ability to guide diffusion models in a way that makes sense, enabling me to create images as I want them to be, rather than settling for whatever my prompt happened to produce. It featured inpainting, and eventually something called ControlNet, which further increased my ability to guide the models.

One of my most recent projects was a party poster commemorating an event by a Portuguese DJ group, Villager and Friends. We wanted to combine generative AI with scenery from the party location. We decided on a composition, generated hundreds of styles for it, and cherry-picked the four best, which the community then voted on. The winning style was upscaled to a massive format and will be made available in print for the partygoers. Let me show you the transformation –

The main composition

Credit: Zoran Spirkovski

The Four Selected Styles –

Credit: Zoran Spirkovski

The Winning Style by Community Vote –

Credit: Zoran Spirkovski

A few details to point out about this project:

1. Notice the number 6 present in the background of every image; this is only possible thanks to the ControlNet extension for Stable Diffusion

2. Notice the increased level of detail on the final chosen image. This is a result of an ‘upscaling’ process. The final image is a whopping 8192px x 12288px!

3. Due to the size of the image, a single generation of the final version took about four hours. We had to generate several times due to crashes or ugly results.

4. The final version is unedited. It is raw output directly from the Stable Diffusion Model.

How can you get started with Stable Diffusion?

Running Stable Diffusion locally is the way to go. However, to do that you will need good hardware. The main resource Stable Diffusion uses is VRAM, which your GPU provides. The minimum starting point is a GPU with 4GB of VRAM. Unfortunately, the best optimization (xformers) is available only for NVIDIA GPUs.

In order to run Stable Diffusion locally you will need to install some software –

1. A user interface
       a. Automatic1111 (a balance between simple and flexible)
       b. ComfyUI (extremely flexible but harder to use; resource-efficient)
2. Stable Diffusion Model (choose one you prefer on https://civitai.com/)
3. Python (a programming language)
4. PyTorch (a machine learning framework based on Python)
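Once those pieces are in place, you can sanity-check the Python side before launching a UI. The helper below is hypothetical — it is not part of Automatic1111 or ComfyUI, whose launchers perform their own checks — but it illustrates, using only the standard library, the two prerequisites from the list above: a working Python and an importable PyTorch.

```python
import importlib.util
import sys

def check_environment():
    """Report whether the core Stable Diffusion prerequisites are present."""
    return {
        # Automatic1111 targets Python 3.10; 3.8+ will at least start up.
        "python_ok": sys.version_info >= (3, 8),
        # PyTorch is the machine-learning framework the models run on;
        # find_spec() checks importability without actually loading it.
        "torch_installed": importlib.util.find_spec("torch") is not None,
    }

print(check_environment())
```

If `torch_installed` comes back `False`, the UI's own launcher will normally install PyTorch for you on first run.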

Start with the user interface; it will help you download everything else you need. I use Automatic1111; it’s simple enough and flexible enough for me. ComfyUI is better, faster, and uses resources more effectively, but it’s also more complicated and has a steeper learning curve.

The errors generated from both are verbose, so if anything goes wrong, you can copy the error message and search the web for a solution. Pretty much everything you can run into as an issue in your first month of using Stable Diffusion has been solved by someone somewhere on the web.

CivitAI is a great resource for finding new and interesting models. Stable Diffusion 1.5 has the most developed (i.e. trained) models. If you’re looking for a particular style, you can likely find it there – and if you’re not looking for a particular style, you’ll likely discover something new. That said, most models are flexible and receptive to your prompts, and you can increase the weights of your prompts to guide the generation where you want it to go.

Sometimes Stable Diffusion is stubborn. Getting the result you want can be difficult, and this is where ControlNet and other guidance methods come in, helping you create the compositions you want.

This is just the beginning of your journey, but I’m glad you took the steps to learn how to get started. I’m looking forward to seeing your creations and explorations of latent space.

Is AI art, art?

Stable Diffusion enables people to create interesting art that they would otherwise never make. If you have imagination and some basic skills, you don’t need to be constrained by technique – you can guide Stable Diffusion to put your imagination onto the page.

Countless NFT artworks are being sold online, often by people who don’t necessarily have the skills to do everything on their own, but who have learned to guide the diffusion models to produce their desired outcome. Some people simply have a talent for picking winners from a big batch of generated images.

Don’t get me wrong: there is joy in working with traditional art. Mastering the brush and paints of watercolor or oil, the strokes needed to create form on a blank canvas, human proportions, and composition techniques is a beautiful pursuit, and one can definitely still follow it alongside AI art.

But traditional art also involves a significant time investment, some pain and suffering, and a dedication most creatives are not willing to give.

AI art is also difficult; it’s just a different kind of difficulty. Getting exactly what you want is challenging, much as it is with traditional art. The thing is, learning to make good AI art means learning art theory and applying it as guidance. So in a way, AI art can bring you closer to real art than real art ever could. Something to think about.

In the end it’s up to you to decide if this is art or not. If you are finding ways to express your views, emotions, and ideas through AI, who really cares what others think about it?
