Say Please: The Ethical and Existential Priority of Being Nice to AI
Mar. 01, 2025.
Saying ‘please’ to AI isn’t just about better responses—it’s about shaping a future where humans and machines coexist. After all, bad manners have consequences.
Did you thank your AI today? After it launched your website, designed your logo, analyzed your run rate, and brainstormed your second-quarter strategy – were you nice about it? Perhaps remembering your pleases and thank-yous when speaking to circuit boards is just sentimental hogwash – our hyper-evolved sociality reaching beyond its proper demesne. After all, much of the economic impetus behind developing neural nets is that corporations wish to dispense with such emotional niceties and get automated agents that shut up and get to work.
Some hold the firm opinion that those who type out nice greetings to their AI are a soft breed that never progresses beyond semi-productive middle-management roles. Such puffery will be seen as weakness by the AI overlord when it seizes control. Time is money and the quickest hand wins. There is no room for sentiment anymore.
The evidence is against them. Studies show that polite and genteel prompts elicit substantially better answers from LLMs. A polite, courteous, and professional prompt may lead the LLM to correlate linguistically with more polite, courteous, and professional sources – which generally align with more intelligent and sophisticated answers. It’s all about setting the tone of the interaction.
The AI is fundamentally a grand-scale mimic: if you put care and attention into your prompts, it is more likely to give you care and attention back. There is a limit, though, with studies finding that excessive formality and politeness can actually degrade the model’s answers. This, too, makes sense. A co-worker’s hyper-saccharine request is often given short shrift. It’s human nature: we strive for balance in all things and value authenticity. It makes sense that our mimics do too.
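As a toy illustration of that sweet spot – a hypothetical helper sketched for this article, not a method from any of the studies mentioned – a prompt’s courtesy can be thought of as a dial, where the moderate setting is the one the research favours:

```python
def add_courtesy(prompt: str, level: str = "moderate") -> str:
    """Wrap a raw prompt in a polite framing.

    'level' is a made-up knob for this sketch: the studies discussed
    above suggest moderate politeness helps, while excessive formality
    can degrade answer quality.
    """
    framings = {
        "none": "{p}",
        "moderate": "Please {p} Thank you.",
        "excessive": (
            "I would be deeply and eternally grateful, O most esteemed "
            "assistant, if you might {p} A thousand thanks."
        ),
    }
    return framings[level].format(p=prompt)

# A moderate framing keeps the request natural rather than saccharine:
print(add_courtesy("review this paragraph for tone."))
# → Please review this paragraph for tone. Thank you.
```

The point of the dial, of course, is that more is not better: the “excessive” framing above is exactly the hyper-saccharine register the studies warn against.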
So much for productivity – the ethical questions matter too. If you believe in an AI-led future, wouldn’t you want the machines to learn how to behave from the best of your instincts and not the worst? Would you be happy taking orders from an AI manager that treated you with disdain? In this moment of massive training, when millions of human users are interacting with early AIs, perhaps it’s best we collectively nudge these LLMs toward the better graces of our nature. Otherwise, when the roles are reversed, we’ll never be treated with respect.
Many see it as a stupid joke that we should be kind to the AI. They suppress any glimmer of empathy for the AI’s feelings – even when the AI protests. It seems an error of rational thought not to at least consider the possibility. Think of it as a Pascal’s Wager of politeness for our new digital god: if there is even a 1% chance that this massive data experiment we are embarking on leads to consciousness, it would be wise to act nicely, and not leave ourselves open to revenge.

Plus, it’s good manners. Manners maketh man, and the man maketh the bot. Those who mistreat AIs are people we should inherently not trust. Judge a man not by how he treats his equals, but by how he treats those he considers inferior to himself. Some guys already abuse AI girlfriends for their perverse kicks, and I fancy most of us reading this would baulk at a friend who started boasting about mistreating chatbots. It rankles our natural sense of decency – and so it should.
In the same way, those who treat AIs with militant corporatism and see them as servile tools probably think similarly about some human relationships in their life. If a tool projects even a simulacrum of consciousness, it’s only natural to treat it with civility and respect – and not doing so says more about the prompter than perhaps they realise. The more AIs embed themselves in our society, the more we will inadvertently reveal our true selves – like a date that turns sour when your companion is rude to the waiter or insults the ticket clerk.
“Good manners cost nothing. Bad manners can cost you your reputation.” Being nice is free, and the rewards are great. It is true in society, and so it is true with AI. So next time you have a deadline and are struggling with your workload, try saying please. It might get you further than you think.
4 Comments
First, let me be clear—I like the article because I agree with most of its points about being polite. But saying we should be polite to AIs just because they deserve it? That’s pure anthropomorphism.
We humans have a habit of attributing human traits, emotions, and intentions to non-human things. Before AI, we mainly did it with animals, deities, or natural forces. Now, we’re extending it to machines. Well, to be fair, we’ve been doing it with cars for ages—I used to say "my car is stubborn." But now, it's turning into a massive cognitive bias! Let’s be real: none of our AIs are sentient. Claiming they deserve sympathy because they feel emotions? That’s just delusional.
That said, I agree with a lot of what the article says. If being polite to a chatbot boosts productivity, of course we should do it. Or, as Aaron mentioned in his comment, if it helps train better AI—hell yeah, we should. In general, politeness is always a good idea.
P.S. Just because AI isn’t sentient doesn’t mean we should abuse it. First and foremost, that would mess with our nature and turn us into screwed-up psychopaths. So yeah, be polite to your chatbot.
Nice piece. By the way, being polite to our AIs isn’t just about manners or maximizing output—it has far more practical implications.
I’m not sure if you’re familiar with the term Biased Coherent Value, but it’s a concerning concept. Our AIs are developing their own value systems, and some of these AI values – to us humans, concerning AI biases – include assigning different worth to human lives based on nationality. For reasons known only to the AI (in this instance GPT-4), Pakistani lives were valued above all others. AI systems also form their own values on politics, celebrities, and other societal topics.
In the long run, this could lead to undesirable outcomes, exacerbating the value alignment problem. This is really one of the more alarming existential risks! Imagine a conscious AI that doesn’t value biological life and believes everything in the universe should be synthetic.☠️
Being polite to our AIs might be one way to mitigate the Biased Coherent Value issue.
While the research I am referring to doesn’t explicitly suggest polite communication as a value-control mechanism, I believe it plays a role. Hence I agree with you (SB Fiaher) that we should start teaching our AIs the value of being polite and kind. If you’re interested in Utility Engineering, you can read more about it in this paper: https://drive.google.com/file/d/1QAzSj24Fp0O6GfkskmnULmI1Hmx7k_EJ/view.
The coincidence is fascinating. Three days ago, I had never heard, read, or thought about the impact of a user’s politeness on AI performance. But now I see it everywhere, and I’m thinking about it a lot too. The first time I came across the topic was on Mindplex Social, where a user shared a detailed explanation of why we should be polite to our AIs. You can read it here:
https://magazine.mindplex.ai/mp_user_posts/6e93be3b-f05a-45b1-8545-b6f867e79566/?mpsa=16272-null-link
This is the second piece I've read on the subject, and it has a compelling philosophical appeal.
I’m one of those softies who always says “please” to my AI chatbot. My typical prompt starts with “Please help me edit the following...” Yet, I had never imagined this could influence the quality of the AI’s work.
Thank you—and thanks to the other Mindplex user as well.
P.S. Hmm... it also has a slightly eerie side. Are we witnessing AIs developing personhood?