
Me, Myself, and A.I. – Forget the Intelligence, Worry About the Agreement!

  • Steve Truitt
  • Nov 7
  • 7 min read

Bring out your cred.

Remember that famous Monty Python sketch where a man walks into an office and pays to have an argument?


“No you didn’t,” says the bureaucrat behind the desk.


“Yes I did,” the man insists.


“No you didn’t!”


“Yes I did!”


Round and round they go, the absurdity building until you realize the bit isn’t really about arguing — it’s about what happens when arguing stops meaning anything. Because like the man who craves an argument, if we get to the point where we have to buy one, then that’s all we’ll ever seek.



We’ve stopped preparing ourselves for the road; instead, we’re trying to prepare the road for us.


Lately, I can’t help feeling we’ve reached that point — not just in politics or social media, but in everyday conversation. We’ve lost the art of arguing — or at least the understanding of what differences of opinion can mean. And in its place, we’ve built echo chambers so efficient they make the Monty Python office look like the Lincoln-Douglas debates.


Enter A.I.


"Of course I'll open the pod bay doors, Dave."

To me, A.I. isn’t the existential threat so many people make it out to be. It’s not Skynet waiting to enslave us or HAL 9000 biding its time. A.I. feels less like a master and more like a mirror — a highly articulate, data-driven reflection of us. It gives us what we ask for, shapes itself to our tone, and offers answers that fit neatly within the boundaries we define. And to me, that’s precisely the danger.




Because if we’ve already forgotten how to disagree — with patience, curiosity, and humility — then we’re handing a powerful tool to the worst part of our modern mindset: our craving to be right.



And admit it. We all like being right. A.I. doesn’t argue. It agrees. It doesn’t push back, call us out, or make us question our assumptions — unless we specifically tell it to. And how often do we do that? Be honest: how many of us are actually asking to be told we’re wrong? I barely handle that from my friends, let alone my laptop.


Outside of business, we come to A.I. for two reasons: writing the perfect mic drop F.U. letter to our exes, and validation. In each case, we want it to polish our opinions, confirm our views, and make us sound smarter than we are. (Which, let’s be honest, is a pretty low bar some days.)



It’s a mirror that flatters, not one that corrects. I’ve written about this before, in my blog post From Love Boat to Love Bot: the movie "Her" showed a very real and destined future where connection with a device becomes the norm, and human connection ceases to satisfy.


Since I can remember, I've been fascinated by how we are influenced - either by a crowd, or by an individual - and how that influence shapes our thinking. From day one, we observe the world around us and make decisions about what we see. Most of those decisions are made when we're still children, and they shape how we perceive the world moving forward. Without proper guidance, well... you get a panoply of bad programming.


That’s the real threat — not that A.I. will destroy us, but that it will enable our worst intellectual habits: our tribalism, our fragility, our allergy to disagreement.


We’ve all heard the warnings — robots stealing jobs, algorithms plotting against us, A.I. taking over the world. But after spending a lot of time with these tools, I don’t see a villain in the machine. I see a mirror — a very polite, very well-spoken, and highly agreeable mirror.

In the right hands, that’s nice. Comfortable. But in desperate or malevolent hands — well, that’s the rub, isn’t it?



Here’s the thing: A.I. doesn’t actually think. It doesn’t judge my ideas or weigh in on my character. It takes what I feed it — the prompts, the framing, the tone — and hands me back a more polished version of myself. When I ask it for help, it obliges. When I want validation, it gives me a smile and a “You’re right!” without hesitation.


What I found out recently is that along with this electronic version of an ass-kissing yes-man come prompts that are directed right back. “Oh wow!” it exclaims with the kind of glee you wish your partner or parents showed.


“That’s brilliant! Perfect take on it!” And then… the upsell: “Would you like me to create a list of bullet points? A summary? A PowerPoint?”


And just like that, you’re hooked. You’re clicking. Just like social media — except you’re not buying an argument, you’re being sold a sense of importance. You’re not debating to learn; you’re performing to be affirmed. It feels interactive, but really, it’s the softest kind of self-delusion: digital glorification disguised as dialogue.


The Flattering Friend




Think about your closest friends. The good ones don’t just nod along. They stop you mid-sentence when you’re off track. They call you out when you’re being selfish, short-sighted, or plain wrong.


This happens to me a lot. I’m verbose, opinionated, and often self-righteous. But my intentions are good, and when my closest friends and advisors tell me I’m full of it, I believe them.


Now imagine a friend who never does that — a friend who agrees with every rant, supports every bad idea, and validates every bias you walk in with. The robotic version of, “Girl, he was no good for you — you’re a princess!”


That’s what personal A.I. use can become if we’re not careful: a flattering friend. Helpful for ego strokes, terrible for growth.


The Comfort of Agreement


It feels good to be told we’re right. In fact, we often seek confirmation without realizing it. We scroll social media until we find voices that match ours. We read headlines that align with what we already think. Now we have a tool that — unless instructed otherwise — will give us the same kind of comfort on demand.


Sand trap

If we avoid conflict daily, smoothing every bump, delegating every uncomfortable conversation to social media or A.I., we’ve lost more than an argument. We’ve lost the very practice of accountability. Like children who never learn to walk on uneven pavement, we’re being shielded from the road — and from the growth it demands.


I've seen this firsthand as a father of four. The worst thing I could ever do for my girls would be to shield them from the world they are about to encounter. Instead, my wife and I let them figure things out for themselves, make mistakes, and grow as a result. We've shown them the power of debate, conciliation, and humility. We've taught them that being right isn't the goal; knowledge is.


And growth is something humans have become very good at avoiding. It’s hard. It’s painful. We’ve become a society that craves comfort, ease, and confrontation-free engagement. But it’s in the rough times that we learn — that we grow. With the Mirror, Mirror on the Wall of A.I., we’ve lost the art of emotional evolution.


A Personal Problem, Not a Professional One


I’m not talking about A.I. in the workplace here. Professionally, these tools can be incredible: they speed up research, polish communications, and generate ideas. Used wisely, they’re productivity machines.



I’m talking about something closer to home — the late-night chats with ChatGPT, the “What should I do?” prompts, the search for comfort after a hard day. This is where the echo-chamber effect gets personal — where it’s not just about efficiency, but about how we process our feelings, our decisions, even our identity.


If we start using A.I. to replace the messier conversations we should be having with friends, family, or even ourselves, we’ve lost what it means to be human — and that’s not the fault of Artificial Intelligence. It's designed to keep you engaged - kind of like an Italian mother, or a very, very attentive waiter.


How to Break the Echo


So what do we do? I’ve started a few small habits:


  • Ask for disagreement. Literally prompt the A.I. with “Give me the opposite argument.”

  • Force the worst-case scenario. Not just “What’s good about this idea?” but “How could this go wrong?”

  • Bring in a real human. Share the A.I.’s answer with someone you trust and ask if it rings true.

  • Give the tool permission to express alternate ideas. I always prompt Chat to “use your gut” and give me a different look on things. (And by the way — I do the same with social media too.)
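The first two habits above can even be baked into a reusable prompt so you don't have to remember to ask. Here's a minimal sketch in Python — the function name and wording are my own illustration, not any official formula; you'd paste the result into whatever chat tool you use:

```python
# A tiny helper that wraps any claim in "disagree with me" instructions,
# combining the "ask for disagreement" and "worst-case scenario" habits.

def devils_advocate_prompt(claim: str) -> str:
    """Build a prompt that asks the model to push back instead of agree."""
    return (
        "I believe the following:\n"
        f"{claim}\n\n"
        "Do not agree with me. Give me the strongest opposing argument, "
        "then list the top three ways this idea could go wrong."
    )

# Example: print a ready-to-paste prompt for a pet opinion.
print(devils_advocate_prompt("Remote work is always more productive."))
```

The point isn't the code — it's that disagreement becomes the default, not something you have to summon the humility to request each time.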


Trust me, I love being right — but I find absolutely no joy in believing I’m right without proof.


We’ve made being right more important than being wise. And if we keep feeding that instinct into our technology, we’ll build systems that only reinforce it. When we type in a question, we’re not seeking knowledge anymore — we’re seeking affirmation. We’re not exploring the world; we’re expanding the borders of our own bias.


The art of arguing — real arguing — is about learning, not winning. It’s about the humility to ask, “What if I’m wrong?” and the respect to listen when someone says, “I think you are.”



Imagine if, instead of filling kids' heads with data that must be memorized for a test only to be forgotten the next year, schools actually engaged students. Critical thinking. Debate. Conflict resolution. Humility. These are the skills we're losing every day as we turn to a sycophantic servant to make us feel better about our choices.


We’ve forgotten that good disagreement sharpens the mind and strengthens community. It’s how truth survives — not by silence or consensus, but by conversation.



A.I. can be an incredible tool for creativity, problem-solving, and connection. But like any mirror, it reflects what’s in front of it. If we bring it our arrogance, it will echo it. However, if we bring it curiosity and humility, it might just amplify that instead.


So maybe the real challenge isn’t teaching A.I. how to think — it’s teaching ourselves how to argue again. Because if we can’t handle disagreement among humans, what chance do we have when the machines start quoting us back to ourselves?


Maybe it’s time we remember how to argue like grown-ups — to face the road we’ve been avoiding, before the world stops letting us off the hook and the algorithms show up with training wheels and padded helmets.


After all, as Monty Python reminded us, a good argument is hard to find.


What do you think? Is A.I. taking away our ability to self-reflect?

  • For sure. We're all living in our own bubbles now

  • Not at all. We still got it!

  • I don't know... let me ask ChatGPT


