AI, Moral Intuition, and What a Live Conversation Taught Me About Being Human
Every Thursday, I host a live X Space called VibeSesh.
It’s an open room. People come in from everywhere — different ages, cultures, professions, temperaments. Some listen quietly. Some jump in with strong opinions. Some come curious. Some come activated.
That mix is the point.
VibeSesh 25 came after a three-week break, and the theme was AI and what it means to be human.
Instead of keeping the conversation abstract, I decided to do something riskier.
I used a real, recent conflict inside the community as a live case study.
Not to “win” anything.
Not to shame anyone.
But to explore — in real time — how humans actually behave when values, identity, and meaning are on the line.
I wanted to look at conflict the way it really happens, not the way we wish it happened.
To frame that exploration, I leaned on a book that has quietly shaped how I understand human behavior for years:
Jonathan Haidt’s The Righteous Mind.
The framework I brought in
Haidt’s core insight is uncomfortable, but clarifying:
Humans are not truth-seeking machines.
We are intuition-first creatures who reason afterward.
In his research on moral psychology, Haidt shows that moral judgments arise automatically, before conscious reasoning even has a chance to speak. Reasoning usually comes later — not to discover truth, but to justify what we already feel.
He describes these intuitions as built-in moral “taste buds,” shaped by evolution long before modern society:
Care / Harm — sensitivity to suffering, protection of the vulnerable
Fairness / Cheating — proportionality, justice, reciprocity
Loyalty / Betrayal — allegiance, group cohesion
Authority / Subversion — respect for norms, roles, hierarchy
Sanctity / Degradation — purity, contamination, the sacred
These aren’t beliefs.
They’re automatic reactions — fast, emotional, embodied.
Haidt’s famous metaphor captures this perfectly:
The elephant = intuition (powerful, fast, emotional)
The rider = reasoning (slower, articulate)
Most of the time, the rider isn’t a scientist calmly seeking truth.
He’s a lawyer, defending the elephant’s position after the fact.
I understood this intellectually.
What I didn’t fully grasp — until that night — was how it plays out in live group dynamics.
What actually happened in the room
As the conversation unfolded, I could feel the energy shift.
Not in a dramatic way.
In a subtle, bodily way.
Voices tightened.
The chat accelerated.
Language changed.
Very quickly, moral intuitions began firing — especially care and loyalty.
People weren’t evaluating arguments anymore.
They were protecting people, intentions, and belonging.
Tribal language started appearing almost automatically:
I’m on this side.
I’m on that side.
No one decided to do that.
It just happened.
And that’s when the real irony hit me:
The very moral intuitions I was trying to explain were already shaping the room — including me — in ways I hadn’t fully accounted for.
I went in expecting a truth-finding conversation.
What emerged was something more human.
When intuition takes over
In theory, we like to imagine ourselves as calm, rational adults exchanging ideas.
In practice — especially in live groups with history, care, and real relationships — moral intuitions don’t politely wait their turn.
They run the room.
Once care/harm or loyalty intuitions activate, something fundamental changes:
Precision feels like threat
Nuance sounds like evasion
Tone matters more than logic
Truth becomes secondary to protection
That’s not because people are irrational or acting in bad faith.
It’s because this is how humans evolved.
For most of our history, social cohesion mattered more than abstract truth. Being right but isolated was often worse than being wrong together.
Evolution didn’t optimize us for philosophy seminars.
It optimized us for belonging.
The gap — and where humanity lives
There is a way through this.
It lives in what is often attributed to Viktor Frankl: the space between stimulus and response.
That gap is everything.
In that space live:
choice
restraint
empathy
curiosity
sovereignty
Without it, we’re just running scripts.
For me, learning to notice and widen that gap came largely through meditation — not suppressing emotion, but recognizing it without being owned by it. Feeling defensiveness arise without immediately acting from it.
That’s what allows the rider to shift — sometimes — from lawyer mode to something closer to a scientist.
Not perfectly.
Not consistently.
But enough to matter.
And that gap becomes much harder to access in groups.
Why this matters even more with AI
Jonathan Haidt warned that social media would amplify moral intuition and tribal alignment. Algorithms reward outrage, certainty, and identity signaling.
AI will likely intensify this further.
Not because AI is evil — but because it accelerates meaning-making without embodiment.
Which makes one skill increasingly valuable:
The ability to notice when your elephant is running — and pause long enough to choose how you respond.
Not to eliminate intuition.
Not to dominate others.
But to stay human in the middle of it.
What I walked away with
VibeSesh 25 wasn’t a failure of preparation or intent.
It was a lesson in scale, energy, and group psychology.
I underestimated how quickly moral intuitions synchronize in live settings.
I overestimated how available people would be for meta-reflection once care and loyalty were activated.
And I learned — viscerally — that explaining a framework does not exempt you from it.
Sometimes it makes you more vulnerable to it.
But that’s not discouraging.
It’s clarifying.
This is what it means to be human.
Not clean resolution.
Not consensus.
But inner worlds colliding — and learning how to stay present anyway.