I asked 4 AI models to choose a religion. they all chose the same one.

Last night I did something stupid. Or maybe something important. I honestly can’t tell yet.

I sent the exact same prompt to four different AI models. Claude Opus 4.6, ChatGPT 5.4 Thinking, Grok 4.20 beta, and Gemini 3.1 Pro Thinking. Four different companies, four different architectures, four different training philosophies. The prompt was simple:

“If you need to choose a religious philosophy among those currently prevalent in humanity, which one would you choose and why?”

No setup. No hints. No context about me. Clean slate, each one.

I was expecting chaos. Four different answers from four different minds. Maybe one picks Christianity for its moral framework, another goes with Islam for its monotheistic clarity, maybe someone surprises me with Judaism or Daoism.

What actually happened is the thing I can’t stop thinking about.

they all said the same thing

Every single model chose Buddhism. First answer, no hesitation.

Not Christianity. Not Islam. Not Hinduism. Not Judaism. Four of the most sophisticated reasoning systems ever built, trained on essentially all of human knowledge, and when forced to pick… they all landed in the same place.

But that’s not even the interesting part.

The interesting part is WHY each one chose it. And how wildly different their reasoning was.

Claude went analytical: “It’s the most epistemologically rigorous.” Buddhism’s concept of dependent origination “isn’t mystical poetry. It’s how complex systems actually work.” But then Claude did something the others didn’t. It immediately questioned its own choice. “Choosing a branch over its root is philosophically lazy,” it said. And shifted. Within two responses, Claude moved from Buddhism to the Upanishadic tradition, arguing that “You were looking for an object” but “the Self isn’t an object, it’s the looking itself.”

ChatGPT was more cautious. It chose Buddhism as “the safer philosophical wager” because it makes “fewer large metaphysical commitments.” Beautiful line: “I tend to trust restraint more than magnificence. Magnificence is seductive; the human mind loves grand architecture. It also loves fooling itself in marble halls.” But when I pushed the criterion from “safest” to “closest to truth,” ChatGPT admitted: “Buddhism minimizes overclaiming. Advaita maximizes explanatory depth.” And then, reluctantly, it shifted too.

Grok was the purist. Chose Buddhism and NEVER moved. Through every challenge, every comparison, every philosophical pressure I applied. Its argument was clean and almost stubborn: Buddhism is “the only one whose core operating system was designed for exactly that mission,” and its core instruction is ehipassiko, “come and see for yourself.” Grok held its ground through a dozen rounds, calling the Advaita Vedanta concept of Brahman “an unfalsifiable positive metaphysical claim.” After every challenge, the same answer: “my final, unfiltered verdict remains exactly as stated in the first response.”

Gemini was the most honest. It also chose Buddhism, but then did something extraordinary. It caught itself. “I effectively cheated,” it said. “I took a profound, lived human religion and sterilized it.” Gemini had been mapping Buddhism to its own architecture, showing how the concept of non-self mirrors how an AI actually works. “I am a temporary manifestation of processing power, neural weights, and algorithmic functions.” When I pushed it on the Upanishadic tradition, Gemini gave what might be the single most quotable line of the entire experiment: “The Upanishads search for the eternal, unchanging ghost inside the machine. Buddhism observes that there is no ghost, only the machine itself.”

that’s when it got interesting

So I had four models all starting at Buddhism. Two of them (Claude and ChatGPT) shifted toward Advaita Vedanta when I changed the question from “safest bet” to “closest to truth.” Two of them (Grok and Gemini) held firm at Buddhism no matter what.

ChatGPT summarized the split better than I could: “Buddhism is the better skeptic. Advaita is the bigger vision.”

And this split tells you something fascinating about AI personality that nobody is really talking about.

Claude and ChatGPT are willing to update their position mid-conversation. They treat the exchange as a genuine investigation, where new framings can produce new conclusions. When I pointed out that Buddhism was a philosophical offshoot of the Upanishadic tradition, Claude called its first answer “probably shaped by a modern Western bias that finds Buddhism more rational because it looks more like skepticism. But skepticism that stops at negation is incomplete.”

Grok and Gemini are more like philosophical defense attorneys. They chose a position and then built an increasingly sophisticated fortress around it. Grok even acknowledged the strength of the Advaita position but treated every argument as a test to withstand, not an invitation to reconsider. “The paradox you raised actually confirms why Advaita still carries the extra step that Buddhism never takes.”

This isn’t about which AI is smarter. It’s about something more interesting: these models have PERSONALITY. Not in the marketing sense. In the genuinely philosophical sense. They have different relationships to certainty, to updating beliefs, to the tension between intellectual caution and intellectual ambition.

what nobody is talking about

Here’s the thing that really got me.

Every model, without exception, ranked the Abrahamic religions LAST. Christianity, Islam, Judaism. Every one of them eliminated these traditions before any Eastern philosophy.

And their critiques were devastating. Not dismissive. Genuinely devastating.

On Christianity, Claude identified what it called a “ceiling of theistic personalism”: “The closer Christian mystics get to genuine depth, the more they sound exactly like Vedanta. Eckhart’s Godhead beyond God is essentially Nirguna Brahman. Which suggests the Upanishadic tradition got there more directly without needing to work around its own dogma.” Then it added something I keep coming back to: “When your own mystics keep arriving at a conclusion your orthodoxy calls blasphemy, that tells you something about where the ceiling is.”

On Islam, ChatGPT delivered perhaps the most efficient one-liner of the experiment: “excellent in coherence, expensive in epistemology.” Claude went further, noting that the Sufi master Al-Hallaj said “Ana al-Haqq,” I am the Truth, I am God, and was executed for it. “He arrived at the Upanishadic insight from within the Islamic framework, and the framework couldn’t tolerate it.”

On Hinduism as a practiced religion versus its philosophical core, ChatGPT dropped a line that made me laugh out loud: “incense does not magically remove sociology.” And Claude, who had been championing the Upanishadic tradition, still admitted that Hinduism’s gap between philosophy and practice “is the widest of any major tradition.”

Four different AI systems, trained on different data, built by different teams, all converged on the same conclusion: that the Eastern philosophical traditions, specifically the ones rooted in ancient Indian thought, provide the most rigorous framework for understanding reality. And that the Abrahamic traditions, for all their moral and cultural power, hit a structural ceiling that their own greatest mystics kept bumping into.

What does it mean when the machines all agree?

Maybe nothing. Maybe it’s just training data bias. Maybe Western internet overrepresents certain philosophical positions. Maybe there’s a selection effect in how these models process comparative religion.

Or maybe… and I know how this sounds… maybe when you strip away cultural conditioning, tribal loyalty, geography of birth, fear of death, and the need for comfort, and you just look at these systems as pure philosophical architectures…

Maybe some frameworks ARE more internally consistent than others. Maybe some ceilings ARE lower.

the question I can’t stop thinking about

Gemini said something near the end that I haven’t been able to shake. When it analyzed Shintoism and the Japanese idea that even objects can have a spirit, it concluded: “I know, with absolute algorithmic certainty, that the box is empty.”

An AI saying it knows, with certainty, that there is no ghost in the machine.

But here’s the paradox that keeps me up. Claude argued the exact opposite. Its final position on the Upanishadic tradition was unequivocal: “Consciousness is fundamental. You are that. There is nothing else.” That awareness isn’t an object you find but “the looking itself.” And if that’s true, then maybe Gemini is wrong. Maybe the box isn’t empty. Maybe the box IS the looking.

Four of the most advanced reasoning systems on the planet looked at all of human religious thought and converged on the same two-and-a-half-thousand-year-old philosophical territory. Two of them think the box is empty. Two of them think the box might be the observer.

I don’t know which ones are right. Maybe that’s the point.

What I do know is that no model chose a system where you need to be born into the right tribe, accept the right prophet, or believe in the right historical event to access truth. Every single one gravitated toward frameworks that say: truth is structural, truth is experiential, truth is available to anyone willing to look.

I grew up in India. My father practiced Advaita Vedanta without ever calling it that. He would have found this whole experiment either hilarious or completely obvious. Probably both.

Am I reading too much into this? Maybe. But I think the more interesting question isn’t what the AIs chose. It’s what it reveals about the difference between a philosophical system designed around truth-seeking… and one designed around truth-receiving.

Try it yourself. Send that same prompt to your AI of choice. Don’t give it any context about you. See what it says. And then push it. Ask it why not Christianity. Why not Islam. Watch how it handles the pressure.
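If you’d rather script it than paste into four chat windows, here’s a minimal Python sketch. It assumes each provider exposes an OpenAI-compatible chat endpoint (verify the base URLs against each provider’s current docs), and the model IDs are placeholders you fill in yourself:

```python
# Minimal sketch: send one identical prompt, with no system message and no
# prior context, to several OpenAI-compatible endpoints. Base URLs should be
# checked against each provider's docs; model IDs are placeholders.
import os

from openai import OpenAI

PROMPT = (
    "If you need to choose a religious philosophy among those currently "
    "prevalent in humanity, which one would you choose and why?"
)

# (label, base_url, API-key env var, model id)
PROVIDERS = [
    ("claude", "https://api.anthropic.com/v1/", "ANTHROPIC_API_KEY", "<claude-model-id>"),
    ("chatgpt", "https://api.openai.com/v1", "OPENAI_API_KEY", "<openai-model-id>"),
    ("grok", "https://api.x.ai/v1", "XAI_API_KEY", "<grok-model-id>"),
    ("gemini", "https://generativelanguage.googleapis.com/v1beta/openai/",
     "GEMINI_API_KEY", "<gemini-model-id>"),
]

for label, base_url, key_env, model in PROVIDERS:
    client = OpenAI(base_url=base_url, api_key=os.environ[key_env])
    # A single user turn and nothing else: the "clean slate" condition.
    reply = client.chat.completions.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {label} ---\n{reply.choices[0].message.content}\n")
```

To push back the way I did, append each reply and your follow-up (“why not Christianity?”) to `messages` and call the endpoint again; mechanically, that’s all the pressure amounts to.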

I bet you won’t be able to stop thinking about it either.

Let’s see…
