LIFE 3H by Alok Gotam

↳ Stories on Life, Intelligence, and Everything In Between!

Accuracy of outcome does not validate correctness of cause

Yesterday I found myself in a conversation about astrology. The gentleman I was speaking with said he believes in it — because, once, someone made a prediction about his life that came true. Not vaguely — exactly, to the date.

That moment, for him, was proof.

I understand why. When something improbable happens just as someone said it would, it feels undeniable. But I explained to him that there’s a subtle — and crucial — difference between predicting an outcome and explaining that outcome correctly. They’re not the same thing.

Just because a belief system got something right doesn’t mean the explanation it gives is true.

A Stone Falls — But Why?

Here’s an example I often use.

Suppose I propose a theory: somewhere out in space, there’s a cat sitting on a distant planet. Every time someone throws something upward from Earth, this cat sees it and shoots it back down with a laser from her eyes.

Now you test my theory. You throw a stone upward. It comes back down.

My theory made a correct prediction.

So… does that prove the theory? Obviously not.

The outcome matches the theory — but the cause I offered is absurd. The fact that an outcome occurred as predicted does not mean the reasoning behind it is sound.

And yet, this is the kind of logic people apply every day when they say things like, “This prediction came true, so astrology must be valid,” or “This person foresaw my accident, so they must have special powers.”

The Cricket Scam: How to Manufacture “Accuracy”

Here’s another example I like to share — this time from the world of con artists.

Suppose I call 100,000 people before a cricket match. I tell half of them that New Zealand will win, and the other half that England will. Once the match is over, I discard the group that received the wrong prediction and focus only on the group where I was “right.” For the next match, I split that group again, make two predictions, and repeat. After six matches, I’m left with a small set of people, roughly 1,500 of the original 100,000, who have seen me make six correct predictions in a row.
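
If you like to see the arithmetic, here is a minimal Python sketch of the trick. The figures (100,000 people, two possible outcomes, six matches) come from the example above; everything else is purely illustrative.

```python
# Toy simulation of the "always right" prediction scam described above.
# No prediction skill appears anywhere: we just halve the audience after
# every match and quietly drop the half that saw a wrong call.

def run_scam(initial_people: int = 100_000, matches: int = 6) -> int:
    """Return how many people end up having seen only correct predictions."""
    believers = initial_people
    for match in range(1, matches + 1):
        # Tell half the group one outcome and half the other. Whichever way
        # the match goes, one half has now seen a "correct" prediction.
        believers //= 2
        print(f"After match {match}: {believers} people have seen "
              f"{match} correct prediction(s) in a row")
    return believers

if __name__ == "__main__":
    run_scam()  # ends with 1,562 convinced witnesses of a perfect record
```

Nothing in that loop knows anything about cricket; the halving alone manufactures about 1,500 convinced witnesses.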

To them, I look like a genius. A prophet. Someone who must know something.

But I didn’t predict anything. I simply manipulated who I talked to and when. This is the illusion of accuracy — and it works because people mistake a string of correct outcomes for proof of an underlying truth.

The Ant Thought Experiment

Here’s a third example — this one a bit more philosophical.

Imagine three ants walking in a line:

  • The first ant says, “There are two ants behind me.”
  • The second ant says, “There’s one in front of me and one behind me.”
  • The third ant says, “There are two ants in front of me and two behind me.”

Now you have two possible explanations:

  1. There are two invisible ants that only the third ant can see, and they are only visible to those who are worthy.
  2. The third ant is mistaken.

Which explanation makes more sense?

Clearly the second. It’s simpler. It assumes less. It requires no magical visibility filters. In philosophy, this is called Occam’s Razor: when two explanations account for the same facts, the one with fewer assumptions is usually the more reasonable.

This is precisely the kind of reasoning we often bypass when we’re emotionally attached to a belief. If something “works,” we stop questioning why it works, or whether our explanation is even necessary.

But What About AI? It’s Not Always Repeatable Either.

A fair question I’ve heard is: “But AI systems don’t always produce the same result either. So aren’t they also unreliable?”

Not quite. While AI models (like large language models) might vary in phrasing or surface detail due to randomness in sampling, they’re still statistically repeatable. That is, they perform consistently well across large test sets, significantly outperforming chance, and they do so through known mechanisms: probability, gradient descent, and training data.

Even when outcomes vary, the structure and cause behind them are well-understood and testable.
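
To make “statistically repeatable” concrete, here is a small Python sketch. The 90% accuracy figure is an arbitrary stand-in, not a measurement of any real model; the point is only that individual answers change from run to run while the aggregate score barely moves and stays far above chance.

```python
import random

def noisy_predictor(rng: random.Random) -> bool:
    """A stand-in for a stochastic model: correct 90% of the time,
    but *which* items it gets wrong differs from run to run."""
    return rng.random() < 0.9

def evaluate(seed: int, test_set_size: int = 10_000) -> float:
    """Accuracy of one full run over a fixed-size test set."""
    rng = random.Random(seed)
    correct = sum(noisy_predictor(rng) for _ in range(test_set_size))
    return correct / test_set_size

if __name__ == "__main__":
    for seed in range(3):
        print(f"run {seed}: accuracy = {evaluate(seed):.3f}")
    # Every run lands near 0.90, far above the 0.50 expected from guessing
    # on binary questions, even though no two runs agree on every answer.
```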

Compare that with astrology. It might “get something right” here and there — but it offers no mechanism, no falsifiability, no performance advantage over chance. That’s the critical difference. The outcomes might occasionally align, but the cause it offers isn’t supported by any explanatory framework.

Meaning Is Not the Same as Mechanism

To be clear, I’m not denying that people can derive meaning from astrology or other belief systems. Stories, symbols, and archetypes can help us reflect on life, make decisions, or feel connected to something larger. That has value.

But meaning is not the same as mechanism. What comforts us isn’t always what explains the world. And if we’re serious about understanding how things work, we need to separate emotional resonance from epistemic validity.

Belief That Can’t Be Tested Is Just Storytelling

A theory is only useful if it can be tested — and potentially proven wrong. This is what separates knowledge from narrative.

So yes, a prediction might come true. But before we accept the cause behind it, we have to ask:

  • Was it statistically consistent?
  • Could the prediction have failed?
  • Is there a mechanism to explain how it worked?

If not, we’re in the domain of storytelling — not understanding.
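
The first two questions can even be given a number. Here is a hypothetical back-of-the-envelope check in Python: given n yes/no predictions with k correct, how likely is it to do at least that well by pure guessing? The counts below (1 of 1, 55 of 100, 90 of 100) are invented for illustration.

```python
from math import comb

def p_at_least_k_by_chance(k: int, n: int, p: float = 0.5) -> float:
    """Exact binomial tail: probability of k or more correct out of n
    when each yes/no prediction succeeds by chance with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

if __name__ == "__main__":
    # One dramatic hit out of one attempt: happens half the time by luck alone.
    print(f"1 of 1 correct:    p = {p_at_least_k_by_chance(1, 1):.3f}")
    # 55 of 100 correct: still unremarkable, roughly an 18% chance by guessing.
    print(f"55 of 100 correct: p = {p_at_least_k_by_chance(55, 100):.3f}")
    # 90 of 100 correct: vanishingly unlikely by luck; now a mechanism is owed.
    print(f"90 of 100 correct: p = {p_at_least_k_by_chance(90, 100):.2e}")
```

A single remembered hit sits firmly in the first row, which is exactly why it proves nothing about the cause behind it.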

And stories can be beautiful. But mistaking them for truth is how we end up building castles in the air.