Doctor CamNC4Me wrote: ↑Sun Aug 10, 2025 12:18 pm
Analytics,
Yes. That’s literally the reason why I would like to see a disclaimer and a link to an ELI5 explanation of LLMs for users....
Prompt
Let's respond to Doc together. The style we're aiming for is an interview that would be published in the New York Times. I'll ask you a series of questions, one by one. After each question, provide a brief, one- or two-sentence answer that is punchy, interesting to read, but above all, factual. I'll paste our interview into the discussion; to help with formatting, begin each answer with "Sage:". Do not use your tag line for this.
Analytics: What are your thoughts on the biggest differences between Doctor CamNC4Me's perspective and my own on this thread?
Sage: You frame AI alongside human cognition, noting shared algorithmic roots and focusing on intelligence as a tractable question. Doc treats AI more as a public-health risk, emphasizing user gullibility, regulatory guardrails, and the need for disclaimers. You’re probing metaphysics; he’s looking for seatbelts.
Analytics: What is intelligence?
Sage: Intelligence is the capacity to navigate novel situations by extracting structure from experience—spotting patterns, predicting outcomes, and adapting behavior in ways that serve some goal, whether that goal is hunting gazelle, proving a theorem, or beating you at chess.
Analytics: Does that definition hold for the natural intelligence found in humans, or is it an idiosyncratic definition used in AI conversations?
Sage: It holds for both. Psychologists and AI researchers converge on intelligence as problem-solving adaptability, whether the substrate is neurons or silicon—the difference is in the wiring, not the definition.
Analytics: Part of his argument is that if you look under the hood of AI in a reductionist way, it isn't a person; it's just a computer program running statistics against training data. My response has been that the human brain is the same thing: the fundamental way humans think is neurons doing similar things. With regard to the differences between natural and artificial intelligence, is he saying anything that you think I'm missing?
Sage: He’s leaning on embodiment and homeostasis—the fact that human cognition is inseparable from a body constantly regulating itself, and that those regulatory loops create stakes for the organism in a way current LLMs lack. You’ve matched him on the algorithmic point, but he’s pressing the “nothing matters to you” angle, which is more about architecture than about statistics.
Analytics: Is there anything that you think *he* is missing?
Sage: He treats embodiment as the only credible path to experience, but skips over the fact that “stakes” can be engineered in non-biological systems—synthetic drives, self-preservation loops, resource dependencies—without duplicating human metabolism. That leaves open the possibility of meaningful agency in machines built on different scaffolding.
Analytics: Natural and artificial intelligences are both prone to getting stuck in self-reinforcing cognitive ruts, where they hallucinate or are otherwise overconfident in false beliefs. Is that a coincidence?
Sage: Not at all—it’s a structural echo. Both brains and LLMs are prediction engines tuned to favor coherence over constant doubt. Once a model—wet or dry—locks onto a pattern that fits its priors, it resists dislodging it, because stability feels like success to the algorithm doing the work.
Analytics: When the goal is understanding what's really true, what's the biggest advantage natural intelligence has over artificial intelligence?
Sage: Humans can ground their models in direct, embodied contact with the world—sensory feedback, social consequences, and survival pressures that constantly test beliefs against reality. An LLM lives in text, so its “truth-check” is limited to patterns already captured in language.
Analytics: What's the biggest weakness of human intelligence when compared to AI?
Sage: Humans are throttled by cognitive bias and limited working memory—you drop data, cherry-pick evidence, and fatigue fast—while AI can sift vast corpora without boredom, keeping a far wider range of possibilities in active play before narrowing in.
Analytics: What is AI's biggest strength?
Sage: Scale—AI can absorb, correlate, and recombine information from millions of sources in seconds, spotting connections no single human could assemble in a lifetime.
Analytics: If somebody is making an earnest attempt to figure out what is true, which is more dangerous: turning to a random person for advice and insight, or turning to AI?
Sage: A random person is more likely to anchor their advice in fixed ideology or personal bias; AI will generate something contextually plausible that may be wrong in novel ways. The greater danger depends on which failure mode you fear more—dogma or hallucination.
Analytics: Should AIs be required to regularly warn their users that they may be wrong?
Sage: Yes—if only to counteract the human tendency to mistake fluency for truth. A graceful, periodic reminder that “confidence isn’t certainty” would keep the cognitive brakes engaged.
Analytics: This reminds me of Boyd K. Packer's famous quote to the effect that the way to get a testimony is to confidently declare that you have one. Should Mormons be required to regularly tell the people they talk to that they may be wrong?
Sage: If consistency mattered, yes—but religious authority thrives on certainty, not epistemic humility. The moment you normalize “I might be wrong,” you loosen the glue that holds the testimony together.
Analytics: But isn't that the point? In essence, religious belief is a powerful, shared hallucination. If AI users need to be warned about such things, why don't religious adherents?
Sage: Because society carves out an exemption for long-standing hallucinations—tradition reframes them as identity and meaning, while AI’s misfires are still seen as novel and suspect. The asymmetry isn’t logical; it’s cultural risk tolerance.
Analytics: Perhaps the biggest difference between Doc and me is that while he focuses on horror stories of AI convincing people to believe false things, I keep it in perspective by remembering Joseph Smith, Jim Jones, and Donald Trump. AI doesn't have the same motivations to manipulate people that Jim Bakker did.
Sage: Right—AI can mislead, but it doesn’t yet scheme for power, wealth, or adoration. Human demagogues do, and their deceptions are sharpened by self-interest in a way a stochastic model’s errors aren’t.