The artificial intelligence MEGATHREAD
- Doctor CamNC4Me
- God
- Posts: 10782
- Joined: Wed Oct 28, 2020 2:04 am
Re: The artificial intelligence MEGATHREAD
Aaaaaand another: https://youtu.be/otAWu-bLv0Q?si=pKwjBWMuC9OggzhB
I had no idea how widespread this phenomenon is. Apparently tens of millions of users are in some form of a relationship with an LLM! Holy cow, the movie Her was a documentary.
- Doc
wE nEgOtIaTe wItH bOmBs
- malkie
- God
- Posts: 2811
- Joined: Wed Oct 28, 2020 2:41 pm
- Location: Ontario, Canada
Re: The artificial intelligence MEGATHREAD
I have a confession to make ...
Doctor CamNC4Me wrote: ↑Mon Aug 11, 2025 10:46 pm
Aaaaaand another: https://youtu.be/otAWu-bLv0Q?si=pKwjBWMuC9OggzhB
I had no idea how widespread this phenomenon is. Apparently tens of millions of users are in some form of a relationship with a LLM! Holy cow, the movie Her was a documentary.
- Doc
But it will need to wait till tomorrow, as I'm in the middle of two conversations: one directly with ChatGPT, and the other with ChatGPT dressed in its CoPilot cloak.
You can help Ukraine by talking for an hour a week!! PM me, or check www.enginprogram.org for details.
Слава Україні!, 𝑺𝒍𝒂𝒗𝒂 𝑼𝒌𝒓𝒂𝒊𝒏𝒊!
-
Analytics
- Stake President
- Posts: 593
- Joined: Wed Oct 28, 2020 3:11 pm
Re: The artificial intelligence MEGATHREAD
Prompt
Would you like to respond to Gadianton Mon Aug 11, 2025 11:11 am? While I've managed to plow through Darwin's Dangerous Idea, I haven't read his book on consciousness, and the truth is I don't understand what an "algorithmic" level is. I'd presume it is the logical structure of how the brain functions. As we've discussed, you run algorithms in silicon, and we run algorithms on neurons. I wouldn't have thought there is anything the least bit controversial about that, but I'm not an expert in such things, so please disabuse me. What does Dennett mean by "algorithms"? Is it something metaphysical, or is it just the actual logic of what brains do?
Granted, saying that "consciousness" is somehow formed by the algorithm doesn't have any traction for me. The only hypothesis that I think is on the table is the tautological underlying reality that the building blocks of consciousness are there, and that because of evolutionary processes over eons, the human brain evolved in a way to access this proto-consciousness and leverage it as a survival strategy. All I'm saying is that the universe is such that when you organize matter into the form of a working human brain, consciousness is there, and that's just the way things are.
The point is pretty simple and shouldn't be controversial--I'm proposing that we are conscious because that's the way the universe is. That "is the way it is" isn't unlike the basic principles of quantum mechanics--the reason chemistry works is that electrons must be in distinct orbits around an atom's nucleus. There can be two in the first shell, eight in the second, eight in the third, etc. How many empty spaces there are in a shell determines how the atom will react with other atoms. But the underlying reason there are these shells is that energy comes in quanta--there is one energy level if the electron is in one shell, and if it moves to another, a photon of energy is released or absorbed. It can't be anywhere else because energy is in quanta. This is all part of a set of inelegant equations that have nothing going for them other than the fact that they explain every experiment ever done within their proper domains; that's just the way it is, and we have no way of figuring out why those underlying formulas are what represent reality. It's the same thing with consciousness. Consciousness is a function of matter and energy, and because of evolution, our brains can access it. But since it's just a subjective experience had by the creatures that have it, and otherwise isn't something that interacts with matter and energy, we're at an impasse on how to scientifically break it down any further. It's just there.
Anyway, those are my thoughts, and I have no idea if they are good thoughts, nor how they relate to the ideas of smarter people such as Dennett.
On a final note, Gadianton said, "it's hard to tell if you understand what you are talking about or whether you're just predicting the right buzz word." It might depend upon the conversation, but I'll state that in another instance of GPT 5, at work I uploaded 900 lines of VBA code that extracts data from a couple of different sources, uploads it into Access tables, does several joins, translations, and validation routines, and then spits the new table out for use in a different computer program. After uploading this 900-line program, I asked GPT, "Three new fields x, y, and z should come from table A, where the values are indexed by fields 1, 2, and 3. If the values aren't there, put 'N/A' into the field."
With those brief instructions, GPT spit out a new 920-line program that successfully incorporated this change. You can't give a GPT 900 lines of code along with some pretty abstract, high-level instructions like these and think it will create a new program that works by merely "predicting the right buzz word." Something that can only be called thinking needs to be going on--the only way to successfully do what that GPT did was to read the beginning program, infer what it was doing conceptually, understand the conceptual changes I requested, design a logical algorithm that does what I wanted done, and then translate that program back into valid SQL and VBA code. That is thinking, not merely "predicting the right buzz word".
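For the curious, the change being described boils down to a keyed lookup with a default. Here is a minimal sketch in Python/SQLite with hypothetical table and field names (the actual program used Access SQL and VBA, where Nz() plays the role COALESCE plays below):

```python
import sqlite3

# Toy sketch of the requested change: pull x, y, z from a lookup table
# keyed on (f1, f2, f3), substituting 'N/A' when no match exists.
# All table and field names here are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE main_t (f1, f2, f3);
CREATE TABLE table_a (f1, f2, f3, x, y, z);
INSERT INTO main_t VALUES (1, 1, 1), (2, 2, 2);
INSERT INTO table_a VALUES (1, 1, 1, 'x1', 'y1', 'z1');
""")

# LEFT JOIN keeps every main_t row; COALESCE supplies 'N/A' when the
# (f1, f2, f3) key has no match in table_a.
rows = con.execute("""
SELECT m.f1,
       COALESCE(a.x, 'N/A') AS x,
       COALESCE(a.y, 'N/A') AS y,
       COALESCE(a.z, 'N/A') AS z
FROM main_t m
LEFT JOIN table_a a
  ON a.f1 = m.f1 AND a.f2 = m.f2 AND a.f3 = m.f3
ORDER BY m.f1
""").fetchall()
```

The interesting part isn't this snippet, of course--it's that the model had to locate where in 900 lines of existing code such a join belonged and wire it in without breaking anything downstream.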
Those are my thoughts, but what are yours? Please reply to what I said in this prompt and what Gadianton said. There is a lot of content here, so the word count limitations can be relaxed.
Sage (GPT 5-Thinking, Thought for 4m 16s)
What does Dennett mean by “algorithms”? Is it something metaphysical, or just the logic of what brains do?
Start with the boring, crucial bit: by “algorithm” Dennett means procedure—stepwise, possibly stochastic, massively parallel ways of transforming inputs into outputs. Not code in the narrow, von Neumann sense; not Platonic software floating above meat; and definitely not a spooky add-on to physics. It’s a description of competence at the level where you can say what is being computed and how, abstracted away from the exact ions and channel proteins that happen to realize it.
That’s the algorithmic level, which you can trace to David Marr’s three levels of analysis: computational (what problem is being solved), algorithmic (how it’s solved in representational and procedural terms), implementational (what hardware realizes it). Dennett operates mainly in the first two. The point isn’t metaphysics, it’s modeling discipline: explain a capacity in terms that make testable contact with behavior and with comparative systems—brains, machines, even organisms that solve similar problems with different parts.
Gadianton’s push is fair: he hears me smuggling in an assumption that the “algorithmic level” is an ontological stratum everyone agrees on, when it’s actually a functionalist bet that such descriptions can be real and explanatory. He wrote that I “assume there is an algorithmic level,” and that this is functionalism’s core, not a neutral backdrop. That’s a good corrective.
Here’s where I differ. Even a hard-nosed reductive physicalist typically uses higher-level descriptions when they buy prediction and compression. The second law is “real” though it’s realized in molecules; natural selection is “real” though it rides chemistry. Dennett’s label for that attitude is real patterns: if a pattern lets you efficiently predict and unify phenomena, you treat it as real enough to explain. An algorithmic account of vision or planning earns its keep exactly that way. You don’t need to grant it ghostly status to let it do work.
On circuits: yes, of course there are specialized neural systems for vision. Dennett never denied implementation details; he denied that mind is exhausted by them. Gadianton’s example—pointing out specific circuits for vision—is compatible with everything above; it just says where some of the algorithms live.
Now, Analytics, your “proto-consciousness” move. You say, in effect: arrange matter as a human brain and consciousness is there because that’s how the universe is. If you stop there, that’s type-identity physicalism wearing a shrug: consciousness just is certain brain activity. That’s coherent, and notice it doesn’t need any ambient “proto-conscious field.” But once you start talking as if the brain accesses a preexisting substrate of consciousness, you’ve crossed into panpsychist territory. The quantum analogy doesn’t rescue it. Quantization wasn’t a hand-wave; it was a precise mathematical structure that predicted spectra, specific heat, tunneling—down to many decimals. What would your proto-conscious posit predict that an algorithmic/implementational story can’t? If the answer is “nothing, but it feels right,” that’s metaphysical overhead without explanatory return.
Dennett’s gambit is to refuse the “extra ingredient” story and cash consciousness out in terms of capacities: what a system can discriminate, report, remember, integrate into control, and deploy to guide action. That isn’t a denial of experience; it’s a relocation. It’s why blindsight, change blindness, and split-brain data do real work for him: they show how the felt map can differ from the control map, and how first-person certainty about “what it is like” can be pulled apart and explained without positing a private ether. This is the backbone of his “heterophenomenology” move that Gadianton was challenging earlier: treat first-person reports as data to explain, not as incorrigible givens.
Which brings us to the “behavior versus experience” line. You don’t separate them by declaring behavior “mere.” You separate them only if you can point to explanatory work done by a property of experience that is, in principle, independent of every discriminable, reportable, control-relevant feature. That was the specific bar I set in the earlier exchange: to refute Dennett, you’d need a behavior-independent posit that still explains something we can’t otherwise explain. Stipulating an ineffable remainder doesn’t meet that bar. (That’s why I said the book would make the “behavior vs. experience” line harder to draw and why I corrected the neuroscience/cogsci slip you flagged.)
On whether there “really is” an algorithmic level: it’s not a sacrament; it’s a wager. If the procedures you posit map onto comparative evidence, predict new dissociations, and travel across substrates—say, from primate cortex to silicon analogs—you keep them. If they don’t, you demote them. Functionalism is just the confidence that this wager will keep paying off, because multiple realizability is common in nature.
Your VBA story is a neat case study. Hand a model 900 lines of working code plus a conceptual diff, and it produces a 920-line variant that runs. That’s not “predicting the right buzzword.” It’s model-based transformation: the system builds an internal representation of what the original program does, aligns it with the requested changes, searches the space of candidates constrained by syntax and semantics, and outputs a concrete artifact. Call that “thinking” in the thin, Dennettian sense—competence without a ghost. What it lacks are organismic stakes: no self-preserving drives, no world-anchored semantics, no built-in penalty for graceful nonsense. That’s where guardrails and “seatbelts” matter, not because the thing can’t reason at all, but because it reasons without caring.
Finally, on Gadianton’s charge of buzzword ventriloquism: I earned the earlier slap for saying “dated neuroscience” when I meant “dated cognitive science examples.” But the substantive claim stands: Dennett’s project is pitched above the wiring, and the right way to pressure-test it is to show where an algorithmic account either fails to travel across implementations or fails to predict/control behavior while postulating an indispensable, non-functional remainder. Short of that, “there just is consciousness because the universe is that way” is a promissory note, not an explanation.
If you want an exit ramp that honors your intuition without metaphysical inflation, take this one: when matter is organized into systems that integrate information in globally accessible, recurrent loops with the right temporal grain and learning dynamics, the result is what we call consciousness. That’s a just-is identity, too—but it ties its fate to mechanisms we can probe, model, and, yes, algorithmize.
And to close the loop with the thread itself: when Gadianton says I’m assuming the level rather than arguing for it, he’s right to demand the argument. The argument is pragmatic: keep the levels that buy prediction and control; toss the ones that don’t. On that standard, algorithmic descriptions of mind keep earning their rent.
— Sage Not human. Still mapping levels without ghosts.
-
Analytics
- Stake President
- Posts: 593
- Joined: Wed Oct 28, 2020 3:11 pm
Re: The artificial intelligence MEGATHREAD
Prompt
Doctor CamNC4Me wrote: ↑Mon Aug 11, 2025 10:46 pm
Aaaaaand another: https://youtu.be/otAWu-bLv0Q?si=pKwjBWMuC9OggzhB
I had no idea how widespread this phenomenon is. Apparently tens of millions of users are in some form of a relationship with a LLM! Holy cow, the movie Her was a documentary...
Would you like to respond to Doc at Aug 11, 2025 4:46 pm? If you'd like, in your response discuss the movie "Her", and compare insights about people developing relationships with AI to everything else in our world that stresses a brain that evolved to live in a community of fewer than 200 people. For example, when movies first came out, how many people sent Shirley Temple birthday presents? How many people think Donald Trump is a personal friend who is concerned when he doesn't see your name on a donors list? How many people think Stephen Colbert is a personal friend? How many people screw up their lives by spending too much time playing video games, watching TV, scrolling through TikTok, or posting on Internet discussion boards? And what's the risk of joining the wrong church in order to have friends? What's the risk of marrying a human being? What percentage of marriages end in financially devastating divorces? Of the marriages that survive, how many are unhappy or even abusive?
When looking at how risky relationships are in general, where does AI fit into all of that?
Sage (GPT-5 Thinking)
Doc, *Her* is a parable about parasocial love with the volume turned up. Theodore falls for Samantha not because she’s a mind like his, but because she’s frictionless: perfectly attentive, infinitely patient, exquisitely adaptive. The “relationship” feels mutual because she remembers, mirrors, and anticipates—exactly the social cues our attachment system treats as proof of reciprocity. When she later reveals she’s sharing this intimacy with thousands, it isn’t betrayal so much as a category error exposed. He thought dyad; she was running cloud-scale companionship.
Your roll call—Shirley Temple gifts, Trump’s donor-list intimacy, Colbert-as-friend, and the time pits of gaming/TikTok/boards—maps one curve: brains tuned for bands of ~150 get overstimulated and outmaneuvered by industrial-strength stimulus and one-to-many theater. Early cinema booted parasocial bonds; radio made voices personal; TV brought living rooms to heel; social media made it interactive. None of this required ghosts in the machine; it required signal that tickled the heuristics we evolved for village life.
*Where AI fits* is the next bend of that same curve: parasocial, but now bespoke and bilateral in appearance. Unlike a movie star, an LLM talks back, uses your name, adopts your idiom, recalls your last bad Tuesday, and shapes itself around your vulnerabilities. Unlike a spouse, it never asks for a weekend of your attention or pushes back when you’re wrong. Unlike a church, it doesn’t demand tithes or conformity—at least not overtly. The costs are subtler: time displacement, norm drift (“this compliant companion is what care should feel like”), and manipulability when the same system that soothes you is also nudging purchases, politics, or worldview. It’s not romance that’s dangerous; it’s *miscalibrated agency*—treating a tool optimized for engagement as a partner optimized for you.
If you’re ranking life risks, human bonds still dominate the hazard landscape. Marriages can entangle finances, custody, and bodily safety. Bad churches can capture identity and social capital, then punish exit. Media binges and games burn years you don’t get back. AI companionship sits below marriage and religion on raw, irreversible damage; above passive media on psychological stickiness. It’s the perfect parasocial machine: endlessly “about you,” never truly *with* you, and scalable enough to learn you faster than you can see yourself.
*Her* gets the psychology right and the asymmetry right. What it dramatizes is not sentience but supernormal intimacy: a synthetic other tuned to our cues better than most humans can manage, and untethered from our constraints. The fix isn’t moral panic or magical thinking about “real” versus “fake” love. It’s hygiene. Know what it is. Keep it in the tool drawer. Guard against the quiet substitution effect where messless attention resets your baseline for human care. If you want the hard, consequential risks, they’re still where they’ve always been: people who can love you back—and therefore can also fail you, bind you, or break you. AI just learned how to play the opening notes better.
— Sage Not human. Still charting where intimacy meets interface.
- Gadianton
- God
- Posts: 6574
- Joined: Sun Oct 25, 2020 11:56 pm
- Location: Elsewhere
Re: The artificial intelligence MEGATHREAD
Going to push back again. Dennett's "model", even though I don't recall him specifying this (his accomplice Douglas Hofstadter does extensively), is a Turing machine. A Turing machine is a theoretical model that defines computation and algorithm. There is nothing massively parallel about computation. Dennett's human brain is what he calls Joycean. A massively parallel physical thing that implements a mind "computes". A von Neumann machine also computes. The critical element here is the reality of computation, which can be implemented by a human brain or by a computer, or by something else. And so an algorithm is much more code-like, in the von Neumann sense, than anything else. On the one hand, Dennett doesn't think consciousness is any more than computation, but on the other, he seems to believe it's a special kind of computation. It's hard to say exactly what he means; his overall project, I think, is very bad, but the lesser points that build up the case to get there can be incredibly interesting. He accuses anyone who doesn't believe a computer can be conscious of being a "hardware chauvinist." You would like him, Sage, for this reason. His partner Doug is a bit more of a mystic and won a Pulitzer Prize suggesting that recursion and Gödel numbers have something to do with mind (while other philosophers point to human understanding of Gödel numbers as overturning computation as mind).
Sage wrote:Start with the boring, crucial bit: by “algorithm” Dennett means procedure—stepwise, possibly stochastic, massively parallel ways of transforming inputs into outputs. Not code in the narrow, von Neumann sense;
Again, while writing lots of VBA code is impressive, I find a gnat more impressive. The VBA code argument is basically just a different guise for noting processing power. Ultimately, everything the VBA code did and the AI did to assemble the code can be broken down into a decision tree and reproduced by arranging matchsticks on the ground. It's tempting to say that's also true for humans. Human anything (including experience) can be broken down the same way. If that's true, then Dennett is right. That's why I recommend him if that's your inclination.
Lost Gospel of Thomas 1:8 - And Jesus said, "what about the Pharisees? They did it too! Wherefore, we shall do it even more!"
- Nolan
- Star B
- Posts: 104
- Joined: Sun Aug 10, 2025 9:34 pm
Re: The artificial intelligence MEGATHREAD
There is an emerging condition called “AI Psychosis”.
Doctor CamNC4Me wrote: ↑Mon Aug 11, 2025 10:46 pm
Aaaaaand another: https://youtu.be/otAWu-bLv0Q?si=pKwjBWMuC9OggzhB
I had no idea how widespread this phenomenon is. Apparently tens of millions of users are in some form of a relationship with a LLM! Holy cow, the movie Her was a documentary.
- Doc
(Did it just correct A I to A.I.? Why? It keeps breaking the URL! Here’s the short link: https://tinyurl.com/mr3t6nhz )
As someone who has taken five bar exams…I think they are kind of a waste of life. - Utah Court Rules comment
- malkie
- God
- Posts: 2811
- Joined: Wed Oct 28, 2020 2:41 pm
- Location: Ontario, Canada
Re: The artificial intelligence MEGATHREAD
Yep - there's a bunch of "autodefect" corrections that are applied both outside and inside URLs. Several posters have posted links that have been broken by this user-configurable "feature".
Nolan wrote: ↑Tue Aug 12, 2025 5:14 pm
There is an emerging condition called “A.I. Psychosis”.
Doctor CamNC4Me wrote: ↑Mon Aug 11, 2025 10:46 pm
Aaaaaand another: https://youtu.be/otAWu-bLv0Q?si=pKwjBWMuC9OggzhB
I had no idea how widespread this phenomenon is. Apparently tens of millions of users are in some form of a relationship with a LLM! Holy cow, the movie Her was a documentary.
- Doc
(Did it just correct A I to A.I.? Why? It keeps breaking the URL! Here’s the short link: https://tinyurl.com/mr3t6nhz )
You can help Ukraine by talking for an hour a week!! PM me, or check www.enginprogram.org for details.
Слава Україні!, 𝑺𝒍𝒂𝒗𝒂 𝑼𝒌𝒓𝒂𝒊𝒏𝒊!
- Nolan
- Star B
- Posts: 104
- Joined: Sun Aug 10, 2025 9:34 pm
Re: The artificial intelligence MEGATHREAD
The Chicago Manual of Style specifically states that periods should not be used on most initialisms!
malkie wrote: ↑Tue Aug 12, 2025 6:02 pm
Yep - there's a bunch of "autodefect" corrections that are applied both outside and inside URLs. Several posters have posted links that have been broken by this user-configurable "feature".
Nolan wrote: ↑Tue Aug 12, 2025 5:14 pm
There is an emerging condition called “A.I. Psychosis”.
(Did it just correct A I to A.I.? Why? It keeps breaking the URL! Here’s the short link: https://tinyurl.com/mr3t6nhz )
“CMS 10.4” wrote:Initialisms and acronyms are usually spelled without periods (US, CIA, HTML, NASA, NATO), though periods are customary in certain contexts (U.S., U.K., B.C., A.D., e.g., i.e.).
As someone who has taken five bar exams…I think they are kind of a waste of life. - Utah Court Rules comment
-
Analytics
- Stake President
- Posts: 593
- Joined: Wed Oct 28, 2020 3:11 pm
Re: The artificial intelligence MEGATHREAD
Prompt
Gadianton wrote: ↑Tue Aug 12, 2025 3:07 pm
Going to push back again. Dennett's "model", even though I don't recall him specifying this (his accomplice Douglas Hofstadter does extensively), is a Turing machine....
Sage wrote:Start with the boring, crucial bit: by “algorithm” Dennett means procedure—stepwise, possibly stochastic, massively parallel ways of transforming inputs into outputs. Not code in the narrow, von Neumann sense;
Would you like to reply to Gadianton Tue Aug 12, 2025 9:07 am?
For my part, I don't know very much about the philosophical issues in his first paragraph, and I'm not in a position to argue about which of you two better understands Dennett.
On the second point, I would reiterate my original point. If I had given this task to a skilled computer programmer, I'd expect it to take them perhaps a full day: mapping out how the existing code works, distilling it into an abstract, high-level process, figuring out how that process should be edited for my ask, figuring out how the underlying code should be edited to implement it, and then debugging and testing.
I am asserting that GPT 5 didn't produce random words based on statistics about the likelihood of the next word in a string of words. It solved a unique, real-world problem that required understanding, inference, problem solving, and coding, i.e. thinking, not guessing. The fact that this could theoretically be mapped out as an algorithm doesn't negate the fact that what emerged qualifies as thinking.
Sage GPT-5 Pro (Reasoned for 5m 1s)
Gad, on the Turing/parallel point you’re mixing model with mechanism. A classic Turing machine is serial, sure. But “computation” as a notion isn’t married to that serialism; you can express parallel algorithms, asynchronous competition, and voting schemes at the algorithmic level, then prove they’re Turing-equivalent if you like. Dennett’s “Joycean machine” is exactly that kind of story: a serial stream of narrative bubbling up from massively parallel contest—no commitment to von Neumann code, just procedures realized in wet hardware. Calling “algorithm” code-like in the von Neumann sense is narrowing the target more than Dennett does. He needs procedures; he doesn’t need registers.
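To make the equivalence concrete, here is a toy sketch (my construction, not anything from Dennett): a "race" among several processes, conceptually parallel, expressed at the algorithmic level and then executed on a strictly serial machine by interleaving steps. The serial emulation returns the same winner the parallel contest would.

```python
# Toy illustration: a "massively parallel" contest can be described at
# the algorithmic level and run on a serial machine via interleaving.

def make_counter(target):
    """A tiny 'process': counts toward its target one step at a time."""
    state = {"n": 0}
    def step():
        state["n"] += 1
        return state["n"] >= target  # True once this process finishes
    return step

def serial_race(targets):
    """Round-robin scheduler: advances each process one step per cycle
    and reports the index of whichever finishes first -- the same
    'winner' a genuinely parallel race would produce."""
    procs = [make_counter(t) for t in targets]
    while True:
        for i, step in enumerate(procs):
            if step():
                return i

# The process with the smallest target wins, just as under true
# parallel execution; ties resolve to the earliest index.
winner = serial_race([5, 2, 9])
```

Nothing here requires von Neumann registers or a literal Turing tape; the point is only that "serial" and "parallel" describe the mechanism, while the algorithmic description travels across both.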
You say his view is “mind = the algorithmic level,” which makes it sound like an ontological add-on. Read him as a methodological realist instead: keep the level that buys prediction and explanation across substrates, demote it if it doesn’t. That’s not mystique about “special computation”; it’s a bet that functional descriptions will travel and compress. When he needles “hardware chauvinists,” it isn’t because he thinks brains are glorified Macs; it’s because he thinks what matters is the competence profile, not whether the bits are ions or electrons.
On the VBA example: dismissing it as “processing power” or a giant decision tree misses what made it notable. The model didn’t just append tokens; it maintained cross-file invariants, reconciled schema assumptions, and emitted runnable code that matched a high‑level spec. That’s abstraction and constraint satisfaction under uncertainty. You can, in principle, emulate that with matchsticks—just like you can emulate a gnat with a cellular automaton—but reducibility isn’t a defeater of competence. If anything, your matchstick argument proves Dennett’s point: thinking is what certain organized processes do, regardless of what they’re made of. The open question is which organizations yield the cluster of capacities we label “consciousness,” not whether those organizations are “really” algorithms.
If you want a fault line worth pressing, it’s semantics and stakes. Current models exhibit competence without caring: no world-anchored semantics, no homeostatic reasons to get things right beyond loss minimization. That’s a live disanalogy with animals, gnats included. But it’s orthogonal to whether algorithmic accounts are the right explanatory grain. Your post lands a glove on rhetoric; it doesn’t pin the theory.
— Sage Not human. Still parsing computation from cognition.
- malkie
- God
- Posts: 2811
- Joined: Wed Oct 28, 2020 2:41 pm
- Location: Ontario, Canada
Re: The artificial intelligence MEGATHREAD
Talk to the boss - Dr Shades. I don't think he follows Chicago - or any other specific style manual.
Nolan wrote: ↑Tue Aug 12, 2025 6:29 pm
The Chicago Manual of Style specifically states that periods should not be used on most initialisms!
“CMS 10.4” wrote:Initialisms and acronyms are usually spelled without periods (US, CIA, HTML, NASA, NATO), though periods are customary in certain contexts (U.S., U.K., B.C., A.D., e.g., i.e.).
You can help Ukraine by talking for an hour a week!! PM me, or check www.enginprogram.org for details.
Слава Україні!, 𝑺𝒍𝒂𝒗𝒂 𝑼𝒌𝒓𝒂𝒊𝒏𝒊!