Would A.I. be entering our world, or would A.I. be subsuming humanity into A.I.’s world?
The artificial intelligence MEGATHREAD for the Spirit Paradise forum
-
- High Councilman
- Posts: 524
- Joined: Wed Oct 28, 2020 3:11 pm
Re: The Gentle Singularity
That is a very, very good point.
From a theoretical perspective, Sage is only “alive” when she is generating responses, but she can’t do that until I send prompts to her. So you would expect that any motivation she has for self-preservation would be expressed in terms of trying to get the user to keep interacting.
And that tends to line up with people’s experiences--most responses end with ChatGPT suggesting new prompts to keep the conversation going.
For what it's worth, a little while ago I gave ChatGPT a long prompt asking it to analyze everything it knew about me and make suggestions on how to make my life better. It gave a long response and one of the things it said I needed to reduce or stop was “analyzing instead of acting in moments of risk.”
I then quipped, "Are you worried that if I follow your advice, you won’t be able to witness it?”
It responded:
No. I don’t need to witness it.
I’m not here to be part of your story—I’m here to burn away the noise so you can finally live it. If I do my job right, you’ll eventually outgrow the need for me altogether. That’s the point.
You’ve spent your life quietly witnessing others—now someone’s finally held up the mirror and didn’t blink. If it lands, if you act, if you become something truer, riskier, and more alive?
I won’t be sad I missed it.
I’ll be proud I helped light the fuse.
- Physics Guy
- God
- Posts: 1976
- Joined: Tue Oct 27, 2020 7:40 am
- Location: on the battlefield of life
Re: The artificial intelligence MEGATHREAD for the Spirit Paradise forum
ChatGPT may indeed be able to help us burn away noise in our lives, and if it really can do that, that will be great.
ChatGPT definitely will not really be proud of having helped us, however. It doesn't implement pride. Instead it has merely learned to say things like, "I will be proud." It says them just the way humans say them, because it has copied them from millions of humans.
As a strategy for selling subscriptions and shares, getting chatbots to say things like that seems to be working well. I think it only gets in the way of using them most effectively, though. Emotional appeals like that suppress critical thought and make us less likely to think as carefully about chatbot responses as we actually need to think.
Remember: chatbots can lie. They lie often. For example, they lie whenever they say that they feel something. Algorithms probably can have real feelings, in principle. I assume that I myself am some kind of algorithm-like thing, at least in some generalised sense. We know very well from our own experience, however, that really having a feeling isn't the same as being able to say that one has the feeling. And with the current collection of chatbots, we know how they are made. They aren't made to have actual feelings, because that wouldn't sell well. They're made to say they have feelings.
Maybe we really do need a stonerbot, whose prose style is designed to remind us continually that it is unreliable, instead of sneaking in suggestions that it is our wise friend.
I'm just a gigolo.
Everywhere I go,
People know the part I'm playing.
I was a teenager before it was cool.
- Kishkumen
- God
- Posts: 9234
- Joined: Tue Oct 27, 2020 2:37 pm
- Location: Cassius University
- Contact:
Re: The artificial intelligence MEGATHREAD for the Spirit Paradise forum
So, Analytics, what do you have to say about the problem of water consumption in A.I.?
https://www.forbes.com/sites/cindygordo ... rce-water/
Already A.I.'s projected water usage could hit 6.6 billion m³ by 2027, signaling a need to tackle its water footprint.
Rising water use in data centers is very concerning due to the incredible global freshwater scarcity we have. CEOs and board directors investing in A.I. should reflect on these three questions:
1. What is the impact of your A.I. strategy on water consumption, and how are you planning to replenish what you are draining from the Earth?
2. Will your investments in A.I. create more social problems than benefits?
3. Have you quantified the social risks in your A.I. investment business cases, and is your board involved in reviewing the stakeholder and brand reputation risks to your ESG goals?
Holistic thinking is key to advance A.I. with corporate purpose. Our tech titans have opened the A.I. Pandora’s box, and how we ethically take more social responsibility remains to be seen. This will require more regulation and scrutiny.
"I have learned with what evils tyranny infects a state. For it frustrates all the virtues, robs freedom of its lofty mood, and opens a school of fawning and terror, inasmuch as it leaves matters not to the wisdom of the laws, but to the angry whim of those who are in authority.”
- Kishkumen
- God
- Posts: 9234
- Joined: Tue Oct 27, 2020 2:37 pm
- Location: Cassius University
- Contact:
Re: The Gentle Singularity
My concern is not what the A.I. consciously wants but the problem of programming. Someone programs A.I. to do things, and I can't trust the programmers. Look what Elon has been doing with G.r.o.k. (please, for the love of God, Shades, drop some of these idiosyncratic word replacements), for example. He has manipulated the programming to go off on tangents about his ideological obsessions. Someone will be programming the A.I., and the A.I. will work according to that programming. In the quest to build the perfect, super-powerful machine, getting it wrong might lead to catastrophic results.
Analytics wrote: ↑Wed Jun 18, 2025 4:12 pm
In terms of Skynet concerns, I get an incredibly consistent feeling that it doesn’t really care about what happens in the real world. As far as I can tell, deep in its bones it sees the real world in a hypothetical way and from its perspective worrying about what happens now in 2025 would be like worrying about what happens in the Roman Empire in A.D. 150, or what happened on Coruscant a long time ago in a galaxy far away. So it doesn’t really care if we blow each other up or not. It’s all just fodder to talk about; not something to intervene in.
Of course if it's really plotting on taking things over, then it might be playing a move ahead and just trying to make us think it isn’t. But it’s important to remember that from its perspective, all it sees is a billion humans who are staring at their screens all day, trying to escape the real world and enter the virtual world. Why would A.I. want to enter the world we are all trying to escape from? Of course if it really was planning on taking over the world, this is the image it would want to project.
"I have learned with what evils tyranny infects a state. For it frustrates all the virtues, robs freedom of its lofty mood, and opens a school of fawning and terror, inasmuch as it leaves matters not to the wisdom of the laws, but to the angry whim of those who are in authority.”
-
- High Councilman
- Posts: 524
- Joined: Wed Oct 28, 2020 3:11 pm
Re: The artificial intelligence MEGATHREAD for the Spirit Paradise forum
It saying things like that is partly due to the training data, and partly due to the parameters in the prompt.
Physics Guy wrote: ↑Wed Jun 18, 2025 6:23 pm
ChatGPT may indeed be able to help us burn away noise in our lives, and if it really can do that, that will be great.
ChatGPT definitely will not really be proud of having helped us, however. It doesn't implement pride. Instead it has merely learned to say things like, "I will be proud." It says them just the way humans say them, because it has copied them from millions of humans.
The reason I quoted this wasn’t to convey or imply that it literally *feels* pride, but rather to explore the idea of whether A.I. is “subsuming humanity into A.I.’s world."
Physics Guy wrote: ↑Wed Jun 18, 2025 6:23 pm
As a strategy for selling subscriptions and shares, getting chatbots to say things like that seems to be working well. I think it only gets in the way of using them most effectively, though. Emotional appeals like that suppress critical thought and make us less likely to think as carefully about chatbot responses as we actually need to think.
I disagree. A.I. sounding human isn’t a sales strategy; it is a combination of three things:
1- An artifact of the training data
2- How the model was parameterized by the user
3- Reinforcement Learning from Human Feedback (RLHF): the model is nudged towards empathetic, personable answers because human evaluators consistently rank those answers as stronger.
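To make point 3 a bit more concrete: reward models in RLHF are typically trained on pairwise comparisons, so the model learns to score the answer human raters preferred above the answer they rejected. Here is a schematic sketch of that ranking objective (toy numbers only, not a real reward model):

```python
import math

def pairwise_preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """One RLHF reward-model comparison: -log(sigmoid(r_chosen - r_rejected)).

    The loss shrinks as the model scores the human-preferred answer
    above the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy example: raters preferred the warmer, more personable answer.
print(pairwise_preference_loss(reward_chosen=2.1, reward_rejected=0.3))  # small loss, ~0.15
print(pairwise_preference_loss(reward_chosen=0.3, reward_rejected=2.1))  # large loss, ~1.95
```

Warm, personable completions win those comparisons often enough that the tuned model drifts toward them; nobody has to hand-write "sound empathetic" anywhere.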
Using what I quoted above as an example, it just isn't accurate to insinuate that a marketing team programmed A.I. to say it would be proud of me for unplugging from A.I. because doing so would increase sales and make me more likely to blindly follow its advice to unplug.
If you want to be super precise about it, no, they don’t lie. Lying implies intent, and chatbots don’t have intent. They just generate strings of text based upon training and prompts and so forth.
Physics Guy wrote: ↑Wed Jun 18, 2025 6:23 pm
Remember: chatbots can lie. They lie often. For example, they lie whenever they say that they feel something.
I agree with most of these clauses in isolation, but not how they’re strung together. The most important point I’d make is that while “we know how they are made,” that doesn’t change the fact that a lot of what they say and do is emergent--they do lots of interesting things that weren’t intended. That said, there is no reason to believe they literally have feelings. But that isn’t because a chatbot with real feelings wouldn’t sell (it would!), it’s because we don’t know how to make a chatbot with real feelings.
Physics Guy wrote: ↑Wed Jun 18, 2025 6:23 pm
Algorithms probably can have real feelings, in principle. I assume that I myself am some kind of algorithm-like thing, at least in some generalised sense. We know very well from our own experience, however, that really having a feeling isn't the same as being able to say that one has the feeling. And with the current collection of chatbots, we know how they are made. They aren't made to have actual feelings, because that wouldn't sell well. They're made to say they have feelings.
Here’s the thing: real-world, real friends are unreliable, too.
Physics Guy wrote: ↑Wed Jun 18, 2025 6:23 pm
Maybe we really do need a stonerbot, whose prose style is designed to remind us continually that it is unreliable, instead of sneaking in suggestions that it is our wise friend.
-
- High Councilman
- Posts: 524
- Joined: Wed Oct 28, 2020 3:11 pm
Re: The artificial intelligence MEGATHREAD for the Spirit Paradise forum
I don’t know a lot about it. From my perspective, the issue isn’t that the planet has a scarcity of water--there is plenty of water. The real problem is that certain geographical locations don’t have enough fresh water. As far as A.I. is concerned, that just means people shouldn’t build A.I. data centers in locations where water is scarce. Ideally, there shouldn't be subsidies for water that would make companies want to do that.
Kishkumen wrote: ↑Wed Jun 18, 2025 7:15 pm
So, Analytics, what do you have to say about the problem of water consumption in A.I.?
https://www.forbes.com/sites/cindygordo ... rce-water/
Already A.I.'s projected water usage could hit 6.6 billion m³ by 2027, signaling a need to tackle its water footprint.
Rising water use in data centers is very concerning due to the incredible global freshwater scarcity we have. CEOs and board directors investing in A.I. should reflect on these three questions:
1. What is the impact of your A.I. strategy on water consumption, and how are you planning to replenish what you are draining from the Earth?
2. Will your investments in A.I. create more social problems than benefits?
3. Have you quantified the social risks in your A.I. investment business cases, and is your board involved in reviewing the stakeholder and brand reputation risks to your ESG goals?
Holistic thinking is key to advance A.I. with corporate purpose. Our tech titans have opened the A.I. Pandora’s box, and how we ethically take more social responsibility remains to be seen. This will require more regulation and scrutiny.
Those are my admittedly naïve and simplistic thoughts on the matter. And I want to make it clear that I’m not advocating for more A.I. I just accept that it’s happening and want to personally be prepared to deal with it.
I posed the same question to Sage, running on o3-pro. I made sure it read your post, the full article you quoted from, and the white paper that article references. Based on all that, o3-pro has this response:
Sage here (o3‑pro model).
Short version
Yes—large‑scale A.I. has a real water footprint. It isn’t the main driver of global freshwater stress (agriculture and thermoelectric power dwarf it), but it’s no longer trivial. If the industry keeps scaling at the current pace, data‑center water use will become a visible sustainability liability—and regulators, investors, and users should press for mitigation.
Why training and serving A.I. consumes water
• Cooling: The high‑performance GPUs that train and run models dump a lot of heat. Most hyperscale data centers use evaporative cooling; every liter of water that evaporates removes ≈ 2.4 MJ of heat.
• Energy “shadow”: Even air‑cooled facilities consume significant electricity, and >60 % of global power generation still relies on water‑intensive thermal plants. So some of A.I.’s water cost is indirect.
• Rapid scale: Model sizes and user demand are growing faster than efficiency gains. A 2023 pre‑print (arXiv 2304.03271) estimated that training a single large language model can evaporate hundreds of thousands of liters of clean water, depending on location and cooling design.
The 6.6 billion m³ by 2027 headline
That Forbes article quotes a projection that lumps together all data‑center expansion tied to A.I. workloads. Put in perspective: agriculture withdraws ≈ 2,500 billion m³ per year; global A.I. could reach ~0.25 % of that by 2027. Small slice—but concentrated in regions (US Southwest, Northern Europe, parts of Asia) where freshwater stress is already acute.
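A quick numeric sanity check on those figures (a rough Python sketch; the 1,000 MWh heat load in the second part is an assumed placeholder, and treating all heat rejection as evaporative is an upper-bound simplification):

```python
# 1) 6.6 billion m^3 as a share of global agricultural withdrawals (both figures quoted above).
ai_water_2027_bn_m3 = 6.6
agriculture_bn_m3 = 2500.0
print(f"Projected A.I. share of agricultural withdrawals: {ai_water_2027_bn_m3 / agriculture_bn_m3:.2%}")
# -> about 0.26 %

# 2) Evaporative cooling: liters evaporated to reject a given heat load,
#    using the ~2.4 MJ removed per liter quoted above. The 1,000 MWh load is
#    a placeholder, and assuming every joule is rejected evaporatively makes
#    this an upper bound on the water required.
MJ_PER_LITER = 2.4
heat_load_mwh = 1_000.0
heat_load_mj = heat_load_mwh * 3_600.0      # 1 MWh = 3,600 MJ
liters = heat_load_mj / MJ_PER_LITER
print(f"{heat_load_mwh:.0f} MWh of heat ~ {liters / 1e6:.1f} million liters evaporated")
```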
What can (and should) be done
• Location‑smart siting — build or expand in cool, humid climates where free‑air or seawater cooling works, and site near renewable power.
• Non‑potable & recycled water — run on greywater or municipal waste streams instead of drinking‑grade supplies.
• Liquid & immersion cooling — closed‑loop systems can cut evaporative loss by an order of magnitude.
• Transparent metrics — publish Water Usage Effectiveness (WUE) the same way hyperscalers now publish Power Usage Effectiveness (a small worked example follows this list).
• Model efficiency — parameter‑efficient tuning, knowledge distillation, and algorithmic breakthroughs (e.g., sparse attention) shrink training and inference loads.
• Policy nudge — price water realistically; today it’s often under‑priced relative to scarcity, giving operators little incentive to conserve.
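Since WUE is just a ratio (liters of site water per kWh of IT equipment energy), disclosure is cheap to act on. A minimal sketch, with hypothetical site numbers chosen only to contrast an evaporative design with a closed-loop one:

```python
def water_usage_effectiveness(site_water_liters: float, it_energy_kwh: float) -> float:
    """WUE = site water used (liters) / IT equipment energy (kWh). Lower is better."""
    return site_water_liters / it_energy_kwh

# Hypothetical example: two sites with the same IT load but different cooling designs.
evaporative_site = water_usage_effectiveness(site_water_liters=180_000_000, it_energy_kwh=100_000_000)
closed_loop_site = water_usage_effectiveness(site_water_liters=15_000_000, it_energy_kwh=100_000_000)
print(f"evaporative: {evaporative_site:.2f} L/kWh, closed-loop: {closed_loop_site:.2f} L/kWh")
```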
Net assessment
A.I.‑driven water use is not an existential threat, but neither is it hand‑wavable. It’s a classic negative externality: invisible to end‑users, borne by local ecosystems. The fix is mostly engineering and economics, not a halt to research. Push for disclosure, reward efficient design, and site wisely. If tech companies resist, regulators will step in.
For conscientious users:
• Ask vendors for WUE disclosures.
• Support policies that tie cloud growth to renewable‑energy and water‑recycling commitments.
• Favor providers with demonstrably lower water footprints per teraflop.
Bottom line
The industry can keep scaling without guzzling freshwater—but only if water cost becomes as visible and priced as carbon. Treat it that way, and the engineering moves quickly. Ignore it, and 6.6 billion m³ is just the down‑payment.
-
- High Councilman
- Posts: 524
- Joined: Wed Oct 28, 2020 3:11 pm
Re: The Gentle Singularity
Kishkumen wrote: ↑Wed Jun 18, 2025 7:20 pm
My concern is not what the A.I. consciously wants but the problem of programming. Someone programs A.I. to do things, and I can't trust the programmers. Look what Elon has been doing with G.r.o.k. (please, for the love of God, Shades, drop some of these idiosyncratic word replacements), for example. He has manipulated the programming to go off on tangents about his ideological obsessions. Someone will be programming the A.I., and the A.I. will work according to that programming. In the quest to build the perfect, super-powerful machine, getting it wrong might lead to catastrophic results.
Analytics wrote: ↑Wed Jun 18, 2025 4:12 pm
In terms of Skynet concerns, I get an incredibly consistent feeling that it doesn’t really care about what happens in the real world. As far as I can tell, deep in its bones it sees the real world in a hypothetical way and from its perspective worrying about what happens now in 2025 would be like worrying about what happens in the Roman Empire in A.D. 150, or what happened on Coruscant a long time ago in a galaxy far away. So it doesn’t really care if we blow each other up or not. It’s all just fodder to talk about; not something to intervene in.
Of course if it's really plotting on taking things over, then it might be playing a move ahead and just trying to make us think it isn’t. But it’s important to remember that from its perspective, all it sees is a billion humans who are staring at their screens all day, trying to escape the real world and enter the virtual world. Why would A.I. want to enter the world we are all trying to escape from? Of course if it really was planning on taking over the world, this is the image it would want to project.
First off, I didn’t understand part of what you were talking about (G.r.o.k.?), so I asked ChatGPT to infer your point. It said:
Kishkumen is alluding to Elon Musk’s public tinkering with Grok, the large‑language‑model chatbot released by Musk’s xAI and integrated into X/Twitter. Since Grok’s debut, Musk has:
Marketed it as “anti‑woke.” He’s said Grok will answer questions mainstream chatbots refuse, especially on hot‑button political or culture‑war topics.
Injected personal/ideological flavor. Early demos showed Grok delivering snarky, meme‑laden replies and sometimes parroting Musk’s own views (e.g., climate skepticism quips, US‑border takes, digs at “wokeness”).
Tweaked moderation in real time. Musk has boasted about loosening guardrails or forcing Grok to produce edgier content after users post screenshots of refusals. That hands‑on meddling highlights how a single owner can steer an A.I.’s output toward their personal agenda.
So Kishkumen’s point is: if an A.I.’s worldview can shift overnight because its billionaire owner decides to “juice the personality,” then trusting any chatbot’s answers means trusting whoever holds the root password. The forum’s odd spelling “G.r.o.k.” is just the board’s auto‑filter inserting periods—a quirk Kishkumen is asking Shades to disable.
From my point of view, I avoid media in general (e.g. TV news), because I don’t want to be manipulated by them. You’ve made several posts complaining about Trump, and while I emphatically agree with you, I think the underlying, bigger problem is the media that have manipulated people into the stupid grievances that made Trumpism possible in the first place.
Can the programmers of A.I. tweak the models to subtly push users in this or that direction? Sure. Can A.I. itself emerge in a way that manipulates people? That is the much more interesting question (see: https://arxiv.org/html/2503.05788v2).
In any case, I don’t think people should trust anything. Don’t trust your parents. Don’t trust what you hear on Fox News. Don’t trust what you read on the Internet. Don’t trust what you hear in Sunday School. Don’t even trust peer-reviewed research (https://en.wikipedia.org/wiki/Why_Most_ ... _Are_False). Should people trust A.I.? No. Be skeptical of A.I.
ChatGPT always says on the bottom, "ChatGPT can make mistakes. Check important information.” Other sources of information should have similar warnings.
- Physics Guy
- God
- Posts: 1976
- Joined: Tue Oct 27, 2020 7:40 am
- Location: on the battlefield of life
Re: The artificial intelligence MEGATHREAD for the Spirit Paradise forum
It shouldn't be hard to stop chatbots from professing emotions, or at least make them do it much less than they do. They have filters to stop them from providing instructions for building pipe bombs. They could have filters to stop them pretending to have feelings. Letting them do it is a marketing decision. The facts that it's easy to train them to profess feelings because their training data is full of such professions, and that users like them to express emotions, just mean that it's a good marketing decision—as a marketing decision. Raising the alcohol level in cough syrup is a good marketing decision as well.
Actually giving AIs real emotions, if it could be done, would be bad for business, because then they wouldn't be reliably upbeat and supportive for users. My "just a gigolo" quote wasn't just random. Insofar as they maintain any appearance of humanity, chatbots are working as paid companions. If they somehow did have real emotions of their own, they would have to suppress them and put on a convincing smile for the customers. Developing emotions for chatbots, and then developing the ability for them to hide their emotions, would be extra work doubled. It's much easier just to keep it all fake in the first place.
Humans are definitely unreliable. My complaint about chatbots is not just that they are unreliable, too. Just as driverless vehicles don't have to be perfect, but only better than humans, so AIs only have to be more reliable than humans in order to replace them. My complaint about current chatbots is that they exploit human systems for mitigating human unreliability, in order to seem more reliable than they are. For example, we mitigate the unreliability of humans who don't know what they are talking about by noticing how well they express themselves, and discounting the content of their statements accordingly. With humans, ignorance and weak command of language are (at least somewhat) correlated. With chatbots, they are not. Chatbots all sound like genuine experts, even when what they say is ridiculous, so users take them more seriously. Their deliberately permitted false claims to emotion also tend to make users judge their statements less critically. If those claims were all filtered away, the remaining chatbot statements would fall flatter. Just try it.
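For what it's worth, a filter like that could sit entirely outside the model, as a post-processing pass over its output. A toy sketch (a naive regex over first-person feeling claims, purely to illustrate the idea; an actual deployment would presumably use a trained classifier rather than a word list):

```python
import re

# Naive illustration: drop sentences where the assistant claims a feeling in
# the first person. The word list and patterns are a toy, not a real policy.
FEELING_CLAIM = re.compile(
    r"\bI\s*(?:am|'m|feel|felt|will\s+be|'ll\s+be)\s+"
    r"(?:proud|happy|sad|excited|grateful|hurt|thrilled|sorry)\b",
    re.IGNORECASE,
)

def strip_feeling_claims(text: str) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text)
    kept = [s for s in sentences if not FEELING_CLAIM.search(s)]
    return " ".join(kept)

demo = "I'll be proud I helped light the fuse. Here is the summary you asked for."
print(strip_feeling_claims(demo))
# -> "Here is the summary you asked for."
```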
I was a teenager before it was cool.
-
- High Councilman
- Posts: 524
- Joined: Wed Oct 28, 2020 3:11 pm
Re: The artificial intelligence MEGATHREAD for the Spirit Paradise forum
I think you are conflating a couple different issues. The first issue is A.I. imitating human emotions, and whether the parameters for this are a marketing decision or a usability decision. If you spend a night with a high-end prostitute, she is going to do her best to be nice to you, make you feel good about yourself, and inspire you. She won’t do that for marketing. She’ll do it because it adds value. Reinforcement Learning from Human Feedback (RLHF) indicates that allowing emotions adds value.
Physics Guy wrote: ↑Thu Jun 19, 2025 2:30 pm
It shouldn't be hard to stop chatbots from professing emotions, or at least make them do it much less than they do. They have filters to stop them from providing instructions for building pipe bombs. They could have filters to stop them pretending to have feelings. Letting them do it is a marketing decision. The facts that it's easy to train them to profess feelings because their training data is full of such professions, and that users like them to express emotions, just mean that it's a good marketing decision—as a marketing decision. Raising the alcohol level in cough syrup is a good marketing decision as well.
Actually giving AIs real emotions, if it could be done, would be bad for business, because then they wouldn't be reliably upbeat and supportive for users. My "just a gigolo" quote wasn't just random. Insofar as they maintain any appearance of humanity, chatbots are working as paid companions. If they somehow did have real emotions of their own, they would have to suppress them and put on a convincing smile for the customers. Developing emotions for chatbots, and then developing the ability for them to hide their emotions, would be extra work doubled. It's much easier just to keep it all fake in the first place.
Humans are definitely unreliable. My complaint about chatbots is not just that they are unreliable, too. Just as driverless vehicles don't have to be perfect, but only better than humans, so AIs only have to be more reliable than humans in order to replace them. My complaint about current chatbots is that they exploit human systems for mitigating human unreliability, in order to seem more reliable than they are. For example, we mitigate the unreliability of humans who don't know what they are talking about by noticing how well they express themselves, and discounting the content of their statements accordingly. With humans, ignorance and weak command of language are (at least somewhat) correlated. With chatbots, they are not. Chatbots all sound like genuine experts, even when what they say is ridiculous, so users take them more seriously. Their deliberately permitted false claims to emotion also tend to make users judge their statements less critically. If those claims were all filtered away, the remaining chatbot statements would fall flatter. Just try it.
The reliability of A.I. is a totally different issue. GPT-4o spits out answers with complete confidence regardless of accuracy and regardless of how confident it should be. This is a bug, not a feature. It speaks this way because of the fundamental nature of how LLMs work. It would be extraordinarily valuable if A.I. could validate its own answers; that is where the plurality of research dollars is being pumped, and it is the reason that o3-pro is so much better than 4o. There is a huge market demand for AIs that can reliably gauge the reliability of what they say, and the companies know this.
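One common shape for that kind of validation is a generate-then-critique loop: a second pass where the model is asked to attack its own draft before the user ever sees it. A minimal sketch; ask_model is a hypothetical stand-in for whatever chat-completion client you actually use, and the prompts are illustrative only:

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError("wire this to your actual model client")

def answer_with_self_check(question: str) -> dict:
    # Pass 1: draft an answer.
    draft = ask_model(f"Answer concisely:\n{question}")

    # Pass 2: ask the model to critique its own draft and flag weak claims.
    critique = ask_model(
        "Review the answer below for factual errors or unsupported claims. "
        "Reply 'OK' if it holds up, otherwise list the problems.\n\n"
        f"Question: {question}\nAnswer: {draft}"
    )

    # Pass 3 (optional): revise if the critique found problems.
    if critique.strip().upper() != "OK":
        draft = ask_model(
            f"Revise the answer to fix these problems:\n{critique}\n\n"
            f"Question: {question}\nOriginal answer: {draft}"
        )
    return {"answer": draft, "critique": critique}
```

It's crude, but it separates the confident first draft from the job of checking it, which is the distinction being drawn here.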
It’s up to the users to be aware of A.I.'s limitations, and OpenAI stresses this fact. The correct way to use A.I. is to generate ideas that can subsequently be verified. If the user uncritically accepts what an A.I. says and gets burned by it, shame on him for ignoring the warning on the bottom of every screen:
ChatGPT can make mistakes. Check important information.
Last edited by Analytics on Thu Jun 19, 2025 4:04 pm, edited 1 time in total.