DT: Cryonics or Cremation?

The Off-Topic forum for anything non-LDS related, such as sports or politics. Rated PG through PG-13.
Post Reply
_Some Schmo
_Emeritus
Posts: 15602
Joined: Tue Mar 27, 2007 2:59 pm

Re: DT: Cryonics or Cremation?

Post by _Some Schmo »

What this whole argument seems to come down to is, "If we're successful in building a machine that is self-aware, and that self-aware machine no longer cares about humans, it could be really bad for us."

But that, in my view, is like saying, "If we're successful in figuring out how to destroy other people remotely with just our thoughts, it could be really bad for us." Well yes, it would be very bad, but do we currently have a reason to believe anyone's close to cracking the code on malevolent telekinesis? I mean, just think how dominant a country would become if they were first to figure it out!

And yet, I still can't work up any worry for that prospect.
God belief is for people who don't want to live life on the universe's terms.
_Res Ipsa
_Emeritus
Posts: 10274
Joined: Fri Oct 05, 2012 11:37 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Res Ipsa »

I just asked Alexa to play music she recommended for me. She said "Here's something you might like: music from the '60s." So now I'm listening to Hey Jude.

I don't know how to feel about this.

I find the prospect of doubling the human lifespan more frightening than a rogue A.I.
“The ideal subject of totalitarian rule is not the convinced Nazi or the dedicated communist, but people for whom the distinction between fact and fiction, true and false, no longer exists.”

― Hannah Arendt, The Origins of Totalitarianism, 1951
_Some Schmo
_Emeritus
Posts: 15602
Joined: Tue Mar 27, 2007 2:59 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Some Schmo »

Res Ipsa wrote:I just asked Alexa to play music she recommended for me. She said "Here's something you might like: music from the '60s." So now I'm listening to Hey Jude.

I don't know how to feel about this.

One of the reasons I say people will believe we have A.I. before we actually have it is that we already interact with programs as if they are alive. People yell at video games. We get pissed when a button's disabled that shouldn't be, or if a program won't accept our passwords.

For Google or YouTube or Alexa or whatever to make its suggestions for what you might like is the easiest thing in the world to do (store data on the user's preferences and hook them up with similar material - the only challenge is categorizing content) and yet this simple process almost feels like an actual someone is watching us. It's illusory.
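To make that concrete, the whole "store preferences, match similar material" trick fits in a few lines. Here's a toy sketch; the categories, titles, and listening history are all invented for illustration:

```python
from collections import Counter

# Each item the user consumed, tagged by category. The categorizing
# is the only hard part, as noted above.
history = ["60s rock", "60s rock", "classical", "60s rock"]

# Catalog of items we could suggest, also pre-categorized.
catalog = {
    "Hey Jude": "60s rock",
    "Moonlight Sonata": "classical",
    "Lo-fi beats": "electronic",
}

def recommend(history, catalog):
    """Suggest the catalog item from the user's most-consumed category."""
    counts = Counter(history)
    return max(catalog, key=lambda item: counts[catalog[item]])

print(recommend(history, catalog))
```

No actual someone watching you: just a frequency count and a lookup.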
God belief is for people who don't want to live life on the universe's terms.
_Jersey Girl
_Emeritus
Posts: 34407
Joined: Wed Oct 25, 2006 1:16 am

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Jersey Girl »

DoubtingThomas wrote:
honorentheos wrote:
In the end, I think most of the alarmist talk is to try and get through to people that caution is warranted, just as it was with pursuing opening the Pandora's box of nuclear weapons. Because we don't get a do-over when the lid has been lifted.


We shouldn't be too cautious, I think we need to take risks and accelerate A.I. research.


How many risks are you willing to take to possibly accelerate your own extinction? You want to take a leap without at least attempting to predict outcomes and pitfalls? Is racing into the future all that matters?

Stephen Hawking issued warnings regarding A.I. I'm surprised you haven't read them, or have you? I thought you'd praised Hawking in past posts.

Here's a quick article for you: www.newsweek.com/stephen-hawking-artifi ... ion-703630
Failure is not falling down but refusing to get up.
Chinese Proverb
_Some Schmo
_Emeritus
Posts: 15602
Joined: Tue Mar 27, 2007 2:59 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Some Schmo »

Some Schmo wrote:It's illusory.

By the way, the fact that it's illusory doesn't change how you react to it, or the fact that it can feel disconcerting when the program seems to know you, or just the simple fact you know they're making a note of everything you're consuming on their app.

I guess I think of it a little like going to the doctor. The more he knows about you, the better his help. You've just got to find a doctor you can trust with the data... and trust that the doctor's brain isn't hacked.
God belief is for people who don't want to live life on the universe's terms.
_DoubtingThomas
_Emeritus
Posts: 4551
Joined: Thu Sep 01, 2016 7:04 am

Re: DoubtingThomas: Cryonics or Cremation?

Post by _DoubtingThomas »

honorentheos wrote:
Themis wrote:Honorentheos does not appear to be suggesting we don't pursue A.I., only that we consider all the potential risks as we do so.

Exactly. The concern comes from the wild arms-race pursuit of something with extinction event potential which seems to be missed by many people as evidenced in this thread. When Elon Musk or Sam Harris voice concerns, it's because people aren't concerned and should be.


honorentheos, you are making valid points, but we should also be concerned about slowing down A.I. research. Slowing progress is not a good thing, because life is too short. Imagine if I die of cancer a year before A.I. finds a cure; it would be horrible.
_Some Schmo
_Emeritus
Posts: 15602
Joined: Tue Mar 27, 2007 2:59 pm

Re: DT: Cryonics or Cremation?

Post by _Some Schmo »

Just considering the notion of A.I. evolution for a moment, I'd really love to know what that would look like. Aside from the obvious "one program generates another program" paradigm, what, specifically, would the reproduction frequency and selection methods be?

It would have to have its own selection pressure built into itself, because its environment is a static one of silicon chips and hard drives. It's not changing, and the chips aren't reproducing. This is software reproduction. The selection pressure would have to be some combination of external data feeds and a set of values to make choices, since we have to consider it could mutate in infinite directions.

What does the mutation look like in the first place? Some kind of mutation process has to be introduced - it can't just keep building the same thing over and over and expect to make progress. Some constraints would have to be placed on not only the depth of iterations but also the width. How often do we decide to let it re-engineer itself? Do we let it build siblings or a single lineage? What are the selection parameters? How do we limit it to only optimal choices (optimal being perfectly safe for humanity, even at the cost of intelligence)? Do we delete older versions to make room for new? Seems to me that's where the value system is introduced.

And even with all that, we're still talking about executable files that would reproduce similar executable files.
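For what it's worth, those knobs (mutation rate, depth vs. width, who gets deleted) are exactly the parameters of a garden-variety genetic algorithm. A toy sketch, where each "program" is just a list of numbers and the fitness function stands in for the human-supplied value system (all the constants here are arbitrary):

```python
import random

random.seed(42)

TARGET = [3, 1, 4, 1, 5]      # the behavior we, the humans, want
POP_SIZE = 8                  # "width": how many siblings per generation
GENERATIONS = 200             # "depth": how many re-engineerings we allow
MUTATION_RATE = 0.2           # how often a gene changes when copied

def fitness(genome):
    """The externally imposed value system: closeness to what we want."""
    return -sum(abs(g - t) for g, t in zip(genome, TARGET))

def mutate(genome):
    """The mutation process: copy with occasional random tweaks."""
    return [g + random.choice([-1, 1]) if random.random() < MUTATION_RATE else g
            for g in genome]

# Start with identical, blank "programs".
population = [[0, 0, 0, 0, 0] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    best = max(population, key=fitness)
    # Older versions are deleted to make room; only the parent survives.
    population = [best] + [mutate(best) for _ in range(POP_SIZE - 1)]

print(max(population, key=fitness))
```

The point of the toy: every one of those questions shows up as a constant somebody had to choose.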
God belief is for people who don't want to live life on the universe's terms.
_Some Schmo
_Emeritus
Posts: 15602
Joined: Tue Mar 27, 2007 2:59 pm

Re: DT: Cryonics or Cremation?

Post by _Some Schmo »

Thinking about the evolution question some more and how that's all supposed to work, I suppose we could almost think of assessing the various versions of A.I. that are created like breeding dogs, because ultimately, we're building it. We, as humans, are already selecting out what we desire from our computer programs. We determine whether a new version is an improvement. I'm not sure how you get to exponential (i.e. out of control) growth if we, as humans, have to evaluate each generation. Is there a point where we say, "OK, A.I., you're on your own. You can select which version of yourself you think is best now"?

How can you really build selection pressure into the program? It has to be external, otherwise, every version of the program is going to choose itself.

You'd almost have to set up a sort of separate but integrated mother brain environment for child brains to grow up in, and let the mother make the selection among her children for who gets to take over as the next mother brain. There has to be some way for old versions to give way to new ones based on the "fittest" selection.
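A miniature of that mother-brain arrangement, with the judge kept strictly outside the candidates so no version gets to score itself (the "capability" and "risk" numbers are invented for illustration):

```python
import random

random.seed(7)

def external_judge(child):
    """Selection pressure from outside: no child scores itself.
    Safety is weighted more heavily than raw capability."""
    return child["capability"] - 2 * child["risk"]

def spawn_children(mother, n=5):
    """The mother brain produces variant children of herself."""
    return [
        {
            "capability": mother["capability"] + random.uniform(0.0, 1.0),
            "risk": max(0.0, mother["risk"] + random.uniform(-0.5, 0.5)),
        }
        for _ in range(n)
    ]

mother = {"capability": 1.0, "risk": 1.0}
for generation in range(10):
    children = spawn_children(mother)
    # The old version gives way only if a child beats it on the
    # external score; otherwise the current mother stays in place.
    mother = max(children + [mother], key=external_judge)

print(mother)
```

The "fittest" handoff happens in that last `max` call, and note that the ranking function lives entirely outside the candidates being ranked.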
God belief is for people who don't want to live life on the universe's terms.
_honorentheos
_Emeritus
Posts: 11104
Joined: Thu Feb 04, 2010 5:17 am

Re: DoubtingThomas: Cryonics or Cremation?

Post by _honorentheos »

To think about the evolutionary process, consider what it would mean for you if you could: a) read and understand your own DNA code, b) manipulate and rewrite that code in place, c) put the changes to the test in a virtual environment that runs entirely in your own mind, so you can discard changes that don't work and build on changes that lead to improved results, and d) do all of this at such a fast pace that you could make and test thousands of changes in the time it took someone watching you to blink.

An advanced A.I. would contain all of the factors of its own evolutionary process, from environment to knowledge of its "DNA", with the advantage of operating at computational speeds rather than needing generations of birth, sex, and death for evolution to work. This is part of what people are voicing when they raise the concern. When it crosses the singularity and becomes self-aware, it is almost guaranteed to leap forward from that moment of first consciousness to advanced intelligence before a person could reach across a table and pull the plug. It makes the idea that we could easily stop an out-of-control A.I. seem naïve. Like a kid telling someone they aren't worried about being shot because they would just catch the bullet the same way they could catch a rubber band shot at them since that is their experience of being shot at up to that point.
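Steps (a) through (d) can be caricatured in a dozen lines of Python: the program holds its own "DNA" as data it can read and rewrite, tests each edit in a simulated environment, keeps only what works, and repeats for roughly the duration of one blink (the "DNA" and the test bench are, of course, invented stand-ins):

```python
import random
import time

random.seed(0)

def simulate(dna):
    """Step (c): a test bench entirely 'in its own mind'.
    Here the environment simply rewards DNA summing to 100."""
    return -abs(sum(dna) - 100)

dna = [0] * 10                    # step (a): its own readable "code"
start = time.perf_counter()
edits_tested = 0
while time.perf_counter() - start < 0.3:    # roughly one human blink
    edits_tested += 1
    candidate = list(dna)
    candidate[random.randrange(len(candidate))] += random.choice([-1, 1])
    if simulate(candidate) >= simulate(dna):
        dna = candidate           # steps (b) and (d): rewrite in place,
                                  # keeping only changes that test well

print(edits_tested, simulate(dna))
```

Even this throwaway loop tests many thousands of edits per blink on ordinary hardware; that asymmetry with generational biological evolution is the whole point.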
The world is always full of the sound of waves... but who knows the heart of the sea, a hundred feet down? Who knows its depth?
~ Eiji Yoshikawa
_Some Schmo
_Emeritus
Posts: 15602
Joined: Tue Mar 27, 2007 2:59 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Some Schmo »

honorentheos wrote:It makes the idea that we could easily stop an out-of-control A.I. seem naïve. Like a kid telling someone they aren't worried about being shot because they would just catch the bullet the same way they could catch a rubber band shot at them since that is their experience of being shot at up to that point.

But you're making a ton of assumptions (many of them concerning professional competence) before ever getting to the out-of-control A.I. That's all I'm saying.

I can think of lots of things that should scare the crap out of me if I thought they had a solid chance of coming true.
God belief is for people who don't want to live life on the universe's terms.
Post Reply