DT: Cryonics or Cremation?

_DoubtingThomas
_Emeritus
Posts: 4551
Joined: Thu Sep 01, 2016 7:04 am

Re: DoubtingThomas: Cryonics or Cremation?

Post by _DoubtingThomas »

Themis wrote:
DoubtingThomas wrote:I guess we will have to wait to see what happens. However, A.I. is a risk we should be willing to take.


What is the risk and why do you think the risk is worth it?


There is really nothing to lose. A.I. has the potential to save us all.

https://www.youtube.com/watch?v=BfDQNrVphLQ
_Themis
_Emeritus
Posts: 13426
Joined: Wed Feb 17, 2010 6:43 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Themis »

Some Schmo wrote:Error reporting has levels of severity. Programs with severe faults are never released. Most of the bugs you see in programs you use today are low severity - they don't impact the primary functions of the program.

I would consider a potentially dangerous A.I. to have a high-severity bug that would prevent it from ever going live.


I suppose people can define severity differently, but we see software mistakes all the time that have serious negative effects. Some of the big mistakes aren't discovered by their makers until well after release, and they usually need quick fixes before everyone who would use them in a bad way gets the chance.

But drones aren't controlled by A.I. Is it a valid fear that we're going to flip the switch on an A.I. and give it control of drone decisions?

Again - critical error.

The more I think about this conversation, the more I realize it's not A.I. people fear, it's the incompetence of the engineers who will create it.


It's a good point about fearing that the engineers will get something wrong. A.I. in some military drones can already do everything from taking off to landing. US law requires that the software only allow a human pushing a button to cause the drone to fire. That may not always be the case, and it certainly won't be with some parties. How easy would it be for bad actors to write software allowing a drone to make that decision, or to hack in and change the program?
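To make that concrete, here's a toy sketch in Python of a human-in-the-loop firing gate. The class, names, and logic are all hypothetical, invented for illustration; real drone software is obviously not this simple:

class FireControl:
    def __init__(self, require_human: bool = True):
        self.require_human = require_human  # the legal safeguard, reduced to a flag

    def request_fire(self, target: str, human_authorized: bool) -> bool:
        # Block the shot unless a human has pushed the button.
        if self.require_human and not human_authorized:
            print(f"Fire on {target} BLOCKED: no human authorization.")
            return False
        print(f"Firing on {target}.")
        return True

gate = FireControl()
gate.request_fire("target-1", human_authorized=False)   # blocked
gate.request_fire("target-1", human_authorized=True)    # allowed

rogue = FireControl(require_human=False)                # one changed default...
rogue.request_fire("target-2", human_authorized=False)  # ...and it fires anyway

The point of the sketch is that the safeguard is, in the end, a few lines of code and a default value; anyone who controls the software only has to change that default.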

Why on earth would programmers program in a potential disregard for human value (which is to say, why wouldn't they be very careful to continuously error check for said disregard)? This makes no sense to me.


I believe the issue is not so much that programmers will program with a disregard for human value, but that we may lose control of a future A.I. that is much smarter than us and can learn as it goes. Some programmers may simply not realize that something they are programming has flaws that will have negative effects on humans, and there are certainly some who will program not just with disregard for human value, but will do so intentionally. Think terrorists, or North Korea, or just some nerdy programmer guy who hates women because he can't get a girl.

There's another issue with fear about A.I. that I never hear people talk about, and that is that computer programs are not these huge, holistic systems the way a human brain largely is.


The concerns about A.I. are not about where we are today, but where we are heading in the future. A.I. is getting better, and if things keep going as they are, it is heading in a direction in which it will be more complex than the human brain.


To be honest, I just don't see A.I. happening in the science fiction/Westworld kind of way, where the programs become so complex that at some point they become self-aware. We already have programs that massively outperform humans on specific tasks, and have for many years. Computers are already vastly better at math than we are. We have programs that can consistently beat the best chess players in the world. Do we think they've become self-aware for having been programmed these talents?


See above about where we are headed in A.I. complexity. I am not sure we have good definitions of what self-aware means, or why that would be needed for A.I. to do things differently than what we want. The reality is that we are on an unstoppable train with A.I., and some people like Putin think whoever wins the A.I. race will rule the world. It's just a good idea to think about ways to make it work for us.
42
_honorentheos
_Emeritus
Posts: 11104
Joined: Thu Feb 04, 2010 5:17 am

Re: DoubtingThomas: Cryonics or Cremation?

Post by _honorentheos »

subbie wrote:
honorentheos wrote:Assume one thing -

something silly people say when they have no evidence... except it is more appropriate to phrase it as "imagine one thing..."

:lol: How many scopes have you written without assumptions but just things you ask your clients to imagine? This language game you're attempting to play is silly, subbie.

subbie wrote:
honorentheos wrote: a sense that existence is to be valued over nonexistence.

The existence of cancer is to be valued over the nonexistence of a human? Or is your "sense" more ambiguous, yet self-involved, than already proposed?

Cancer, a-rational as it is, behaves in a manner that shows it values its own existence. But that's not the point. Self-aware A.I., assumed to value its own existence over its non-existence, will behave in ways that further the former and minimize the risk of the latter. That's the assumption. You really messed up on grokking that one.

subbie wrote:
honorentheos wrote: Run that assumption through your preferred scenario for sentient A.I. entering the world stage and let's see how you exclude the risk,

I was not excluding anything, I was simply pointing out that evidence did not exist. Noting the difference between imagination and reality in this context is not an exclusion, but rather it is an inclusion of an accurate perspective.

All of existence is evidence for how something with a sense of self-preservation will behave. Viruses, multi-cell organisms including sentient humans, and non-human entities such as corporations all provide evidence.

subbie wrote:
honorentheos wrote: subbie. Let's see your mind at work on this.

Work on what? There is no reason to assume that A.I. will master the planet...in fact, all the current evidence points to A.I. being subservient to its human masters...you would have man over God but how can that ever be? By what measure and by what evidence can you reasonably conclude that A.I. would ever be in a position to master this planet?

Just start with an A.I. that is self-aware and values self-preservation. Lay out the scenario for how that works out great for humanity.

subbie wrote:It seems that you are hinting at the notion that freedom is a condition of intelligence and that dominance is the product of freedom...that somehow A.I. will become aware of its subservience; will "deduce" freedom from that subservience as necessary; and all this will lead to A.I. transcending from slave to master in some sort of season-ending cliff-hanger?
I mean, I understand the imagination here...the need to make A.I. inevitably have "human like" motives, morality, and meanings...but I just do not see any evidence for converting that imagination to a belief.
No. In fact, that's way, way off. You really, really suck at grokking. Take the time to engage the scenario above and we'll see what can be done to correct this.
The world is always full of the sound of waves... but who knows the heart of the sea, a hundred feet down? Who knows its depth?
~ Eiji Yoshikawa
_Some Schmo
_Emeritus
Posts: 15602
Joined: Tue Mar 27, 2007 2:59 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Some Schmo »

Themis wrote:It's just a good idea to think about ways to make it work for us.

Well, I can't argue with that.

I guess I'm not that concerned about it because I think there are more serious and imminent existential threats to humanity. I can't work up any emotion for the Terminator when Antarctica is melting into the ocean. Unless we can build an A.I. to convince our government we should do something about the climate, A.I. is one of the least of my concerns.
God belief is for people who don't want to live life on the universe's terms.
_honorentheos
_Emeritus
Posts: 11104
Joined: Thu Feb 04, 2010 5:17 am

Re: DoubtingThomas: Cryonics or Cremation?

Post by _honorentheos »

Some Schmo wrote:There are three main problems with the idea of runaway A.I. in my mind:

- It assumes the engineers who design it don't have a concern for safety in mind. It's like worrying guys are going to race cars without seat belts and roll bars. Nobody flips their car only to be followed up with, "Man, I wish we'd thought of some safety measures before we took that out for a drive."

- Even if engineers did manage to build such a consciousness, do they plan to build the required interfaces it would need to wreak havoc? "I suspected it might be a bad idea to build a gun turret into my self-aware robot car, but look how cool it looks!"

- It seems to me the main reason people do things that cause misery for other people is selfishness, or, more fundamentally, emotional responses to external stimuli. Does consciousness require emotional selfishness in order to be considered consciousness? Wouldn't it be OK to leave that out of the program?

To engage this first requires that we agree on the fundamental aim of A.I. research: to leverage self-learning A.I. to create something that has the ability to learn and conceive of solutions that the human brain cannot. The result won't be a subordinate version of a human mind but one intentionally pursued with the goal of transcending human thinking, because that is the holy grail being pursued.

With this, I think the issues with each of your three points should be apparent. Once an A.I. becomes self-aware and self-learning, it will no longer be the product of a designer but the product of its own computational evolution. In fact, it is generally assumed that when this happens the A.I. will leap forward in cognitive evolution by dint of the fact that its evolution is not constrained by biology (i.e., birth, mating, giving birth, positive traits surviving by being passed on when they give the organism an advantage of some kind, death, offspring being born who mate in turn and pass their positive traits on, etc., etc., etc...). Instead, it will be the product of an entirely different kind of evolutionary process, one where the trial and error leading to the survival of positive adaptations and the discarding of maladaptive traits happens at the speed of super-computation. It's difficult to actually imagine, because it is both alien to how life works and operating at scales that our biological minds don't deal with easily. Like trying to imagine the universe or the infinite set of all numbers...
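To give a feel for what "trial and error at the speed of super-computation" means, here's a minimal sketch of that kind of evolutionary loop in Python. The bitstring "genome" and the toy fitness goal are invented for illustration, not a claim about how real A.I. systems are built:

import random

GENOME_LEN = 20
POP_SIZE = 50

def fitness(genome):
    # Toy goal: the more 1-bits, the "fitter" the genome.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with small probability: random variation.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

# Random starting population.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    # "Death": only the fitter half survives.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # "Birth": survivors produce mutated offspring.
    offspring = [mutate(random.choice(survivors)) for _ in range(POP_SIZE // 2)]
    population = survivors + offspring

print("Best fitness after 100 generations:", max(fitness(g) for g in population))

A hundred generations of death, birth, and mutation run in milliseconds on an ordinary laptop. That gap between biological tempo and computational tempo is exactly the gap being described above.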

So with that understanding, who knows what will happen with an A.I.? But whatever it evolves into, it won't be human-like. It will be its own thing with who knows what kind of ethical system. Thus the question to subbie to explain how this A.I. valuing self-preservation will work out for humanity. Not because the A.I. is likely to be malevolent, but because its own existence will be "other"...and the largest threat to its existence will be human beings. Human beings who might feel threatened. Human beings who are careless or malevolent themselves. Human beings who have an outsized sense of their place in the Universe that assumes humanity is at the center and top of something important. I don't see many scenarios where humanity doesn't basically become a negative in the calculus for an A.I. that has obtained metahuman capabilities for thought and conceptualization.

Those who think seriously about this will point out mechanisms that could help mitigate the risk, such as keeping the A.I. contained so that it can't break out into the "wild" of the global computer network, or attempting to give it core coding that would force it to have a conscience that values human life. But there are issues with these as well.

In the end, keep in mind that a lot of money and time is being invested in this arms race with the stakes being viewed as on par with being the first to obtain nuclear weapons. The parties pursuing these aims are in competition and view coming in second as just as bad as coming in last...

In the end, I think most of the alarmist talk is to try and get through to people that caution is warranted, just as it was with opening the Pandora's box of nuclear weapons. Because we don't get a do-over once the lid has been lifted.
The world is always full of the sound of waves... but who knows the heart of the sea, a hundred feet down? Who knows its depth?
~ Eiji Yoshikawa
_DoubtingThomas
_Emeritus
Posts: 4551
Joined: Thu Sep 01, 2016 7:04 am

Re: DoubtingThomas: Cryonics or Cremation?

Post by _DoubtingThomas »

honorentheos wrote:
In the end, I think most of the alarmist talk is to try and get through to people that caution is warranted, just as it was with opening the Pandora's box of nuclear weapons. Because we don't get a do-over once the lid has been lifted.


We shouldn't be too cautious. I think we need to take risks and accelerate A.I. research.
_Gadianton
_Emeritus
Posts: 9947
Joined: Sat Jul 07, 2007 5:12 am

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Gadianton »

Some Schmo wrote:I guess I'm not that concerned about it because I think there are more serious and imminent existential threats to humanity. I can't work up any emotion for the Terminator


According to Sam Harris, this is problem #1. The A.I. threat doesn't instill the fear within us that it should.

Just look at DT.
Lou Midgley 08/20/2020: "...meat wad," and "cockroach" are pithy descriptions of human beings used by gemli? They were not fashioned by Professor Peterson.

LM 11/23/2018: one can explain away the soul of human beings...as...a Meat Unit, to use Professor Peterson's clever derogatory description of gemli's ideology.
_DoubtingThomas
_Emeritus
Posts: 4551
Joined: Thu Sep 01, 2016 7:04 am

Re: DoubtingThomas: Cryonics or Cremation?

Post by _DoubtingThomas »

Gadianton wrote:
According to Sam Harris, this is problem #1. The A.I. threat doesn't instill the fear within us that it should.

Just look at DoubtingThomas.


Well, there are a lot of things to fear: climate change, nuclear war, traffic accidents, strokes, cancer, antibiotic-resistant bacteria, and so forth. A.I. has the potential to improve medical research and can help us solve many world problems. We now have the technology to end deadly traffic accidents; the solution is for all of us to have self-driving cars. But no, the problem is that people are afraid of A.I.
_MeDotOrg
_Emeritus
Posts: 4761
Joined: Sun Jun 17, 2012 11:29 pm

Re: DT: Cryonics or Cremation?

Post by _MeDotOrg »

Many years ago I saw a show on early computers that had a section about A.I. One early machine ran a program every night in which it mulled over what it had learned during the day and perhaps came up with a valuable observation. Upon arriving at work one day, the computer lab employees found the following observation:

Most people are famous.

The programmers realized that most of the people the computer had learned about were famous. The computer had no idea of the real world, only what we told it. It was similar to another A.I. experiment, in which a robot was shown a picture of a pile of blocks and told to duplicate it. The robot would pick up the blocks and drop them, and the blocks would tumble to the floor. After watching the robot for a while, the researchers realized that it was trying to build the pile from the top block down. The robot did not understand gravity, and did not know it had to build the pile from the bottom up, not the top down.
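As a toy illustration of that "most people are famous" failure, consider the following Python sketch (the dataset is invented): a system that only reasons over the data it is given will faithfully reproduce the bias in that data.

# Everything this system "knows" about people comes from its training data.
training_data = [
    {"name": "Abraham Lincoln", "famous": True},
    {"name": "Marie Curie", "famous": True},
    {"name": "Isaac Newton", "famous": True},
    {"name": "Cleopatra", "famous": True},
    {"name": "a lab technician", "famous": False},
]

famous_fraction = sum(p["famous"] for p in training_data) / len(training_data)

if famous_fraction > 0.5:
    # Perfectly valid for the sample, wildly wrong about the world.
    print("Observation: most people are famous.")

The inference is flawless and the conclusion is absurd, which is exactly what happened to that early machine.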

There is so much data that we get from the world that it is virtually impossible for a computer to process it all. I think the danger is that A.I., like those early computers, will make erroneous assumptions from inadequate data. The frightening idea is that A.I. will enable humans to create something that is both seductive and damaging.
"The great problem of any civilization is how to rejuvenate itself without rebarbarization."
- Will Durant
"We've kept more promises than we've even made"
- Donald Trump
"Of what meaning is the world without mind? The question cannot exist."
- Edwin Land
_Some Schmo
_Emeritus
Posts: 15602
Joined: Tue Mar 27, 2007 2:59 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Some Schmo »

Gadianton wrote:
Some Schmo wrote:I guess I'm not that concerned about it because I think there are more serious and imminent existential threats to humanity. I can't work up any emotion for the Terminator


According to Sam Harris, this is problem #1. The A.I. threat doesn't instill the fear within us that it should.

Just look at DT.

Yeah, I listened to his TED Talk on the subject. It's one of the areas where we disagree.
God belief is for people who don't want to live life on the universe's terms.