DT: Cryonics or Cremation?

_Some Schmo
_Emeritus
Posts: 15602
Joined: Tue Mar 27, 2007 2:59 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Some Schmo »

honorentheos wrote:You can't say it's highly unlikely. If the goal is achieved, self-aware A.I. will no longer be humanity's creation. It will evolve exponentially quickly into something we can't understand, precisely for the reasons we're pursuing it.

This is the crux of the disconnect I have with the fear of A.I. How, exactly, is the A.I. to evolve? Is it going to have offspring? Are you saying it's going to write additional code for itself and take itself offline so it can recompile itself and reboot periodically? To evolve, it needs to make new versions of itself. How does that happen? Do we need to create a learning virus to make it happen? How would we expect to control that even before we got started?

It's one thing to have a conceptual thought about where the technology can go, and then become frightened by that prospect, but there are certain practical and physical limitations that I haven't heard anyone come close to explaining.

I can say this with utter confidence: people will think we've achieved A.I. long before we ever actually achieve A.I., unless what we've already got in the way of computer software can be considered A.I.
God belief is for people who don't want to live life on the universe's terms.
_Themis
_Emeritus
Posts: 13426
Joined: Wed Feb 17, 2010 6:43 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Themis »

DoubtingThomas wrote:Life is precious and A.I. can save us. Nature is our true enemy. Nature gives us cancer, heart disease, and so forth.


Didn't nature create life, including humans? Honorentheos does not appear to be suggesting we don't pursue A.I., only that we consider all the potential risks as we do so. A.I. is not going anywhere and we cannot stop it, but maybe we should think about ways to reduce potential threats. Most new technologies have both positive and negative effects on humans, and A.I. will not be any different. I suspect we humans will have to adapt and change ourselves to survive well into the future.
42
_Themis
_Emeritus
Posts: 13426
Joined: Wed Feb 17, 2010 6:43 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Themis »

Some Schmo wrote:Are you saying it's going to write additional code for itself


https://www.fastcompany.com/40564859/an-ai-can-now-write-its-own-code
42
_honorentheos
_Emeritus
Posts: 11104
Joined: Thu Feb 04, 2010 5:17 am

Re: DoubtingThomas: Cryonics or Cremation?

Post by _honorentheos »

Some Schmo wrote:
honorentheos wrote:You can't say it's highly unlikely. If the goal is achieved, self-aware A.I. will no longer be humanity's creation. It will evolve exponentially quickly into something we can't understand, precisely for the reasons we're pursuing it.

This is the crux of the disconnect I have with the fear of A.I. How, exactly, is the A.I. to evolve?

The point of the entire exercise is for the A.I. to evolve through self-learning. It doesn't need to take itself offline. The entire concept is built on A.I. using trial and error to transform into a form of self-awareness and problem-solving ability that crosses the boundary into questions of whether it will be a person or have an identity.

This thing that I am describing is the explicit aim of the pursuit. It's not something posited as a fringe concern but a concern that gets raised by understanding what is hoped to be achieved.
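
To make "trial and error" concrete, here is a toy sketch of that loop in its loosest possible sense: a program that proposes small random changes to its own parameters and keeps whichever version scores better. The names and the scoring function are invented purely for illustration; real self-learning systems use far more sophisticated machinery, but the shape of the loop is the point.

[code]
import random

def fitness(params):
    # Stand-in objective: how "good" the current behaviour is.
    # Invented for this sketch; a real system would measure task performance.
    return -sum((p - 3.0) ** 2 for p in params)

def self_improve(params, iterations=1000, step=0.1):
    best = list(params)
    best_score = fitness(best)
    for _ in range(iterations):
        # Propose a small random change to itself (the "trial")...
        candidate = [p + random.uniform(-step, step) for p in best]
        score = fitness(candidate)
        # ...and keep it only if it performs better (the "error" correction).
        if score > best_score:
            best, best_score = candidate, score
    return best

print(self_improve([0.0, 0.0]))  # drifts toward [3.0, 3.0] without any offline rebuild
[/code]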
The world is always full of the sound of waves... but who knows the heart of the sea, a hundred feet down? Who knows its depth?
~ Eiji Yoshikawa
_honorentheos
_Emeritus
Posts: 11104
Joined: Thu Feb 04, 2010 5:17 am

Re: DoubtingThomas: Cryonics or Cremation?

Post by _honorentheos »

Themis wrote:Honorentheos does not appear to be suggesting we don't pursue A.I., only that we consider all the potential risks as we do so.

Exactly. The concern comes from the wild arms-race pursuit of something with extinction-level potential, which seems to be missed by many people, as evidenced in this thread. When Elon Musk or Sam Harris voice concerns, it's because people aren't concerned and should be.
The world is always full of the sound of waves... but who knows the heart of the sea, a hundred feet down? Who knows its depth?
~ Eiji Yoshikawa
_Some Schmo
_Emeritus
Posts: 15602
Joined: Tue Mar 27, 2007 2:59 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Some Schmo »


From the link:
The A.I. studies all the code posted on GitHub and uses that to write its own code. Using a process called neural sketch learning, the A.I. reads all the code and then associates an "intent" behind each. Now when a human asks Bayou to create an app, Bayou associates the intent it's learned from code on GitHub to the user's request and begins writing the app it thinks the user wants.

So here we have an example of a specialized process that reads a "limited" data set (while probably vast, it's still limited to whatever has been written before) and writes code based on a user request. This code isn't recreating itself. It's generating other apps based on past ideas.
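
To illustrate the distinction I'm drawing, here's a toy sketch of the "intent matching" idea the article describes: a corpus of existing snippets tagged with intents, and a lookup that returns whichever snippet best matches the request. This is nothing like Bayou's actual neural sketch learning, and every name and snippet below is made up; it just shows why the output is bounded by what was already written, rather than the system rewriting itself.

[code]
# Hypothetical illustration only: a made-up "intent" corpus and a crude
# word-overlap lookup. Not Bayou's real approach or API.
CORPUS = {
    "read a file": "with open(path) as f:\n    data = f.read()",
    "make an http request": "import urllib.request\nresp = urllib.request.urlopen(url)",
    "sort a list": "items.sort()",
}

def generate(request):
    # Pick the intent sharing the most words with the user's request,
    # then return the snippet that was already written for that intent.
    words = set(request.lower().split())
    best = max(CORPUS, key=lambda intent: len(words & set(intent.split())))
    return CORPUS[best]

print(generate("please read a config file"))  # returns the pre-existing file-reading snippet
[/code]

Everything such a system can produce is a remix of the corpus it was fed; nothing in the loop modifies the generator itself.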
God belief is for people who don't want to live life on the universe's terms.
_Some Schmo
_Emeritus
Posts: 15602
Joined: Tue Mar 27, 2007 2:59 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Some Schmo »

honorentheos wrote:This thing that I am describing is the explicit aim of the pursuit. It's not something posited as a fringe concern but a concern that gets raised by understanding what is hoped to be achieved.

Fair enough.

Then I guess I'm doubtful this will be achieved, and therefore am not scared about it. Until someone posits plausible solutions to the biggest technical challenges, I'm forced to worry more about other threats.
God belief is for people who don't want to live life on the universe's terms.
_Some Schmo
_Emeritus
Posts: 15602
Joined: Tue Mar 27, 2007 2:59 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Some Schmo »

honorentheos wrote:
Themis wrote:Honorentheos does not appear to be suggesting we don't pursue A.I., only that we consider all the potential risks as we do so.

Exactly. The concern comes from the wild arms-race pursuit of something with extinction-level potential, which seems to be missed by many people, as evidenced in this thread. When Elon Musk or Sam Harris voice concerns, it's because people aren't concerned and should be.

I'm not nearly as concerned about Russia (or any other country) developing A.I. before we do as I am concerned about what Russia is trying to do to our country right now.
God belief is for people who don't want to live life on the universe's terms.
_Themis
_Emeritus
Posts: 13426
Joined: Wed Feb 17, 2010 6:43 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Themis »

Some Schmo wrote:So here we have an example of a specialized process that reads a "limited" data set (while probably vast, it's still limited to whatever has been written before) and writes code based on a user request. This code isn't recreating itself. It's generating other apps based on past ideas.


You seem to be stuck in the present rather than where the discussion is, which is where A.I. may be in the future.
42
_Some Schmo
_Emeritus
Posts: 15602
Joined: Tue Mar 27, 2007 2:59 pm

Re: DoubtingThomas: Cryonics or Cremation?

Post by _Some Schmo »

Themis wrote:You seem to be stuck in the present rather than where the discussion is, which is where A.I. may be in the future.

We're both stuck in the present. Everyone is. If I could go to the future more quickly than I already am, I might find out that I have cause to work up anxiety about this issue, but since I'm in the now, I'm not worried.
God belief is for people who don't want to live life on the universe's terms.