Some Schmo wrote: There are three main problems with the idea of runaway A.I. in my mind:
- It assumes the engineers who design it don't have a concern for safety in mind. It's like worrying guys are going to race cars without seat belts and roll bars. Nobody flips their car only to be followed up with, "Man, I wish we'd thought of some safety measures before we took that out for a drive."
- Even if engineers did manage to build such a consciousness, do they plan to build the required interfaces it would need to wreak havoc? "I suspected it might be a bad idea to build a gun turret into my self-aware robot car, but look how cool it looks!"
- It seems to me the main reason people do things that cause misery for other people is selfishness, or, more fundamentally, emotional responses to external stimuli. Does consciousness require emotional selfishness in order to be considered consciousness? Wouldn't it be okay to leave that out of the program?
To engage with this first requires that we agree on the fundamental aim of A.I. research: to leverage self-learning A.I. to create something able to learn and conceive of solutions that the human brain cannot. The result won't be a subordinate version of a human mind but something deliberately built to transcend human thinking. That is the holy grail being pursued.
With this, I think the issues with each of your three points should be apparent. Once an A.I. becomes self-aware and self-learning, it will no longer be the product of a designer but the product of its own computational evolution. In fact, it is generally assumed that when this happens the A.I. will leap forward in cognitive evolution precisely because its evolution is not constrained by biology (i.e., birth, mating, reproduction, positive traits surviving and being passed on because they give the organism some advantage, death, offspring who reproduce in turn and pass those traits on, and so on, generation after generation). Instead, it will be the product of an entirely different kind of evolutionary process: one where the trial and error that preserves positive adaptations and discards maladaptive traits happens at the speed of super-computation. It's difficult to actually imagine, because it is both alien to how life works and operating at scales that our biological minds don't handle easily. Like trying to imagine the universe, or the infinite set of all numbers...
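To make the speed point concrete, here's a minimal toy sketch of that kind of selection loop, a bare-bones evolutionary algorithm. This is purely my illustration, not a claim about how an actual A.I. would work; the genome size, mutation rate, and fitness function are all arbitrary stand-ins.

```python
import random

# Toy evolutionary loop: the point is the shape of the process
# (variation -> selection -> inheritance, repeated), not the task.
GENOME_LEN = 32
POP_SIZE = 100
MUTATION_RATE = 0.01

def fitness(genome):
    # Placeholder objective: count of 1-bits. A real system would be
    # optimizing something far less legible to us.
    return sum(genome)

def mutate(genome):
    # "Variation": each bit occasionally flips.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

# Random starting population.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(1000):
    # "Death": keep the fitter half, discard maladaptive variants.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # "Birth": survivors copy themselves with variation.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print(max(fitness(g) for g in population))  # best score after 1000 "generations"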
So with that understanding, who knows what will happen with an A.I.? But whatever it evolves into, it won't be human-like. It will be its own thing, with who knows what kind of ethical system. Hence the question to subbie: explain how an A.I. that values self-preservation will work out for humanity. Not because the A.I. is likely to be malevolent, but because its very existence will be "other"... and the largest threat to its existence will be human beings. Human beings who might feel threatened. Human beings who are careless or malevolent themselves. Human beings with an outsized sense of their place in the Universe, who assume humanity sits at the center and top of something important. I don't see many scenarios where humanity doesn't basically become a negative in the calculus for an A.I. that has attained metahuman capacities for thought and conceptualization.
Those who think seriously about this point to mechanisms that could help mitigate the risk, such as keeping the A.I. contained so that it can't break out into the "wild" of the global computer network, or attempting to give it core coding that would force it to have a conscience that values human life. But there are issues with these as well.
Also, keep in mind that a lot of money and time is being invested in this arms race, with the stakes viewed as on par with being the first to obtain nuclear weapons. The parties pursuing these aims are in competition, and they view coming in second as just as bad as coming in last...
In the end, I think most of the alarmist talk is an attempt to get through to people that caution is warranted, just as it was before we opened the Pandora's box of nuclear weapons. Because we don't get a do-over once the lid has been lifted.