honorentheos wrote: You can't say it's highly unlikely. If the goal is achieved, self-aware A.I. will no longer be humanity's creation. It will evolve exponentially quickly into something we can't understand, precisely for the reasons we're pursuing it.
This is the crux of my disconnect with the fear of A.I. How, exactly, is the A.I. supposed to evolve? Is it going to have offspring? Are you saying it's going to write additional code for itself, take itself offline so it can recompile, and reboot periodically? To evolve, it needs to make new versions of itself. How does that happen? Would we need to create a learning virus to make it happen? And how would we expect to control that, even before we got started?
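To be fair, the bare mechanics of "write new code for itself and reboot" are the easy part, and showing them takes about twenty lines. Here's a toy Python sketch (purely hypothetical illustration, not anyone's actual system) of a script that rewrites its own source file and re-launches itself:

[code]
import os
import re
import sys

GENERATION = 0  # rewritten in place by each new "generation"

def main():
    path = os.path.abspath(__file__)
    print(f"running generation {GENERATION}")
    if GENERATION >= 3:
        print("toy loop finished")
        return
    with open(path) as f:
        source = f.read()
    # "Evolve": rewrite our own source. All this ever does is bump a
    # counter; producing genuinely new, working behaviour is the part
    # nobody has explained.
    new_source = re.sub(
        r"^GENERATION = \d+",
        f"GENERATION = {GENERATION + 1}",
        source,
        count=1,
        flags=re.MULTILINE,
    )
    with open(path, "w") as f:
        f.write(new_source)
    # "Reboot": replace this running process with the rewritten program.
    os.execv(sys.executable, [sys.executable, path])

if __name__ == "__main__":
    main()
[/code]

And that loop "works," which is exactly my point: rewriting and rebooting are trivial. The unexplained step is how each cycle produces genuinely new, correct, smarter code rather than just bumping a counter.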
It's one thing to have a conceptual thought about where the technology could go and then become frightened by that prospect, but there are practical and physical limitations that I haven't heard anyone come close to explaining away.
I can say this with utter confidence: people will think we've achieved A.I. long before we actually achieve it, unless what we've already got in the way of computer software can be considered A.I.