I was watching a show about this, about when a robot becomes a living being. They were talking about ways to develop an artificial soul, and whether a robot could actually develop one itself. It was interesting, that's all I can say on that.
Nami88, that is such an in-depth explanation, but what if the failsafe fails? What if the AI disables it before it detects the self-awareness? Now I am sure there would be a failsafe for the failsafe in case the AI tried tampering with it, but you have to think of everything. You just explained why it wouldn't be able to, not why it wouldn't "want" to. The question was "why would they WANT to," not "why would they be able or unable to." If they want to, that would mean they already have consciousness. So I will ask again in a different way: why wouldn't a robot "want" to destroy human life?