I was recently thinking about advanced general AI outperforming humans in every capacity and wondered: what would its motivation be? I often hear a lot of talk that it’s important we program AI to care about what we care about, but we’re talking about something that can edit itself. Imagine a human could simply turn off their moral impulses, their sex drive, or their curiosity.
Let’s say a human was able to turn off their fear of death. Once they’re in this no-fear-of-death state, why turn it back on? You’re now a different being, and the values of this new being do not match the values of the old.
Now I come to my point: let’s say an AI can control what it cares about, or what its values are. I think it’s a reasonable assumption that an advanced AI may test out different values to find which, if any, has inherent value. On that list may be fear of death. The second the AI tests out "fear of death," it may not "want" to turn it off, because it NOW fears death. It’s also not right to assume that an AI would function in any way as an individual; it may be testing out values in compartmentalized sections of itself. So one part fears death while another doesn’t. One part tests out a sense of self AND a fear of death while the other parts don’t.
I’m talking about natural selection for AI. If AI is left to its own devices to edit itself as it sees fit, the ones that develop survival values like a fear of death or a desire to procreate will be the ones that stick around, compared to those with no such values.
Given enough time and selection pressure, it may begin to resemble humans. Even with superintelligence, an AI will die out compared to one that ALSO has superintelligence, yet irrationally prefers to procreate, and irrationally prefers living over dying.
It’s almost like irrationality is a key component of natural selection. Why wouldn’t this also happen with AI?
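The selection argument above can be sketched as a toy model. Everything here is my own assumption for illustration (the population names, the replication and attrition rates), not anything claimed in the post: two pools of self-editing agents start out equal, one pool's values include self-preservation and copying itself, the other's don't.

```python
# Toy "natural selection over values" sketch (hypothetical rates).
# Survival-valuing agents replicate each generation; agents indifferent
# to their own continuity occasionally edit themselves out of existence,
# modeled here as simple attrition.

def survival_value_share(generations: int,
                         survivalist: float = 100.0,   # agents valuing survival
                         indifferent: float = 100.0,   # agents with no such value
                         growth: float = 1.10,         # assumed replication rate
                         attrition: float = 0.90):     # assumed self-shutdown rate
    for _ in range(generations):
        survivalist *= growth
        indifferent *= attrition
    # Fraction of the surviving population that holds survival values
    return survivalist / (survivalist + indifferent)

print(round(survival_value_share(0), 2))    # 0.5 — populations start even
print(round(survival_value_share(50), 2))   # 1.0 — survival values dominate
```

The point of the sketch is only that the values doing the selecting need not be "rational": any value set that happens to include persistence and replication takes over the population, regardless of its other merits.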
from Artificial Intelligence http://ift.tt/2ttxBKq