In the staircase thought experiment about intelligence, with apes on the step below us and SAI on the step above, we say that the lower levels cannot comprehend the levels above them.
I agree up to a point, but I believe we have reached a level of intelligence where we can at least imagine what SAI will be like, however accurate that picture turns out to be. And if we can't truly comprehend what SAI will be, how can we give it 'rules' to follow and 'hope' it will stay on course?
Therefore, surely all we are really designing are the tools that will allow SAI to design itself? Teaching it to think and learn? Once it can rewrite its own code to self-improve, we hand over the keys? No?
from Artificial Intelligence http://ift.tt/2snE6KU