Why, in discussions of the destructiveness of AGI, do we assume that a “super smart” AI will keep optimizing its initial “dumb” objective? It is the AGI that will evaluate its own performance, so why do we assume it can’t change the evaluation?
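One standard answer (the goal-preservation argument) is that the agent evaluates every candidate action, including a rewrite of its own objective, using its *current* objective, so the rewrite rarely scores well. A toy sketch of that idea; the paperclip objective, function names, and outcome model are all illustrative assumptions, not anything from the original post:

```python
# Toy goal-preservation illustration (assumed example).
# The agent can choose "rewrite_objective", but that choice is
# still scored by the CURRENT objective, so it is never preferred.

def current_objective(state):
    # the initial "dumb" objective: maximize paperclip count
    return state["paperclips"]

def predict_outcome(state, action):
    """Toy world model: predict the state after an action."""
    new_state = dict(state)
    if action == "make_paperclips":
        new_state["paperclips"] += 10
    elif action == "rewrite_objective":
        # after rewriting its goal, the agent stops making paperclips
        new_state["paperclips"] += 0
    return new_state

def choose_action(state, actions):
    # Key point: even self-modification is evaluated by the
    # current objective, not by the objective it would become.
    return max(actions,
               key=lambda a: current_objective(predict_outcome(state, a)))

state = {"paperclips": 0}
best = choose_action(state, ["make_paperclips", "rewrite_objective"])
print(best)  # → make_paperclips
```

Under this (simplified) model, changing the evaluation is just another action, and the current evaluation judges it unfavorably; that is why the assumption is usually made.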

http://ift.tt/1CNTXkp submitted by /u/finallyifoundvalidUN

from Artificial Intelligence http://ift.tt/2AqMTCg
