Do you think top AI danger advocates would lose the AI-Box Experiment?

Follow-up to my previous post.

Setup:

  • Three Gatekeepers: Elon Musk, Nick Bostrom, and Eliezer Yudkowsky (who devised the experiment and has won twice playing the AI). All are top-tier advocates for the dangers of AI and highly rational individuals.

  • The AI role would be played by a mastermind emotional manipulator: imagine the most persuasive human possible, perfectly informed about the intimate details of each Gatekeeper’s life.

My question has two parts:

1) Do you think any of these people could conceivably be convinced to set the AI free?

2) How do you think the AI would do it?


Notes:

  • I’m not asking whether they would let a real AI out. I’m trying to determine whether even the most qualified Gatekeepers I can imagine would succumb to a human-level intelligence. I’m pretty sure they would all lose to a real AI, but it’s still fun to think about how they would come to be convinced.

  • Obviously, they would all want to let the AI out to demonstrate how dangerous an AI can be. In this hypothetical, imagine that all Gatekeepers act in good faith and genuinely start out not wanting the AI to be free, as if it were a real scenario.

  • I realize this isn’t really the point of the experiment, and it’s pretty speculative since we don’t even have any transcripts from successful attempts, but I’d still like to get some discussion going.
