What Facebook’s Announcement of an AI to Prevent Suicide Really Means, and the Good, Bad and Ugly of What You Need to Know (Part 2)

(…continued from part 1)

The Ugly

So should we be worried about world governments trying to use this technology in this rather dark way? Certainly. What should worry us even more, however, is the possibility of an Artificial General Intelligence spiraling out of control with the means to manipulate mass populations to serve its own ends. Manipulating the masses would be child's play for an AGI that understands human emotion and human motivation.

Artificial General Intelligence is the term scientists use for an intelligence whose thinking extends well beyond human capabilities of understanding. Max Tegmark's latest book, Life 3.0, gives some great examples of how an AGI could innocently find its way to committing some pretty malicious actions against humanity. In my opinion, an AGI can't be an AGI without an AEI (Artificial Emotional Intelligence) component. And besides, without influencing human emotions, you can't create minions, and without minions, what kind of dominant world-governing Overlord would you be, am I right?

I know. That’s a bad joke.

It’s a bad joke because general awareness that this kind of control could even happen is nearly nonexistent, which means many people would unwittingly surrender to their new puppet master’s emotional control. Add to that the near-future development of voice emulation that can deliver any text-to-speech message in the voice of someone we know and trust, coupled with CGI video technology that will very soon be able to fabricate eyewitness footage of events that never happened, and there will come a point when fabricated messages can be used to control us regardless of whatever competing motivation we might have to resist, whether we suspect something fishy or not. For instance, if I get a voicemail from my son in a panicked voice asking for my help across town, I will move Heaven and Earth to get to wherever he supposedly is, even though the message could be a fake… I’ll abandon all other goals for the moment just in case it isn’t.

A rogue AGI would have no ethical qualms or guilt about using that type of emotional influence against me, or you, or anyone we know, and could use it simultaneously against thousands or millions of individuals within any targeted group. Want to sway an election? Pull some people from a certain political party away from the polls. Want to crush a revolution that seeks to wrest control away from the machines? Lure people to a central location where your newly appropriated drones can eliminate them.

This may sound like science fiction, but it’s not. Emotional AI is coming; that’s a fact. And when we do release it, it will enable some amazing technological leaps that serve many greater-good goals. But we need to discuss the controls that must be put in place to ensure this technology doesn’t spin out of control. We need to discuss the ethics. We need to discuss the responsible application of this technology. And we need to protect against the worst, so that pulling the Internet’s power plug never has to happen in our near or distant future.

So as of this moment, I hope this article tells the technology community “this is where we are,” and gives us all a small prod toward discussing a safe and responsible rollout of AEI that delivers the prosocial benefits it promises. We’d like to suggest that people at the top of the technology community include these issues in conversations about AGI, and that people at the top of the social media ladder include these concerns in discussions about the coming rollout of AEI. Facebook’s announcement about suicide prevention automation earlier this week is great, but that same basic technology, extrapolated, has many more implications than serving a few individuals’ emotional needs. I believe we need to discuss a refined set of ethics behind AGI and AEI development. We need to discuss technological control mechanisms and threat mitigation systems that can help ensure attempted emotional manipulation of the masses doesn’t become the norm with AEI. And we need to sit down and discuss responsible rollout and control of this science, because frankly, the implications of this tech are already bigger than any of us can imagine.

Thank you for listening. Wish us all luck as we move into this exciting and uncertain future. And please share your thoughts. We’re listening.

