How to Stop Artificial Intelligence Being Biased

Bias in artificial intelligence can be hard to root out.


Niki Kilbertus and colleagues at the Max Planck Institute for Intelligent Systems in Germany have developed a new method to avoid embedding bias into machine-learning algorithms.

Their technique incorporates sensitive data into the training process while adding an independent regulator and applying cryptographic techniques.

When training the artificial intelligence (AI), an organization can use as much non-sensitive data as it needs, but both the organization and the regulator receive sensitive data only in encrypted form. That is still enough for the regulator to check whether the AI is making biased decisions, including bias that arises when sensitive attributes are inferred from non-sensitive data.
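The article does not detail the protocol, but the idea of auditing fairness without exposing raw sensitive attributes can be illustrated with a toy sketch. The example below uses simple additive secret sharing (an assumption for illustration, not the authors' implementation) so that two non-colluding auditors can jointly compute a demographic-parity gap from a model's decisions while neither sees any individual's sensitive attribute; all data and names are made up.

```python
# Toy sketch: compute a demographic-parity gap from secret-shared
# sensitive attributes. Illustrative only, not the authors' protocol.
import random

MOD = 2**31 - 1  # modulus for additive secret shares

def share(x):
    """Split integer x into two additive shares mod MOD."""
    r = random.randrange(MOD)
    return r, (x - r) % MOD

# Hypothetical model decisions y and sensitive attribute s (both 0/1).
y = [1, 0, 1, 1, 0, 1, 0, 0]
s = [0, 0, 1, 1, 0, 1, 1, 0]

# The organization secret-shares s_i and s_i*y_i between two auditors.
shares_s  = [share(si) for si in s]
shares_sy = [share(si * yi) for si, yi in zip(s, y)]

# Each auditor sums only its own shares; individual values stay hidden.
a_s  = sum(a for a, _ in shares_s)  % MOD
b_s  = sum(b for _, b in shares_s)  % MOD
a_sy = sum(a for a, _ in shares_sy) % MOD
b_sy = sum(b for _, b in shares_sy) % MOD

# Only the aggregates are reconstructed from the combined shares.
n        = len(y)
n_group1 = (a_s + b_s)   % MOD     # size of the s=1 group
pos_g1   = (a_sy + b_sy) % MOD     # positive decisions in that group
rate_g1  = pos_g1 / n_group1
rate_g0  = (sum(y) - pos_g1) / (n - n_group1)
gap = abs(rate_g1 - rate_g0)       # demographic-parity gap
print(round(gap, 3))
```

The auditors learn only group-level rates, which is the kind of aggregate a regulator needs to certify fairness; a real deployment would use a full secure multi-party computation or homomorphic-encryption scheme rather than this simplified sharing.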

Once assured, the regulator can issue the organization a fairness certificate. Because the check runs on encrypted data, the regulator never needs to learn the AI's inner workings, so trade secrets remain confidential.

From New Scientist
View Full Article – May Require Paid Subscription


Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA
