Carnegie Mellon AI beats top poker pros — a first

http://ift.tt/2jSYrFv

“Brains vs Artificial Intelligence” competition at the Rivers Casino in Pittsburgh (credit: Carnegie Mellon University)

Libratus, an AI developed by Carnegie Mellon University, has defeated four of the world’s best professional poker players in a marathon 120,000 hands of Heads-Up, No-Limit Texas Hold’em poker played over 20 days, CMU announced today (Jan. 31) — joining Deep Blue (for chess), Watson, and AlphaGo as major milestones in AI.

Libratus led the pros by a collective $1,766,250 in chips.* The tournament, called “Brains Vs. Artificial Intelligence: Upping the Ante,” was held at the Rivers Casino in Pittsburgh from Jan. 11–30.

The developers of Libratus — Tuomas Sandholm, professor of computer science, and Noam Brown, a Ph.D. student in computer science — said the sizable victory is statistically significant and not simply a matter of luck. “The best AI’s ability to do strategic reasoning with imperfect information has now surpassed that of the best humans,” Sandholm said. “This is the last frontier, at least in the foreseeable horizon, in game-solving in AI.”

This new AI milestone has implications for any realm in which information is incomplete and opponents sow misinformation, said Frank Pfenning, head of the Computer Science Department in CMU’s School of Computer Science. Business negotiation, military strategy, cybersecurity, and medical treatment planning could all benefit from automated decision-making using a Libratus-like AI.

“The computer can’t win at poker if it can’t bluff,” Pfenning explained. “Developing an AI that can do that successfully is a tremendous step forward scientifically and has numerous applications. Imagine that your smartphone will someday be able to negotiate the best price on a new car for you. That’s just the beginning.”

How the pros taught Libratus about its weaknesses

Brains vs AI scorecard (credit: Carnegie Mellon University)

So how was Libratus able to improve day to day during the competition? It turns out it was the pros themselves who taught Libratus about its weaknesses. “After play ended each day, a meta-algorithm analyzed what holes the pros had identified and exploited in Libratus’ strategy,” Sandholm explained. “It then prioritized the holes and algorithmically patched the top three using the supercomputer each night.

“This is very different than how learning has been used in the past in poker. Typically researchers develop algorithms that try to exploit the opponent’s weaknesses. In contrast, here the daily improvement is about algorithmically fixing holes in our own strategy.”
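
CMU has not published the meta-algorithm itself, but Sandholm’s description suggests a prioritize-and-patch loop. The sketch below is a hypothetical illustration, not Libratus code: the hole analysis and the overnight re-solving are reduced to toy stand-ins, and only the “rank the leaks, patch the top three” structure is taken from the quote above.

```python
import heapq
from dataclasses import dataclass

@dataclass
class Hole:
    """A leak in the AI's strategy that the pros exploited (hypothetical)."""
    situation: str     # label for the betting situation where the leak occurred
    chips_lost: float  # chips the pros won by exploiting it

def find_exploited_holes(hand_histories):
    """Toy stand-in for Libratus' analysis pass: aggregate the day's
    losses by the situation in which they occurred."""
    losses = {}
    for situation, chips in hand_histories:
        if chips < 0:  # the AI lost chips in this situation
            losses[situation] = losses.get(situation, 0.0) - chips
    return [Hole(s, c) for s, c in losses.items()]

def patch(strategy, hole):
    """Toy stand-in for the overnight re-solving of a leaky subgame."""
    patched = dict(strategy)
    patched[hole.situation] = "re-solved overnight"
    return patched

def nightly_repair(strategy, hand_histories, budget=3):
    """Prioritize the holes the pros found today and patch the worst few,
    mirroring the 'top three each night' budget Sandholm describes."""
    holes = find_exploited_holes(hand_histories)
    for hole in heapq.nlargest(budget, holes, key=lambda h: h.chips_lost):
        strategy = patch(strategy, hole)
    return strategy

# Toy day of play: four situations leaked chips; only the worst three get fixed.
hands = [("river overbet", -1200), ("limp pot", -300), ("4-bet bluff", -800),
         ("river overbet", -500), ("min-raise", -100)]
print(nightly_repair({}, hands))
```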

Sandholm also said that Libratus’ end-game strategy was a major advance. “The end-game solver has a perfect analysis of the cards,” he said. It was able to update its strategy for each hand in a way that ensured any late changes would only improve the strategy. Over the course of the competition, the pros responded by making more aggressive moves early in the hand, no doubt to avoid playing in the deep waters of the endgame where the AI had an advantage, he added.

Converging high-performance computing and AI

Professor Sandholm and Bridges supercomputer (credit: Carnegie Mellon University)

Libratus’ victory was made possible by the Pittsburgh Supercomputing Center’s Bridges computer. Libratus drew on the raw power of approximately 600 of Bridges’ 846 compute nodes. Bridges’ total speed is 1.35 petaflops, about 7,250 times as fast as a high-end laptop, and its memory is 274 terabytes, about 17,500 times as much as you’d get in that laptop. This computing power gave Libratus the ability to play four of the best Texas Hold’em players in the world at once and beat them.
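
Those laptop comparisons are easy to sanity-check. Working backward from the quoted figures, the implied baseline is a laptop of roughly 190 gigaflops with about 16 GB of memory, plausible for a high-end machine of the era:

```python
# Back out the "high-end laptop" implied by the quoted Bridges figures.
bridges_flops = 1.35e15      # 1.35 petaflops
bridges_memory_tb = 274      # terabytes

laptop_flops = bridges_flops / 7250
laptop_memory_gb = bridges_memory_tb * 1000 / 17500

print(f"implied laptop speed:  {laptop_flops / 1e9:.0f} gigaflops")  # ~186
print(f"implied laptop memory: {laptop_memory_gb:.1f} GB")           # ~15.7
```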

“We designed Bridges to converge high-performance computing and artificial intelligence,” said Nick Nystrom, PSC’s senior director of research and principal investigator for the National Science Foundation-funded Bridges system. “Libratus’ win is an important milestone toward developing AIs to address complex, real-world problems. At the same time, Bridges is powering new discoveries in the physical sciences, biology, social science, business and even the humanities.”

Sandholm said he will continue his research push on the core technologies involved in solving imperfect-information games and in applying these technologies to real-world problems. That includes his work with Optimized Markets, a company he founded to automate negotiations.

“CMU played a pivotal role in developing both computer chess, which eventually beat the human world champion, and Watson, the AI that beat top human Jeopardy! competitors,” Pfenning said. “It has been very exciting to watch the progress of poker-playing programs that have finally surpassed the best human players. Each one of these accomplishments represents a major milestone in our understanding of intelligence.”

Heads-Up No-Limit Texas Hold’em is a complex game, with 10^160 (the number 1 followed by 160 zeroes) information sets — each set being characterized by the path of play in the hand as perceived by the player whose turn it is. The AI must make decisions without knowing all of the cards in play, while trying to sniff out bluffing by its opponent. As “no-limit” suggests, players may bet or raise any amount, up to all of their chips.

Sandholm will share Libratus’ secrets now that the competition is over, beginning with invited talks at the Association for the Advancement of Artificial Intelligence meeting Feb. 4–9 in San Francisco and continuing in submissions to peer-reviewed scientific conferences and journals.

* The pros — Dong Kim, Jimmy Chou, Daniel McAulay and Jason Les — will split a $200,000 prize purse based on their respective performances during the event. McAulay, of Scotland, said Libratus was a tougher opponent than he expected, but it was exciting to play against it. “Whenever you play a top player at poker, you learn from it,” he said.


Carnegie Mellon University | Brains Vs. AI Rematch: Why Poker?

from KurzweilAI http://ift.tt/2ke12rT


When Electronic Witnesses Are Everywhere, No Secret’s Safe

On November 22, 2015, Victor Collins was found dead in the hot tub of his co-worker, James Andrew Bates. In the investigation that followed, Bates was charged with first-degree murder in February; he pleaded not guilty.

One of Amazon’s Alexa-enabled Echo devices was being used to stream music at the crime scene. Equipped with seven mics, the device constantly listens for a “wake word” that activates a command. Beginning just a second before a wake word is sensed, and continuing after it, Echo records audio data and streams it to Amazon’s cloud.
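
Amazon has not published how Echo implements this, but capturing sound from just before the wake word implies some kind of rolling buffer: the device keeps the last moment of audio in memory at all times, and only when the wake word fires does that buffered pre-roll leave the device. A minimal sketch of the idea, with a hypothetical detect_wake_word standing in for the real on-device detector:

```python
from collections import deque

SAMPLE_RATE = 16000     # audio samples per second (typical for speech)
PRE_ROLL_SECONDS = 1.0  # keep roughly one second of audio before the wake word
CHUNK_SAMPLES = 1600    # 0.1 s of audio per chunk

def detect_wake_word(chunk):
    """Hypothetical stand-in for the on-device 'Alexa' detector."""
    return b"alexa" in chunk

def listen(microphone_chunks):
    """Continuously buffer audio; emit the pre-roll window on a wake word."""
    max_chunks = int(PRE_ROLL_SECONDS * SAMPLE_RATE / CHUNK_SAMPLES)
    ring = deque(maxlen=max_chunks)  # oldest chunks fall off automatically
    for chunk in microphone_chunks:
        ring.append(chunk)
        if detect_wake_word(chunk):
            # Only now does audio leave the device: the buffered second that
            # preceded the wake word. A real device would then keep streaming
            # the audio that follows, until the spoken command ends.
            yield b"".join(ring)

# Toy stream: silence, then a wake word; only the window around it is emitted.
stream = [b"." * CHUNK_SAMPLES, b"...alexa...", b"play music"]
for upload in listen(iter(stream)):
    print(len(upload), "bytes would be streamed to the cloud")
```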

On the night of the crime, it’s possible (but not certain) the device recorded audio that could help the investigation.

Police have requested Amazon hand over Bates’ cloud-based customer data, but the company is refusing. Meanwhile, the debacle is kicking up big questions around the privacy implications of our always-listening smart devices.

Marc Goodman, former LAPD officer and Singularity University’s faculty chair for policy, law, and ethics, is an expert on cybersecurity and the threats posed by the growing number of connected sensors in our homes, pockets, cars, and offices.

We interviewed Goodman to examine the privacy concerns this investigation is highlighting and the next generation of similar cases we can expect in the future.


If Alexa only records for a second after sensing a “wake word,” is that enough information to make a call on a murder case? If a human witness heard that same amount of information, would that be a valid source?

Absolutely. I don’t think it’s about the quantity of time that people speak.

I’ve investigated many cases where the one line heard by witnesses was, “I’m going to kill you.” You can say that in one second. If you can get a voice recording of somebody saying, “I’m going to kill you,” then that’s pretty good evidence, whether that be a witness saying, “Yes, I heard him say that,” or an electronic recording of it.

I think Amazon is great, and we have no reason to doubt them. That said, they say Echo is only recording when you say the word “Alexa,” but that means that it has to be constantly listening for the word Alexa.

People who believe in privacy and don’t want all of their conversations recorded take Amazon at its word that that is actually the case. But how many people have actually examined the code? The code hasn’t been put out there for vetting by a third party, so we don’t actually know what is going on.

What other privacy concerns does this case surface? Are there future implications that people aren’t talking about, but should be?

Everything is hackable, so it won’t be long before Alexa gets a virus. There is no doubt in my mind that hackers are going to be working on that—if they aren’t already. Once that happens, could they inadvertently be recording all of the information you say in your home?

We have already seen these types of man-in-the-middle attacks, so I think that these are all relevant questions to be thinking about.

Down the road the bigger question is going to be—and I am sure that criminals will be all over this if they aren’t already—if I have 100 hours of you talking to Alexa, Siri, or Google Home, then I can create a perfect replication of your voice.

In other words, if I have enough data to faithfully reproduce your voice, I can type out any word into a computer, and then “you” will speak those words.
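
Goodman doesn’t name a technique, and no real voice-cloning library appears below; this is only a staged outline of the attack he describes, with every function a labeled placeholder for the work an attacker would need to do:

```python
# Staged outline of the voice-cloning scenario described above. Every
# function here is a placeholder, not a real library call; the point is
# the pipeline: harvest speech, fit a voice model, synthesize new words.

def collect_recordings(target):
    """Stage 1 (placeholder): gather hours of the target's recorded speech,
    e.g., from voice-assistant history."""
    return [f"{target}_clip_{i}.wav" for i in range(100)]

def train_voice_model(recordings):
    """Stage 2 (placeholder): fit a speech-synthesis model to the voice."""
    return {"voice": "target", "clips": len(recordings)}

def synthesize(model, text):
    """Stage 3 (placeholder): make the cloned voice say arbitrary words."""
    return f"<audio of {text!r} in voice trained on {model['clips']} clips>"

model = train_voice_model(collect_recordings("victim"))
print(synthesize(model, "Yes, I authorize the transfer."))
```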

As a former police officer, do you have a specific stance on whether Amazon should hand over Bates’ customer data and whether customer-generated data like this should be used for criminal investigations?

Many years ago when the first smart internet-enabled refrigerators came out, people thought I was crazy when I joked about a cop interviewing the refrigerator at the scene of a crime. Back then, the crime I envisioned was that of a malnourished child wherein the police could query the refrigerator to see if there was food in the house or if the refrigerator contained nothing but beer.

Alexa is at the forefront of all of this right now, but what will become more interesting for police from an investigative perspective is when they’re eventually not interviewing just one device in your home, but interviewing 20 devices in your home, in the very same way that you would question multiple witnesses at the scene of a homicide or a car crash.

Once you get a chorus of 20 different internet-enabled devices in your home—iPhones, iPads, smart refrigerators, smart televisions, Nest, and security systems—then you start getting really good intelligence about what people are doing at all times of the day. That becomes really fascinating—and foretells a privacy nightmare.

So, I wanted to broaden the issue and say that this is maybe starting with Alexa, but this is going to be a much larger matter moving forward.

As to the specifics of this case, here in the United States, and in many democratic countries around the world, people have a right to be secure in their home against unreasonable search and seizure. Specifically, in the US people have the Fourth Amendment right to be secure in their papers, their writings, etc. in their homes. The only way that information can be seized is through a court warrant, issued by a third party judge after careful review.

Is there a law that fundamentally protects any data captured in your home?

The challenge with all of these IoT devices is that the law, particularly in the US, is extremely murky. Because your data is often being stored in the cloud, the courts apply a very weak level of privacy protection to that.

For example, when your garbage is in your house it is considered your private information. But once you take out your garbage and put it in front of your house for the garbage men to pick up, then it becomes public information, and anybody can take it—a private investigator, a neighbor, anybody is allowed to rifle through your garbage because you have given it up. That is sort of the standard that the federal courts in the US have applied to cloud data.

The way the law is written is that your data in the cloud has a much lower standard of protection because you have chosen to already share it with a third party. For example, since you disclosed it to a third party [like Google or Amazon], it is not considered your privileged data anymore. It no longer has the full protection of “papers” under the Fourth Amendment, due to something known as the Third Party Doctrine. It is clear that our notions of privacy and search and seizure need to be updated for the digital age.

Should home-based IoT devices have the right to remain silent?

Well, I very much like the idea of devices taking the Fifth. I am sure that once we have some sort of sentient robots, they will request the right to take the Fifth Amendment. That will be really interesting.

But our current devices are not sentient, and almost all of them are covered instead by terms of service. The same is true with an Echo device — the terms of service dictate what can be done with your data. Broadly speaking, 30,000-word terms of service are written to protect companies, not you.

Most companies like Facebook take an extremely broad approach, because their goal is to maximize data extrusion from you, because you are not truly Facebook’s customer—you’re their product. You’re what they are selling to the real customers, the advertisers.

The problem is that these companies know that nobody reads their terms of service, and so they take really strong advantage of people.

Five years from now, what will the “next generation” of these types of cases look like? 

I think it will be video, with ubiquitous cameras. We will definitely see more of these cases. Recording audio and video is all happening now, but I would say what might be five years out is recreation: taking a voice, for example, and recreating it so faithfully that even someone’s mom can’t tell the difference.

Then, with that same video down the road, when people have the data to understand us better than we do ourselves, they’ll be able to carry out emotional manipulation. By that I mean people can use algorithms that already exist to tell when you are angry and when you are upset.

There was a famous Facebook study that got Facebook in a lot of trouble. In the study, Facebook showed thousands of people a slew of really sad and depressing stories. What they found is that people were more depressed after seeing them: when Facebook shows you more sad stories, they make you sadder. When they show you more happy stories, they make you happier. And this means that you can manipulate people by knowing them [in this way].

Facebook did all this testing on people without clearing it through any type of institutional review board. But in clinical research where you manipulate people’s psychology, the study has to be approved by a university or scientific ethics board before you can do it.

MIT had a study called Psychopath, where, based upon people’s [Facebook] postings, they were able to determine whether or not a person was schizophrenic, or exhibited traits of schizophrenia. MIT also had another project called Gaydar, where they were able to tell if someone was gay, even if the user was still in the closet, based upon their postings.

All of these things mean that our deeper, innermost secrets will become knowable in the very near future.

How can we reduce the risk our data will be misused?

These IoT devices, despite all of the benefits they bring, will be the trillion-sensor source of all of this data. This means that, as consumers, we need to think about what those terms of services are going to be. We need to push back on them, and we may even need legislation to say what it is that both the government and companies can do with our data without our permission.

Today’s Alexa example is just one of what will be thousands of similar such cases in the future. We are wiring the world much more quickly than we are considering the public policy, legal, and ethical implications of our inventions.

As a society, we would do well to consider those important social needs alongside our technological achievements.

Marc Goodman is the author of Future Crimes, a New York Times and Wall Street Journal best seller that was recently named Amazon’s best business book of 2015. Connect further with Marc on Twitter @FutureCrimes.

Image Source: Shutterstock

from Singularity Hub http://ift.tt/2jzzfQJ

Berkeley Launches RISELab, Enabling Computers to Make Intelligent Real-Time Decisions

By Berkeley News

January 31, 2017


The RISELab logo (credit: University of California, Berkeley)

The University of California, Berkeley recently launched the RISELab, the next step in a series of five-year intensive research labs in computer science.


RISELab aims to improve how machines make intelligent decisions based on real-time input. The lab will focus on the development of data-intensive systems that provide Real-time Intelligence with Secure Execution (RISE).


The lab’s mission is to address “a longstanding grand challenge in computing: to enable machines to rapidly take intelligent actions based on real-time data and context from the world around them,” says Ion Stoica, RISELab’s principal investigator and director. He says the technology has applications wherever computing decisions need to interact with the world in real time.


The RISELab will build on the success of AMPLab, a pioneering big data effort that focused mostly on offline data analysis problems.


“The RISE challenge is to allow machines to make decisions in a tight loop with the real world,” says Berkeley professor Joseph Gonzalez. “So we need systems that can both understand the big context, and continuously adapt their beliefs and make robust decisions in real time.”
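
RISELab has not tied its mission to any single algorithm, but the decide-observe-adapt loop Gonzalez describes is the territory of online learning. As a minimal illustration (not RISELab code), here is an epsilon-greedy bandit that makes a decision, observes a reward from the world, and updates its beliefs before the next request arrives:

```python
import random

class EpsilonGreedyBandit:
    """Minimal online decision-maker: act, observe a reward, update, repeat."""
    def __init__(self, n_actions, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_actions
        self.values = [0.0] * n_actions  # running mean reward per action

    def decide(self):
        if random.random() < self.epsilon:  # occasionally explore
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, action, reward):
        # Incremental mean: beliefs adapt with every single observation.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# Simulated tight loop with the world: action 2 secretly pays off best.
random.seed(0)
true_payoffs = [0.2, 0.5, 0.8]
bandit = EpsilonGreedyBandit(n_actions=3)
for _ in range(1000):
    action = bandit.decide()
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    bandit.update(action, reward)
print("learned values:", [round(v, 2) for v in bandit.values])
```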



From Berkeley News



The Struggle to Make AI Less Biased Than Its Creators

The dirty little secret is out about artificial intelligence.

No, not the one about machines taking over the world. That’s an old one. This one is more insidious. Data scientists, AI experts and others have long suspected it would be a problem. But it’s only within the last couple of years, as AI or some version of machine learning has become nearly ubiquitous in our lives, that the issue has come to the forefront.

AI is prejudiced. Sexism. Ageism. Racism. Name an -ism, and more likely than not, the results produced by our machines have a bias in one or more ways. But an emerging think tank dubbed Diversity.ai believes our machines can do better than their creators when it comes to breaking down stereotypes and other barriers to inclusion.

The problem has been well documented: in 2015, for example, Google’s photo app embarrassingly tagged some black people as gorillas. A recent pre-print paper reported widespread human bias in the metadata for a popular database of Flickr images used to train neural networks. Even more disturbing was an investigative report last year by ProPublica that found software used to predict future criminal behavior—a la the film “Minority Report”—was biased against minorities.

For Anastasia Georgievskaya, the aha moment that machines can learn prejudice came during work on an AI-judged beauty contest developed by Youth Laboratories, a company she co-founded in 2015 that uses machine vision and AI to study aging. Almost all the winners picked by the computer jury were white.

“I thought that discrimination by the robots is likely, but only in a very distant future,” says Georgievskaya by email. “But when we started working on Beauty.AI, we realized that people are discriminating [against] other people by age, gender, race and many other parameters, and nobody is talking about it.”

Algorithms can always be improved, but a machine can only learn from the data it is fed.

“We struggled to find the data sets of older people and people of color to be able to train our deep neural networks,” Georgievskaya says. “And after the first and second Beauty.ai contests, we realized that it is a major problem.”

Age bias in available clinical data has frustrated Alex Zhavoronkov, CEO of Insilico Medicine, Inc., a bioinformatics company that combines genomics, big data analysis and deep learning for drug discovery related to aging and age-related diseases. A project called Aging.ai, which uses a deep neural network trained on hundreds of human blood tests to predict age, had high errors in older populations.
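
Aging.ai’s internals aren’t public, but the failure mode Zhavoronkov describes is easy to surface: report the model’s error per age bracket instead of as a single aggregate. A sketch with made-up numbers, purely for illustration:

```python
from collections import defaultdict

def error_by_age_group(records, bucket_size=20):
    """Mean absolute error of predicted age, broken out by true-age bracket.
    A single aggregate error can hide a model that is accurate for the
    young and far off for the old: the failure mode described above."""
    buckets = defaultdict(list)
    for true_age, predicted_age in records:
        bracket = true_age // bucket_size * bucket_size
        buckets[bracket].append(abs(true_age - predicted_age))
    return {low: sum(errs) / len(errs) for low, errs in sorted(buckets.items())}

# Made-up (true age, predicted age) pairs: errors grow where data was scarce.
data = [(25, 27), (31, 29), (38, 41), (45, 49), (62, 53), (71, 58), (84, 66)]
for low, mae in error_by_age_group(data).items():
    print(f"ages {low}-{low + 19}: mean absolute error {mae:.1f} years")
```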

“Our company came to study aging not only because we want to extend healthy productive longevity, but to fix one important problem in the pharmaceutical industry—age bias,” Zhavoronkov says. “Many clinical trials cut off patient enrollment by age, and thousands of healthy but older people miss their chance to get a treatment.”

Georgievskaya and like-minded scientists not only recognized the problem, they started to study it in depth—and do something about it.

“We realized that it’s essential to develop routines that test AI algorithms for discrimination and bias, and started experimenting with the data, methods and the metrics,” she says. “Our company is not only focused on beauty, but also on healthcare and visual-imaging biomarkers of health. And there we found many problems in age, gender, race and wealth bias.”
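
Georgievskaya doesn’t spell out which routines her team experimented with. One common first-pass metric is demographic parity, which compares how often an algorithm produces a favorable outcome across groups; the sketch below, with invented contest decisions, shows the idea:

```python
def selection_rates(outcomes):
    """Favorable-outcome rate per group; `outcomes` maps a group label
    to a list of 0/1 decisions made by the algorithm."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def parity_gap(outcomes):
    """Demographic-parity gap: spread between the best- and worst-treated
    groups. A value near zero suggests parity on this one metric."""
    rates = selection_rates(outcomes).values()
    return max(rates) - min(rates)

# Invented contest decisions (1 = picked as a winner) for two groups.
decisions = {"group_a": [1, 1, 0, 1, 0, 1], "group_b": [0, 0, 1, 0, 0, 0]}
print(selection_rates(decisions))                  # ~0.67 vs ~0.17
print(f"parity gap: {parity_gap(decisions):.2f}")  # 0.50: a red flag
```

Parity on this one number is not proof of fairness, which is why researchers balance it against predictive power, as Zhavoronkov notes above.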

As Zhavoronkov envisions it, Diversity.ai will bring together a “diverse group of people with a very ‘fresh’ perspective, who are not afraid of thinking out of the box. Essentially, it is a discussion group with many practical projects and personal and group goals.”

His own goal? “My personal goal is to prove that the elderly are being discriminated [against], and develop highly accurate multi-modal biomarkers of chronological and biological aging. I also want to solve the racial bias and identify the fine equilibrium between the predictive power and discrimination in the [deep neural networks].”

The group’s advisory board is still coming together, but already includes representatives from Elon Musk’s billion-dollar non-profit AI research company OpenAI, computing company Nvidia, a leading South Korean futurist, and the Future of Humanity Institute at the University of Oxford.

Nell Watson, founder and CEO of Poikos, a startup that developed a 3D body scanner for mobile devices, is one of the advisory board members. She’s also an adjunct in the Artificial Intelligence and Robotics track at Singularity University. She recently launched OpenEth.org, which she calls a non-profit machine ethics research company that hopes to advance the field of machine ethics by developing a framework for analyzing various ethical situations.

She sees OpenEth.org and Diversity.ai as natural allies toward the goal of developing ethical, objective AI.

She explains that the OpenEth team is developing a blockchain-based public ledger system capable of analyzing contracts for adherence to a structure of ethics.

“[It] provides a classification of the contract’s contents, without necessarily needing for the contract itself to be public,” she explains. That means companies can safeguard proprietary algorithms while providing public proof of adherence to ethical standards.

“It also allows for a public signing of the ownership/responsibility for a given agent, so that anyone interacting with a machine will know where it came from and whether the ruleset that it’s running under is compatible with their own values,” she adds. “It’s a very ambitious project, but we are making steady progress, and I expect it to play a piece of many roles necessary in safeguarding against algorithmic bias.”
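
Watson doesn’t detail the ledger’s design, but the classify-without-revealing property she describes is commonly built from hash commitments: publish a cryptographic digest of the contract next to its ethics classification, so anyone later handed the contract can verify it matches the public record without the contents ever being posted. A minimal sketch using only Python’s standard library (an assumption about the mechanism, not OpenEth’s actual implementation):

```python
import hashlib
import json

def commit(contract_text, classification):
    """Publish only a digest of the contract plus its ethics label;
    the contract itself never leaves the company's hands."""
    digest = hashlib.sha256(contract_text.encode()).hexdigest()
    return json.dumps({"contract_sha256": digest,
                       "classification": classification})

def verify(contract_text, ledger_entry):
    """Anyone later handed the contract can check it matches the entry."""
    entry = json.loads(ledger_entry)
    return hashlib.sha256(contract_text.encode()).hexdigest() == entry["contract_sha256"]

contract = "The agent shall never contact users outside approved channels."
entry = commit(contract, "compliant: ruleset v1")  # this string is posted publicly
print(verify(contract, entry))                 # True: contents match the record
print(verify(contract + " (amended)", entry))  # False: any tampering is visible
```

Only the digest and the label are public; the digest changes completely if a single character of the contract does, which is what makes tampering visible.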

Georgievskaya says she hopes Diversity.ai can hold a conference later this year to continue to build awareness around issues of AI bias and begin work to scrub discrimination from our machines.

“Technologies and algorithms surround us everywhere and became an essential part of our daily life,” she says. “We definitely need to teach algorithms to treat us in the right way, so that we can live peacefully in [the] future.”


from Singularity Hub http://ift.tt/2knmRb1