Robot Monk Learns to ‘Speak’ English

By China Daily

August 21, 2017

Xian'er, robot monk

An English novice, the robot monk asks people not to ask difficult questions.

Credit: China Daily

The WeChat account of a robot “monk” in Beijing that uses artificial intelligence to speak with the public is communicating in English — although it still refers many questions to its master.

Xian’er, which is based at the Longquan Monastery in northwest Beijing, received 42,000 questions in its newly “learned” language within one day of beginning to use English.

When asked what it liked most, the robot monk replied, “I like ice cream, and wish I could have 100 ice cream cones at one time.” However, many questions were more likely to receive the response, “I need to ask my master.”

A physical version of Xian’er, which is dressed in a yellow robe like a Buddhist monk, was unveiled in October 2015. It still speaks only Mandarin.

The account on WeChat has about 1.2 million followers. Since the account was established in 2015, many people have chatted with it online.

Xian Qing, a monk from Longquan Monastery, said, “An English-speaking Xian’er can better spread Buddhist wisdom to people around the world.” He added that many overseas Chinese and foreigners like the cute monk.

A team of monks and volunteers, led by Han Yu, came together to create the robot. 


from Communications of the ACM: Artificial Intelligence http://ift.tt/2wxaK0R

Why We Should Send All Our Politicians to Space

[Video thumbnail: http://img.youtube.com/vi/GO5FwsblpT8/0.jpg]

Our world is far from perfect. While it has been getting better in many ways, we continually face challenges. War, political conflict, and social injustice continue to hinder human progress.

All one needs to do is turn on a mainstream news channel to see the issues our world faces today. Discrimination, political instability, climate change, terrorism, cyberattacks, refugee crises…the list goes on.

We often get so preoccupied with our issues here on Earth that we forget we are part of the grand cosmic arena. Let us zoom out from our planet and observe our actions and values through an objective lens. If an alien species were to observe us, what would they think of us as a species? Are most of our actions justifiable from a cosmic perspective? Are our politicians and leaders pushing humanity forward?

In the words of astronaut Edgar D. Mitchell, seeing Earth from space causes one to “develop an instant global consciousness…” He goes on to point out that “From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, ‘Look at that, you son of a bitch.’ ”

The Overview Effect

There is a remarkable shift in cognitive awareness that many astronauts experience upon leaving Earth. Viewing our “blue marble” from orbit or the lunar surface puts all of our problems and conflicts into a much bigger perspective. We become aware of the fragility of our tiny planet and the weight of our actions in the grand scheme of the universe. This is known as the “overview effect.”

From space, national boundaries and geographic differences disappear, and it becomes clear that at the end of the day, we are all fundamentally human. There are no dividing lines and no boundaries. Nationalism, patriotism, and tribal behavior have no meaning. All we see is just one planet, and it’s the only home we have.

On February 14, 1990, as the spacecraft Voyager 1 was leaving our planetary neighborhood, Carl Sagan suggested NASA engineers turn it around for one last look at Earth from 6.4 billion kilometers away. The picture that was taken depicts Earth as a tiny point of light—a “pale blue dot,” as it was called—only 0.12 pixels in size.

In an inspiring monologue that followed, Carl Sagan asks us to, “Think of the rivers of blood spilled by all those generals and emperors so that in glory and triumph they could become the momentary masters of a fraction of a dot. Think of the endless cruelties visited by the inhabitants of one corner of this pixel on the scarcely distinguishable inhabitants of some other corner.” Ultimately, the borders that we humans have mapped out on Earth are arbitrary human creations that have no real significance in the vast cosmic arena.

A Cosmic Perspective

What the overview effect leads to is a cosmic perspective. It is recognizing our place in the universe, the fragility of our planet, and the unimaginable potential we have as a species. It involves expanding our perspective of both space and time.

Unfortunately, many world leaders today fail to take such a perspective. Most politicians have yet to develop a reputation for thinking beyond their term limits. Many have yet to prioritize long-term human progress over short-term gains from power or money.

What we need is for our world leaders to unite rather than divide us as human beings and to promote global, and even cosmic, citizenship.

What if every world leader and politician truly experienced the existential transformation of the overview effect? Would they still seek to become “momentary masters of a fraction of a dot”? Would they continue to build walls and divide us? Probably not. It is likely that their missions and priorities would change for the better.

Obviously, giving everyone a trip to space is impractical—that is, unless space tourism becomes cheap and effective. But there are other ways to promote the much-needed “big-picture thinking.” For instance, we must upgrade the kind of values our education system promotes and equip future generations with a cosmic mindset. We can continue to educate and engage the public on the state of our planet and the need to upgrade our morality in the grand scheme of things.

But there are other ways as well. One exciting organization, The Overview Institute, has developed a virtual reality program that will allow users to experience the overview effect. It is a scalable tool that will make the existential transformation of the overview effect accessible to many.

An Existential Awakening

In the words of Sagan, the image of Earth from space “underscores our responsibility to deal more kindly with one another and to preserve and cherish the pale blue dot, the only home we’ve ever known.”

Experiencing the overview effect and developing a cosmic perspective is known to inspire more compassion for our fellow human beings. It stimulates a determination to successfully resolve all the problems we have here on Earth and focus on the issues that matter. It upgrades our consciousness, our values, and the kind of ambitions that we set forward for ourselves, both as individuals and as a species.

It is a powerful awakening of the mind and a fundamental redefinition of what it means to be human.

Image Credit: NASA/Visible Earth

from Singularity Hub http://ift.tt/2vSfFJ2

Using Machine Learning to Improve Patient Care

By MIT CSAIL

August 21, 2017

Doctors are often deluged by signals from charts, test results, and other metrics to keep track of. It can be difficult to integrate and monitor all of these data for multiple patients while making real-time treatment decisions, especially when data is documented inconsistently across hospitals.

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) explore ways for computers to help doctors make better medical decisions.

One team created a machine-learning approach called “ICU Intervene” that takes large amounts of intensive-care-unit (ICU) data, from vitals and labs to notes and demographics, to determine what kinds of treatments are needed for different symptoms. The system uses “deep learning” to make real-time predictions, learning from past ICU cases to make suggestions for critical care, while also explaining the reasoning behind these decisions.

The work is described in “Clinical Intervention Prediction and Understanding with Deep Neural Networks,” presented this month at the Machine Learning for Healthcare Conference in Boston.

“The system could potentially be an aid for doctors in the ICU, which is a high-stress, high-demand environment,” says PhD student Harini Suresh, lead author on the paper. “The goal is to leverage data from medical records to improve health care and predict actionable interventions.”

ICU Intervene was co-developed by Suresh, undergraduate student Nathan Hunt, postdoc Alistair Johnson, researcher Leo Anthony Celi, MIT Professor Peter Szolovits, and PhD student Marzyeh Ghassemi.

Another team developed an approach called “EHR Model Transfer” that can facilitate the application of predictive models on an electronic health record (EHR) system, despite being trained on data from a different EHR system. Specifically, using this approach the team showed that predictive models for mortality and prolonged length of stay can be trained on one EHR system and used to make predictions in another.

EHR Model Transfer was co-developed by lead authors Jen Gong and Tristan Naumann, both PhD students at CSAIL, as well as Szolovits and John Guttag, who is the Dugald C. Jackson Professor in Electrical Engineering. It is described in “Predicting Clinical Outcomes Across Changing Electronic Health Record Systems,” presented at KDD 2017, the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, in Halifax, Canada.

Both models were trained using data from the critical care database MIMIC, which includes de-identified data from roughly 40,000 critical care patients and was developed by the MIT Lab for Computational Physiology.

ICU Intervene

Integrated ICU data is vital to automating the process of predicting patients’ health outcomes.

“Much of the previous work in clinical decision-making has focused on outcomes such as mortality (likelihood of death), while this work predicts actionable treatments,” Suresh says. “In addition, the system is able to use a single model to predict many outcomes.”

ICU Intervene focuses on hourly prediction of five different interventions that cover a wide variety of critical care needs, such as breathing assistance, improving cardiovascular function, lowering blood pressure, and fluid therapy.

At each hour, the system extracts values from the data that represent vital signs, as well as clinical notes and other data points. All of the data are represented with values that indicate how far off a patient is from the average (to then evaluate further treatment).
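
The papers’ exact preprocessing is not detailed here, but one common way to express “how far off a patient is from the average” is per-feature standardization (z-scoring) against cohort statistics. A minimal sketch of that idea, with illustrative feature names and made-up population statistics rather than anything from the study:

```python
import pandas as pd

# Hypothetical hourly measurements for one ICU patient (feature names are illustrative).
hourly = pd.DataFrame({
    "heart_rate": [88, 95, 110, 120],
    "mean_bp":    [72, 70, 65, 60],
    "spo2":       [97, 96, 94, 92],
})

# Cohort statistics would normally be estimated from training data (e.g., MIMIC);
# the numbers below are placeholders for illustration only.
cohort_mean = pd.Series({"heart_rate": 85.0, "mean_bp": 78.0, "spo2": 96.5})
cohort_std  = pd.Series({"heart_rate": 15.0, "mean_bp": 12.0, "spo2": 2.5})

# Each value becomes "standard deviations away from the average patient".
z_scores = (hourly - cohort_mean) / cohort_std
print(z_scores.round(2))
```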

Importantly, ICU Intervene can make predictions far into the future. For example, the model can predict whether a patient will need a ventilator six hours later rather than just 30 minutes or an hour later. The team also focused on providing reasoning for the model’s predictions, giving physicians more insight.
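
As a rough sketch of that kind of setup, not the paper’s actual architecture: a window of hourly features feeds a small recurrent model that outputs, for each of the five intervention types, the probability of onset a fixed number of hours ahead. The window length, gap, feature count, and model size below are assumptions chosen for illustration:

```python
import torch
import torch.nn as nn

N_FEATURES, WINDOW, GAP = 16, 6, 6   # 6 h of history, predict onset 6 h later (illustrative)

class InterventionPredictor(nn.Module):
    """Toy recurrent classifier: a window of hourly features in, per-intervention probabilities out."""
    def __init__(self, n_features: int, hidden: int = 64, n_interventions: int = 5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_interventions)

    def forward(self, x):                          # x: (batch, WINDOW, n_features)
        _, (h_n, _) = self.lstm(x)                 # final hidden state summarizes the window
        return torch.sigmoid(self.head(h_n[-1]))   # probability of each intervention starting GAP hours ahead

model = InterventionPredictor(N_FEATURES)
batch = torch.randn(8, WINDOW, N_FEATURES)         # 8 synthetic patient windows
print(model(batch).shape)                          # torch.Size([8, 5])
```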

“Deep neural-network-based predictive models in medicine are often criticized for their black-box nature,” says Nigam Shah, an associate professor of medicine at Stanford University who was not involved in the paper. “However, these authors predict the start and end of medical interventions with high accuracy, and are able to demonstrate interpretability for the predictions they make.”

The team found that the system outperformed previous work in predicting interventions, and was especially good at predicting the need for vasopressors, medications that tighten blood vessels and raise blood pressure.

In the future, the researchers will be trying to improve ICU Intervene to be able to give more individualized care and provide more advanced reasoning for decisions, such as why one patient might be able to taper off steroids, or why another might need a procedure like an endoscopy.

EHR Model Transfer

Another important consideration for leveraging ICU data is how it’s stored and what happens when that storage method gets changed. Existing machine-learning models need data to be encoded in a consistent way, so the fact that hospitals often change their EHR systems can create major problems for data analysis and prediction.

That’s where EHR Model Transfer comes in. The approach works across different versions of EHR platforms, using natural language processing to identify clinical concepts that are encoded differently across systems and then mapping them to a common set of clinical concepts (such as “blood pressure” and “heart rate”).
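
As a toy illustration of that mapping step (the real system uses natural language processing and medical ontologies; the local item codes below are invented):

```python
# Invented local codes from two hypothetical EHR versions, mapped to a shared vocabulary.
SYSTEM_A = {"ITEM_211": "heart_rate", "ITEM_455": "blood_pressure"}
SYSTEM_B = {"HR_MON": "heart_rate", "NIBP_SYS": "blood_pressure"}

def to_shared_concepts(record: dict, mapping: dict) -> dict:
    """Re-key a raw record onto the shared concept vocabulary, dropping unmapped items."""
    return {mapping[code]: value for code, value in record.items() if code in mapping}

raw_a = {"ITEM_211": 92, "ITEM_455": 118, "ITEM_999": "unmapped extra"}
raw_b = {"HR_MON": 92, "NIBP_SYS": 118}

# Both records land in the same feature space, so one model can score either.
print(to_shared_concepts(raw_a, SYSTEM_A))   # {'heart_rate': 92, 'blood_pressure': 118}
print(to_shared_concepts(raw_b, SYSTEM_B))   # {'heart_rate': 92, 'blood_pressure': 118}
```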

For example, a patient in one EHR platform could be switching hospitals and would need their data transferred to a different type of platform. EHR Model Transfer aims to ensure that the model could still predict aspects of that patient’s ICU visit, such as their likelihood of a prolonged stay or even of dying in the unit.

“Machine-learning models in health care often suffer from low external validity, and poor portability across sites,” says Shah. “The authors devise a nifty strategy for using prior knowledge in medical ontologies to derive a shared representation across two sites that allows models trained at one site to perform well at another site. I am excited to see such creative use of codified medical knowledge in improving portability of predictive models.”

With EHR Model Transfer, the team tested their model’s ability to predict two outcomes: mortality and the need for a prolonged stay. They trained it on one EHR platform and then tested its predictions on a different platform. EHR Model Transfer was found to outperform baseline approaches and demonstrated better transfer of predictive models across EHR versions compared to using EHR-specific events alone.
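
A minimal sketch of that evaluation pattern, with synthetic data standing in for two hospitals’ records after they have been mapped to a shared representation (the model choice and the simulated site shift are assumptions, not details from the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def synthetic_cohort(n, shift=0.0):
    """Synthetic shared-representation features and outcome labels; `shift` mimics site differences."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)
    return X, y

X_a, y_a = synthetic_cohort(2000)             # "trained on one EHR system"
X_b, y_b = synthetic_cohort(500, shift=0.3)   # "used to make predictions in another"

clf = LogisticRegression(max_iter=1000).fit(X_a, y_a)
auc = roc_auc_score(y_b, clf.predict_proba(X_b)[:, 1])
print(f"AUC at the new site: {auc:.3f}")
```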

In the future, the EHR Model Transfer team plans to evaluate the system on data and EHR systems from other hospitals and care settings.

Both papers were supported, in part, by the Intel Science and Technology Center for Big Data and the National Library of Medicine. The KDD 2017 paper was additionally supported by the U.S. National Science Foundation and Quanta Computer Inc.

Originally published on MIT News.
Reprinted with permission of MIT News.

from Communications of the ACM: Artificial Intelligence http://ift.tt/2uY4wHE

The dangerous game of the ‘highwayman’ beetle

Phys.Org

ASU postdoctoral researcher Christina Kwapich looks at a sample collection of ants with an ant cricket in the Insect Rearing and Behavior Lab in Tempe. Credit: Charlie Leight/ASU Now. We all know the type. The project co-worker who doesn’t really work…

from ants – Google News http://ift.tt/2v73N2N

‘Electronic skin’ takes wearable health monitors to the next level

Researchers have developed a new electronic skin that can track heart rate, respiration, muscle movement, and other health data. The electronic skin offers several improvements over existing trackers, including greater flexibility, portability, and a self-adhesive patch design.

from Artificial Intelligence News — ScienceDaily http://ift.tt/2v710XC