Ray Kurzweil, a director of engineering at Google, reveals plans for a future version of Google's "Smart Reply" machine-learning email software (and more) in a Wired article by Tom Simonite published Wednesday (Aug. 2, 2017).
Running on mobile Gmail and Google Inbox, Smart Reply suggests up to three replies to an email message, saving typing time or giving you ideas for a better reply.
Kurzweil’s team is now “experimenting with empowering Smart Reply to elaborate on its initial terse suggestions,” Simonite says.
“Tapping a Continue button [in response to an email] might cause ‘Sure I’d love to come to your party!’ to expand to include, for example, ‘Can I bring something?’ He likes the idea of having AI pitch in anytime you’re typing, a bit like an omnipresent, smarter version of Google’s search autocomplete. ‘You could have similar technology to help you compose documents or emails by giving you suggestions of how to complete your sentence,’ Kurzweil says.”
As Simonite notes, Kurzweil's software is based on his hierarchical theory of intelligence, articulated in his latest book, How to Create a Mind, and in more detail in an arXiv paper by Kurzweil and key members of his team, published in May.
“Kurzweil’s work outlines a path to create a simulation of the human neocortex (the outer layer of the brain where we do much of our thinking) by building a hierarchy of similarly structured components that encode increasingly abstract ideas as sequences,” according to the paper. “Kurzweil provides evidence that the neocortex is a self-organizing hierarchy of modules, each of which can learn, remember, recognize and/or generate a sequence, in which each sequence consists of a sequential pattern from lower-level modules.”
The paper further explains that Smart Reply previously used "long short-term memory" (LSTM) networks*, "which are much slower than feed-forward networks [used in the new software] for training and inference," because with LSTM it takes more computation to handle longer sequences of words.
Kurzweil's team was able to produce email responses of similar quality to the LSTM model's while using fewer computational resources, by training hierarchically connected layers of simulated neurons on clustered numerical representations of text. Essentially, the approach propagates information through a sequence of ever more complex pattern recognizers until the final patterns are matched to optimal responses.
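To make the idea concrete, here is a minimal sketch (not Google's implementation — the vocabulary, layer sizes, candidate replies, and similarity matching below are all illustrative assumptions): an email is turned into a numerical representation, pushed through stacked feed-forward layers standing in for the hierarchy of pattern recognizers, and matched against a small set of candidate responses.

```python
# Illustrative sketch only: a toy feed-forward "hierarchy" that encodes
# text numerically and picks the closest candidate reply. All names and
# sizes here are hypothetical, chosen for clarity, not taken from the paper.
import math
import random

random.seed(0)

VOCAB = ["party", "sure", "love", "come", "meeting", "report", "thanks"]
REPLIES = ["Sure, I'd love to come!", "I'll send the report.", "Thanks!"]

def embed(text):
    """Bag-of-words vector over a tiny fixed vocabulary."""
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def layer(vec, weights):
    """One feed-forward layer: a linear map followed by ReLU."""
    return [max(0.0, sum(w * x for w, x in zip(row, vec))) for row in weights]

def random_weights(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

# Two stacked layers stand in for the hierarchy of pattern recognizers:
# each layer re-encodes the previous layer's output at a higher level.
W1 = random_weights(5, len(VOCAB))
W2 = random_weights(4, 5)

def encode(text):
    return layer(layer(embed(text), W1), W2)

def best_reply(email):
    """Match the email's encoding to the nearest candidate reply."""
    enc = encode(email)
    return max(REPLIES, key=lambda r: cosine(enc, encode(r)))

print(best_reply("Would you come to my party"))
```

Note that every step here is a single fixed-size matrix pass with no recurrence, which is the property that makes the feed-forward approach cheaper to train and run than the LSTM it replaced.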
Kona: linguistically fluent software
But underlying Smart Reply, Simonite reports, is "a system for understanding the meaning of language," according to Kurzweil.
“Codenamed Kona, the effort is aiming for nothing less than creating software as linguistically fluent as you or me. ‘I would not say it’s at human levels, but I think we’ll get there,’ Kurzweil says. More applications of Kona are in the works and will surface in future Google products, he promises.”
* The previous sequence-to-sequence (Seq2Seq) framework [described in this paper] uses “recurrent neural networks (RNNs), typically long short-term memory (LSTM) networks, to encode sequences of word embeddings into representations that depend on the order, and uses a decoder RNN to generate output sequences word by word. …While Seq2Seq models provide a generalized solution, it is not obvious that they are maximally efficient, and training these systems can be slow and complicated.”
from KurzweilAI » News http://ift.tt/2wsxZWH