Future Day 2018

http://ift.tt/1psWieY

Join a Live 24-Hour Video Conversation March 1 on the Future With Futurists From Around the World, Hosted by The Millennium Project

Five international futurist organizations have joined forces to invite their members and the public around the world to come online at 12 noon in their time zone to explore how they can help build a better future.

On March 1, World Future Day, the organizations will conduct a 24-hour conversation about the world’s potential futures, challenges, and opportunities. The online video conference is open to the public, and the conversation will move around the world, with people joining and leaving whenever they want. The five organizations will provide facilitators for each of the 24 time zones where possible. Co-sponsoring the event with The Millennium Project are the Association of Professional Futurists, Club of Amsterdam, Humanity+, and the World Futures Studies Federation.

This will be the fifth year The Millennium Project has convened this global conversation among people of goodwill who share insights and collaborate to help build a better future.

“Whatever time zone you are in, you are invited at 12:00 noon in your time zone to click on https://hangouts.google.com/call/act3g5fh6vd7deoxq3xvb7zylue,” says Jerome Glenn, CEO of The Millennium Project.

If the limit on interactive video participants is reached, new arrivals will still be able to see and hear the conversation but will not appear on video or be heard; instead, they can type their questions and comments into the online chat box in the Google Hangout, and the facilitators will read these aloud live in the video conference. As people drop out, new video slots will open up.

“This is an open, no-agenda discussion about the future, but in general, people will be encouraged to share their ideas about how to build a better future, and if they can’t come online at 12 noon their time, they are welcome to join before or after that time as well. We will begin in New Zealand at 12 noon March 1, which is Feb. 28 at 6 p.m. in Washington, D.C., USA.”

—Event Producer

from KurzweilAI http://ift.tt/2oHHQ8v

Measuring deep-brain neurons’ electrical signals at high speed with light instead of electrodes

http://ift.tt/2F3huoy

MIT researchers have developed a light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. This could allow scientists to more effectively study how neurons behave, millisecond by millisecond, as the brain performs a particular function. (credit: Courtesy of the researchers)

Researchers at MIT have developed a new approach to measure electrical activity deep in the brain: using light — an easier, faster, and more informative method than inserting electrodes.

They’ve developed a new light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. This could allow scientists to study how neurons behave, millisecond by millisecond, as the brain performs a particular function.

Better than electrodes. “If you put an electrode in the brain, it’s like trying to understand a phone conversation by hearing only one person talk,” says Edward Boyden*, Ph.D., an associate professor of biological engineering and brain and cognitive sciences at MIT and a pioneer in optogenetics (a technique that allows scientists to control neurons’ electrical activity with light by engineering them to express light-sensitive proteins). Boyden is the senior author of the study, which appears in the Feb. 26 issue of Nature Chemical Biology.

“Now we can record the neural activity of many cells in a neural circuit and hear them as they talk to each other,” he says. The new method is also more effective than current optogenetics methods, which also use light-sensitive proteins to silence or stimulate neuron activity.

Robot-controlled protein evolution. For the past two decades, Boyden and other scientists have sought a way to monitor electrical activity in the brain through optogenetic imaging, instead of recording with electrodes. But fluorescent molecules used for this kind of imaging have been limited in their speed of response, sensitivity to changes in voltage, and resistance to photobleaching (fading caused by exposure to light).

To get around these limitations, Boyden and his colleagues built a robot to screen millions of candidate proteins, generating variants with the traits they wanted through a process called “directed protein evolution.”** They then narrowed the evolved variants down to a top performer, which they named “Archon1.” After the Archon1 gene is delivered into a cell, the expressed Archon1 protein embeds itself in the cell membrane — the ideal place for accurate measurement of the cell’s electrical activity.

Using light to measure neuron voltages. When the Archon1 cells are then exposed to a certain wavelength of reddish-orange light, the protein emits a longer wavelength of red light, and the brightness of that red light corresponds to the voltage (in millivolts) of that cell at a given moment in time. The researchers were able to use this method to measure electrical activity in mouse brain-tissue slices, in transparent zebrafish larvae, and in the transparent worm C. elegans (being transparent makes it easy to expose these organisms to light and to image the resulting fluorescence).
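
To make the readout concrete, here is a rough, hypothetical sketch of how a recorded brightness trace could be converted into an approximate voltage trace; the calibration values (resting fluorescence, sensitivity per 100 mV) are made-up placeholders, not figures from the study.

```python
import numpy as np

def brightness_to_voltage(trace, f_rest, dff_per_100mv, v_rest=-70.0):
    """Convert a fluorescence trace to an approximate membrane voltage.

    trace          : array of fluorescence samples (arbitrary units)
    f_rest         : fluorescence at the cell's resting potential
    dff_per_100mv  : fractional change in fluorescence per 100 mV step
                     (a per-cell calibration value; hypothetical here)
    v_rest         : assumed resting potential in mV
    """
    dff = (np.asarray(trace) - f_rest) / f_rest      # delta-F over F
    return v_rest + 100.0 * dff / dff_per_100mv      # voltage estimate in mV

# Example: a noisy synthetic trace with a brief spike-like deflection
rng = np.random.default_rng(0)
trace = 1000.0 * (1.0 + 0.002 * rng.standard_normal(500))
trace[200:205] *= 1.4                                # simulated spike
voltage = brightness_to_voltage(trace, f_rest=1000.0, dff_per_100mv=0.8)
print(voltage[195:210].round(1))
```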

The researchers are now working on using this technology to measure brain activity in live mice as they perform various tasks, which Boyden believes should allow for mapping neural circuits and discovering how the circuits produce specific behaviors. “We will be able to watch a neural computation happen,” he says. “Over the next five years or so, we’re going to try to solve some small brain circuits completely. Such results might take a step toward understanding what a thought or a feeling actually is.”

The researchers also showed that Archon1 can be used in conjunction with current optogenetics methods. In experiments with C. elegans, the researchers demonstrated that they could stimulate one neuron using blue light and then use Archon1 to measure the resulting effect in neurons that receive input from that cell.

Detecting electrical activity at millisecond speed. Harvard professor Adam Cohen, who developed the predecessor to Archon1, says the new protein brings scientists closer to the goal of imaging electrical activity in live brains at a millisecond timescale (1,000 measurements per second).

“Traditionally, it has been excruciatingly labor-intensive to engineer fluorescent voltage indicators, because each mutant had to be cloned individually and then tested through a slow, manual patch-clamp electrophysiology measurement,” says Cohen, who was not involved in this study.  “The Boyden lab developed a very clever high-throughput screening approach to this problem. Their new reporter looks really great in fish and worms and in brain slices. I’m eager to try it in my lab.”

“Imaging of neuronal activity using voltage sensors opens up the exciting possibility for simultaneous recordings of large populations of neurons with single-cell single-spike resolution in vivo,” the researchers report in the paper.

The research was funded by the HHMI-Simons Faculty Scholars Program, the IET Harvey Prize, the MIT Media Lab, the New York Stem Cell Foundation Robertson Award, the Open Philanthropy Project, John Doerr, the Human Frontier Science Program, the Department of Defense, the National Science Foundation, and the National Institutes of Health, including an NIH Director’s Pioneer Award.

* Boyden is also a member of MIT’s Media Lab, McGovern Institute for Brain Research, and Koch Institute for Integrative Cancer Research, and an HHMI-Simons Faculty Scholar.

** The researchers made 1.5 million mutated versions of a light-sensitive protein called QuasAr2 (previously engineered by Adam Cohen’s lab at Harvard University and based on the molecule Arch, which the Boyden lab reported in 2010). The researchers put each of those genes into mammalian cells (one mutant per cell), then grew the cells in lab dishes and used an automated microscope to take pictures of the cells. The robot was able to identify cells with proteins that met the criteria the researchers were looking for, the most important being the protein’s location within the cell and its brightness. The research team then selected five of the best candidates and did another round of mutation, generating 8 million new candidates. The robot picked out the seven best of these, which the researchers then narrowed down to Archon1.
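
As a rough sketch of the kind of multi-round, multi-criteria screening loop described in the footnote above — with made-up library sizes and placeholder scoring functions standing in for the robot’s image-based measurements — the selection logic might look like this:

```python
import random

def screen_library(library, score_fns, keep):
    """Keep the variants that score best on every criterion simultaneously."""
    def combined_rank(variant):
        # Rank each variant by its worst criterion, so a winner must do
        # well on all axes (e.g., localization AND brightness) at once.
        return min(fn(variant) for fn in score_fns.values())
    return sorted(library, key=combined_rank, reverse=True)[:keep]

def mutate(parents, n_children):
    """Generate a new library of mutated variants from the chosen parents."""
    return [(random.choice(parents), i) for i in range(n_children)]

# Placeholders for the real image-based measurements of membrane
# localization and brightness; here they just hash the variant ID.
score_fns = {
    "localization": lambda v: (hash(("loc", v)) % 1000) / 1000,
    "brightness":   lambda v: (hash(("bright", v)) % 1000) / 1000,
}

library = [("QuasAr2", i) for i in range(10_000)]   # round-1 mutants (scaled down)
parents = screen_library(library, score_fns, keep=5)
library = mutate(parents, n_children=50_000)        # round-2 mutants (scaled down)
finalists = screen_library(library, score_fns, keep=7)
print("finalists:", finalists[:3], "...")
```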


Abstract of A robotic multidimensional directed evolution approach applied to fluorescent voltage reporters

We developed a new way to engineer complex proteins toward multidimensional specifications using a simple, yet scalable, directed evolution strategy. By robotically picking mammalian cells that were identified, under a microscope, as expressing proteins that simultaneously exhibit several specific properties, we can screen hundreds of thousands of proteins in a library in just a few hours, evaluating each along multiple performance axes. To demonstrate the power of this approach, we created a genetically encoded fluorescent voltage indicator, simultaneously optimizing its brightness and membrane localization using our microscopy-guided cell-picking strategy. We produced the high-performance opsin-based fluorescent voltage reporter Archon1 and demonstrated its utility by imaging spiking and millivolt-scale subthreshold and synaptic activity in acute mouse brain slices and in larval zebrafish in vivo. We also measured postsynaptic responses downstream of optogenetically controlled neurons in C. elegans.

from KurzweilAI » News http://ift.tt/2GQDZh0


How would one separate Artificial Narrow Intelligence from Artificial General Intelligence? (per Elon Musk)

Yesterday, I saw a tweet from Elon Musk replying to an AI expert’s comments about his company and standards: https://twitter.com/elonmusk/status/968560525088055296

So, from my understanding, ANI or AFI is used to automate tasks that are sort of repetitive in nature. However, I don’t see how driving in a dynamic setting, such as a road with many different cars, is repetitive; there can be millions of different variations on the road that a system would need to account for in its environmental variables. Most importantly, the AI would need to make decisions on the road when in autopilot mode in a car.

This seems like AGI, since the system would need to think for itself for collision avoidance and optimal routing with autopilot. I’m not doubting Musk’s expertise on this, but is my understanding fundamentally wrong here? Could someone elaborate on this?

Thank you.

submitted by /u/AstrodynamicalMoney

from Artificial Intelligence http://ift.tt/2oEoH7G

Personalizing wearable devices

http://ift.tt/2F2T5zw

When it comes to soft, assistive devices — like the exosuit being designed by the Harvard Biodesign Lab — the wearer and the robot need to be in sync. But every human moves a bit differently and tailoring the robot’s parameters for an individual user is a time-consuming and inefficient process.

Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering have developed an efficient machine learning algorithm that can quickly tailor personalized control strategies for soft, wearable exosuits.

The research is described in Science Robotics.

“This new method is an effective and fast way to optimize control parameter settings for assistive wearable devices,” said Ye Ding, a postdoctoral fellow at SEAS and co-first author of the research. “Using this method, we achieved a huge improvement in metabolic performance for the wearers of a hip extension assistive device.”

When humans walk, we constantly tweak how we move to save energy (also known as metabolic cost).

“Before, if you had three different users walking with assistive devices, you would need three different assistance strategies,” said Myunghee Kim, a postdoctoral research fellow at SEAS and co-first author of the paper. “Finding the right control parameters for each wearer used to be a difficult, step-by-step process because not only do all humans walk a little differently, but the experiments required to manually tune parameters are complicated and time-consuming.”

The researchers, led by Conor Walsh, the John L. Loeb Associate Professor of Engineering and Applied Sciences, and Scott Kuindersma, Assistant Professor of Engineering and Computer Science at SEAS, developed an algorithm that can cut through that variability and rapidly identify the control parameters that work best for minimizing the metabolic cost of walking.

The researchers used so-called human-in-the-loop optimization, which uses real-time measurements of human physiological signals, such as breathing rate, to adjust the control parameters of the device. As the algorithm homed in on the best parameters, it directed the exosuit on when and where to deliver its assistive force to improve hip extension. The Bayesian optimization approach used by the team was first reported in a paper last year in PLOS ONE.
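
For illustration only, here is a minimal sketch of the general human-in-the-loop Bayesian optimization idea: a Gaussian-process model of metabolic cost is updated after each short walking bout and used to pick the next control-parameter setting to try. The cost function, parameter names, and bout count below are synthetic stand-ins, not the team’s implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def measure_metabolic_cost(params):
    """Stand-in for the real physiological measurement (e.g., respirometry).
    In the real loop, this value would come from the wearer walking with the suit."""
    onset, peak = params
    return (onset - 0.3) ** 2 + (peak - 0.6) ** 2 + 0.01 * np.random.randn()

def expected_improvement(candidates, gp, best_cost):
    """Acquisition function: how much each candidate is expected to lower the cost."""
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (best_cost - mu) / sigma
    return (best_cost - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(4, 2))                  # a few initial random settings
y = np.array([measure_metabolic_cost(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4, normalize_y=True)
for _ in range(20):                                 # ~20 short "walking bouts"
    gp.fit(X, y)
    candidates = rng.uniform(0, 1, size=(500, 2))
    x_next = candidates[np.argmax(expected_improvement(candidates, gp, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, measure_metabolic_cost(x_next))

print("best parameters found:", X[np.argmin(y)], "cost:", y.min())
```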

The combination of the algorithm and suit reduced metabolic cost by 17.4 percent compared to walking without the device. This was a more than 60 percent improvement compared to the team’s previous work.

“Optimization and learning algorithms will have a big impact on future wearable robotic devices designed to assist a range of behaviors,” said Kuindersma. “These results show that optimizing even very simple controllers can provide a significant, individualized benefit to users while walking. Extending these ideas to consider more expressive control strategies and people with diverse needs and abilities will be an exciting next step.”

“With wearable robots like soft exosuits, it is critical that the right assistance is delivered at the right time so that they can work synergistically with the wearer,” said Walsh. “With these online optimization algorithms, systems can learn how to achieve this automatically in about twenty minutes, thus maximizing benefit to the wearer.”

Next, the team aims to apply the optimization to a more complex device that assists multiple joints, such as hip and ankle, at the same time.

“In this paper, we demonstrated a high reduction in metabolic cost by just optimizing hip extension,” said Ding. “This goes to show what you can do with a great brain and great hardware.”

This research was supported by the Defense Advanced Research Projects Agency, Warrior Web Program, the Wyss Institute and the Harvard John A. Paulson School of Engineering and Applied Science.

from Artificial Intelligence News — ScienceDaily http://ift.tt/2HUqyOy

If you could ask Sophia of Hanson Robotics one question, what would it be?

Sophia and David Hanson are coming to speak at my university. There will also be a question and answer with the audience. Assuming I get the chance to ask some questions, do you guys have any you would find interesting to ask / good questions?

I know that Sophia can bring some controversy, but I figured it could be interesting to get ideas from you guys. If there are any good questions, I will post the responses, assuming I am able to ask them. Thanks everyone.

submitted by /u/BlakeIsFat

from Artificial Intelligence http://ift.tt/2t66sNT

Teaching Quantum Physics to a Computer

By ETH Zurich

February 28, 2018

Using neural networks, physicists taught a computer to predict the results of quantum experiments.

An international collaboration led by ETH Zurich scientists used machine learning to teach a computer to predict the outcomes of quantum experiments.

Credit: http://www.colourbox.com

Researchers at ETH Zurich in Switzerland have used machine learning to teach a computer to predict the outcomes of quantum experiments, which could be essential for testing future quantum computers.

The new machine-learning software enables a computer to “learn” the quantum state of a complex physical system based on experimental observations and to predict the outcomes of hypothetical measurements.

The researchers first showed the system handwritten samples, and it learned to replicate each letter, word, and sentence. The computer also calculated a probability distribution that expressed mathematically how often a letter is written in a certain way when it is preceded by some other letter.

Although quantum physics is more complicated than a person’s handwriting, the team says the principle for machine learning is the same.

The standard technique requires about 1 million measurements to achieve the desired accuracy, but the new method realized this accuracy with a smaller number of measurements.
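
To make the “learn a state, then predict an unmeasured quantity” idea concrete, here is a deliberately tiny, hypothetical illustration for a single qubit, using a simple Bloch-vector fit in place of the neural networks the ETH team actually uses for much larger systems:

```python
import numpy as np

# Toy version of "learn the state, predict a hypothetical measurement":
# fit a single-qubit Bloch vector to simulated measurement samples taken
# along the x, y, and z axes, then predict statistics for a new axis.
rng = np.random.default_rng(42)
true_bloch = np.array([0.6, 0.1, 0.7]) / np.linalg.norm([0.6, 0.1, 0.7])

def sample_axis(axis, n):
    """Simulate n +/-1 outcomes of measuring spin along a unit axis."""
    p_up = 0.5 * (1 + true_bloch @ axis)
    return np.where(rng.random(n) < p_up, 1, -1)

axes = {"x": np.array([1.0, 0, 0]),
        "y": np.array([0, 1.0, 0]),
        "z": np.array([0, 0, 1.0])}

# "Learning" step: estimate the Bloch vector from the observed averages.
estimate = np.array([sample_axis(a, 500).mean() for a in axes.values()])

# "Prediction" step: probability of the +1 outcome along an unmeasured axis.
new_axis = np.array([1.0, 1.0, 0]) / np.sqrt(2)
predicted = 0.5 * (1 + estimate @ new_axis)
actual = 0.5 * (1 + true_bloch @ new_axis)
print(f"predicted P(+1) = {predicted:.3f}, true P(+1) = {actual:.3f}")
```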



From ETH Zurich

View Full Article

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


Don’t want to lose a finger? Let a robot give a hand

http://ift.tt/2F3OeOF

Every year thousands of carpenters injure their hands and fingers doing dangerous tasks like sawing.

In an effort to minimize injury and let carpenters focus on design and other bigger-picture tasks, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has created AutoSaw, a system that lets you customize different items that can then be constructed with the help of robots.

Users can choose from a range of carpenter-designed templates for chairs, desks and other furniture — and eventually could use AutoSaw for projects as large as a deck or a porch.

“If you’re building a deck, you have to cut large sections of lumber to length, and that’s often done on site,” says CSAIL postdoc Jeffrey Lipton, who was a lead author on a related paper about the system. “Every time you put a hand near a blade, you’re at risk. To avoid that, we’ve largely automated the process using a chop-saw and jigsaw.”

The system also gives general users more flexibility in designing furniture to be able to fit space-constrained houses and apartments. For example, it could allow you to modify a desk to squeeze into an L-shaped living room, or customize a table to fit in your micro-kitchen.

“Robots have already enabled mass production, but with artificial intelligence (AI) they have the potential to enable mass customization and personalization in almost everything we produce,” says CSAIL director and co-author Daniela Rus. “AutoSaw shows this potential for easy access and customization in carpentry.”

The paper, which will be presented in May at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia, was co-written by Lipton, Rus and PhD student Adriana Schulz. Other co-authors include MIT professor Wojciech Matusik, PhD student Andrew Spielberg and undergraduate Luis Trueba.

How it works

Software isn’t a foreign concept for many carpenters. “Computer Numerical Control” (CNC) can convert designs into numbers that are fed to specially programmed tools to execute. However, the machines used for CNC fabrication are usually large and cumbersome, and users are limited to the size of the existing CNC tools.
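
For readers unfamiliar with CNC, those “numbers” are typically toolpath commands such as G-code; a toy generator for a single straight cut (with illustrative coordinates and feed rate, unrelated to AutoSaw) might look like this:

```python
def straight_cut_gcode(x0, y0, x1, y1, depth, feed_rate):
    """Emit a minimal G-code toolpath for one straight cut (illustrative only)."""
    return "\n".join([
        f"G0 X{x0:.2f} Y{y0:.2f}",               # rapid move to the start of the cut
        f"G1 Z{-abs(depth):.2f} F{feed_rate}",   # plunge to cutting depth
        f"G1 X{x1:.2f} Y{y1:.2f} F{feed_rate}",  # cut along the line
        "G0 Z5.00",                              # retract the tool
    ])

print(straight_cut_gcode(0, 0, 400, 0, depth=18, feed_rate=600))
```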

As a result, many carpenters continue to use chop-saws, jigsaws, and other hand tools that are low cost, easy to move, and simple to use. These tools, while useful for customization, still put people at a high risk of injury.

AutoSaw draws on expert knowledge for designing, and robotics for the more risky cutting tasks. Using the existing CAD system OnShape with an interface of design templates, users can customize their furniture for things like size, sturdiness, and aesthetics. Once the design is finalized, it’s sent to the robots to assist in the cutting process using the jigsaw and chop-saw.
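
As a rough sketch of what a customizable template can boil down to — not AutoSaw’s actual data model or its OnShape integration — a simple parametric stool template might expose a few dimensions and expand them into a cut list for the saws:

```python
from dataclasses import dataclass

@dataclass
class StoolTemplate:
    """Toy parametric template: a few user-facing dimensions expand
    into a cut list the cutting robots could work from."""
    seat_width: float = 350.0   # mm
    seat_depth: float = 300.0   # mm
    height: float = 450.0       # mm
    stock: str = "38x38 lumber"

    def cut_list(self):
        return [
            {"part": "leg",        "stock": self.stock, "length": self.height,     "count": 4},
            {"part": "side rail",  "stock": self.stock, "length": self.seat_depth, "count": 2},
            {"part": "front rail", "stock": self.stock, "length": self.seat_width, "count": 2},
        ]

# Customize the template, then hand the resulting cut list to the robots.
custom = StoolTemplate(seat_width=420.0, height=600.0)
for cut in custom.cut_list():
    print(f'{cut["count"]} x {cut["part"]:<10} {cut["length"]:.0f} mm from {cut["stock"]}')
```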

To cut lumber the team used motion tracking software and small mobile robots — an approach that takes up less space and is more cost-effective than large robotic arms.

Specifically, the team used a modified jigsaw-rigged Roomba to cut lumber of any shape on a plank. For the chopping, the team used two Kuka youBots to lift the beams, place them on the chop saw, and cut.

“We added soft grippers to the robots to give them more flexibility, like that of a human carpenter,” says Lipton. “This meant we could rely on the accuracy of the power tools instead of the rigid-bodied robots.”

After the robots finish with cutting, the user then assembles their new piece of furniture using step-by-step directions from the system.

When testing the system, the team’s simulations showed that they could build a chair, shed, and deck. Using the robots, the team also made a table with an accuracy comparable to that of a human, without a real hand ever getting near a blade.

“There have been many recent AI achievements in virtual environments, like playing Go and composing music,” says Hod Lipson, a professor of mechanical engineering and data science at Columbia University. “Systems that can work in unstructured physical environments, such as this carpentry system, are notoriously difficult to make. This is truly a fascinating step forward.”

While AutoSaw is still a research platform, in the future the team plans to use materials like wood, and integrate complex tasks like drilling and gluing.

“Our aim is to democratize furniture-customization,” says Schulz. “We’re trying to open up a realm of opportunities so users aren’t bound to what they’ve bought at Ikea. Instead, they can make what best fits their needs.”

This work was supported in part by the National Science Foundation, grant number CMMI-1644558.

from Artificial Intelligence News — ScienceDaily http://ift.tt/2FFDOWz