Future Day 2018

http://ift.tt/1psWieY

Join a Live 24-Hour Video Conversation March 1 on the Future With Futurists From Around the World, Hosted by The Millennium Project

Five international futurist organizations have joined forces to invite their members and the public around the world to come online at 12 noon in their time zone to explore how they can help build a better future.

On March 1, World Future Day, the organizations will conduct a 24-hour conversation about the world’s potential futures, challenges and opportunities. The online video conference is open to the public. This global conversation will be moving across the world, with people entering and leaving the conversation whenever they want. The five organizations will provide facilitators for each of the 24 time zones when possible. The co-sponsors of the event with The Millennium Project are the Association of Professional Futurists, Club of Amsterdam, Humanity+, and the World Futures Studies Federation.

This will be the fifth year The Millennium Project has conducted this global conversation among people of good will who share insights and collaborate to help build a better future.

“Whatever time zone you are in, you are invited at 12:00 noon in your time zone to click on https://hangouts.google.com/call/act3g5fh6vd7deoxq3xvb7zylue,” says Jerome Glenn, CEO of The Millennium Project.

If the interactive video conference reaches its participant limit, new arrivals will be able to see and hear the discussion but not appear on camera or speak; they can still type questions and comments into the chat box in the Google Hangout, and the facilitators will read these aloud in the video conference. As people drop out, new video slots will open up.

“This is an open, no-agenda discussion about the future, but in general, people will be encouraged to share their ideas about how to build a better future, and if they can’t come online at 12 noon their time, they are welcome to join before or after that time as well. We will begin in New Zealand at 12 noon March 1, which is Feb. 28 at 6 p.m. in Washington, D.C., USA.”

—Event Producer

from KurzweilAI http://ift.tt/2oHHQ8v

Measuring deep-brain neurons’ electrical signals at high speed with light instead of electrodes

http://ift.tt/2F3huoy

MIT researchers have developed a light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. This could allow scientists to more effectively study how neurons behave, millisecond by millisecond, as the brain performs a particular function. (credit: Courtesy of the researchers)

Researchers at MIT have developed a new approach to measure electrical activity deep in the brain: using light — an easier, faster, and more informative method than inserting electrodes.

They’ve developed a new light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. This could allow scientists to study how neurons behave, millisecond by millisecond, as the brain performs a particular function.

Better than electrodes. “If you put an electrode in the brain, it’s like trying to understand a phone conversation by hearing only one person talk,” says Edward Boyden*, Ph.D., an associate professor of biological engineering and brain and cognitive sciences at MIT and a pioneer in optogenetics (a technique that allows scientists to control neurons’ electrical activity with light by engineering them to express light-sensitive proteins). Boyden is the senior author of the study, which appears in the Feb. 26 issue of Nature Chemical Biology.

“Now we can record the neural activity of many cells in a neural circuit and hear them as they talk to each other,” he says. The new method is also more effective than current optogenetics methods, which also use light-sensitive proteins to silence or stimulate neuron activity.

Robot-controlled protein evolution. For the past two decades, Boyden and other scientists have sought a way to monitor electrical activity in the brain through optogenetic imaging, instead of recording with electrodes. But fluorescent molecules used for this kind of imaging have been limited in their speed of response, sensitivity to changes in voltage, and resistance to photobleaching (fading caused by exposure to light).

Instead, Boyden and his colleagues built a robot to screen millions of proteins. They generated the appropriate proteins for the traits they wanted by using a process called “directed protein evolution.” To demonstrate the power of this approach, they then narrowed down the evolved protein versions to a top performer, which they called “Archon1.” After the Archon1 gene is delivered into a cell, the expressed Archon1 protein embeds itself into the cell membrane — the ideal place for accurate measurement of a cell’s electrical activity.

Using light to measure neuron voltages. When the Archon1 cells are then exposed to a certain wavelength of reddish-orange light, the protein emits a longer wavelength of red light, and the brightness of that red light corresponds to the voltage (in millivolts) of that cell at a given moment in time. The researchers were able to use this method to measure electrical activity in mouse brain-tissue slices, in transparent zebrafish larvae, and in the transparent worm C. elegans (being transparent makes it easy to expose these organisms to light and to image the resulting fluorescence).
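The brightness-to-voltage mapping described above can be sketched as a simple linear calibration. This is a hypothetical illustration only: the function name, gain, and offset values below are assumptions for the sketch, not numbers from the paper.

```python
# Hypothetical sketch: converting a fluorescence brightness reading into
# membrane voltage. The article says the red-light brightness corresponds
# to the cell's voltage in millivolts; the linear gain/offset here are
# illustrative, not calibration values from the study.

def brightness_to_mv(brightness, gain=200.0, offset=-70.0):
    """Map a normalized brightness reading (0..1) to millivolts.

    Assumes a linear indicator response: `offset` is the voltage
    reported at zero normalized brightness (a resting-potential-like
    value), `gain` is the millivolt span of the full brightness range.
    """
    return offset + gain * brightness

# A trace of brightness samples becomes a voltage trace, sample by sample.
trace = [0.0, 0.35, 0.9, 0.35, 0.0]   # e.g. one spike-like transient
voltages = [brightness_to_mv(b) for b in trace]
```

In a real pipeline the gain and offset would come from calibrating the indicator against patch-clamp recordings rather than being fixed constants.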

The researchers are now working on using this technology to measure brain activity in live mice as they perform various tasks, which Boyden believes should allow for mapping neural circuits and discovering how the circuits produce specific behaviors. “We will be able to watch a neural computation happen,” he says. “Over the next five years or so, we’re going to try to solve some small brain circuits completely. Such results might take a step toward understanding what a thought or a feeling actually is.”

The researchers also showed that Archon1 can be used in conjunction with current optogenetics methods. In experiments with C. elegans, the researchers demonstrated that they could stimulate one neuron using blue light and then use Archon1 to measure the resulting effect in neurons that receive input from that cell.

Detecting electrical activity at millisecond-speed. Harvard professor Adam Cohen, who developed the predecessor to Archon1, says the new protein brings scientists closer to the goal of imaging electrical activity in live brains at a millisecond timescale (1,000 measurements per second).

“Traditionally, it has been excruciatingly labor-intensive to engineer fluorescent voltage indicators, because each mutant had to be cloned individually and then tested through a slow, manual patch-clamp electrophysiology measurement,” says Cohen, who was not involved in this study.  “The Boyden lab developed a very clever high-throughput screening approach to this problem. Their new reporter looks really great in fish and worms and in brain slices. I’m eager to try it in my lab.”

“Imaging of neuronal activity using voltage sensors opens up the exciting possibility for simultaneous recordings of large populations of neurons with single-cell single-spike resolution in vivo,” the researchers report in the paper.

The research was funded by the HHMI-Simons Faculty Scholars Program, the IET Harvey Prize, the MIT Media Lab, the New York Stem Cell Foundation Robertson Award, the Open Philanthropy Project, John Doerr, the Human Frontier Science Program, the Department of Defense, the National Science Foundation, and the National Institutes of Health, including an NIH Director’s Pioneer Award.

* Boyden is also a member of MIT’s Media Lab, McGovern Institute for Brain Research, and Koch Institute for Integrative Cancer Research, and an HHMI-Simons Faculty Scholar.

** The researchers made 1.5 million mutated versions of a light-sensitive protein called QuasAr2 (previously engineered by Adam Cohen’s lab at Harvard University and based on the molecule Arch, which the Boyden lab reported in 2010). The researchers put each of those genes into mammalian cells (one mutant per cell), then grew the cells in lab dishes and used an automated microscope to take pictures of the cells. The robot was able to identify cells with proteins that met the criteria the researchers were looking for, the most important being the protein’s location within the cell and its brightness. The research team then selected five of the best candidates and did another round of mutation, generating 8 million new candidates. The robot picked out the seven best of these, which the researchers then narrowed down to Archon1.
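The multi-round screen described in the footnote — mutate a population, score each variant along several axes, keep the top few, and repeat — can be sketched as a toy loop. Everything here (the trait representation, the combined score, and the population sizes) is an illustrative stand-in for the actual microscopy-based assay, not the researchers' pipeline.

```python
# Toy sketch of a multi-round directed-evolution screen. A "variant" is
# just a pair of trait scores (standing in for membrane localization and
# brightness); real variants are protein sequences scored by automated
# microscopy.
import random

def mutate(parent, rng):
    # Perturb each trait slightly, clamping at zero (illustrative noise model).
    return tuple(max(0.0, t + rng.gauss(0, 0.1)) for t in parent)

def screen(population, keep):
    # Rank variants by a combined score across both performance axes and
    # keep the top performers, as the robot did when picking cells.
    return sorted(population, key=lambda v: v[0] * v[1], reverse=True)[:keep]

def evolve(rounds, pop_size, keep, seed=0):
    rng = random.Random(seed)
    parents = [(0.5, 0.5)]                      # starting variant
    for _ in range(rounds):
        population = [mutate(rng.choice(parents), rng)
                      for _ in range(pop_size)]
        parents = screen(population, keep)
    return parents[0]                           # this toy run's "Archon1"

best = evolve(rounds=2, pop_size=1000, keep=5)
```

The study's actual numbers (1.5 million variants, then 8 million in round two) dwarf this toy run, but the select-and-remutate structure is the same.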


Abstract of A robotic multidimensional directed evolution approach applied to fluorescent voltage reporters

We developed a new way to engineer complex proteins toward multidimensional specifications using a simple, yet scalable, directed evolution strategy. By robotically picking mammalian cells that were identified, under a microscope, as expressing proteins that simultaneously exhibit several specific properties, we can screen hundreds of thousands of proteins in a library in just a few hours, evaluating each along multiple performance axes. To demonstrate the power of this approach, we created a genetically encoded fluorescent voltage indicator, simultaneously optimizing its brightness and membrane localization using our microscopy-guided cell-picking strategy. We produced the high-performance opsin-based fluorescent voltage reporter Archon1 and demonstrated its utility by imaging spiking and millivolt-scale subthreshold and synaptic activity in acute mouse brain slices and in larval zebrafish in vivo. We also measured postsynaptic responses downstream of optogenetically controlled neurons in C. elegans.

from KurzweilAI » News http://ift.tt/2GQDZh0


How would one separate Artificial Narrow Intelligence from Artificial General Intelligence? (per Elon Musk)

Yesterday, I saw a tweet from Elon Musk replying to an AI expert’s comments about his company and standards: https://twitter.com/elonmusk/status/968560525088055296

So, from my understanding, ANI or AFI is used to automate tasks which are sort of repetitive in nature. However, I don’t see how cars in a dynamic setting such as a road with many different cars is repetitive; there can be millions of different variations on the road which a system would need to account for in its environmental variables. Most importantly, the AI would need to make decisions on the road when in autopilot mode in a car.

This seems like AGI since the system would need to think for itself for collision avoidance and optimal routing with autopilot. I’m not doubting Musk’s expertise on this, but is my understanding fundamentally wrong here? Could someone elaborate on this?

Thank you.

submitted by /u/AstrodynamicalMoney

from Artificial Intelligence http://ift.tt/2oEoH7G

Personalizing wearable devices

http://ift.tt/2F2T5zw

When it comes to soft, assistive devices — like the exosuit being designed by the Harvard Biodesign Lab — the wearer and the robot need to be in sync. But every human moves a bit differently and tailoring the robot’s parameters for an individual user is a time-consuming and inefficient process.

Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering have developed an efficient machine learning algorithm that can quickly tailor personalized control strategies for soft, wearable exosuits.

The research is described in Science Robotics.

“This new method is an effective and fast way to optimize control parameter settings for assistive wearable devices,” said Ye Ding, a postdoctoral fellow at SEAS and co-first author of the research. “Using this method, we achieved a huge improvement in metabolic performance for the wearers of a hip extension assistive device.”

When humans walk, we constantly tweak how we move to save energy; this energy expenditure is known as metabolic cost.

“Before, if you had three different users walking with assistive devices, you would need three different assistance strategies,” said Myunghee Kim, a postdoctoral research fellow at SEAS and co-first author of the paper. “Finding the right control parameters for each wearer used to be a difficult, step-by-step process because not only do all humans walk a little differently but the experiments required to manually tune parameters are complicated and time-consuming.”

The researchers, led by Conor Walsh, the John L. Loeb Associate Professor of Engineering and Applied Sciences, and Scott Kuindersma, Assistant Professor of Engineering and Computer Science at SEAS, developed an algorithm that can cut through that variability and rapidly identify the control parameters that work best for minimizing the metabolic cost of walking.

The researchers used so-called human-in-the-loop optimization, which uses real-time measurements of human physiological signals, such as breathing rate, to adjust the control parameters of the device. As the algorithm homed in on the best parameters, it directed the exosuit on when and where to deliver its assistive force to improve hip extension. The Bayesian optimization approach used by the team was first reported in a paper last year in PLOS ONE.
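The human-in-the-loop structure can be sketched in miniature. The actual study used Bayesian optimization driven by real-time physiological measurements; as a stand-in, this sketch uses plain random search over two hypothetical control parameters (assist onset and peak timing, as a percentage of gait cycle), with a synthetic noisy function in place of the measured metabolic cost.

```python
# Minimal human-in-the-loop tuning sketch: propose control parameters,
# "measure" the wearer's metabolic cost, keep the best settings seen so
# far. Random search here is a deliberate simplification of the team's
# Bayesian optimizer; the cost function below is synthetic.
import random

def measured_metabolic_cost(onset, peak, rng):
    # Stand-in for a noisy respirometry measurement; in this toy
    # landscape cost is minimized near onset=25, peak=50.
    return (onset - 25) ** 2 + (peak - 50) ** 2 + rng.gauss(0, 5)

def tune(iterations=30, seed=0):
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(iterations):
        onset = rng.uniform(0, 50)     # propose assist onset (% gait cycle)
        peak = rng.uniform(30, 80)     # propose assist peak (% gait cycle)
        cost = measured_metabolic_cost(onset, peak, rng)  # wearer walks
        if cost < best_cost:           # keep the best settings so far
            best_params, best_cost = (onset, peak), cost
    return best_params

params = tune()
```

A Bayesian optimizer replaces the blind `rng.uniform` proposals with a model that predicts which untried parameters look promising, which is why the real system converges in roughly twenty minutes of walking rather than requiring exhaustive trials.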

The combination of the algorithm and suit reduced metabolic cost by 17.4 percent compared to walking without the device. This was a more than 60 percent improvement compared to the team’s previous work.
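As a quick sanity check on those two figures: if a 17.4 percent reduction is "more than a 60 percent improvement" over the earlier result, the earlier device must have reduced metabolic cost by at most roughly 10.9 percent. This bound is derived from the numbers above, not stated in the article.

```python
# Arithmetic check of the reported figures: a 17.4% reduction that is
# "more than 60 percent" better than the previous result implies the
# previous reduction was under 17.4 / 1.6 ≈ 10.9 percent.
new_reduction = 17.4        # percent, reported above
improvement_floor = 0.60    # "more than 60 percent" better
implied_prev_max = new_reduction / (1 + improvement_floor)
```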

“Optimization and learning algorithms will have a big impact on future wearable robotic devices designed to assist a range of behaviors,” said Kuindersma. “These results show that optimizing even very simple controllers can provide a significant, individualized benefit to users while walking. Extending these ideas to consider more expressive control strategies and people with diverse needs and abilities will be an exciting next step.”

“With wearable robots like soft exosuits, it is critical that the right assistance is delivered at the right time so that they can work synergistically with the wearer,” said Walsh. “With these online optimization algorithms, systems can learn how to achieve this automatically in about twenty minutes, thus maximizing benefit to the wearer.”

Next, the team aims to apply the optimization to a more complex device that assists multiple joints, such as hip and ankle, at the same time.

“In this paper, we demonstrated a high reduction in metabolic cost by just optimizing hip extension,” said Ding. “This goes to show what you can do with a great brain and great hardware.”

This research was supported by the Defense Advanced Research Projects Agency, Warrior Web Program, the Wyss Institute and the Harvard John A. Paulson School of Engineering and Applied Science.

from Artificial Intelligence News — ScienceDaily http://ift.tt/2HUqyOy