This Tech Could Charge Electric Cars While They Drive

The global auto industry is worth $2 trillion, but electric and hybrid cars currently make up less than one percent of that figure. However, experts are predicting an explosion in electric car adoption.

Financial services company UBS predicted demand for electric cars will reach an inflection point in 2018 as their cost shrinks to equal (and eventually undercut) the cost of internal combustion engine vehicles. China saw a 53 percent increase in electric car sales from 2015 to 2016, and India is aiming to sell only electric cars by 2030.

Even if they’re affordable and keep the air cleaner, electric cars will still have one major limitation: they’re electric. Electric vehicles run on batteries, and batteries that don’t get recharged every so often go dead.

Tesla’s Model 3 will go 200 miles on one charge, and Chevy’s new Bolt goes 238 miles. These are no small distances, especially when compared to the Volt’s 30-mile range just three years ago. Even so, once the cars’ batteries are drained, recharging them takes hours.

Researchers at Stanford University just took a step toward solving this problem. In a paper published last week in Nature, the team described a new technique that wirelessly transmits electricity to a moving object within close range.

Wireless power transfer works using magnetic resonance coupling. An alternating magnetic field in a transmitter coil causes electrons in a receiver coil to oscillate, with the best transfer efficiency occurring when both coils are tuned to the same frequency and positioned at a specific angle.

That requirement makes it hard to transfer electricity to a moving object, because the optimal frequency shifts as the distance between the coils changes. To bypass the need for continuous manual tuning, the Stanford team removed the radio-frequency source in the transmitter and replaced it with a voltage amplifier and a feedback resistor.

The system calibrates itself to the required frequency for different distances. Using it, the researchers wirelessly transmitted one milliwatt of power to a moving LED light bulb three feet away. No manual tuning was needed, and transfer efficiency remained stable.
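As a rough illustration (and not the Stanford team’s actual circuit model), the short sketch below assumes two identical, magnetically coupled LC coils and shows how their resonant modes split apart as the coupling coefficient changes with distance. A transmitter locked to a fixed frequency drifts out of tune as the coils move, while a self-oscillating source of the kind the feedback amplifier provides can simply track one of the modes. The 1 MHz resonance and the coupling values are illustrative placeholders.

```python
import numpy as np

f0 = 1.0e6                                # shared resonant frequency of each coil, Hz (assumed)
couplings = np.linspace(0.01, 0.30, 7)    # coupling coefficient k shrinks as the coils move apart

print(f"{'k':>5} {'f_low (Hz)':>12} {'f_high (Hz)':>12} {'detuning at f0 (Hz)':>20}")
for k in couplings:
    # Normal modes of two identical, magnetically coupled LC resonators.
    f_low = f0 / np.sqrt(1.0 + k)
    f_high = f0 / np.sqrt(1.0 - k)
    # A transmitter fixed at f0 sits between the split modes; its detuning from the
    # nearest mode grows with k, so efficiency drops unless someone retunes it.
    detuning = min(abs(f0 - f_low), abs(f0 - f_high))
    # A self-oscillating source (the amplifier-plus-feedback idea) would instead track
    # f_low or f_high automatically, keeping the detuning near zero at every distance.
    print(f"{k:5.2f} {f_low:12.0f} {f_high:12.0f} {detuning:20.0f}")
```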

One milliwatt is a far cry from the tens of kilowatts an electric car needs. But now that they’ve established that an amplifier will do the trick, the team is working on ramping up the amount of electricity that can be transferred using this system.

Switching out the amplifier itself could make a big difference—for this test, they used a general-purpose amplifier with about ten percent efficiency, but custom-made amplifiers could likely boost efficiency to over 90 percent.

It will still be a while before electric cars can get zapped with infusions of charge while cruising down the highway, but that’s the future some energy experts envision.

“In theory, one could drive for an unlimited amount of time without having to stop to recharge,” said Shanhui Fan, professor of electrical engineering and senior author of the study. “The hope is that you’ll be able to charge your electric car while you’re driving down the highway. A coil in the bottom of the vehicle could receive electricity from a series of coils connected to an electric current embedded in the road.”

Embedding power lines in roads would be a major infrastructure project, and it wouldn’t make sense to undertake it until electric car adoption was widespread—when, for example, electric cars accounted for 50 percent or more of the vehicles on the road. If charging were easier, though, more drivers might choose to go electric.

Tesla has already made electric car ownership a bit easier by investing heavily in its Supercharger network. There are currently 861 Supercharger stations around the world with 5,655 chargers, and hundreds more are in the works. The stations charge Tesla vehicles for free in half an hour to an hour, rather than multiple hours.

Ripping up roads to embed power lines that can charge cars while they’re moving seems unnecessary as technologies like the Superchargers continue to spread. But as electric vehicles proliferate, drivers will want the experience to be as seamless as possible, and that could mean never having to stop to charge at all.

Despite the significant hurdles left to clear, charging moving cars is the most exciting potential application of the Stanford team’s wireless transfer system. But there are also smaller-scale uses, like cell phones and personal medical implants, which will likely employ the technology before it reaches cars. Fan even mentioned that the system “may untether robotics in manufacturing.”

Image Credit: Shutterstock

from Singularity Hub http://ift.tt/2sZp8NO

On Voter Fraud, Immigration, and Economic Disparity

== The New “Voter Fraud” Commission ==

As usual, Democrats are right to complain… and they are doing it all wrong.

President Trump has announced a commission aimed at justifying his unfounded voter fraud claims. (“Millions cast illegal ballots, giving Hillary Clinton her huge popular vote margin.”) But instead of a blue-ribbon, bipartisan panel of nationally respected sages, the commission will be spearheaded by Vice President Mike Pence and Kansas Secretary of State Kris Kobach, who has frequently been tied to white nationalists.

from Ethical Technology http://ift.tt/2rTYfLG

Two drones see through walls in 3D using WiFi signals

Transmit and receive drones perform 3D imaging through walls using WiFi (credit: Chitra R. Karanam and Yasamin Mostofi/ACM)

Researchers at the University of California Santa Barbara have demonstrated the first three-dimensional imaging of objects through walls using an ordinary wireless signal.

Applications could include emergency search-and-rescue, archaeological discovery, and structural monitoring, according to the researchers. Other applications could include military and law-enforcement surveillance.

Calculating 3D images from WiFi signals

In the research, two octocopter drones took off and flew outside an enclosed, four-sided brick structure whose interior was unknown to them. One drone continuously transmitted a WiFi signal; the other drone, located on a different side of the structure, received that signal and relayed the changes in received signal strength (RSSI) during the flight to a computer, which then calculated high-resolution 3D images of the objects inside (which did not need to move).

Structure and resulting 3D image (credit: Chitra R. Karanam and Yasamin Mostofi/ACM)

Interestingly, the equipment is all commercially available: two drones with Yagi antennas, a WiFi router, and a Tango tablet (for real-time localization), plus a Raspberry Pi computer with a network interface for the computations.

This development builds on previous 2D work by professor Yasamin Mostofi’s lab, which has pioneered sensing and imaging with everyday radio frequency signals such as WiFi. Mostofi says the success of the 3D experiments is due to the drones’ ability to approach the area from several angles, and to new methodology* developed by her lab.

The research is described in an open-access paper published in April 2017 in the proceedings of the Association for Computing Machinery/Institute of Electrical and Electronics Engineers International Conference on Information Processing in Sensor Networks (IPSN).

A later paper by Technical University of Munich physicists has also reported a system intended for 3D imaging with WiFi, but with only simulated (and cruder) images.

Block diagram of the 3D through-wall imaging system (credit: Chitra R. Karanam and Yasamin Mostofi/ACM)

* The researchers’ approach to enabling 3D through-wall imaging utilizes four tightly integrated key components, according to the paper.

(1) They proposed robotic paths that can capture the spatial variations in all three dimensions as much as possible, while maintaining the efficiency of the operation. 

(2) They modeled the three-dimensional unknown area of interest as a Markov Random Field to capture the spatial dependencies, and utilized a graph-based belief propagation approach to update the imaging decision of each voxel (the smallest unit of a 3D image) based on the decisions of the neighboring voxels. 

(3) To approximate the interaction of the transmitted wave with the area of interest, they used a linear wave model.

(4) They took advantage of the compressibility of the information content to image the area with a very small number of WiFi measurements (less than 4 percent); a toy sketch of this sparse-recovery idea follows below.
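Points (3) and (4) amount to a standard compressive-sensing setup: a linearized wave model turns imaging into recovering a sparse voxel-occupancy vector from far fewer measurements than unknowns. The sketch below illustrates that idea with a random stand-in measurement matrix and a basic ISTA (iterative soft-thresholding) solver in NumPy; the sizes, noise level, and matrix are made up, and this is not the authors’ reconstruction code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_meas = 400, 80                 # far more unknowns (voxels) than measurements
x_true = np.zeros(n_voxels)
x_true[rng.choice(n_voxels, 6, replace=False)] = 1.0    # a few occupied voxels

A = rng.standard_normal((n_meas, n_voxels)) / np.sqrt(n_meas)   # stand-in for the linear wave model
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)             # noisy RSSI-like measurements

# ISTA: gradient step on ||y - A x||^2 followed by soft-thresholding (sparsity prior).
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.05
x = np.zeros(n_voxels)
for _ in range(500):
    x = x + step * (A.T @ (y - A @ x))
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

print("true occupied voxels:     ", np.flatnonzero(x_true))
print("recovered occupied voxels:", np.flatnonzero(x > 0.5))
```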


Mostofi Lab | X-ray Eyes in the Sky: Drones and WiFi for 3D Through-Wall Imaging


Abstract of 3D Through-Wall Imaging with Unmanned Aerial Vehicles Using WiFi

In this paper, we are interested in the 3D through-wall imaging of a completely unknown area, using WiFi RSSI and Unmanned Aerial Vehicles (UAVs) that move outside of the area of interest to collect WiFi measurements. It is challenging to estimate a volume represented by an extremely high number of voxels with a small number of measurements. Yet many applications are time-critical and/or limited on resources, precluding extensive measurement collection. In this paper, we then propose an approach based on Markov random field modeling, loopy belief propagation, and sparse signal processing for 3D imaging based on wireless power measurements. Furthermore, we show how to design efficient aerial routes that are informative for 3D imaging. Finally, we design and implement a complete experimental testbed and show high-quality 3D robotic through-wall imaging of unknown areas with less than 4% of measurements.

from KurzweilAI » News http://ift.tt/2sRTXDH

Is There a Multidimensional Mathematical World Hidden in the Brain’s Computation?

Two thousand years ago, the ancient Greeks looked into the night sky and saw geometric shapes emerge among the stars: a hunter, a lion, a water vase.

In a way, they used these constellations to make sense of the random scattering of stars in the fabric of the universe. By translating astronomy into shapes, they found a way to seek order and meaning in a highly complex system.

As it turns out, the Greeks were wrong: most stars in a constellation don’t have much to do with one another. But their approach lives on.

This week, the Blue Brain Project proposed a fascinating idea that may explain the complexities of the human brain. Using algebraic topology, a branch of mathematics that describes networks of connections in terms of geometric objects like simplices and the cavities between them, they mapped out a path for complex functions to emerge from the structure of neural networks.

And get this: while the brain physically inhabits our three-dimensional world, its inner connections—mathematically speaking—operate on a much higher dimensional space. In human speak: the assembly and disassembly of neural connections are massively complex, more so than expected. But now we may have a language to describe them.

“We found a world that we had never imagined,” says Dr. Henry Markram, director of the Blue Brain Project and professor at EPFL in Lausanne, Switzerland, who led the study.

This may be why it’s been so difficult to understand the brain, he says. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”

A high-dimensional world

When we think about the brain, branchy neurons and gooey tissue come to mind—definitely 3D objects. Physically speaking, there are no high-dimensional mini-brains hidden within our own, and our neurons don’t jump into a higher plane of existence when they fire away.

Outside of physics, “dimension” is really just a fancy way of describing complexity. Take a group of three neurons that work together (A, B, and C), for example. Information generally passes only one way, from a neuron to its downstream partner, so picture A feeding both B and C, and B feeding C: every pair connected, with a consistent direction. In topology speak, such a fully connected group of three has dimension two.

Similarly, a group of four neurons has dimension three, five neurons dimension four and so on. The more neurons in a group, the higher the dimension—and so the system gets increasingly complex.

“In our study, dimension does not describe spatial dimensions, but rather the topological dimension of the geometric objects we are describing. A 7- or 11-dimensional simplex is still embedded in the physical three-dimensional space,” explains study author Max Nolte, a graduate student at EPFL, to Singularity Hub.
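To make that counting concrete, here is a small sketch (hypothetical connectivity, not Blue Brain Project code) that finds directed simplices in a toy graph: groups of neurons that are all-to-all connected with a consistent direction, where a group of n + 1 such neurons has dimension n.

```python
from itertools import combinations, permutations

# Hypothetical connectivity: (a, b) means neuron a synapses onto neuron b.
edges = {("A", "B"), ("A", "C"), ("B", "C"),     # A, B, C are all-to-all connected
         ("A", "D"), ("B", "D"), ("C", "D")}     # adding D gives a fully connected foursome
neurons = sorted({n for edge in edges for n in edge})

def is_directed_simplex(group):
    """True if some ordering of the group sends every earlier neuron to every later one."""
    return any(all((order[i], order[j]) in edges
                   for i in range(len(order)) for j in range(i + 1, len(order)))
               for order in permutations(group))

for size in range(2, len(neurons) + 1):
    simplices = [g for g in combinations(neurons, size) if is_directed_simplex(g)]
    print(f"dimension {size - 1}: {len(simplices)} directed simplices {simplices}")
```

Running it reports six 1-dimensional simplices (the individual connections), four 2-dimensional ones, and a single 3-dimensional simplex formed by all four neurons.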

Multi-dimensional connections

To begin parsing out the organization of the brain, the team started with functional building blocks called simplices. Each simplex is a special group of neurons connected with each other in a very specific order.

One neuron is very influential and speaks first, one listens to all neurons, and others listen to a few neurons and speak to the ones they’re not listening to, says Nolte. “This specific structure makes sure that the listening neurons can really understand the speaking neurons in a brain where always millions of neurons are talking at the same time, like in a crowded stadium.”

As before, dimensions describe the complexity of a simplex.

In six different virtual brains, each reconstructed from experimental data obtained in rats, the team looked for signs of these abstract mathematical objects. Incredibly, the virtual brains contained extremely complex simplices—up to dimension seven—and roughly 80 million lower dimensional neuron “groups.”

The enormous number of simplices hidden inside the brain suggests that each neuron is part of an immense number of functional groups, many more than previously thought, says Nolte.

In their paper, the researchers attempted to mathematically map the brain’s neuronal networks. The image on the left is a digital copy of the neocortex. Next to it is a simplified image of the brain’s multi-dimensional structures and spaces. Image credit: Blue Brain Project

Emerging function

If simplices are building blocks, then how do they come together to form even more complicated networks?

When the team exposed their virtual brain to a stimulus, the neurons assembled into increasingly intricate networks, like blocks of Lego building a castle.

Again, it’s not necessarily a physical connection. Picture groups of neurons linking to others like a social graph, and the graphs associating into a web or other high-dimensional structure.

The fit wasn’t perfect: in between the higher-dimensional structures were “holes,” places where some connections were missing to make a new web.

Like simplices, holes also have dimensions. In a way, says Nolte, “the dimension of a hole describes how close the simplices were to reaching a higher dimension,” or how well the building blocks associated with each other.
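The “dimension of a hole” has a textbook counterpart in algebraic topology: Betti numbers, which count the independent cavities of each dimension in a complex. The sketch below is a generic computation over GF(2), not anything from the study’s pipeline; it shows that a hollow triangle of three connected neurons encloses one 1-dimensional hole, and that filling in the triangle with a 2-dimensional simplex closes it.

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are given as integer bitmasks."""
    rank = 0
    rows = [r for r in rows if r]
    while rows:
        pivot = rows.pop()
        rank += 1
        low = pivot & -pivot                      # lowest set bit of the pivot row
        rows = [r ^ pivot if r & low else r for r in rows]
        rows = [r for r in rows if r]
    return rank

def betti_numbers(simplices, top_dim):
    """Betti numbers b_0 .. b_top_dim of a complex given as sorted vertex tuples."""
    by_dim = {d: sorted({s for s in simplices if len(s) == d + 1})
              for d in range(top_dim + 2)}
    index = {d: {s: i for i, s in enumerate(by_dim[d])} for d in by_dim}
    rank = {0: 0}                                 # the boundary of a vertex is zero
    for d in range(1, top_dim + 2):
        cols = []
        for s in by_dim[d]:                       # one column per d-simplex
            mask = 0
            for face in combinations(s, d):       # its (d-1)-dimensional faces
                mask |= 1 << index[d - 1][face]
            cols.append(mask)
        rank[d] = gf2_rank(cols)
    # b_d = (number of d-simplices) - rank(boundary_d) - rank(boundary_{d+1})
    return [len(by_dim[d]) - rank[d] - rank[d + 1] for d in range(top_dim + 1)]

hollow = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]   # three neurons, connections only
filled = hollow + [(0, 1, 2)]                          # same, with the face filled in
print("hollow triangle (b0, b1):", betti_numbers(hollow, 1))   # -> [1, 1]: one 1D hole
print("filled triangle (b0, b1):", betti_numbers(filled, 1))   # -> [1, 0]: hole closed
```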

The appearance of progressively higher dimensional holes tells us that neurons in the network respond to stimuli in an “extremely organized manner,” says Dr. Ran Levi at the University of Aberdeen, who also worked on the paper.

When we look at the reaction of the brain over time to a stimulus, we see abstract geometric objects forming and then falling apart as it builds functional networks, says Levi.

The brain first recruits simpler neural networks to build a 1D “frame.” These networks then associate into 2D “walls” with “holes” in between. Fast-forward and increasingly higher dimensional structures and holes form, until they reach peak organization—whatever connections the neurons need to get the job done.

Once there, the entire structure collapses, freeing up the simplices for their next tasks, like sand castles materializing and then disintegrating away.

“We don’t know…what the brain is doing when it forms these cavities,” says Levi to Singularity Hub.

What’s clear, however, is that neurons have to fire in a “fantastically ordered” manner for these high-dimensional structures to occur.

“It is quite clear that this hyper-organized activity is not just a coincidence. This could be the key to understanding what is going on when the brain is active,” says Levi.

Talking in sync

The team also worked out how neurons in the same or different groups talked to one another after a stimulus.

It really depends on where they are in the high-dimensional structure and their own groups.

Imagine two “stranger” neurons chatting away, says Nolte. They’ll probably say many unrelated things, because they don’t know each other.

Now, imagine after a stimulus they form high-dimensional networks. Like Twitter, the network allows one neuron to hear the other, and they may begin repeating some of the things the other one said. If they both follow dozens of other people, their tweets may be even more similar because their thoughts are influenced by a shared crowd.

“Using simplices, we don’t only count how many shared people they are following, but also how these people they are following are connected to each other,” says Nolte. The more interconnected two neurons are—that is, the more simplices they are a part of—the more they fire to a stimulus in the same way.

It really shows the importance of the functional structure of the brain, in that the structure guides the emergence of correlated activity, says Levi.

Previous studies have found that the physical structure of neurons and synapses influence activity patterns; now we know that their connections in “high-dimensional space” also factor in.

Going forward, the team hopes to understand how these complicated, abstract networks guide our thinking and behaviors.

“It’s like finding a dictionary that translates a totally obscure language to another language that we are actually familiar with, even if we don’t necessarily understand all stories written in this language,” says Levi.

Now it’s time to decipher those stories, he adds.

Image credit: Shutterstock

from Singularity Hub http://ift.tt/2rR360e

Deep Learning at the Speed of Light on Nanophotonic Chips

Deep learning has transformed the field of artificial intelligence, but the limitations of conventional computer hardware are already hindering progress. Researchers at MIT think their new “nanophotonic” processor could be the answer by carrying out deep learning at the speed of light.

In the 1980s, scientists and engineers hailed optical computing as the next great revolution in information technology, but it turned out that bulky components like fiber optic cables and lenses didn’t make for particularly robust or compact computers.

In particular, they found it extremely challenging to make scalable optical logic gates, and therefore impractical to make general optical computers, according to MIT physics post-doc Yichen Shen. One thing light is good at, though, is multiplying matrices—arrays of numbers arranged in columns and rows. You can actually mathematically explain the way a lens acts on a beam of light in terms of matrix multiplications.

This also happens to be a core component of the calculations involved in deep learning. Combined with advances in nanophotonics—the study of light’s behavior at the nanometer scale—this has led to a resurgence in interest in optical computing.

“Deep learning is mainly matrix multiplications, so it works very well with the nature of light,” says Shen. “With light you can make deep learning computing much faster and thousands of times more energy-efficient.”
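Shen’s point is easy to see in code: the inference pass of a network is essentially a chain of matrix-vector products with a cheap nonlinearity in between. The sketch below is a generic two-layer example with random placeholder weights, not the model from the paper; on the nanophotonic chip, the W @ x products are the part that would be done optically.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((64, 100))   # layer-1 weights (what would be programmed into a mesh)
W2 = rng.standard_normal((10, 64))    # layer-2 weights
x = rng.standard_normal(100)          # an input vector, e.g. audio features

h = np.maximum(W1 @ x, 0.0)           # matrix multiply (the optical part) + simple nonlinearity
y = W2 @ h                            # another matrix multiply (optical)
print("output scores:", np.round(y, 2))
```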

To demonstrate this, Shen and his MIT colleagues have designed an all-optical chip that can implement artificial neural networks—the brain-inspired algorithms at the heart of deep learning.

In a recent paper in Nature, the group describes a chip made up of 56 interferometers—components that allow the researchers to control how beams of light interfere with each other to carry out mathematical operations.

The processor can be reprogrammed by applying a small voltage to the waveguides that direct beams of light around the processor, which heats them and causes them to change shape.
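One standard way to map an arbitrary weight matrix onto meshes of interferometers, consistent with the general scheme described for this processor, is a singular value decomposition: two meshes realize the unitary factors, a row of attenuators or amplifiers realizes the singular values, and the heated phase shifters are set accordingly. The sketch below only verifies the linear-algebra identity; it does not compute actual phase-shifter settings.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((4, 4))        # a small weight matrix to "program" onto the chip

U, s, Vh = np.linalg.svd(W)            # W = U @ diag(s) @ Vh
x = rng.standard_normal(4)             # input optical amplitudes

# Mesh implementing Vh -> attenuators/amplifiers implementing s -> mesh implementing U.
optical_out = U @ (s * (Vh @ x))
print("max deviation from W @ x:", np.max(np.abs(optical_out - W @ x)))
```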

The chip is best suited to inference tasks, the researchers say, where the algorithm is put to practical use by applying a learned model to analyze new data, for instance to detect objects in an image.

It isn’t great at learning, because heating the waveguides is relatively slow compared to how electronic systems are reprogrammed. So, in their study, the researchers trained the algorithm on a computer before transferring the learned model to the nanophotonic processor to carry out the inference task.

That’s not a major issue. For many practical applications it’s not necessary to carry out learning and inference on the same chip. Google recently made headlines for designing its own deep learning chip, the TPU, which is also specifically designed for inference, and most companies that use a lot of machine learning split the two jobs.

“In many cases they update these models once every couple of months and the rest of the time the fixed model is just doing inference,” says Shen. “People usually separate these tasks. They typically have a server just doing training and another just doing inference, so I don’t see a big problem making a chip focused on inference.”

Once the model has been programmed into the chip, it can then carry out computations at the speed of light using less than one-thousandth the energy per operation compared to conventional electronic chips.

There are limitations, though. Because the chip deals with light waves that operate on the scale of a few microns, there are fundamental limits to how small these chips can get.

"The wavelength really sets the limit of how small the waveguides can be. We won’t be able to make devices significantly smaller. Maybe by a factor of four, but physics will ultimately stop us,” says MIT graduate student Nicholas Harris, who co-authored the paper.

That means it would be difficult to implement neural nets much larger than a few thousand neurons. However, the vast majority of current deep learning algorithms are well within that limit.

The system did achieve significantly lower accuracy on its vowel-recognition test than a standard computer running the same deep learning model, correctly identifying 76.7 percent of vowels compared to 91.7 percent.

But Harris says they think this was largely due to interference between the various heating elements used to program the waveguides, and that it should be easy to fix by using thermal isolation trenches or extra calibration steps.

Importantly, the chips are also built using the same fabrication technology as conventional computer chips, so scaling up production should be easy. Shen said the group has already had interest in their technology from prominent chipmakers.

Pierre-Alexandre Blanche, a professor of optics at the University of Arizona, said he’s very excited by the paper, which he said complements his own work. But he cautioned against getting too carried away.

“This is another milestone in the progress toward useful optical computing. But we are still far away to be competitive with electronics,” he told Singularity Hub in an email. “The argumentation about scalability, power consumption, speed etc. [in the paper] use a lot of conditional tense and assumptions which demonstrate that, if there is potential indeed, there is still a lot of research to be done.”

In particular, he pointed out that the system was only a partial solution to the problem. While the vast majority of neuronal computation involves multiplication of matrices, there is another component: calculating a non-linear response.

In the current paper this aspect of the computation was simulated on a regular computer. The researchers say in future models this function could be carried out by a so-called “saturable absorber” integrated into the waveguides that absorbs less light as the intensity increases.
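As a toy model of what such a saturable absorber would add, the function below passes a larger fraction of the incoming light as the intensity grows, which is the activation-like nonlinearity the researchers have in mind. The functional form and constants are assumptions for illustration, not values from the paper.

```python
def saturable_absorber(intensity, alpha0=0.8, i_sat=1.0):
    """Transmitted intensity: absorption alpha0 / (1 + I/I_sat) weakens as the input gets brighter."""
    return intensity * (1.0 - alpha0 / (1.0 + intensity / i_sat))

for i in (0.1, 0.5, 1.0, 5.0, 20.0):
    print(f"input intensity {i:5.1f} -> transmitted {saturable_absorber(i):6.2f}")
```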

But Blanche notes that this is not a trivial problem and something his group is actually currently working on. “It is not like you can buy one at the drug store,” he says.

Bhavin Shastri, a post-doc at Princeton whose group is also working on nanophotonic chips for implementing neural networks, said the research was important, as enabling matrix multiplications is a key step to enabling full-fledged photonic neural networks.

“Overall, this area of research is poised to usher in an exciting and promising field,” he added. “Neural networks implemented in photonic hardware could revolutionize how machines interact with ultrafast physical phenomena. Silicon photonics combines the analog device performance of photonics with the cost and scalability of silicon manufacturing.”

Stock media provided by across/Pond5.com

from Singularity Hub http://ift.tt/2sPgPVm

Saving the Symbol

We can agree with certain critics of transhumanism that the NBIC convergence calls into question our relationship to language and symbols. And yet, these likely contribute to what makes the human emerge. Can a transhumanist mode of thought meet this challenge? Will symbolism vanish into our computational machines?

from Ethical Technology http://ift.tt/2rKxcCB