The Princess Leia project: ‘volumetric’ 3D images that float in ‘thin air’

http://img.youtube.com/vi/1aAx2uWcENc/0.jpg

Inspired by the iconic Star Wars scene with Princess Leia in distress, Brigham Young University engineers and physicists have created the “Princess Leia project” — a new technology for creating 3D “volumetric images” that float in the air and that you can walk all around and see from almost any angle.*

“Our group has a mission to take the 3D displays of science fiction and make them real,” said electrical and computer engineering professor and holography expert Daniel Smalley, lead author of a Jan. 25 Nature paper on the discovery.

The image of Princess Leia portrayed in the movie is actually not a hologram, he explains. A holographic display scatters light only at a 2D surface, so you have to view it from within a limited range of angles, and the image is normally static. A volumetric display, by contrast, can be seen from almost any angle, can move, and you can even reach your hand into it. Examples include the 3D displays Tony Stark interacts with in Iron Man and the massive image-projecting table in Avatar.*

How to create a 3D volumetric image from a single moving particle

BYU student Erich Nygaard, depicted as a moving 3D image, mimics the Princess Leia projection in the iconic Star Wars scene (“Help me, Obi-Wan Kenobi, you’re my only hope”). (credit: Dan Smalley Lab)

The team’s free-space volumetric display technology, called “Optical Trap Display,” is based on photophoretic** optical trapping (controlled by a laser beam) of a rapidly moving particle (of a plant fiber called cellulose in this case). This technique takes advantage of human persistence of vision (at more than 10 images per second we don’t see a moving point of light, just the pattern it traces in space — the same phenomenon that makes movies and video work).

As the laser beam moves the trapped particle around, three more laser beams illuminate the particle with RGB (red-green-blue) light. The resulting fast-moving dot traces out a color image in three dimensions (you can see the scan lines in one vertical slice of the Princess Leia image above) — producing a full-color, volumetric (3D) still image in air with 10-micrometer resolution, which allows for fine detail. The technology also exhibits low apparent speckle (the annoying specks seen in holograms).***
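To get a feel for what persistence of vision demands of the scanning particle, here is a rough back-of-the-envelope sketch in Python. The millimeter image scale, the 10-micrometer image points, and the 10-plus volume refreshes per second come from the article; the simple path model and the guessed path length are our own assumptions, not a description of the lab's actual scan strategy.

```python
# Back-of-the-envelope estimate of what the trapped particle must do to trace
# a small volumetric image fast enough for persistence of vision.
# Assumption (ours, for illustration): the drawn figure is a sparse line
# drawing whose total path length is ~100x the image size; this is a guess,
# not the lab's actual scan pattern.

image_size_m = 1e-3        # millimeter-scale image (per the article)
point_spacing_m = 10e-6    # 10-micrometer image points
refresh_hz = 10            # at least 10 volumes per second for persistence of vision

path_length_m = 100 * image_size_m              # guessed total path per volume
points_per_volume = path_length_m / point_spacing_m
particle_speed_m_s = path_length_m * refresh_hz
rgb_point_rate_hz = points_per_volume * refresh_hz

print(f"points traced per volume: {points_per_volume:,.0f}")
print(f"required particle speed:  {particle_speed_m_s:.2f} m/s")
print(f"RGB modulation rate:      {rgb_point_rate_hz / 1e3:.0f} kHz")
```

Even under these toy assumptions the particle has to travel on the order of a meter per second while the illumination is modulated at roughly a hundred kilohertz, which gives some intuition for why the optical trap has to hold the particle so firmly.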

Applications in the real (and virtual) world

So far, Smalley and his student researchers have 3D light-printed a butterfly, a prism, the stretch-Y BYU logo, rings that wrap around an arm, and an individual in a lab coat crouched in a position similar to Princess Leia as she begins her projected message. The images in this proof-of-concept prototype are still in the range of millimeters. But in the Nature paper, the researchers say they anticipate that the device “can readily be scaled using parallelism and [they] consider this platform to be a viable method for creating 3D images that share the same space as the user, as physical objects would.”

What about augmented and virtual-reality uses? “While I think this technology is not really AR or VR but just ‘R,’ there are a lot of interesting ways volumetric images can enhance and augment the world around us,” Smalley told KurzweilAI in an email. “A very-near-term application could be the use of levitated particles as ‘streamers’ to show the expected flow of air over actual physical objects. That is, instead of looking at a computer screen to see fluid flow over a turbine blade, you could set a volumetric projector next to the actual turbine blade and see particles form ribbons to show expected fluid flow juxtaposed on the real object.

“In a scaled-up version of the display, a projector could place a superimposed image of a part on an engine, showing a technician the exact location and orientation of that part. An even more refined version could create a magic portal in your home where you could see the size of shoes you just ordered and set your foot inside to (visually) check the fit. Other applications would include sparse telepresence, satellite tracking, command and control surveillance, surgical planning, tissue tagging, catheter guidance and other medical visualization applications.”

How soon? “I won’t make a prediction on exact timing but if we make as much progress in the next four years as we did in the last four years (a big ‘if’), then we would have a display of usable size by the end of that period. We have had a number of interested parties from a variety of fields. We are open to an exclusive agreement, given the right partner.”

* Smalley says he has long dreamed of building the kind of 3D holograms that pepper science-fiction films. But watching inventor Tony Stark thrust his hands through ghostly 3D body armor in the 2008 film Iron Man, Smalley realized that he could never achieve that using holography, the current standard for high-tech 3D display, because Stark’s hand would block the hologram’s light source. “That irritated me,” he says. He immediately tried to work out how to get around that.

** “Photophoresis denotes the phenomenon that small particles suspended in gas (aerosols) or liquids (hydrocolloids) start to migrate when illuminated by a sufficiently intense beam of light.” — Wikipedia

*** Previous researchers have created volumetric imagery, but the Smalley team says it’s the first to use optical trapping and color effectively. “Among volumetric systems, we are aware of only three such displays that have been successfully demonstrated in free space: induced plasma displays, modified air displays, and acoustic levitation displays. Plasma displays have yet to demonstrate RGB color or occlusion in free space. Modified air displays and acoustic levitation displays rely on mechanisms that are too coarse or too inertial to compete directly with holography at present.” — D.E. Smalley et al./Nature


Nature video | Pictures in the air: 3D printing with light


Abstract of A photophoretic-trap volumetric display

Free-space volumetric displays, or displays that create luminous image points in space, are the technology that most closely resembles the three-dimensional displays of popular fiction. Such displays are capable of producing images in ‘thin air’ that are visible from almost any direction and are not subject to clipping. Clipping restricts the utility of all three-dimensional displays that modulate light at a two-dimensional surface with an edge boundary; these include holographic displays, nanophotonic arrays, plasmonic displays, lenticular or lenslet displays and all technologies in which the light scattering surface and the image point are physically separate. Here we present a free-space volumetric display based on photophoretic optical trapping that produces full-colour graphics in free space with ten-micrometre image points using persistence of vision. This display works by first isolating a cellulose particle in a photophoretic trap created by spherical and astigmatic aberrations. The trap and particle are then scanned through a display volume while being illuminated with red, green and blue light. The result is a three-dimensional image in free space with a large colour gamut, fine detail and low apparent speckle. This platform, named the Optical Trap Display, is capable of producing image geometries that are currently unobtainable with holographic and light-field technologies, such as long-throw projections, tall sandtables and ‘wrap-around’ displays.

from KurzweilAI http://ift.tt/2GBGswy

MLHC 2018 : Machine Learning for Healthcare 2018

Machine Learning for Healthcare

MLHC is an annual research meeting that exists to bring together two usually insular disciplines: computer scientists with artificial intelligence, machine learning, and big data expertise, and clinicians/medical researchers. MLHC supports the advancement of data analytics, knowledge discovery, and meaningful use of complex medical data by fostering collaborations and the exchange of ideas between members of these often completely separated communities. To pursue this goal, the event includes invited talks, poster presentations, panels, and ample time for thoughtful discussion and robust debate.

MLHC has a rigorous peer-review process and optional archival proceedings through the Journal of Machine Learning Research proceedings track. You can access the inaugural proceedings here: http://ift.tt/2iibFMD

Calls for Papers

Researchers in machine learning — including those working in statistical natural language processing, computer vision and related sub-fields — when coupled with seasoned clinicians can play an important role in turning complex medical data (e.g., individual patient health records, genomic data, data from wearable health monitors, online reviews of physicians, medical imagery, etc.) into actionable knowledge that ultimately improves patient care. For the last seven years, this meeting has drawn hundreds of clinical and machine learning researchers to frame problems clinicians need solved and discuss machine learning solutions.

This year we are calling for papers in two tracks: a research track and a clinical abstract track.

Important Dates

Deadline for submission: April 20, 2018

Author notification: June 20, 2018

Research Track

We invite submissions that describe novel methods to address the challenges inherent to health-related data (e.g., sparsity, class imbalance, causality, temporal dynamics, multi-modal data). We also invite articles describing the application and evaluation of state-of-the-art machine learning approaches applied to health data in deployed systems. In particular, we seek high-quality submissions on the following topics:

Predicting individual patient outcomes

Mining, processing and making sense of clinical notes

Patient risk stratification

Parsing biomedical literature

Bio-marker discovery

Brain imaging technologies and related models

Learning from sparse/missing/imbalanced data

Time series analysis with medical applications

Medical imaging

Efficient, scalable processing of clinical data

Clustering and phenotype discovery

Methods for vitals monitoring

Feature selection/dimensionality reduction

Text classification and mining for biomedical literature

Exploiting and generating ontologies

ML systems that assist with evidence-based medicine

Research Track Proceedings and Review Process: Accepted submissions will be published through the proceedings track of the Journal of Machine Learning Research. All papers will be rigorously peer-reviewed, and research that has been previously published elsewhere or is currently in submission may not be submitted. However, authors will have the option of only archiving the abstract to allow for future submissions to clinical journals, etc.

from CFPs on Artificial Intelligence : WikiCFP http://ift.tt/2noBSts

Every Study We Could Find on What Automation Will Do to Jobs, in One Chart

You’ve seen the headlines: "Robots Will Destroy Our Jobs—and We’re Not Ready for It." "You Will Lose Your Job to a Robot—and Sooner Than You Think." "Robots May Steal as Many as 800 Million Jobs in the Next 13 Years."

from Communications of the ACM: Artificial Intelligence http://ift.tt/2nsRSKj

Applying Machine Learning to the Universe’s Mysteries

By Lawrence Berkeley National Laboratory

January 31, 2018

Image: digital brain, illustration (credit: Berkeley Lab)

Computers can beat chess champions, simulate star explosions, and forecast global climate. They are also being trained as infallible problem-solvers and fast learners.


And now, physicists at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory and their collaborators have demonstrated that computers are ready to tackle the universe’s greatest mysteries. The team used thousands of images from simulated high-energy particle collisions to train computer networks to identify important features.


The researchers programmed powerful arrays known as neural networks to serve as a sort of hive-like digital brain in analyzing and interpreting the images of the simulated particle debris left over from the collisions. During this test run the researchers found that the neural networks had up to a 95 percent success rate in recognizing important features in a sampling of about 18,000 images.


The study is described in “An Equation-of-State-Meter of Quantum Chromodynamics Transition from Deep Learning,” published in the journal Nature Communications.


The next step will be to apply the same machine learning process to actual experimental data.


Powerful machine learning algorithms allow these networks to improve in their analysis as they process more images. The underlying technology is used in facial recognition and other types of image-based object recognition applications.
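The article doesn't reproduce the team's network, but the kind of deep convolutional classifier it describes, one that maps an image of simulated collision debris to an equation-of-state class, can be sketched roughly as follows. The layer sizes, input shape, two-class output, and the choice of PyTorch are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' code) of a deep convolutional classifier of
# the kind described: it maps a 2D image of simulated collision output to one
# of two equation-of-state classes. Shapes and layer sizes are illustrative.
import torch
import torch.nn as nn

class EoSClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(64), nn.ReLU(),   # input size inferred on first pass
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Toy training step on random stand-in data; the real inputs would be the
# ~18,000 simulated collision images mentioned in the article.
model = EoSClassifier()
images = torch.randn(8, 1, 48, 48)      # a fake batch of 48x48 "images"
labels = torch.randint(0, 2, (8,))      # fake equation-of-state labels
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
print(f"toy batch loss: {loss.item():.3f}")
```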


The images used in this study—relevant to particle-collider nuclear physics experiments at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider and CERN’s Large Hadron Collider—recreate the conditions of a subatomic particle “soup,” a superhot fluid state known as the quark-gluon plasma that is believed to have existed just millionths of a second after the birth of the universe. Berkeley Lab physicists participate in experiments at both of these sites.


“We are trying to learn about the most important properties of the quark-gluon plasma,” says Xin-Nian Wang, a nuclear physicist in the Nuclear Science Division at Berkeley Lab who is a member of the team. Some of these properties are so short-lived and occur at such tiny scales that they remain shrouded in mystery.


In experiments, nuclear physicists use particle colliders to smash together heavy nuclei, like gold or lead atoms that are stripped of electrons. These collisions are believed to liberate particles inside the atoms’ nuclei, forming a fleeting, subatomic-scale fireball that breaks down even protons and neutrons into a free-floating form of their typically bound-up building blocks: quarks and gluons.


Researchers hope that by learning the precise conditions under which this quark-gluon plasma forms, such as how much energy is packed in, and its temperature and pressure as it transitions into a fluid state, they will gain new insights about its component particles of matter and their properties, and about the universe’s formative stages.


But exacting measurements of these properties—the so-called “equation of state” involved as matter changes from one phase to another in these collisions—have proven challenging. The initial conditions in the experiments can influence the outcome, so it’s challenging to extract equation-of-state measurements that are independent of these conditions.


“In the nuclear physics community, the holy grail is to see phase transitions in these high-energy interactions, and then determine the equation of state from the experimental data,” Wang says. “This is the most important property of the quark-gluon plasma we have yet to learn from experiments.”


Researchers also seek insight about the fundamental forces that govern the interactions between quarks and gluons, what physicists refer to as quantum chromodynamics.


Long-Gang Pang, the lead author of the latest study and a Berkeley Lab-affiliated postdoctoral researcher at UC Berkeley, says that in 2016, while he was a postdoctoral fellow at the Frankfurt Institute for Advanced Studies, he became interested in the potential for artificial intelligence to help solve challenging science problems.


He saw that one form of AI, known as a deep convolutional neural network—with architecture inspired by the image-handling processes in animal brains—appeared to be a good fit for analyzing science-related images.


“These networks can recognize patterns and evaluate board positions and selected movements in the game of Go,” Pang says. “We thought, ‘If we have some visual scientific data, maybe we can get an abstract concept or valuable physical information from this.'”


Wang adds, “With this type of machine learning, we are trying to identify a certain pattern or correlation of patterns that is a unique signature of the equation of state.” So after training, the network can pinpoint on its own the portions of and correlations in an image, if any exist, that are most relevant to the problem scientists are trying to solve.


Accumulation of data needed for the analysis can be very computationally intensive, Pang says, and in some cases it took about a full day of computing time to create just one image. When researchers employed an array of GPUs that work in parallel—GPUs are graphics processing units that were first created to enhance video game effects and have since exploded into a variety of uses—they cut that time down to about 20 minutes per image.
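The quoted timings imply a speedup of roughly seventy-fold, which a two-line check makes explicit:

```python
# Speedup implied by the timings quoted above: about a full day per image on
# conventional hardware vs. about 20 minutes per image on a parallel GPU array.
cpu_minutes_per_image = 24 * 60
gpu_minutes_per_image = 20
print(f"implied speedup: ~{cpu_minutes_per_image / gpu_minutes_per_image:.0f}x")  # ~72x
```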


They used computing resources at Berkeley Lab’s National Energy Research Scientific Computing Center in their study, with most of the computing work focused at GPU clusters at GSI in Germany and Central China Normal University in China.


A benefit of using sophisticated neural networks, the researchers note, is that they can identify features that weren’t even sought in the initial experiment, like finding a needle in a haystack when you weren’t even looking for it. And they can extract useful details even from fuzzy images.


“Even if you have low resolution, you can still get some important information,” Pang says.


Discussions are already underway to apply the machine learning tools to data from actual heavy-ion collision experiments, and the simulated results should be helpful in training neural networks to interpret the real data.


“There will be many applications for this in high-energy particle physics,” Wang says, beyond particle-collider experiments.


Also participating in the study were Kai Zhou, Nan Su, Hannah Petersen, and Horst Stocker from the following institutions: Frankfurt Institute for Advanced Studies, Goethe University, GSI Helmholtzzentrum für Schwerionenforschung (GSI), and Central China Normal University. The work was supported by the U.S. Department of Energy’s Office of Science, the U.S. National Science Foundation, the Helmholtz Association, GSI, Samson AG, Goethe University, the National Natural Science Foundation of China, the Major State Basic Research Development Program in China, and the Helmholtz International Center for the Facility for Antiproton and Ion Research.


The Emotional Chatbots Are Here to Probe Our Feelings

By Wired

January 31, 2018

Image: Software developer Eugenia Kuyda is releasing the code to her Replika chatbot, which can inject emotion into conversations. (credit: Hot Little Potato)

When Eugenia Kuyda created her chatbot, Replika, she wanted it to stand out among the voice assistants and home robots that had begun to take root in people’s lives.


[Suggestion] Final year project ideas for college

http://ift.tt/1CNTXkp

Hello guys, I am currently pursuing a Bachelor’s degree in Computer Science, and I am in my final year. Do you have any suggestions for a final-year project topic to research?

submitted by /u/wanjun23

from Artificial Intelligence http://ift.tt/2BGIjMA

Enzyme Designed Entirely From Scratch Opens a World of Biological Possibility

Ann Donnelly was utterly confused the first time she examined her protein. On all counts, it behaved like an enzyme—a protein catalyst that speeds up biological reactions in cells. One could argue that enzymes, sculpted by eons of evolution, make life possible.

There was just one problem: her protein wasn’t evolved. It wasn’t even “natural.” It was, in fact, a completely artificial construct made with random sequences of DNA—something that’s never existed in nature before.

Donnelly was looking at the first artificial enzyme. An artificial protein that, by all accounts, should not be able to play nice with the intricate web of biochemical components and reactions that support life.

Yet when given to a mutant bacterium that lacks the ability to metabolize iron, the enzyme Syn-F4 filled in the blank. It kickstarted the bacterium’s iron-processing pathways, naturally replacing the organism’s missing catalyst—even though it was like nothing seen in life.

“That was an incredible and unbelievable moment for me—unbelievable to the point that I didn’t want to say anything until I had repeated it several times,” says Donnelly, who published her results in Nature Chemical Biology.

The big picture? We are no longer bound by the chemical rules of nature. In a matter of months, scientists can engineer biological catalysts that normally take millions of years to evolve and fine-tune.

And with brand new enzymes comes the possibility of brand new life.

“Our work suggests that the construction of artificial genomes capable of sustaining cell life may be within reach,” says Dr. Michael Hecht at Princeton University, who led the study.

Cogs in the Machine

In 2011, Hecht was examining the limits of artificial biology.

At the time, many synthetic biologists had begun viewing biological processes as Lego blocks—something you could deconstruct, isolate, and reshuffle to build new constructs to your liking.

But Hecht was interested in something a little different. Rather than copy-and-pasting existing bits of genetic code across organisms, could we randomly build brand new molecular machines—proteins—from scratch?

Ultimately, it comes down to numbers. Like DNA, proteins are made up of a finite selection of chemical components: 20 amino acids, which combine in unique sequences into a chain.

For an average protein of 100 “letters,” the combinations are astronomically large. Yet an average cell produces only about 100,000 different proteins. Why is this? Do known proteins have some fundamental advantage? Or has evolution simply not yet had the chance to fashion even better workers? Could we tap into all that sweet potential?
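The “astronomically large” claim is easy to make concrete using only the numbers in the paragraph above (20 amino acids, a 100-letter chain, roughly 100,000 distinct proteins in an average cell):

```python
# Size of sequence space for a 100-residue protein built from 20 amino acids,
# compared with the ~100,000 different proteins an average cell produces.
import math

sequence_space = 20 ** 100       # all possible 100-letter chains
proteins_per_cell = 100_000

print(f"possible sequences: ~10^{math.log10(sequence_space):.0f}")
print(f"fraction sampled by one cell: ~10^{math.log10(proteins_per_cell / sequence_space):.0f}")
```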

A New Toolkit

Hecht and his group used a computer program to randomly generate one million new sequences. The chains were then folded into intricate 3D shapes based on the rules of biophysics.
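The generation code itself isn't published in this piece, but the basic step, composing random amino-acid sequences into a candidate library, can be sketched like this. The one-million library size matches the article; uniform per-position sampling and a fixed length of 100 residues are illustrative assumptions, and the folding step the article mentions is not modeled here.

```python
# Hedged sketch of the library-generation step: compose random amino-acid
# sequences as candidate proteins. The one-million library size comes from the
# article; uniform sampling and a fixed length of 100 residues are assumptions,
# and the subsequent folding "based on the rules of biophysics" is not modeled.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids
LIBRARY_SIZE = 1_000_000               # per the article; shrink for a quick test

def random_protein(length: int = 100) -> str:
    return "".join(random.choices(AMINO_ACIDS, k=length))

library = (random_protein() for _ in range(LIBRARY_SIZE))   # generate lazily
print(next(library)[:30], "...")       # peek at the first candidate sequence
```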

The litmus test: one by one, the team inserted this new library of artificial proteins into mutant strains of bacteria that lacked essential genes. Without these genes, the mutants couldn’t adapt to harsh new environments—say, high salts—and died.

Remarkably, a small group of artificial proteins saved these mutants from certain death. It’s like randomly shuffling letters of known words and phrases to make new ones, yet somehow the new vocabulary makes perfect sense in an existing paragraph.

“The information encoded in these artificial genes is completely novel—it does not come from, nor is it significantly related to, information encoded by natural genes, and yet the end result is a living, functional microbe,” Michael Fisher, a graduate student in Hecht’s lab, said at the time.

How?

A series of subsequent studies showed that many of these artificial proteins worked by boosting the cell’s backup biological processes—increasing the expression of genes that allow them to survive under selection pressure, for example.

The lab thought they had it nailed, until one protein came along: Syn-F4.

The New Catalyst

Syn-F4 is a direct descendant of one of the original “new” proteins. Earlier, the team discovered that the protein could help mutant bacteria thrive in a low-iron environment—just not very well.

Mimicking evolution, they then randomly mutated some of the protein’s “letters” to build a library of variants, and screened them for candidates that worked even better than the original at supporting low-iron life. The result? Syn-F4.
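That "mimicking evolution" step amounts to a mutate-and-screen loop, sketched below under loose assumptions: the fitness function is a placeholder for the real low-iron growth assay, and the mutation rate and library size are arbitrary choices rather than the team's protocol.

```python
# Illustrative mutate-and-screen loop in the spirit of the directed-evolution
# step described above. The fitness function is a placeholder for the real
# low-iron growth assay; mutation count and library size are arbitrary.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(seq: str, n_mutations: int = 3) -> str:
    s = list(seq)
    for pos in random.sample(range(len(s)), n_mutations):
        s[pos] = random.choice(AMINO_ACIDS)
    return "".join(s)

def fitness(seq: str) -> float:
    # Placeholder: in the real experiment this would be growth/survival of the
    # iron-starved mutant bacteria carrying this variant.
    return random.random()

parent = "".join(random.choices(AMINO_ACIDS, k=100))    # stand-in parent protein
variants = [mutate(parent) for _ in range(10_000)]
best = max(variants, key=fitness)
changed = sum(a != b for a, b in zip(parent, best))
print(f"best variant differs from the parent at {changed} positions")
```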

Donnelly took on the detective work. Normally, scientists can scour the sequence of a newly discovered protein, match it up to similar others and begin guessing how it works. This was obviously not possible here, since Syn-F4 doesn’t look like anything in nature.

The protein also escaped all attempts at crystallizing it, which would freeze it in 3D and allow scientists to figure out its structure.

In a clever series of experiments, Donnelly cracked the mystery. Like baking soda, Syn-F4 sped up iron-releasing reactions when mixed with the right ingredients. What’s more, it’s also extremely picky about its “clients”: it would only grab onto one structural form of an ingredient (say, a form that looks like your left hand) but not its mirror image (the right hand)—a hallmark of enzymes.

Several more years of digging unveiled a true gem: Syn-F4’s catalytic core, a short sequence hidden in the protein’s heart that makes its enzyme activity possible.

Mutating the protein’s letters one by one, Donnelly tenaciously picked out those that rendered the protein inactive. This process eventually identified key letters that likely form the protein’s so-called “active site,” scattered across Syn-F4’s sequence.
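The logic of that experiment, changing one position at a time, testing whether activity survives, and flagging the positions where it does not, is simple to express. In the sketch below, activity_assay is a hypothetical stand-in for the real iron-release measurement, and the alanine substitution is a common convention rather than a detail reported here.

```python
# Sketch of single-position scanning mutagenesis, the logic behind locating the
# residues Syn-F4's activity depends on. `activity_assay` is hypothetical, a
# stand-in for the real enzymatic assay; substituting alanine is a common
# convention, not necessarily what was done in this study.
from typing import Callable, List

def scan_for_critical_positions(seq: str,
                                activity_assay: Callable[[str], float],
                                threshold: float = 0.1) -> List[int]:
    """Return positions where a single substitution kills enzymatic activity."""
    baseline = activity_assay(seq)
    critical = []
    for i, residue in enumerate(seq):
        if residue == "A":
            continue                       # already alanine; nothing to swap
        variant = seq[:i] + "A" + seq[i + 1:]
        if activity_assay(variant) < threshold * baseline:
            critical.append(i)             # activity lost: likely active-site residue
    return critical
```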

Like petals on a rosebud, the process of folding brings these active letters together into a 3D core. And like Syn-F4 itself, its structure looks completely different from that of any native enzyme.

The team explains, “We don’t think Syn-F4 is replacing the mutant bacteria’s missing enzymes; we think it’s working through a completely different mechanism.”

“We have a completely novel protein that’s capable of sustaining life by actually being an enzyme—and that’s just crazy,” says Hecht.

A New Life?

The implications are huge, says Dr. Justin Siegel at the UC Davis Genome Center, who wasn’t involved in the study.

Biotechnology routinely relies on enzymes for industrial applications, such as making drugs, fuel, and materials.

“We are no longer limited to the proteins produced by nature, and that we can develop proteins—that would normally have taken billions of years to evolve—in a matter of months,” he says.

But even more intriguing is this: the study shows that enzymes made naturally aren’t the solution to life. They’re just one solution.

This means we need to broaden our search for biochemical reactions and life, on Earth and elsewhere. After all, if multiple solutions exist for a biological problem, it makes it much more likely that one has already been found elsewhere in the universe.

Back on Earth, Hecht is extremely excited for the future of artificial life.

“We’re starting to code for an artificial genome,” he says. Right now we’ve replaced about 0.1 percent of genes in a bacterium, so it’s just a weird organism with some funky artificial genes.

But suppose you replace 20 percent of genes, 30 percent, or more. Suppose a cohort of completely artificial enzymes runs the bacteria’s metabolism.

“Then it’s not just a weird E. coli with some artificial genes, then you have to say it’s a novel organism,” he says.

Image Credit: Ann Donnelly/Hecht Lab/Princeton University

from Singularity Hub http://ift.tt/2BFOhxD