How Fast Is AI Progressing? Stanford’s New Report Card for Artificial Intelligence

When? This is probably the question that futurists, AI experts, and even people with a keen interest in technology dread the most. It has proved famously difficult to predict when new developments in AI will take place. The scientists at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 thought that perhaps two months would be enough to make “significant advances” in a whole range of complex problems, including computers that can understand language, improve themselves, and even understand abstract concepts.

Sixty years later, these problems are still not solved. The AI Index, from Stanford, is an attempt to measure how much progress has actually been made in artificial intelligence.

The index takes a unique approach, aggregating data across many different domains. It contains “Volume of Activity” metrics, which measure things like venture capital investment, attendance at academic conferences, published papers, and so on. The results are what you might expect: a tenfold increase in academic activity since 1996, explosive growth in startups focused on AI, and corresponding venture capital investment. The issue with these metrics is that they measure AI hype as much as AI progress. The two might be correlated, but then again, they might not be.

The index also scrapes data from the popular code-hosting website GitHub, which hosts more source code than any other site in the world. This lets the team track how much AI-related software people are creating, as well as interest in popular machine learning packages like TensorFlow and Keras. The index also keeps track of the sentiment of news articles that mention AI: surprisingly, given concerns about an AI apocalypse and an employment crisis, articles considered “positive” outweigh the “negative” by three to one.
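As a rough sketch of the kind of signal that can be pulled from GitHub (our own illustration, not the index’s actual methodology), the public GitHub REST API exposes star counts for repositories like TensorFlow and Keras, which can serve as a crude proxy for community interest:

```python
# Toy sketch: query GitHub's public REST API for the star counts of two
# popular machine learning repositories, as a crude proxy for interest.
# (Unauthenticated requests are rate-limited; illustration only.)
import json
import urllib.request

REPOS = ["tensorflow/tensorflow", "keras-team/keras"]

for repo in REPOS:
    with urllib.request.urlopen(f"https://api.github.com/repos/{repo}") as resp:
        data = json.load(resp)
    # 'stargazers_count' is the repository's current star total.
    print(f"{repo}: {data['stargazers_count']} stars")
```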

But again, this could all just be a measure of AI enthusiasm in general.

No one would dispute that we’re in an age of considerable AI hype, but the history of AI is littered with booms and busts: growth spurts alternating with AI winters. So the AI Index also attempts to track the progress of algorithms against a series of specific tasks. How well does computer vision perform at the Large Scale Visual Recognition Challenge? (Superhuman at annotating images since 2015, although systems that combine natural language processing and image recognition still can’t answer questions about images very well.) Speech recognition on phone calls is almost at parity with human transcribers.

In other narrow fields, AIs are still catching up to humans. Machine translation might be good enough that you can usually get the gist of what’s being said, but it still scores poorly on the BLEU metric for translation accuracy. The AI Index even keeps track of how well programs can do on the SAT, so if you’ve taken that test, you can compare your score to an AI’s.
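For reference, BLEU compares a machine translation with one or more human reference translations, combining modified n-gram precisions $p_n$ with a brevity penalty that punishes overly short output. With the usual uniform weights $w_n = 1/N$ (typically $N = 4$), the score is

$$\mathrm{BLEU} = \min\!\left(1,\ e^{\,1 - r/c}\right)\cdot \exp\!\left(\sum_{n=1}^{N} w_n \log p_n\right),$$

where $c$ is the length of the candidate translation and $r$ the effective length of the references; scores near 1 (often reported as percentages) indicate close agreement with the references.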

Measuring the performance of state-of-the-art AI systems on narrow tasks is useful and fairly easy to do. You can define a metric that’s simple to calculate, or devise a competition with a scoring system, and compare new software with old in a standardized way. Academics can still debate the best way to assess translation or natural language understanding, though. The Loebner Prize, a simplified question-and-answer Turing test, recently adopted Winograd schema questions, which rely on contextual understanding (for example: “The trophy doesn’t fit in the suitcase because it’s too big.” What is too big, the trophy or the suitcase?). AI still has more difficulty with these.

Where the assessment really becomes difficult, though, is in trying to map these narrow-task performances onto general intelligence. This is hard partly because we understand so little about our own intelligence. Computers are superhuman at chess, and now even at a more complex game like Go. The braver forecasters who did offer timelines found that AlphaGo’s success arrived faster than they expected, but does that necessarily mean we’re closer to general intelligence than they thought?

Here is where it’s harder to track progress.

We can note the specialized performance of algorithms on tasks previously reserved for humans—for example, the index cites a Nature paper showing that AI can now detect skin cancer more accurately than dermatologists. We could even try to track one specific approach to general AI: for example, how many regions of the brain have been successfully simulated by a computer? Alternatively, we could simply keep track of the number of professions and professional tasks that can now be performed to an acceptable standard by AI.

“We are running a race, but we don’t know how to get to the endpoint, or how far we have to go.”

Progress in AI over the next few years is far more likely to resemble a gradual rising tide—as more and more tasks are turned into algorithms and accomplished by software—than the tsunami of a sudden intelligence explosion or general intelligence breakthrough. One possibility might be to measure an AI system’s ability to learn and adapt to the work routines of humans in office-based tasks.

The AI Index doesn’t attempt to offer a timeline for general intelligence, as the concept is still too nebulous and ill-defined.

Michael Wooldridge, head of computer science at the University of Oxford, notes, “The main reason general AI is not captured in the report is that neither I nor anyone else would know how to measure progress.” He is concerned about another AI winter, and about “charlatans and snake-oil salesmen” exaggerating the progress that has been made.

A key concern that all the experts bring up is the ethics of artificial intelligence.

Of course, you don’t need general intelligence to have an impact on society; algorithms are already transforming our lives and the world around us. After all, why else are Amazon, Google, and Facebook worth so much money? The experts agree on the need for an index to measure the benefits of AI, the interactions between humans and AIs, and our ability to program values, ethics, and oversight into these systems.

Barbara Grosz of Harvard champions this view, saying, “It is important to take on the challenge of identifying success measures for AI systems by their impact on people’s lives.”

For those concerned about an AI employment apocalypse, tracking the use of AI in the fields considered most vulnerable (say, self-driving cars replacing taxi drivers) would be a good idea. Society’s flexibility in adapting to AI trends should be measured, too: are we providing people with enough educational opportunities to retrain? Are we teaching them to work alongside the algorithms, treating them as tools rather than replacements? The experts also note that the data suffer from being US-centric.

We are running a race, but we don’t know how to get to the endpoint, or how far we have to go. We are judging by the scenery, and by how far we’ve already run. For this reason, measuring progress is a daunting task that starts with defining progress. But the AI Index, as an annual collection of relevant information, is a good start.

Image Credit: Photobank gallery / Shutterstock.com

from Singularity Hub http://ift.tt/2DujJno


The Future (The MIT Press Essential Knowledge series)

http://ift.tt/2FOoarB

How the future has been imagined and made, through the work of writers, artists, inventors, and designers.

The future is like an unwritten book. It is not something we see in a crystal ball, or can only hope to predict, like the weather. In this volume of the MIT Press’s Essential Knowledge series, Nick Montfort argues that the future is something to be made, not predicted. Montfort offers what he considers essential knowledge about the future, as seen in the work of writers, artists, inventors, and designers (mainly in Western culture) who developed and described the core components of the futures they envisioned. Montfort’s approach is not that of futurology or scenario planning; instead, he reports on the work of making the future — the thinkers who devoted themselves to writing pages in the unwritten book. Douglas Engelbart, Alan Kay, and Ted Nelson didn’t predict the future of computing, for instance. They were three of the people who made it.

Montfort focuses on how the development of technologies — with an emphasis on digital technologies — has been bound up with ideas about the future. Readers learn about kitchens of the future and the vision behind them; literary utopias, from Plato’s Republic to Edward Bellamy’s Looking Backward and Charlotte Perkins Gilman’s Herland; the Futurama exhibit at the 1939 New York World’s Fair; and what led up to Tim Berners-Lee’s invention of the World Wide Web. Montfort describes the notebook computer as a human-centered alternative to the idea of the computer as a room-sized “giant brain”; speculative practice in design and science fiction; and, throughout, the best ways to imagine and build the future.

—Publisher

from KurzweilAI http://ift.tt/2mGYhB6

Tracking a thought’s fleeting trip through the brain

https://www.youtube.com/embed/PgU4s_U2Lb4


Repeating a word: as the brain receives (yellow), interprets (red), and responds (blue) within a second, the prefrontal cortex (red) coordinates all areas of the brain involved. (video credit: Avgusta Shestyuk/UC Berkeley)

By recording the electrical activity of neurons directly from the surface of the brain, using electrocorticography (ECoG)*, UC Berkeley scientists were able to track the flow of thought across the brain in real time for the first time. They showed clearly how the prefrontal cortex at the front of the brain coordinates activity to help us act in response to a perception.

Here’s what they found.

For a simple task, such as repeating a word seen or heard:

The visual and auditory cortices react first to perceive the word. The prefrontal cortex then kicks in to interpret the meaning, followed by activation of the motor cortex (preparing for a response). During the half-second between stimulus and response, the prefrontal cortex remains active to coordinate all the other brain areas.

For a particularly hard task, like determining the antonym of a word:

The brain takes several seconds to respond, and during that time the prefrontal cortex recruits other areas of the brain — probably including memory networks (not tracked in this study). The prefrontal cortex then hands off to the motor cortex to generate a spoken response.

In both cases, the brain begins to prepare the motor areas to respond very early (during initial stimulus presentation) — suggesting that we get ready to respond even before we know what the response will be.

“This might explain why people sometimes say things before they think,” said Avgusta Shestyuk, a senior researcher in UC Berkeley’s Helen Wills Neuroscience Institute and lead author of a paper reporting the results in the current issue of Nature Human Behaviour.


For a more difficult task, like saying a word that is the opposite of another word, people’s brains required 2–3 seconds to detect (yellow), interpret and search for an answer (red), and respond (blue) — with sustained prefrontal lobe activity (red) coordinating all areas of the brain involved. (video credit: Avgusta Shestyuk/UC Berkeley).

The research backs up what neuroscientists have pieced together over the past decades from studies in monkeys and humans.

“These very selective studies have found that the frontal cortex is the orchestrator, linking things together for a final output,” said co-author Robert Knight, a UC Berkeley professor of psychology and neuroscience and a professor of neurology and neurosurgery at UCSF. “Here we have eight different experiments, some where the patients have to talk and others where they have to push a button, where some are visual and others auditory, and all found a universal signature of activity centered in the prefrontal lobe that links perception and action. It’s the glue of cognition.”

Researchers at Johns Hopkins University, California Pacific Medical Center, and Stanford University were also involved. The work was supported by the National Science Foundation, National Institute of Mental Health, and National Institute of Neurological Disorders and Stroke.

* Other neuroscientists have used functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) to record activity in the thinking brain. The UC Berkeley scientists instead employed a much more precise technique, electrocorticography (ECoG), which records from several hundred electrodes placed on the brain surface and detects activity in the thin outer region, the cortex, where thinking occurs. ECoG provides better time resolution than fMRI and better spatial resolution than EEG, but requires access to epilepsy patients undergoing highly invasive surgery involving opening the skull to pinpoint the location of seizures. The new study involved 16 epilepsy patients who agreed to participate in experiments while undergoing epilepsy surgery at UC San Francisco and California Pacific Medical Center in San Francisco, Stanford University in Palo Alto, and Johns Hopkins University in Baltimore. Once the electrodes were placed on each patient’s brain, the researchers conducted a series of eight tasks that included visual and auditory stimuli. The tasks ranged from simple, such as repeating a word or identifying the gender of a face or a voice, to complex, such as determining a facial emotion, uttering the antonym of a word, or assessing whether an adjective describes the patient’s personality.


Abstract of Persistent neuronal activity in human prefrontal cortex links perception and action

How do humans flexibly respond to changing environmental demands on a subsecond temporal scale? Extensive research has highlighted the key role of the prefrontal cortex in flexible decision-making and adaptive behaviour, yet the core mechanisms that translate sensory information into behaviour remain undefined. Using direct human cortical recordings, we investigated the temporal and spatial evolution of neuronal activity (indexed by the broadband gamma signal) in 16 participants while they performed a broad range of self-paced cognitive tasks. Here we describe a robust domain- and modality-independent pattern of persistent stimulus-to-response neural activation that encodes stimulus features and predicts motor output on a trial-by-trial basis with near-perfect accuracy. Observed across a distributed network of brain areas, this persistent neural activation is centred in the prefrontal cortex and is required for successful response implementation, providing a functional substrate for domain-general transformation of perception into action, critical for flexible behaviour.

from KurzweilAI http://ift.tt/2mGARfh

Enlightenment Now: The Case for Reason, Science, Humanism, and Progress

http://ift.tt/2DJbTE5

“A terrific book…[Pinker] recounts the progress across a broad array of metrics, from health to wars, the environment to happiness, equal rights to quality of life.” –The New York Times

The follow-up to Pinker’s groundbreaking The Better Angels of Our Nature presents the big picture of human progress: people are living longer, healthier, freer, and happier lives, and while our problems are formidable, the solutions lie in the Enlightenment ideal of using reason and science.

Is the world really falling apart? Is the ideal of progress obsolete? In this elegant assessment of the human condition in the third millennium, cognitive scientist and public intellectual Steven Pinker urges us to step back from the gory headlines and prophecies of doom, which play to our psychological biases. Instead, follow the data: In seventy-five jaw-dropping graphs, Pinker shows that life, health, prosperity, safety, peace, knowledge, and happiness are on the rise, not just in the West, but worldwide. This progress is not the result of some cosmic force. It is a gift of the Enlightenment: the conviction that reason and science can enhance human flourishing.

Far from being a naïve hope, the Enlightenment, we now know, has worked. But more than ever, it needs a vigorous defense. The Enlightenment project swims against currents of human nature–tribalism, authoritarianism, demonization, magical thinking–which demagogues are all too willing to exploit. Many commentators, committed to political, religious, or romantic ideologies, fight a rearguard action against it. The result is a corrosive fatalism and a willingness to wreck the precious institutions of liberal democracy and global cooperation.

With intellectual depth and literary flair, Enlightenment Now makes the case for reason, science, and humanism: the ideals we need to confront our problems and continue our progress.

—Publisher

from KurzweilAI http://ift.tt/2Dht0eU

Why Gene Silencing Could Launch a New Class of Blockbuster Drugs

Long before CRISPR, there was gene silencing.

Ever since the Human Genome Project deciphered our genetic bible in the early 2000s, scientists have dreamed of curing inherited diseases at the source. The first audacious idea? Shoot the messenger.

Unlike CRISPR, which latches onto a gene and snips it out, gene silencing leaves the genome intact. Instead, it targets another molecule—RNA—the “messenger” that shuttles DNA instructions to the cell’s protein-making factory.

The idea is extremely elegant in its simplicity: destroy RNA and nix mutant proteins before they’re ever made.

If realized, this new class of drugs could overhaul modern medicine. Over 85 percent of the proteins in the body can’t be targeted with conventional chemical drugs; gene silencing opens up an enormous portion of the genome to intervention. Gene-silencing drugs may well be the next great class of drugs—or even the future of medicine.

Yet what followed the initial wave of excitement was two decades of failed attempts and frustration.

Pharmaceutical companies pulled out. Investment funding dried up. Interest moved to CRISPR and other gene editing tools. For a while, it seemed as if gene silencing was slowly being relegated to the annals of forgotten scientific history. Until now.

Last month, Ionis Pharmaceuticals announced positive results for a groundbreaking gene silencing trial for Huntington’s disease. The drug, an antisense oligonucleotide (ASO), successfully lowered the levels of a toxic protein in the brains of 46 early-stage patients.

This is huge. For one, it’s a proof-of-concept that gene silencing works as a therapeutic strategy. For another, it shows that the drug can tunnel through the notorious blood-brain barrier—a tight-knit wall of cells that blocks off and protects the brain from toxic molecules in the body.

The trial, though small, once again piqued big pharma’s interest. Roche, the Swiss pharmaceutical giant, licensed the drug IONIS-HTTRX upon seeing its promising results, shelling out $45 million to push its development down the pipeline towards larger trials.

If the results are replicated in a larger patient population, this will be a true breakthrough for Huntington’s disease.

“For the first time a drug has lowered the level of the toxic disease-causing protein in the nervous system, and the drug was safe and well-tolerated,” says Dr. Sarah Tabrizi, a professor at University College London (UCL) who led the phase one trial. “This is probably the most significant moment in the history of Huntington’s since the gene [was isolated].”

But perhaps more far-reaching is this: Huntington’s disease is only one of many neurodegenerative disorders of the brain that slowly kill off resident neurons. Parkinson’s disease, amyotrophic lateral sclerosis (Lou Gehrig’s disease), and even Alzheimer’s also fall into that category.

None of these devastating diseases currently have a cure. The success of IONIS-HTTRX now lights a way. As a gene-silencing ASO, the drug is based on a simple Lego-like principle: you can synthesize similar ASOs that block the production of other misshapen proteins that lead to degenerative brain disorders.

To Dr. John Hardy, a neuroscientist at UCL who wasn’t involved in the study, the future for gene silencing looks bright.

“I don’t want to overstate this too much,” he says, “but if it works for one, why can’t it work for a lot of them? I am very, very excited.”

A Roadmap for Tinkering With Genes

ASOs may be the wild new player within our current pharmacopeia, but when it comes to genetic regulation, they’re old-school.

Lessons learned from their development will no doubt help push other gene silencing technologies or even CRISPR towards clinical success.

Within every cell, the genes encoded in DNA are transcribed into RNA molecules. These messengers float to the cell’s protein-making factory, carrying in their sequence (made up of the familiar A, G, and C, plus the curious U) the coding information that directs which proteins are made.

Most of our current pills directly latch onto proteins to enhance or block their function. ASOs, on the other hand, work on RNA.

In essence, ASOs are short, synthetic single strands of DNA, not unlike the genetic material already present in your cells. Scientists can engineer an ASO sequence that latches onto a specific RNA molecule—say, the one that makes the mutant protein mHtt, which leads to the symptoms of Huntington’s disease.

ASOs are bad news for RNA. In some cases, the ASO jams the messenger, preventing it from delivering its genetic message to the cell’s protein factory. In others, the drug calls a “scissor-like” enzyme into action, causing it to chop up the target RNA. No RNA, no mutant protein, no disease.
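To make the base-pairing idea concrete, here is a minimal, purely illustrative sketch (our own, using a made-up target sequence rather than the real HTT transcript, and ignoring the chemical modifications and off-target screening that real ASO design involves): an antisense DNA sequence is simply the reverse complement of the RNA stretch it is meant to bind.

```python
# Minimal illustration of antisense base-pairing: a DNA oligo binds its RNA
# target via Watson-Crick pairing (A-T/U, G-C), so its sequence is the
# reverse complement of the target. Real ASO design also involves backbone
# chemistry and off-target screening, none of which is modeled here.

RNA_TO_DNA_COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def design_antisense(rna_target_5_to_3: str) -> str:
    """Return a DNA antisense sequence (written 5'->3') for an RNA target."""
    return "".join(RNA_TO_DNA_COMPLEMENT[base] for base in reversed(rna_target_5_to_3))

# Hypothetical 18-nucleotide stretch of a target mRNA (not the real HTT sequence).
target_rna = "AUGGCAGUCCUGAAGCUU"
print(design_antisense(target_rna))
```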

While the biology is solid, a surprising roadblock tripped scientists up for decades: getting ASO molecules inside the cell and nucleus, which harbors the cell’s genetic material.

In short, “naked” ASOs are prime targets for the body’s defenses. Enzymes in the blood chew them up. The kidneys filter them out. Even when they manage to reach the right organ, they may get routed to the cell’s waste-disposal system and degraded before they have a chance to work their magic.

What’s more, the body’s defense system sometimes mistakes ASOs for viruses, which leads to a double whammy: not only is the drug destroyed, but the body also mounts an immune response, which in some cases can be deadly.

If these troubles sound familiar, you’re right: they’re remarkably similar to the ones that CRISPR has to tackle. But IONIS-HTTRX, in its success, offers some important tips to the newcomer.

For example, covering up “trouble spots” on the drug that stimulate the immune response helps calm the body down—a strategy CRISPR scientists are likely to adopt given that a recent study found signs of antibodies against the technology’s major component, Cas9.

A True Breakthrough for Huntington’s

IONIS-HTTRX signals a potential new age for tackling devastating brain disorders. But broad impacts aside, for patients with Huntington’s disease, the success couldn’t have arrived sooner.

“This is a very exciting day for patients and families affected by this devastating genetic brain disorder. For the first time we have evidence that a treatment can decrease levels of the toxic disease-causing protein in patients,” says Dr. Blair Leavitt of the University of British Columbia, who oversaw the Canadian portion of the study.

A total of 49 early-stage Huntington’s patients across nine centers in the UK, Germany and Canada participated in the study. Each patient received four doses of the drug or placebo through a direct injection into their spine to help IONIS-HTTRX reach the brain.

Not only was the drug well-tolerated and safe, it lowered the levels of the mutant protein in the spinal fluid. The effect was dose-dependent: the higher the dose a patient received, the lower the level of toxic protein found.

“If I’d have been asked five years ago if this could work, I would have absolutely said no. The fact that it does work is really remarkable,” says Hardy.

That said, the drug isn’t necessarily a cure. A patient would have to receive a dose every three to four months for the rest of their life to keep symptoms at bay. While it’s too early to put a price tag on the treatment, the cost could run to hundreds of thousands of dollars a year.

A major step going forward is to see whether the drug improves patients’ clinical symptoms. Roche is on it—the company is throwing in millions to test the drug in larger trials.

In the meantime, participants from the safety trial are given the option to continue drug treatment, and this extension trial is bound to give scientists more insights.

Although this isn’t the first gene-silencing drug that has been shown to be successful, it is the first drug that tackles an incurable disease in the brain.

It’s a glimpse into a profound future—one where broken brains can be mended and lives can be saved, long before the first signs of disease ever manage to take hold.

Image Credit: science photo / Shutterstock.com

from Singularity Hub http://ift.tt/2Dj1L7P

How the Science of Decision-Making Will Help Us Make Better Strategic Choices

http://img.youtube.com/vi/Ptapz9mSpyM/0.jpg

Neuroscientist Brie Linkenhoker believes that leaders must be better prepared for future strategic challenges by continually broadening their worldviews.

As the director of Worldview Stanford, Brie and her team produce multimedia content and immersive learning experiences that make academic research and insights accessible and usable. These future-focused programs are designed to help curious leaders understand the forces shaping the future.

Worldview Stanford has tackled such interdisciplinary topics as the power of minds, the science of decision-making, environmental risk and resilience, and trust and power in the age of big data.

We spoke with Brie about why understanding our biases is critical to making better decisions, particularly in a time of increasing change and complexity.

Lisa Kay Solomon: What is Worldview Stanford?

Brie Linkenhoker: Leaders and decision makers are trying to navigate this complex hairball of a planet that we live on and that requires keeping up on a lot of diverse topics across multiple fields of study and research. Universities like Stanford are where that new knowledge is being created, but it’s not getting out and used as readily as we would like, so that’s what we’re working on.

Worldview is designed to expand our individual and collective worldviews about important topics impacting our future. Your worldview is not a static thing, it’s constantly changing. We believe it should be informed by lots of different perspectives, different cultures, by knowledge from different domains and disciplines. This is more important now than ever.

At Worldview, we create learning experiences that are an amalgamation of all of those things.

LKS: One of your marquee programs is the Science of Decision Making. Can you tell us about that course and why it’s important?

BL: We tend to think about decision makers as being people in leadership positions, but every person who works in your organization, every member of your family, every member of the community is a decision maker. You have to decide what to buy, who to partner with, what government regulations to anticipate.

You have to think not just about your own decisions, but you have to anticipate how other people make decisions too. So, when we set out to create the Science of Decision Making, we wanted to help people improve their own decisions and be better able to predict, understand, anticipate the decisions of others.

“I think in another 10 or 15 years, we’re probably going to have really rich models of how we actually make decisions and what’s going on in the brain to support them.”

We realized that the only way to do that was to combine a lot of different perspectives, so we recruited experts from economics, psychology, neuroscience, philosophy, biology, and religion. We also brought in cutting-edge research on artificial intelligence and virtual reality and explored conversations about how technology is changing how we make decisions today and how it might support our decision-making in the future.

There’s no single set of answers. There are as many unanswered questions as there are answered questions.

LKS: One of the other things you explore in this course is the role of biases and heuristics. Can you explain the importance of both in decision-making?

BL: When I was a strategy consultant, executives would ask me, “How do I get rid of the biases in my decision-making or my organization’s decision-making?” And my response would be, “Good luck with that. It isn’t going to happen.”

As human beings we make, probably, thousands of decisions every single day. If we had to be actively thinking about each one of those decisions, we wouldn’t get out of our house in the morning, right?

We have to be able to do a lot of our decision-making essentially on autopilot to free up cognitive resources for more difficult decisions. So, we’ve evolved in the human brain a set of what we understand to be heuristics or rules of thumb.

And heuristics are great in, say, 95 percent of situations. It’s that five percent, or maybe even one percent, that they’re really not so great. That’s when we have to become aware of them because in some situations they can become biases.

For example, it doesn’t matter so much that we’re not aware of our rules of thumb when we’re driving to work or deciding what to make for dinner. But they can become absolutely critical in situations where a member of law enforcement is making an arrest or where you’re making a decision about a strategic investment or even when you’re deciding who to hire.

Let’s take hiring for a moment.

How many years is a hire going to impact your organization? You’re potentially looking at 5, 10, 15, 20 years. Having the right person in a role could change the future of your business entirely. That’s one of those areas where you really need to be aware of your own heuristics and biases—and we all have them. There’s no getting rid of them.

LKS: We seem to be at a time when the boundaries between different disciplines are starting to blend together. How has the advancement of neuroscience helped us become better leaders? What do you see happening next?

BL: Heuristics and biases are very topical these days, thanks in part to Michael Lewis’s fantastic book, The Undoing Project, which is the story of the groundbreaking work that Nobel Prize winner Danny Kahneman and Amos Tversky did in the psychology and biases of human decision-making. Their work gave rise to the whole new field of behavioral economics.

In the last 10 to 15 years, neuroeconomics has really taken off. Neuroeconomics is the combination of behavioral economics with neuroscience. In behavioral economics, they use economic games and economic choices that have numbers associated with them and have real-world application.

For example, they ask, “How much would you spend to buy A versus B?” Or, “If I offered you X dollars for this thing that you have, would you take it or would you say no?” So, it’s trying to look at human decision-making in a format that’s easy to understand and quantify within a laboratory setting.

Now you bring neuroscience into that. You can have people doing those same kinds of tasks—making those kinds of semi-real-world decisions—in a brain scanner, and we can now start to understand what’s going on in the brain while people are making decisions. You can ask questions like, “Can I look at the signals in someone’s brain and predict what decision they’re going to make?” That can help us build a model of decision-making.
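As a toy illustration of the kind of decoding analysis described here (the simulated data, feature layout, and choice of classifier below are our own, not any particular lab’s pipeline), one can train a simple classifier on per-trial neural features and ask how well it predicts a binary choice:

```python
# Toy sketch of "predicting a decision from brain signals": train a classifier
# on per-trial neural features and test how well it predicts binary choices.
# The data here are simulated; real studies use fMRI or electrode recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_features = 200, 50          # e.g., 200 trials, 50 recording channels
choices = rng.integers(0, 2, n_trials)  # simulated binary decisions (option A vs. B)

# Simulated neural activity: noise plus a small choice-dependent signal
# carried by a handful of channels.
signal = np.zeros((n_trials, n_features))
signal[:, :5] = choices[:, None] * 0.8
X = signal + rng.normal(size=(n_trials, n_features))

# Cross-validated accuracy of decoding the choice from the simulated features.
accuracy = cross_val_score(LogisticRegression(max_iter=1000), X, choices, cv=5)
print(f"decoding accuracy: {accuracy.mean():.2f}")
```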

I think in another 10 or 15 years, we’re probably going to have really rich models of how we actually make decisions and what’s going on in the brain to support them. That’s very exciting for a neuroscientist.

Image Credit: Black Salmon / Shutterstock.com

from Singularity Hub http://ift.tt/2DdOiua