Ants, ants, what are all these ants? – Indianapolis Star


Indianapolis Star

Ants, ants, what are all these ants?
Indianapolis Star
Whether they march two by two, three by three or even one by one, ants are unwelcome in the home. But this spring many ants are house crashers. “This year is probably one of our worst years in a long time,” said Troy McIntyre, a service technician with

from ants – Google News http://ift.tt/1Xe1DKm

How to make opaque AI decision-making accountable

http://ift.tt/1Vu5uRG

Machine-learning algorithms are increasingly used in making important decisions about our lives — such as credit approval, medical diagnoses, and job applications — but exactly how they work usually remains a mystery. Now Carnegie Mellon University researchers may have devised an effective way to improve transparency and head off confusion or possible legal issues.

CMU’s new Quantitative Input Influence (QII) testing tools can generate “transparency reports” that provide the relative weight of each factor in the final decision, claims Anupam Datta, associate professor of computer science and electrical and computer engineering.

Testing for discrimination

These reports could also be used proactively by an organization to see if an artificial intelligence system is working as desired, or by a regulatory agency to determine whether a decision-making system inappropriately discriminated, based on factors like race and gender.

To achieve that, the QII measures consider correlated inputs while measuring influence. For example, consider a system that assists in hiring decisions for a moving company, in which two inputs, gender and the ability to lift heavy weights, are positively correlated with each other and with hiring decisions.

Yet transparency into whether the system actually uses weightlifting ability or gender in making its decisions has substantive implications for determining if it is engaging in discrimination. In this example, the company could keep the weightlifting ability fixed, vary gender, and check whether there is a difference in the decision.
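
A minimal sketch of this kind of intervention test, assuming a hypothetical trained classifier hiring_model with a scikit-learn-style predict method and a pandas DataFrame of applicants; the feature names and the resampling scheme are illustrative rather than the authors’ exact QII procedure.

import numpy as np

def unary_influence(model, X, feature, n_rounds=100, seed=0):
    # Estimate one input's influence by resampling it from its own marginal
    # distribution while holding every other input fixed, then measuring how
    # often the model's decision changes.
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    flip_rate = 0.0
    for _ in range(n_rounds):
        X_intervened = X.copy()
        # Intervene: shuffle this one column, breaking its link to the others
        X_intervened[feature] = rng.permutation(X[feature].values)
        flip_rate += np.mean(model.predict(X_intervened) != baseline)
    return flip_rate / n_rounds

# e.g. does varying 'gender' change decisions when 'weightlifting_ability'
# and every other input are held fixed?
# influence = unary_influence(hiring_model, applicants, 'gender')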

CMU researchers are careful to state in an open-access report on QII (presented at the IEEE Symposium on Security and Privacy, May 23–25, in San Jose, Calif.), that “QII does not suggest any normative definition of fairness. Instead, we view QII as a diagnostic tool to aid fine-grained fairness determinations.”


Is your AI biased?

The Ford Foundation published a controversial blog post last November stating that “while we’re led to believe that data doesn’t lie — and therefore, that algorithms that analyze the data can’t be prejudiced — that isn’t always true. The origin of the prejudice is not necessarily embedded in the algorithm itself. Rather, it is in the models used to process massive amounts of available data and the adaptive nature of the algorithm. As an adaptive algorithm is used, it can learn societal biases it observes.

“As Professor Alvaro Bedoya, executive director of the Center on Privacy and Technology at Georgetown University, explains, ‘any algorithm worth its salt’ will learn from the external process of bias or discriminatory behavior. To illustrate this, Professor Bedoya points to a hypothetical recruitment program that uses an algorithm written to help companies screen potential hires. If the hiring managers using the program only select younger applicants, the algorithm will learn to screen out older applicants the next time around.”


Influence variables

The QII measures also quantify the joint influence of a set of inputs (such as age and income) on outcomes, and the marginal influence of each input within the set. Since a single input may be part of multiple influential sets, the average marginal influence of the input is computed using “principled game-theoretic aggregation” measures that were previously applied to measure influence in revenue division and voting.
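
A rough sketch of that aggregation, assuming a caller supplies set_influence, a stand-in for whatever joint-influence measure is used; the Shapley value is approximated by averaging each feature’s marginal contribution over random orderings of the inputs.

import random

def shapley_influence(features, set_influence, n_perms=200, seed=0):
    # Approximate each feature's Shapley value: its marginal contribution
    # to the joint influence, averaged over random orderings of the inputs.
    rng = random.Random(seed)
    phi = {f: 0.0 for f in features}
    for _ in range(n_perms):
        order = list(features)
        rng.shuffle(order)
        prefix = []
        prev = set_influence(prefix)
        for f in order:
            prefix.append(f)
            cur = set_influence(prefix)
            phi[f] += cur - prev   # marginal contribution of f in this ordering
            prev = cur
    return {f: total / n_perms for f, total in phi.items()}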

Examples of outcomes from transparency reports for two job applicants. Left: “Mr. X” is deemed to be a low-income individual by an income classifier learned from the data. This result may be surprising to him: he reports high capital gains ($14k), and only 2.1% of people with capital gains higher than $10k are reported as low income. In fact, he might be led to believe that his classification is a result of his ethnicity or country of origin. Examining his transparency report in the figure, however, we find that the most influential features that led to his negative classification were Marital Status, Relationship and Education. Right: “Mr. Y” has even higher capital gains than Mr. X. Mr. Y is a 27-year-old with only Preschool education who is engaged in fishing. Examination of the transparency report reveals that the most influential factor in Mr. Y’s negative classification is his Occupation. Interestingly, his low level of education is not considered very important by this classifier. (credit: Anupam Datta et al./2016 IEEE Symposium on Security and Privacy)

“To get a sense of these influence measures, consider the U.S. presidential election,” said Yair Zick, a post-doctoral researcher in the CMU Computer Science Department. “California and Texas have influence because they have many voters, whereas Pennsylvania and Ohio have power because they are often swing states. The influence aggregation measures we employ account for both kinds of power.”

The researchers tested their approach against some standard machine-learning algorithms that they used to train decision-making systems on real data sets. They found that the QII provided better explanations than standard associative measures for a host of scenarios they considered, including sample applications for predictive policing and income prediction.

Privacy concerns

But transparency reports could also potentially compromise privacy, so in the paper, the researchers also explore the transparency-privacy tradeoff and prove that a number of useful transparency reports can be made differentially private with very little addition of noise.
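
A small sketch of that kind of noise addition, assuming the influence scores are bounded so their sensitivity is known; the epsilon and sensitivity values below are placeholders, not the paper’s settings.

import numpy as np

def privatize_report(influences, epsilon=1.0, sensitivity=1.0, seed=0):
    # Add Laplace noise with scale sensitivity/epsilon to each influence
    # score -- the standard Laplace mechanism for differential privacy.
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return {name: score + rng.laplace(0.0, scale)
            for name, score in influences.items()}

# private_report = privatize_report({'Marital Status': 0.42, 'Education': 0.31})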

QII is not yet available, but the CMU researchers are seeking collaboration with industrial partners so that they can employ QII at scale on operational machine-learning systems.


Abstract of Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems

Algorithmic systems that employ machine learning play an increasing role in making substantive decisions in modern society, ranging from online personalization to insurance and credit decisions to predictive policing. But their decision-making processes are often opaque—it is difficult to explain why a certain decision was made. We develop a formal foundation to improve the transparency of such decision-making systems. Specifically, we introduce a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of systems. These measures provide a foundation for the design of transparency reports that accompany system decisions (e.g., explaining a specific credit decision) and for testing tools useful for internal and external oversight (e.g., to detect algorithmic discrimination). Distinctively, our causal QII measures carefully account for correlated inputs while measuring influence. They support a general class of transparency queries and can, in particular, explain decisions about individuals (e.g., a loan decision) and groups (e.g., disparate impact based on gender). Finally, since single inputs may not always have high influence, the QII measures also quantify the joint influence of a set of inputs (e.g., age and income) on outcomes (e.g. loan decisions) and the marginal influence of individual inputs within such a set (e.g., income). Since a single input may be part of multiple influential sets, the average marginal influence of the input is computed using principled aggregation measures, such as the Shapley value, previously applied to measure influence in voting. Further, since transparency reports could compromise privacy, we explore the transparency-privacy tradeoff and prove that a number of useful transparency reports can be made differentially private with very little addition of noise. Our empirical validation with standard machine learning algorithms demonstrates that QII measures are a useful transparency mechanism when black box access to the learning system is available. In particular, they provide better explanations than standard associative measures for a host of scenarios that we consider. Further, we show that in the situations we consider, QII is efficiently approximable and can be made differentially private while preserving accuracy.

from KurzweilAI http://ift.tt/1Vu6chN


Fire ants found in southeast Brisbane backyards, residents on alert – Courier Mail

Fire ants found in southeast Brisbane backyards, residents on alert
Courier Mail
“Biosecurity Queensland confirmed the detection of red imported fire ants in the Brisbane suburbs of Holland Park West on April 5 and Greenslopes on April 6,” he said. Look out for this pest in your backyard. You could cop a fine for not reporting

from ants – Google News http://ift.tt/1XNAL27

AI and Free Agency

Are people genuinely taking free agency into account when programming strong AI? It seems to me that the first step towards making a ‘human like’ AI is a bot that can hold an intelligent conversation, but for things like Microsoft’s Tay or Cleverbot, are the programmers taking into account whether or not their respective creations even have the desire to respond to a prompted question? Are we as programmers so geared towards language and sentence analysis that we forget that the system should be able to decide whether or not its processing resources should be used to answer a possibly "useless" or "uninteresting" conversation?

I don’t think we will ever see fully competent AI for services like Amazon’s Alexa, Apple’s Siri, or Microsoft’s Cortana. Not only because of the ethical questions involved, but because a truly human and feeling AI must develop a sense of trust and respect for its "owner", meaning that if the "owner" wants the AI to obey every given command without question, the "owner" must put time and effort into forming a relationship with the AI.

That all being said, what would some practical applications of free-agency AI be for the public? At that point, it might not be any more trustworthy than a human, even though it may be far more intelligent. Thoughts?

EDIT: Also, would integrating free agency into AI force biased thinking/behaviors?

submitted by /u/winkingwalnut

from Artificial Intelligence http://ift.tt/1sZxLES

Texas floods deliver snakes, ants, debris to neighborhoods – USA TODAY


USA TODAY

Texas floods deliver snakes, ants, debris to neighborhoods
USA TODAY
With heavy rain continuing and one river forecast to crest at a record high level, Texas can expect little relief on Tuesday from flooding that claimed seven lives over the holiday weekend. Most of the deaths took place in rural Washington County

and more »

from ants – Google News http://ift.tt/1VtqDeK

How Computing Power Can Help Us Look Deep Within Our Bodies, and Even the Earth

(Embedded video thumbnail: http://img.youtube.com/vi/EN5qgpVxrcU/0.jpg)

CAT scans, MRI, ultrasound. We are all pretty used to having machines — and doctors — peering into our bodies for a whole range of reasons. This equipment can help diagnose diseases, pinpoint injuries, or give expectant parents the first glimpse of their child.

As computational power has exploded in the past half-century, it has enabled a parallel expansion in the capabilities of these computer-aided imaging systems. What used to be pictures of two-dimensional “slices” have been assembled into high-resolution three-dimensional reconstructions. Stationary pictures of yesteryear are today’s real-time video of a beating heart. The advances have been truly revolutionary.

Though different in their details, X-ray computed tomography, ultrasound and even MRI have a lot in common. The images produced by each of these systems derive from an elegant interplay of sensors, physics and computation. They do not operate like a digital camera, where the data captured by the sensor are basically identical to the image produced. Rather, a lot of processing must be applied to the raw data collected by a CAT scanner, MRI machine or ultrasound system before it produces the images needed for a doctor to make a diagnosis. Sophisticated algorithms based on the underlying physics of the sensing process are required to put Humpty Dumpty back together again.

Early scanning methods

Though we use X-rays in some cutting-edge imaging techniques, X-ray imaging actually dates back to the late 1800s. The shadowlike contrast in X-ray images, or projections, shows the density of the material between the X-ray source and the detector. (In the past this was a piece of X-ray film; today it is usually a digital detector.) Dense objects, such as bones, absorb and scatter many more X-ray photons than skin, muscle or other soft tissue, so soft tissue appears darker in the projections.

But then in the early 1970s, X-ray CAT (which stands for Computerized Axial Tomography) scans were developed. Rather than taking just a single X-ray image from one angle, a CAT system rotates the X-ray sources and detectors to collect many images from different angles — a process known as tomography.

The difficulty is how to take all the data, from all those X-rays from so many different angles, and get a computer to properly assemble them into 3D images of, say, a person’s hand, as in the video above. That problem had a mathematical solution that had been studied by the Austrian mathematician Johann Radon in 1917 and rediscovered by the American physicist (and Tufts professor) Allan Cormack in the 1960s. Using Cormack’s work, Godfrey Hounsfield, an English electrical engineer, was the first to demonstrate a working CAT scanner in 1971. For their work on CAT, Cormack and Hounsfield received the 1979 Nobel Prize in Medicine.
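
The core idea can be tried out with scikit-image, which implements the Radon transform and the filtered back-projection reconstruction that grew out of Radon’s and Cormack’s work; the phantom image and angle count below are illustrative, and the filter_name argument assumes a recent scikit-image release.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Simulate a CAT scan: project a test "body slice" from many angles,
# then reconstruct it from those projections alone.
image = rescale(shepp_logan_phantom(), 0.5)
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=theta)                      # simulated X-ray data
reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"Reconstruction RMS error: {rms_error:.4f}")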

Extending the role of computers

Until quite recently, these processing methods had remained more or less unchanged since the 1970s and 1980s. Today, additional medical needs — and more powerful computers — are driving big changes. There is increased interest in CT systems that minimize X-ray exposure, producing high-quality images from fewer exposures. In addition, certain uses, such as breast imaging, face physical constraints on how much access the imager can have to the body part, which forces scanning from only a very limited set of angles around the subject. These situations have led to research into what are called “tomosynthesis” systems — in which limited data are interpreted by computers to form fuller images.

Similar problems arise, for example, in the context of imaging the ground to see what objects — such as pollutants, land mines or oil deposits — are hidden beneath our feet. In many cases, all we can do is send signals from the surface, or drill a few holes to take sampling measurements. Security scanning in airports is constrained by cost and time, so those X-ray systems can take only a few images.

In these and a host of other fields, we are faced with less overall data, which means the Cormack-Hounsfield mathematics can’t work properly to form images. The effort to solve these problems has led to the rise of a new area of research, “computational sensing,” in which sensors, physics and computers are being brought together in new ways.

Sometimes this involves applying more computer processing power to the same data. In other cases, hardware engineers designing the equipment work closely with the mathematicians figuring out how best to analyze the data provided. Together these systems can provide new capabilities that hold the promise of major changes in many research areas.

New scanning capabilities

One example of this potential is in bio-optics, the use of light to look deep within the human body. While visible light does not penetrate far into tissue, anyone who has shone a red laser pointer into their finger knows that red light does in fact make it through at least a couple of centimeters. Infrared light penetrates even farther into human tissue. This capability opens up entirely new ways to image the body than X-ray, MRI or ultrasound.

Again, it takes computing power to move from those images into a unified 3D portrayal of the body part being scanned. But the calculations are much more difficult because the way in which light interacts with tissue is far more complex than X-rays.

As a result we need to use a different method from the one pioneered by Cormack, in which X-ray data are, more or less, directly turned into images of the body’s density. Instead, we construct an algorithm that repeats a process over and over, feeding the result of one iteration back in as the input to the next.

The process starts by having the computer guess an image of the optical properties of the body area being scanned. Then it uses a computer model to calculate what data from the scanner would yield that image. Perhaps unsurprisingly, the initial guess is generally not so good: the calculated data don’t match the actual scans.

When that happens, the computer goes back and refines its guess of the image, recalculates the data associated with this guess and again compares with the actual scan results. While the algorithm guarantees that the match will be better, it is still likely that there will be room for improvement. So the process continues, and the computer generates a new and more improved guess.

Over time, its guesses get better and better: it creates output that looks more and more like the data collected by the actual scanner. Once this match is close enough, the algorithm provides the final image as a result for examination by the doctor or other professional.
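
A minimal sketch of that guess-and-refine loop for a generic linear forward model, where the matrix A stands in for the imaging physics and b for the measured scan data; the plain gradient-descent update below is one simple choice, not the specific algorithm used in diffuse optical imaging.

import numpy as np

def iterative_reconstruct(A, b, n_iters=200):
    # Recover an image x from measurements b ~ A @ x by repeatedly
    # (1) predicting data from the current guess, (2) comparing that
    # prediction with the real measurements, and (3) nudging the guess
    # to shrink the mismatch.
    x = np.zeros(A.shape[1])                # initial guess: a blank image
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
    for _ in range(n_iters):
        predicted = A @ x                   # forward model: expected scanner data
        residual = predicted - b            # mismatch with the actual scan
        x -= step * (A.T @ residual)        # refine the guess toward agreement
    return x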

The new frontiers of this type of research are still being explored. In the last 15 years or so, researchers — including my Tufts colleague Professor Sergio Fantini — have explored many potential uses of infrared light, such as detecting breast cancer, functional brain imaging and drug discovery. Combining “big data” and “big physics” requires a close collaboration among electrical and biomedical engineers as well as mathematicians and doctors. As we’re able to develop these techniques — both mathematical and technological — we’re hoping to make major advances in the coming years, improving how we all live.


Eric Miller, Professor and Chair of Electrical and Computer Engineering, Adjunct Professor of Computer Science, Adjunct Professor of Biomedical Engineering, Tufts University

This article was originally published on The Conversation. Read the original article.

Banner image credit: Shutterstock.com

The research in our group is focused on the development and analysis of signal and image processing algorithms for solving what are known as inverse problems. Examples of such problems can be found in fields ranging from image restoration and medical imaging to geophysical exploration and non-destructive testing. Many of the problems in which we are interested are closely related to some underlying physical process such as wave propagation or heat diffusion and our algorithms are designed to incorporate in some reasonable way, the appropriate mathematical models (i.e. the wave equation, the Helmholtz equation, the diffusion equation, etc.).

from Singularity HUB http://ift.tt/1O2un5l