Customs and Border Protection is testing a security kiosk with an avatar that appears onscreen and makes queries in a polite, automated voice.
Credit: Courtesy of the University of Arizona
Imagine you decide to take a casual trip to Mexico, walking across the border for a day of shopping, or even for dental care at prices unavailable in the United States. Upon your return, an officer from Customs and Border Protection directs you to a kiosk that looks like an ATM.
You’re instructed to press start and answer any question the machine asks. A cartoon-looking face, or avatar, appears onscreen and begins making queries in a polite, automated voice.
Are you carrying anything destructive in your bag? Has anyone given you contraband to bring into the United States? What should happen to someone who does smuggle contraband?
This Max Headroom interrogation sounds far-fetched, but just such an experiment is occurring on the border in Nogales, Ariz., using a variation of technology the Department of Homeland Security has been pursuing for years.
The avatar records the answers and forwards them to a tablet handled by one of the blue-uniformed officers. The officer sees not just what you said but how you said it, along with a green, yellow or red “risk color” based on your responses. Maybe you spoke faster, louder and at a higher pitch than most people normally do. Maybe you hesitated when you answered.
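To make that idea concrete, here is a minimal sketch, in Python, of how vocal measurements might be reduced to a single color. Every feature name, population norm and threshold below is a hypothetical stand-in for illustration; the actual scoring model is not described in this article.

```python
# Hypothetical sketch of reducing vocal measurements to a "risk color."
# Illustrative only -- feature names, norms and cutoffs are assumptions.
from dataclasses import dataclass

@dataclass
class VocalFeatures:
    pitch_hz: float      # fundamental frequency of the answer
    rate_wps: float      # speaking rate, words per second
    loudness_db: float   # relative loudness
    hesitation_s: float  # pause before answering, in seconds

# Assumed population norms: (mean, standard deviation) per feature.
NORMS = {
    "pitch_hz": (120.0, 20.0),
    "rate_wps": (2.5, 0.5),
    "loudness_db": (60.0, 5.0),
    "hesitation_s": (0.8, 0.4),
}

def risk_color(f: VocalFeatures) -> str:
    """Flag answers whose vocal profile deviates from typical speech."""
    deviations = [
        abs(getattr(f, name) - mean) / std
        for name, (mean, std) in NORMS.items()
    ]
    worst = max(deviations)  # the single most anomalous feature
    if worst < 1.5:
        return "green"
    if worst < 3.0:
        return "yellow"
    return "red"

# Faster, louder, higher-pitched and hesitant answer -> flagged "red".
print(risk_color(VocalFeatures(165.0, 3.6, 71.0, 2.2)))
```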
It’s sort of like a lie-detector test – except the government dislikes calling it that.
“We instruct the officers that nowhere is deception ever indicated,” says Aaron Elkins, a postdoctoral researcher at the University of Arizona involved with the project. “But it gives them some of that feedback, things they would have observed if they had done the interview themselves.”
For now, the kiosk is being tested with applicants seeking “trusted traveler” status; these are people who agree to a background check in exchange for avoiding long daily waits at the border.
But the future could hold something different: a cluster of high-tech monitoring devices, such as special infrared cameras and microphones, attached to the ATM-like machines. As you answer the avatar’s questions, the devices assess an array of physiological reactions, including body temperature, facial expressions, the tempo and frequency of your voice, breathing patterns and more.
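What such sensor fusion could look like, in rough outline: each device streams samples while a question is answered, and the samples are collapsed into summary features a downstream model could score. The channel names below are hypothetical, drawn from the article’s examples; the real monitoring hardware and data formats are not specified.

```python
# Illustrative sketch (not the DHS design) of fusing several hypothetical
# sensor channels into one summary reading per answered question.
from statistics import mean, stdev

def summarize_answer(
    window: dict[str, list[float]],
) -> dict[str, tuple[float, float]]:
    """Collapse the raw samples captured on each channel while one
    question was answered into (average, spread) summary features."""
    return {channel: (mean(xs), stdev(xs)) for channel, xs in window.items()}

# Hypothetical channels named after the article's examples.
window = {
    "body_temp_c":    [36.9, 37.0, 37.2],    # infrared camera
    "voice_pitch_hz": [138.0, 151.0, 149.0],  # microphone
    "breath_rate_pm": [17.0, 19.0, 21.0],     # respiration sensor
}
print(summarize_answer(window))
```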
The technology is part of a field of research known as “credibility assessment” that seeks to capture physiological cues we give off emotionally and cognitively: the facial temperature of someone carrying false papers, the anxious posturing of a drug courier, the racing heart of a would-be terrorist.
Elkins prefers the phrase “anomaly detection” and says this advanced stage is still five to 10 years off. No machine can single out liars with absolute reliability, just as a vehicle X-ray can’t tell you whether a long-haul truck is hiding 7.52 kilograms of Peruvian cocaine. The X-ray can reveal only an otherwise secret compartment with suspicious-looking packages.
“We know now how to measure these different behaviors. We can get a good baseline of that person and a sense of when there’s something affecting them,” Elkins says. “(But) there are a lot of explanations for it. That’s why I don’t say ‘deception detection’ or ‘lie detection,’ because that is a very presumptuous thing to say.”
Two of his colleagues from Arizona, graduate associates Nathan Twyman and Justin Giboney, told a roomful of security experts at a summit in Tucson earlier this year that they didn’t expect the technology would be limited to the border. It could be used in employment screening, building protection or security at major athletic events.
There is plenty of research on body movement and deception, the pair said. The technology just wasn’t reliable and efficient enough to serve as a security tool in the real world – at least not until now.
Elkins emphasizes that no personally identifiable information is stored in the machines, and that the team has worked hard to make the kiosk itself, first assembled two years ago, look unintimidating.
Ginger McCall, open government program director at the Electronic Privacy Information Center, is nonetheless concerned about any attempt to measure guilt or risk using what she called inconclusive physiological findings.
“Also, this is very inefficient,” McCall says. “If you already have to have a (Customs and Border Protection) officer observe each person passing through, why not just cut out the machine middle man and have the traveler interact directly with the CBP officer?”
An earlier sibling of the kiosk was known as Future Attribute Screening Technology, first envisioned by the Department of Homeland Security as a panel of sensors that would remotely monitor your heart rate, respiration, eye movement and more, all without physical contact as you passed through a security screening checkpoint, such as at an airport.
“Somebody’s anxious about getting on the airplane, so that anxiety is going to show elevated stress levels. Or maybe a mother’s got a child and the child is misbehaving. That’s going to increase her stress levels. It just made no sense to me,” says John Kircher, a psychophysiologist at the University of Utah who was critical of the program. “It was absolutely absurd.”
Washington pumped $37 million into that project over five years, with Draper Laboratory in Massachusetts leading the way. Officials insist the program’s limited goal was achieved by the time the money ran out: establishing a scientific basis for using sensors to detect “mal-intent.”
“You have to keep in mind that when we started this program in fiscal year 2007, there was no theory regarding this particular area of research,” says Robert Middleton, a program manager at the Homeland Security Department. “One of the more valuable contributions I think that FAST made was the definition of a theory and, ultimately, the validation of that theory.”
Kircher says he attended early meetings at which the concept of FAST was discussed but claims his skepticism was not welcomed. A border kiosk, however, may be worth consideration, he says.
That’s because you would be interacting with the machine, and the interaction would be more controlled, similar to the numerous control questions asked before a traditional polygraph gets under way. Then there’s at least the possibility that someone actively engaging in deception can be distinguished from someone who’s simply in a hurry, he says.
Another critical factor is the avatar, which isn’t there just to save money on salaried officers. It can be programmed to ask questions the same way each time, in the same tone and with the same lapses between each one, meaning answers would be less affected by the daily mood swings of an officer.
But Kircher still has his doubts. In order for the kiosk to be useful, he says, people being questioned need to be isolated from external sources of stimulation – light, sound, temperature and strangers – for 10 or more minutes to avoid contaminated signals.
Elkins says the problem of signal interference can be addressed even in environments less tightly controlled than a polygraph exam. His team builds a statistical model unique to you, based on your condition at the time the interview begins.
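A minimal sketch of that calibration idea: fit a simple baseline to a traveler’s own responses to neutral opening questions, then score later answers by how far they stray from it. The measurements and numbers here are illustrative assumptions, not the team’s actual model.

```python
# Per-interview baseline calibration, sketched with made-up numbers:
# score each answer against this person's own baseline, not a population
# norm -- "anomaly detection" rather than lie detection.
from statistics import mean, stdev

def fit_baseline(control_samples: list[float]) -> tuple[float, float]:
    """Estimate this person's normal level from neutral opening questions."""
    return mean(control_samples), stdev(control_samples)

def anomaly_score(value: float, baseline: tuple[float, float]) -> float:
    """Standard deviations away from the individual's own baseline."""
    mu, sigma = baseline
    return abs(value - mu) / sigma

# Heart-rate-like readings from neutral questions ("What is your name?")...
baseline = fit_baseline([72.0, 74.0, 71.0, 73.0])
# ...compared with the reading on a substantive question. A large score
# means unusual *for this person*, whatever the cause.
print(anomaly_score(88.0, baseline))
```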
“How well we can calibrate to a high-stress baseline, such as in a busy airport, is still an active area of research for us,” Elkins says. “This is why testing in Nogales with trusted travelers has been our initial focus: the interview and the environment are low-stress, free of time pressure or other distractions.”
In the meantime, researchers are building a new kiosk based on what they’ve learned so far, and a second phase of system testing has begun with one notable change: It now speaks Spanish.