From Bench to Bunker
In a small, anonymous office in the Trump Tower, 28 floors above Wall Street, a man sits in front of a computer screen sifting through satellite images of a foreign desert. The images depict a vast, sandy emptiness, marked every so often by dunes and hills. He is searching for man-made structures: houses, compounds, airfields, any sign of civilization that might be visible from the sky. The images flash at a rate of 20 per second, so fast that before he can truly perceive the details of each landscape, it is gone. He pushes no buttons, takes no notes. His performance is near perfect.
Or rather, his brain’s performance is near perfect. The man has a machine strapped to his head, an array of electrodes that records an electroencephalogram, or EEG, capturing his brain activity as each image skips by. The device then sends the brain-activity data wirelessly to a large computer. The computer has learned what the man’s brain activity looks like when he sees one of the visual targets, and, based on that information, it quickly reshuffles the images. When the man sorts back through the hundreds of images—most without structures, but some with—almost all the ones with buildings in them pop to the front of the pack. His brain and the computer have done good work.
That display was a demonstration of a new technology being developed through a collaboration between the Defense Advanced Research Projects Agency, the military’s research arm, and a private company called Neuromatters, which was founded by a team led by the Columbia University bioengineer Paul Sajda. The hope is that, in the near future, military analysts might use the technology to eliminate worthless images in seconds, speeding up their review of satellite images by orders of magnitude. By the looks of it, it’s working.
The program, called Neurotechnology for Intelligence Analysts, or NIA, is just one of many being pursued by Darpa, as the agency is known, to translate basic neuroscience research into tools that will make the military more able and efficient. Other projects Darpa finances include one to test whether sending electricity through the brain can accelerate learning; another that seeks to use psychology and neuroscience to understand which types of communication best convince those living in occupied lands that they should yield to American forces, a sort of Propaganda 2.0; and a project aimed at developing drugs that would reduce or erase traumatic memories.
Some critics view these projects with suspicion and raise ethical objections: They see Darpa initiating a military invasion of the mind that warps the goals of basic research to fit the battlefield. “As a scientist I dislike that someone might be hurt by my work. I want to reduce suffering, to make the world a better place, but there are people in the world with different intentions, and I don’t know how to deal with that,” Vincent P. Clark, an associate professor of psychology at the University of New Mexico whose work with brain stimulation has influenced the military, told The Guardian earlier this year.
For others, however, such military projects are just another outgrowth of years of basic-science research, the natural siblings of other clinical and bioengineering applications.
Either way, the NIA project makes clear the often unpredictable routes that basic-science findings take on their way to becoming something useful in the wider world. Because of that unpredictability, support for basic biological science occasionally comes under attack for lacking clear, direct benefits to society. But in 2009, in a speech before the National Academy of Sciences, President Obama spoke about the value of such research: “The fact is, an investigation into a particular physical, chemical, or biological process might not pay off for a year, or a decade, or at all. And when it does, the rewards are often broadly shared, enjoyed by those who bore its costs but also by those who did not.” Now, in an age of increasing interest in bioengineering and, specifically, tapping into the computational power of the brain, these Darpa-financed projects are proof that basic-science discoveries in the biological sciences do lead to unexpected places, including to war.
In the 1990s, the military began to realize it had a problem: too many pictures and not enough eyeballs.
Specifically, it had a glut of satellite images, photos covering every inch of the planet, waiting to be sifted, scrutinized, and analyzed for any precious bits of intelligence. Paul Sajda, who would later found Neuromatters and develop the NIA program, learned of this problem on a visit to the National Photographic Interpretation Center, in Washington, D.C., in 1995. The center was staffed with hundreds of analysts whose job was to sort painstakingly through piles and piles of satellite images, looking for communications lines one day, rebel camps the next.
At the time, Sajda was working for the nonprofit David Sarnoff Research Center, in Princeton, N.J. Sarnoff had many contracts with the Department of Defense, including a project Sajda himself had been working on to apply the military’s computer technologies to the analysis of radiological images of potential cases of breast cancer, hoping to improve diagnostic screenings. It was that project that brought him to NPIC, to see its image-analysis process in action.
During his visit, Sajda was struck by how the analysts could tell, from only a few pixels, what they were looking at. It was analysts at the center, for example, who first discovered, in a set of grainy photos taken during flyovers of Cuba by American U-2 planes, the Soviet cache of nuclear missiles that led to the Cuban missile crisis. These analysts were good.
Nevertheless, looking through images was a slow and laborious process, and while computer technology had improved the center’s results, the gains were limited. Further, as the sites of important intelligence became more widely distributed, the number of potentially significant images ballooned.
Sajda was amazed at how many gigabytes of images went unanalyzed, even unviewed. “Here was this huge pile of data, and no one could even look at it. There just wasn’t enough manpower,” he told me. In 1996, the government merged the NPIC with several related organizations to form the National Imagery and Mapping Agency, hoping to improve its results. But the problem did not go away, and in 2001 a congressionally appointed committee released a report condemning the agency for its poor performance.
For Sajda, the problem was an intriguing one, and it held his attention starting with that first visit. “I thought then,” Sajda remembers, “that there has to be a way we can speed this up.”
Though Sajda was an engineer, he had studied the human visual system as a graduate student, developing models for how the brain picks apart a scene, identifying what is important and what is not. He knew that the brain still outperformed any computer at identifying important features of images like satellite photos. Most importantly, Sajda was familiar with a long literature, dating back to the mid-1960s, that related rapid changes in brain activity to visual processing of important information.
What is most remarkable about Sajda’s attempt at solving the military’s problem is that it is based primarily on that 1960s-era research. In particular, a series of EEG studies published starting in 1964 in the journals Nature and Science demonstrated, for the first time, specific markers of cognitive processing in the brain activity of people while they viewed images.
One of those studies in particular is a clear precursor of Sajda’s work. It was carried out by a young psychologist named Robert Chapman, and it showed that brain activity was quite different while people viewed images that held important information than while they viewed images that meant nothing to them.
Chapman’s experimental design would seem primitive to psychologists today, but it worked. Subjects sat in a chair in a dimly lit room. In front of them were two illuminated boxes. In one box, a single number was shown, while in the other, a series of numbers, interspersed with plus signs, flashed in front of the subject. The numbers were selected randomly, via holes punched into a piece of paper that was fed by a motorized gear through the illuminated machine (the days of experiments presented on computer screens had not yet arrived). A subject had to decide, with each number flashed on the right, whether the number on the left was smaller. Chapman then used a hulking computer, made by Packard Bell, to average all the data surrounding the different types of trials—those with numbers, and those with blanks or pluses.
This data averaging itself was a major step forward. In the early 1960s, the use of EEG to study brain activity was about 40 years old, but the brain’s signals were still poorly understood. In the 1930s, for example, the originator of the EEG technique, Hans Berger, had shown that the squiggly lines representative of electrical brain activity changed significantly when people closed their eyes, or did math in their heads. But, because such early EEG researchers had to do all analysis by looking at the data visually and counting important events or changes, it was almost impossible to conduct and analyze complicated cognitive experiments.
With the introduction of computers, however, researchers could look not just at the continuous EEG over long periods of time but also at the changes that occurred around specific events by averaging the data from a large number of painstakingly timed trials. Most researchers began using this newfound capability to study sensory responses—placing electrodes over the visual cortex at the back of the head, for example, and analyzing how the EEG signal changed when flashes of light of different durations were presented to subjects. Chapman was one of the first to apply that approach to cognitive tasks.
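The computational step itself is simple. A minimal sketch of that event-locked averaging, in Python, appears below; the sampling rate, time window, and variable names are illustrative assumptions, not details of Chapman’s actual analysis.

```python
import numpy as np

# A minimal sketch of the averaging step described above. The sampling rate,
# window, and names are assumptions for illustration, not Chapman's setup.
FS = 250                 # samples per second (assumed)
PRE, POST = 0.1, 0.6     # seconds of data kept before and after each stimulus

def average_evoked(eeg, onsets, fs=FS, pre=PRE, post=POST):
    """Cut a fixed window around each stimulus onset and average across trials.

    eeg    : 1-D array of continuous voltage from one electrode
    onsets : sample indices at which stimuli appeared
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = [eeg[t - n_pre : t + n_post]
              for t in onsets
              if t - n_pre >= 0 and t + n_post <= len(eeg)]
    # Averaging many time-locked trials cancels activity unrelated to the
    # stimulus, leaving the event-related response.
    return np.asarray(epochs).mean(axis=0)

# e.g., erp_numbers = average_evoked(eeg, number_onsets)
#       erp_pluses  = average_evoked(eeg, plus_onsets)
```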
What Chapman found in his study immediately excited him: When subjects viewed any stimulus, there was a quick change in brain activity, the size of which depended on how bright the stimulus was. But when subjects were shown a number, crucial to performing the task before them, the EEG registered a huge spike in brain activity about 300 milliseconds after the stimulus appeared. When a plus sign was shown instead of a number, the spike was notably smaller.
That simple task had revealed something profound: a clear EEG marker of the perception and processing of information relevant to a decision. Samuel Sutton, in a series of experiments published in 1965 in the journal Science, continued to explore that class of responses, focusing specifically on the spike that occurred 300 milliseconds after the stimulus. Eventually, that spike was named the P300 response.
Since those early findings, the P300 has been used to study almost every conceivable topic in neurology and neuroscience, including decision-making, consciousness, Alzheimer’s disease, and schizophrenia, and, quite prominently, it has served as the basis of brain-computer interfaces that allow paralyzed people to spell using EEG.
At the time of his visit to the National Photographic Interpretation Center, Sajda was already familiar with the P300 literature, and he began to wonder if there was some way that brain activity itself could be used to speed up image analysis.
The idea was not so far-fetched. In 1996, a paper published in Nature described a technique called rapid serial visual presentation, or RSVP. Researchers demonstrated that images shown extremely rapidly could still be parsed by the visual system, and that the telltale signs of visual processing were still present in the EEG. “This was a big inspiration for me,” Sajda remembers. If he could find a difference in brain activity between the rare images that had important targets and those that didn’t, he could use that signature to create a system to analyze the military’s images. And the Nature paper suggested it could be done extremely rapidly, faster than 10 images per second. What’s more, the P300 effect had been shown to be modulated by expertise: Analysts who spent all day looking through images would have particularly robust brain responses.
In 2003, at the urging of a Darpa program officer named Amy Kruse, Sajda wrote a proposal and brought the idea to the agency’s attention. First, he wrote, the system would take advantage of state-of-the-art computer vision techniques, weeding out images that could be easily analyzed without human involvement. Once the more difficult images were isolated, he would train a computer to recognize what an analyst’s brain activity looked like after viewing an image with a target, and one without a target. Then he would present images to analysts at a rapid rate, up to 20 times per second. If his algorithm worked, the computer could generate an “interest score” for each image simply by looking at how robust the P300 response was. Analysts could then spend their time studying the images that mattered, those with the highest scores. After several false starts, the project was backed by Darpa, and Sajda founded Neuromatters to do the product development and engineering. Darpa also set up and financed seven other groups to pursue the technique.
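In rough outline, the scoring-and-sorting step works something like the sketch below, written in Python. The time window, spatial weights, and function names are assumptions for the sake of illustration; they are not Neuromatters’ actual, unpublished classifier.

```python
import numpy as np

# A minimal sketch of the "interest score" idea described above: an
# illustration of the general approach, not Neuromatters' actual algorithm.
# Assumes one EEG epoch per image, time-locked to that image's onset.
FS = 250                     # samples per second (assumed)
P300_WINDOW = (0.25, 0.50)   # seconds after onset where the P300 typically peaks

def interest_scores(epochs, weights, fs=FS, window=P300_WINDOW):
    """Score each image by the strength of the target-like brain response.

    epochs  : array of shape (n_images, n_channels, n_samples)
    weights : per-channel weights learned beforehand from trials in which the
              analyst viewed known target and known non-target images
    """
    start, stop = int(window[0] * fs), int(window[1] * fs)
    # Project each epoch onto the learned spatial weights, then take the mean
    # amplitude inside the P300 window as that image's score.
    projected = np.tensordot(epochs, weights, axes=([1], [0]))
    return projected[:, start:stop].mean(axis=1)

def triage(images, epochs, weights):
    """Return the images reordered so the likeliest targets come first."""
    order = np.argsort(interest_scores(epochs, weights))[::-1]
    return [images[i] for i in order]
```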
By the time I saw the project, only two groups remained in the hunt, and Sajda’s approach was in the process of being tested by government analysts. According to Neuromatters’ own studies, the project was a huge success, ready for the field: The company claims to have achieved a 300-percent increase in the speed of image analysis by peeking in on the brain. The government might, finally, be able to analyze most of those images.
It seems clear that the technology, if validated by analysts, will be used to speed up the routine processing of satellite imagery. But other military uses, such as the rapid selection of targets for bombing, remain unclear: The program manager for the project at Darpa, William Casebeer, refused a request for an interview for this article. Instead he issued a statement that read, “Taking advantage of the massively distributed parallel-processing capabilities of the human brain by sensing when it has detected anomalies in images could be an important part of a comprehensive approach for dealing with the deluge of data our intelligence analysts deal with each day. Testing of promising prototype NIA systems is ongoing so we can make fully-informed transition decisions.”
But it’s not just the military that is faced with a glut of visual information these days, and both Sajda and the chief executive at Neuromatters, Barbara Hanna, were willing to discuss other potential commercial applications of their technology. Hanna and Michael Repucci, who was at the time a Neuromatters engineer and who served as the subject in the sorting demonstration, showed me a system in which a series of random objects, instead of landscapes, flew by. Repucci is a biker, so he decided to look for images of bikes. When the experiment finished, the bikes flew to the front of the pack, skipping over baseball mitts and lighthouses. Hanna and Repucci excitedly described how, one day, we might no longer flip slowly through catalogs, instead letting the brain-activity interest score decide which products we most want to see. They’re not there yet, they caution, but the applications have begun to seem endless.
In the 50 years since its discovery, the fundamental principle of the P300 has been used to create tools that allow paralyzed people to communicate, to find biomarkers that differentiate people with Alzheimer’s disease and schizophrenia from healthy people, and, now, to efficiently sort the vast data troves of the U.S. government and, perhaps soon, retailers.
According to Robert Chapman, many of those applications were foreseeable, even at the beginning, once it was clear that his team had discovered a basic marker of cognition. “I immediately saw the clinical implications,” he told me. “I saw practically no limit to what we could do.” But what he admitted he could not foresee was that one day a field he helped found would find one of its most promising applications to be a tool developed for the U.S. military.
The irony is that the basic-science laboratory in which Chapman conducted his research was at the Walter Reed Army Institute of Research. Chapman had been drafted while in graduate school at Brown University and had worked with his faculty mentors to devise a way for him to remain in science during his conscription by doing research at Walter Reed, where many Brown faculty had connections. In fact, Chapman almost didn’t carry out his EEG research at all; a mistake on his military paperwork sent him to Davids Island, in Long Island Sound off the coast of New York, to serve as a statistician at the Chaplain School there. Only after he worked out a swap with a statistician from Kansas was he finally allowed to move to Walter Reed.
He describes Walter Reed in the early 1960s as an idyllic, interdisciplinary research environment reminiscent of the golden era of Bell Labs. According to Chapman, scientists were required to put 50 percent of their effort into their main project, but the other 50 percent was up to them—they could pursue any research they liked. It was this openness that allowed Chapman and his mentor, John Armington, to take time away from their study of the retina to stick those electrodes on the scalp and study cognition.
While all of Chapman’s work at Walter Reed was government-supported, his funds came from the National Institutes of Health, not the military. His work was basic; it had no obvious application at the time outside of cognitive science.
Basic science regularly follows those types of meandering paths to practical relevance, according to Jonathan Moreno, a professor of bioethics at the University of Pennsylvania. “Scientists often fail to foresee where their research is headed,” says Moreno, whose 2006 book Mind Wars discusses the ethical issues surrounding military applications of neuroscience. “Even Einstein refused, for a long time, to believe his work contributed to the atomic bomb.”
While predictions may be nearly impossible, Moreno believes that basic neuroscientists have a responsibility to remain a part of the conversation about how their work is used. “It’s easy to just be too busy to care, with meetings, grad students, grants. But if scientists find the time to stay involved, they can create a true culture of thoughtfulness around these complex issues,” he says.
There are already several groups of scientists and national-security experts attempting to create such a culture. In particular, the National Research Council commissioned a two-year study, published in 2008, of the potential impacts and ethical implications of using neuroscience research in the military. Christopher Green, the chair of the council’s committee and an assistant dean at the Wayne State University School of Medicine, in Detroit, says that he believes such reports, if taken seriously by Congress, can fill the role of ethical watchdog effectively.
“There would be nothing wrong with Congress giving one to three million dollars to the National Research Council simply to do a yearly study of the current state of this research and its ethics,” Green says. What’s more, he points out, the process of calling dozens of experts to give their opinions on the topic provides basic scientists a forum to voice their support for or concerns regarding the use of their work.
With that in mind, I asked Chapman whether it bothered him that the ideas he helped uncover and now applies clinically might be used by the military to help plan attacks. He doesn’t see things that way. “I can appreciate that view, but I don’t hold it strongly,” he told me. “I think it’s becoming increasingly difficult to make moral decisions in this world.” Besides, he pointed out, the image-analysis tool was just as likely to be used to avoid civilian casualties as to help soldiers kill insurgents, and the early work in his field had also led to major clinical breakthroughs in populations as disparate as the severely brain-injured and the schizophrenic.
He paused for a moment, thinking, and then rejected my premise outright. “The truth is, you just can’t hide your discoveries.”
Jon Bardin is a freelance writer based in New York.