Point/Counterpoint: Why Google Can’t Stop Terror Attacks
In the aftermath of the acts of terror committed in Paris and San Bernardino, law enforcement officials and politicians have called for restrictions on the availability of encryption, warning that otherwise the interception of terrorist communications will “go dark.”
Fellow Inflectionite Gwynne Monahan calls this a misguided approach, and I wholeheartedly agree. Gwynne goes on to propose that the tech community create better solutions for law enforcement to solve the “Too Much Information” problem, and more efficiently sort through mountains of collected data. This is where Gwynne and I have some differences of opinion. (By the way, if you haven’t read her post, you should go do so.)
Do we hold algorithms accountable to people, or do we hold people accountable to algorithms?
I believe in the power of technology to solve a lot of problems, but not all of our problems. Unfortunately, it can often feel like everyone in Silicon Valley is wielding a massive hammer and desperately searching for a nail. Beyond asking whether technology and algorithms can be used to solve the terrorism problem, we must first ask: should they be?
The algorithm Google uses to customize your search results relies on Big Data and machine learning. Each time you run a search, Google is effectively saying, “When you enter this query, we think this result is the most relevant one for you, because it was also the most relevant one for millions of other people similar to you.” Did you catch the unspoken assumption there? Google is assuming that past behavior is the best predictor of future intent.
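To make that assumption concrete, here is a toy sketch in Python. The users, clicks, and function names below are entirely invented for illustration; Google’s actual systems are far more sophisticated and not public. The point is simply where the assumption lives in the code:

```python
# A toy sketch (NOT Google's actual algorithm) of ranking search results
# by the past clicks of "similar" users. All data here is hypothetical.
from collections import Counter

# Hypothetical click logs: user -> results they clicked for a given query
past_clicks = {
    "user_a": ["result_1", "result_2"],
    "user_b": ["result_1", "result_3"],
    "user_c": ["result_2"],
}

def rank_results(similar_users: list[str]) -> list[str]:
    """Rank results by how often similar users clicked them in the past.

    The buried assumption: whatever people like you clicked before
    is what you will want next.
    """
    counts = Counter()
    for user in similar_users:
        counts.update(past_clicks.get(user, []))
    return [result for result, _ in counts.most_common()]

print(rank_results(["user_a", "user_b", "user_c"]))
# ['result_1', 'result_2', 'result_3'] -- past behavior, projected forward
```

Notice that history is the only signal `rank_results` has. A genuinely novel intent, one with no precedent in the logs, has nothing to be matched against.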
This is problematic when we attempt to apply the same reasoning to terror attacks. Almost by definition, terror attacks are anomalies. That makes it difficult, if not impossible, to use past events to predict who will become a terrorist or what the next attack will look like. It’s why the rest of the world ridicules Americans for taking off their shoes in airports: a permanent ritual built around one past attack, guarding against the one method an attacker is least likely to reuse.
Look at the common attributes of individuals who commit acts of violence in the US, and you’ll find that you’re far more likely to be killed by a white male than by an Islamic extremist. Yet the idea that all white men should face additional scrutiny is treated with (deserved) scorn. And the sheer volume of data generated by surveilling an entire community makes it more likely, not less, that an actual threat will be missed.
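A back-of-the-envelope calculation shows why. Every number below is invented for illustration, and the detector is assumed to be far more accurate than anything realistic, yet the false positives still swamp the real threats:

```python
# A rough sketch of the base-rate problem with mass surveillance.
# All numbers are hypothetical and chosen only to illustrate the math.
population = 300_000_000      # people under surveillance
true_threats = 3_000          # assumed actual threats (0.001% of people)
sensitivity = 0.99            # detector flags 99% of real threats
false_positive_rate = 0.01    # detector wrongly flags 1% of innocents

flagged_threats = true_threats * sensitivity
flagged_innocents = (population - true_threats) * false_positive_rate

precision = flagged_threats / (flagged_threats + flagged_innocents)
print(f"People flagged: {flagged_threats + flagged_innocents:,.0f}")
print(f"Chance a flagged person is a real threat: {precision:.3%}")
# ~3 million people flagged, and about 99.9% of them are innocent.
```

Even under these generous assumptions, nearly everyone who gets flagged is innocent, and the handful of real threats still has to be dug out of that haystack by human analysts.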
Applying Big Data analysis to terrorism also has some disturbing implications for freedom of speech and thought. If the last terror attack was committed by a Muslim, should we search the homes of everyone who orders a Quran from Amazon? If the last terror attack was committed by a white supremacist, should we search the homes of all white gun owners?
The bottom line is that trying to predict vanishingly rare events from past data will mostly generate false positives, and those false positives translate into hugely negative outcomes for innocent populations.
We’ve already seen studies reporting that individuals are self-censoring their web searches in response to NSA surveillance. I consider that to be a massive blow to intellectual freedom. If our primary response to terror attacks is to create predictions about future attackers based on the behavior of past attackers, all we will end up creating is an ever-lengthening list of activities that will automatically flag you for investigation.
Silicon Valley can’t solve terrorism. I fear that creating “better” data analysis tools will only feed agencies’ voracious appetite for more and more data. Time and time again, we have seen that successful anti-terror efforts involve local policing and human intelligence sources, not algorithms.
So, what can the tech community do? I believe we will be collectively better off in the long run if we put down our hammers and admit that they’re not the best tool for assembling a jigsaw puzzle. Terrorism is a real problem, but I think we can address it more effectively by keeping people — not code — at the center of our solutions.