I don’t know why this took me by surprise. Maybe because I’m not a big sci-fi reader — with exceptions made for the likes of Isaac Asimov and Philip K. Dick and Stanislaw Lem and William Gibson. And Robert Heinlein and Mary Shelley and H.G. Wells and Ray Bradbury. And Margaret Atwood and Neal Stephenson. And how could I not have led with one of my literary heroes, Jorge Luis Borges? But still, you get my point (I think), which is that I don’t usually think in terms of science fiction becoming applied science.
More fool I.
Last week, Christof Heyns, the man burdened with the unenviable title of “United Nations special rapporteur on extrajudicial, summary, or arbitrary executions,” called for a global moratorium on the testing, production, and use of armed robots that can select and kill targets without human command.
They are known as “lethal autonomous robots.” And yes, this is indeed a nightmarish killer-robot movie come marching off the screen and into all too non-virtual reality.
Yet the report of Heyns’s call didn’t even make the front page of America’s “newspaper of record.” Soothingly buried on an inside page of the New York Times, and calmingly including the reassurance that such robots weren’t “yet” in production, it elicited little comment. It seems our alarm systems have been lulled by the use of drones, so conveniently deployed halfway around the world in all sorts of places most Americans can’t even find on a map.
Drones, it’s now clear, are only the warm-up stage. Think of lethal autonomous robots as drones with minds of their own. Just program them and set them loose, secure in the knowledge that nothing can possibly go wrong. No way their electronics will go haywire. No way they’ll become just a little bit too autonomous. With the kind of fail-safe electronics that exist only in android dreams, humans can sleep secure. So long as they’re not the targets.
But wait just a moment: who gets to say who the targets are? Who’s going to program the robots? And according to what criteria? Will they be programmed to search out “suspicious behavior,” as human drone operators do? But then what makes behavior suspicious? The skin color of the person doing the behaving? Anyone with a beard? Anyone moving too fast, or maybe too slow? In too large a group or suspiciously alone? Animal, vegetable, or mineral?
But it seems to me that an important question to ask here is this: Who is going to be raking in the billions on these robots? Who exactly is doing the research and testing, and will presumably get the huge military contracts? Consider this report last year from San Diego public radio station KPBS on who’s profiting from the $12 billion drone industry (yes, you read the last five words correctly — that’s for the years 2005 to 2011). The top three? How could you possibly not guess? Lockheed Martin, Boeing, and Northrop Grumman. It’s enough to make me ashamed of ever having gotten my pilot’s wings.
And then consider the lengthy, detailed report on the military robot market (mind-numbingly referred to as “Military Ground Robot Mobile Platform Systems of Engagement”) prepared by an outfit called WinterGreen Research. Here, in the kind of mangled grammar that seems to accompany lip-smacking anticipation, is a short extract from the press release:
Even as the US presence in Iraq and Afghanistan winds down, automated process implemented as mobile platform systems of engagement are being used to fight terrorists and protect human life. These robots are a new core technology in which all governments must invest. Military ground robot market growth comes from the device marketing experts inventing a new role as technology poised to be effective at the forefront of fighting terrorism. Markets at $4.5 billion in 2013 reach $12.0 billion by 2019. Growth is based on the adoption of automated process by military organizations worldwide.