Radical Instrument

IT is changing the exercise of power. Radical Instrument is picking up the signals.

Posts Tagged ‘robotics’

Here come the robot sailors…

…or ghost-frigates, to use The Register’s term for a new drone anti-submarine vessel project proposed by DARPA. The march of military robotics (pun intended) continues, with new questions and still no answers to the previous ones (like ethics). A few, in this case –

1.  Could a foreign navy simply “pick up” a drone at sea, claiming that it constituted a hazard to navigation? How would you disentangle those claims?

2.  Could you spoof a drone’s communications, or block them entirely? Could you “turn” a drone to lead you to a manned parent vessel?  (Have doubts? Intercepts have already happened.)

3.  If a drone is sunk, what would the consequences be? Would you be willing to risk armed conflict (between manned vessels) over the loss of an unmanned drone?

I can’t help but think that a future U.S. Secretary of State will one day make a speech about the norms needed to govern the use of robotics in warfare…well after the robotics arms race is underway.

Written by Mark

February 2, 2010 at 10:19 pm

Posted in Military & Security

Military robots and ethics – more debate, but still missing some questions?

In the BBC’s top technology stories tonight:  a University of Sheffield professor of artificial intelligence states that a military robot’s ability to distinguish friend from foe reliably is still 50 years away, meaning that the technology needs restraint while the ethics catch up.

Regardless of whether it’s fifteen or fifty years, Moore’s Law practically guarantees that the technology will outrace ethics and policies, absent a multinational commitment to constrain it. There are questions beyond the rules of engagement as exercised by a semi-autonomous or autonomous robot – for instance, whether controllers, safely ensconced hundreds or thousands of miles away, constitute legitimate military targets. All such questions point to a grave possibility – that the growing use of robots could encourage rather than inhibit war, and expand the domain of the battlefield to include more civilians.

The same questions have been raised about cybersecurity, leading some to float the idea of an international convention. If one comes about, it might need to aim at a larger ambition – to understand, and then govern, automation as it advances and is applied to war.

Written by Mark

August 3, 2009 at 9:36 pm

Back from hiatus…

…and I’m reading “The Downside of Letting Robots Do the Bombing” in the Sunday NYT. The theme is one of investing in tactics without a strategic context:

But in Pakistan, some C.I.A. veterans of the tribal battles worry that instead of separating the citizenry from the militants the drone strikes may be uniting them. These experts say they fear that killing militants from the sky won’t undermine, and may promote, the psychology of anti-American militancy that is metastasizing in the country.

…and later –

Intelligence officials in Washington and Islamabad said it was nearly impossible to measure the impact of the strikes on the so-called ‘war of ideas.’

Umm…yes. What idea, exactly, is a flying killer robot supposed to convey?

Written by Mark

March 22, 2009 at 9:08 pm

Posted in Military & Security

Soviet drones of yesteryear

Military robots are having their fifteen minutes, from The Daily Show to the pages of The New Yorker. Amid all the talk of revolution, it might be worth asking what happened to the Soviet teletank, and why it didn’t last beyond the first few years of World War II.

It’s an interesting question, given that the “revolution in military affairs” in U.S. defense circles has its intellectual roots in the Soviet Union’s “military-technical revolution” … a concept that would be difficult to untangle from the Soviet Union’s social and cultural relationship with cybernetics.

The (purely speculative) point:  there’s something in the “robot revolution” that’s more than a straightforward matter of military capability, that points to deeper societal and historical trends. It might be symptomatic of a return to isolationism, or something else. Whatever it is, it’s worth examining before the revolution moves much further.

Written by Mark

February 19, 2009 at 9:47 pm

Posted in Military & Security, Technology

Robots and the systematization of war

After a tortuous ride through blog-space (thanks to links on Opposed Systems Design and Kings of War), I came across this New Year’s Day post on Entitled to an Opinion, featuring a 2005 presentation by a former R&D head in the Israeli Ministry of Defense.

At the heart of the presentation is a “system collapse model” for Israeli anti-terrorism efforts, which is described as “mostly all maths” and a “thermodynamic model which analyzes the disorder inside the system.” It’s essentially the theory of Jenga:  the more components of a system that are damaged, the greater the probability of system collapse, even if the system’s “critical point” remains undamaged. 
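
The presentation’s actual math isn’t reproduced in the post, so purely as an illustration, here is a minimal sketch in Python of what a threshold-style collapse curve might look like. The function name, threshold, and steepness values are my own assumptions for the sake of the example, not anything taken from the model:

    import math

    def collapse_probability(damaged, total, threshold=0.4, steepness=12.0):
        """Toy 'system collapse' curve: the odds of collapse rise sharply
        once the damaged fraction of components passes a critical threshold,
        even though no single component is itself the 'critical point.'
        Threshold and steepness are illustrative assumptions."""
        fraction = damaged / total
        return 1.0 / (1.0 + math.exp(-steepness * (fraction - threshold)))

    # Damage components one at a time and watch the collapse risk climb.
    for damaged in range(0, 101, 20):
        p = collapse_probability(damaged, 100)
        print(f"{damaged:3d}/100 components damaged -> P(collapse) = {p:.2f}")

The Jenga-like property is the shape of the curve: well below the threshold, each additional damaged component barely moves the odds; near the threshold, every one counts.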

It’s pure speculation to draw lines from this theory to the IDF’s 2006 and 2008 campaigns in Lebanon and Gaza. Still, one can’t help but be reminded of General James N. Mattis’ memo to U.S. Joint Forces Command criticizing “effects-based operations” (which this Air Force briefing describes as an approach that models an “Enemy as a System”):  

For example, a recent analysis of the recent Israeli-Hezbollah conflict found that EBO ‘terminology used was too complicated, vain, and could not be understood by the thousands of officers needed to carry it out.’ … Although there are several factors why the IDF performed poorly during the war, various post-conflict assessments have concluded that over reliance on EBO concepts was one of the primary contributing factors in their defeat.

Here’s what’s troubling. There’s an argument to be made that the military’s growing use of robots – highlighted by P.W. Singer’s new book and debates over the implications for ethics and laws of war – represents the continuance of a mindset that applies “enemy as a system” thinking to war. General Mattis’ memo highlights the dangers inherent in this reductionism:  “…all operating environments are dynamic with an infinite number of variables…” Variables, in other words, that can’t be overcome with better or even adaptive programming. 

And that doesn’t even start to address the moral questions involved in systems models for warfare, or the ethical and legal questions associated with the application of these models to domains like homeland security … domains which will see technology manifestations of this thinking (e.g., TSA databases) just as we’re seeing with the growth in military robots. It’s not the technology that should trouble us. It’s the theory that defines how we use it. 

Post-script:  The background for the presentation featured on Entitled to an Opinion is intriguing. It was delivered at a Russian think-tank headed by a political theorist with possible links to far-right European and neo-Nazi groups. This same theorist, Sergey Kurginyan, also appeared on a panel with a U.S. Department of Homeland Security official at a 2008 “World Summit on Counter-Terrorism” in Israel. No conspiracy theory here – just a suspicion about the possible attraction of these systems models to the right wing, and a comment that we should really watch whom we associate with on the counterterrorism front.

Written by Mark

February 10, 2009 at 11:21 pm

Monday roundup…Google vs. closed information societies, virtualizing the Great Game, Asian MMOs, and, yes, more robots

1.  The director of Google Earth / Google Maps connects objections to these programs to “closed information societies,” naming “…China to some extent and in Russia and legacies of that in places like India.”

2.  SimAfghanistan. The McNamara legacy lives on…with better visualization techniques, I’m sure.

3. Via Wired’s Danger Room, a Navy-sponsored report from Cal Poly on “Autonomous Military Robotics:  Risk, Ethics, and Design.” The report alludes to the potential for “asymmetric response” to the increased use of military robots, but misses what I think is a central question:  does the “man-in-the-loop” (or “tele-operator,” in the lingo of this report) become a legitimate military target for the opposing side? If so, does that not risk loosening the ethical barrier against “total war,” considering that many of these tele-operators may be on U.S. soil? The technology is moving too fast here to focus solely on the ethics of American use of military robots. Rather, the U.S. should take advantage of its lead in this area to drive an international convention defining the use and limits – the law of war, if you will – for robots as they exist today and might exist in the next five years.

4.  Four of the top five massively multiplayer online role-playing games (MMORPGs) are Asian, in terms of revenue. The other one is, of course, World of Warcraft…and rounding out the rest of the top ten are other Western efforts. GigaOm speculates that the most popular MMORPG in this group is not WoW, but likely one of the Asian games – possibly China’s Fantasy Westward Journey.

5.  In tangentially related news, South Korea announced a four-year, $25B (shared by the government and telecom operators) effort to upgrade Internet service to 1 Gbps, which GigaOm notes is 200 times as fast as a U.S. DSL connection. Korea has always been a fascinating lab to watch the development of “wired society,” from social networking and VoIP technologies (watch the video) to national plans for robotics and treatments for web-addiction. The aforementioned video asks why several Korean technologies – such as a forerunner to MySpace – went largely unnoticed outside Korea, and offers a partial answer in culture. I say partial, because it speaks to a dynamic between code and culture that still isn’t well understood.

Written by Mark

February 2, 2009 at 9:35 pm

Killer robots, part 2: implications for international law?

As Foreign Policy Watch recently noted, the winter edition of the Wilson Quarterly features an article by P.W. Singer on the vertigo-inducing growth of robots and automation in the U.S. military, adapted from his new book Wired for War.

Opinio Juris has a nice summary of points raised during an NPR interview with Singer, particularly his theme that the assumptions by which we define “war” are less and less valid. This is critical, and it seemed to be a point overlooked in a Washington Post opinion piece that caromed around a few blogs recently. 

The literature that’s sprung up so far seems to be unified by the contrast between the efficacy of military robots and the ethics of their use. The legal question, when examined, seems to concentrate on whether we can avoid the robot equivalent of a My Lai (indeed, John Pike, among recent authors, has even suggested robots could thwart genocide, not to mention lesser atrocities), given the potential lethality of autonomous or semi-autonomous machines.

While important, I’m not sure that the efficacy/ethics question can be resolved without a better understanding of what now constitutes “war,” as Singer’s work suggests. Take, for instance, the cited example of the human operator in the U.S. piloting a drone in a war zone a continent away. Singer elsewhere in the article quotes Gordon Johnson of the Joint Forces Command:  “The enemy, are they going to give up blood and guts to kill machines?” Left unanswered is whether an enemy would instead focus on attacking the man-in-the-loop in the U.S., and whether this would constitute a legitimate act of war. Is the man-in-the-loop a combatant in this instance? Will increased use of robotics, controlled from afar, not just change the proclivity to use military force, as Singer asks, but also blur the boundaries of what legitimately constitutes a “battlefield”?

As Gary M. Anderson and Adam Gifford, Jr., have noted, international law has historically lagged technological change – the submarine and strategic bombing representing two cases in point. But in a situation where Moore’s Law is driving the rate of change – and where advances in robotics are not limited to the U.S. – would it not be wise for a post-Iraq U.S. to lead a definition of conventions for robot operation and use? Leaving this one to technology evolution just seems too dicey. I don’t know whether this gets raised in Singer’s book, but I’m looking forward to reading it.

Written by Mark

January 27, 2009 at 10:43 pm

Posted in Military & Security, Technology
