Radical Instrument

IT is changing the exercise of power. Radical Instrument is picking up the signals.

Archive for the ‘Military & Security’ Category

Monday reads: Internet activists’ limits in Iran; a middle ground for cyberwar; protests we saw coming

1. Internet activism running into its limits in Iran. Can a virtual movement survive without developing real-world institutions? (Foreign Policy)

2. Finding the sensible middle ground when it comes to cyberwar. Is there such a thing? (O’Reilly Radar)

3. Australian hackers rebel against content filtering. The sad thing is, government IT staff probably saw this coming, even if the Prime Minister didn’t. (The Canberra Times)

Written by Mark

February 15, 2010 at 10:48 pm

China and the “gray zone” of cybersecurity

Via Computerworld and other sources: China has announced the shutdown of what the BBC says “is believed to be the country’s biggest training website for hackers,” Black Hawk Safety Net, along with the arrests of three people. The WSJ confirms the arrests actually occurred in November, prompting speculation that the announcement is an attempt to ward off negative press from China’s recent flap with Google.

Whether or not that’s true, the shutdown does signal that China has a difficult balance to strike on cybersecurity as Internet use grows. On the one hand, the growth of nationalist hacker groups has given the government plausible deniability for activities ranging from campaigns against Tibetan exiles to sophisticated attempts to penetrate U.S. government and industry databases. On the other hand, the sheer volume of trained hackers (or untrained ones armed with a few easy-to-use tools), combined with a growing e-commerce market, makes for … a fertile (if illicit) opportunity, sized at $1B in 2008 and fueling a $35M “hacker training” industry.

Written by Mark

February 8, 2010 at 9:57 pm

Google and the NSA

The Washington Post tells half the story, I think, here. A comment in the WaPo story says the goal of the partnership is to “build a better defense of Google’s networks.” But I’d guess the government has a compelling self-interest of its own: learning about the exploited vulnerabilities themselves. In 2008, the San Francisco Chronicle pointed out a $2M+ deal between Google and the NSA for four search appliances and support, and called Google equipment “the backbone of Intellipedia,” a Wikipedia-like network for intelligence agencies.

Written by Mark

February 4, 2010 at 10:34 pm

Posted in Military & Security

Here come the robot sailors…

…or ghost-frigates, to use The Register’s term for a new drone anti-submarine vessel proposed by DARPA. The march of military robotics (pun intended) continues, raising new questions while the previous ones (like ethics) remain unanswered. A few questions, in this case:

1.  Could a foreign navy simply “pick up” a drone at sea, claiming that it constituted a hazard to navigation? How would you disentangle those claims?

2.  Could you spoof a drone’s communications, or block them entirely? Could you “turn” a drone to lead you to its manned parent vessel? (Have doubts? Intercepts have already happened; see the sketch after this list.)

3.  If a drone is sunk, what would the consequences be? Would you be willing to risk armed conflict (between manned vessels) over the loss of an unmanned drone?
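
On question 2: the intercepts that have already happened reportedly involved unencrypted, unauthenticated links. As a rough illustration of what raising the bar might look like, here is a minimal sketch of authenticated command traffic between a controller and a drone, using a shared secret and an HMAC. Everything here (the key handling, the command format) is a simplified assumption for illustration, not a description of any actual system.

```python
import hmac
import hashlib
import os

# Hypothetical shared secret, provisioned to both drone and controller ahead of time.
SHARED_KEY = os.urandom(32)

def sign_command(command, key=SHARED_KEY):
    """Append an HMAC-SHA256 tag so the drone can check the sender knows the key."""
    tag = hmac.new(key, command, hashlib.sha256).digest()
    return command + tag

def verify_command(message, key=SHARED_KEY):
    """Return the command if the tag checks out, or None if it looks spoofed."""
    command, tag = message[:-32], message[-32:]
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return command if hmac.compare_digest(tag, expected) else None

# A command forged without the key fails verification.
assert verify_command(sign_command(b"RETURN_TO_PARENT")) == b"RETURN_TO_PARENT"
assert verify_command(b"RETURN_TO_PARENT" + b"\x00" * 32) is None
```

Authentication of this sort makes naive spoofing harder; it does nothing about jamming, replay, or the capture scenario in question 1, which is part of why none of these questions has a tidy answer.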

I can’t help but think that a future U.S. Secretary of State will one day make a speech about the norms needed to govern the use of robotics in warfare…well after the robotics arms race is underway.

Written by Mark

February 2, 2010 at 10:19 pm

Posted in Military & Security

Jack Goldsmith on the cyber arms race

Today’s Washington Post contains an op-ed by Jack Goldsmith, who headed the U.S. Justice Department’s Office of Legal Counsel for several months in 2003-2004. It’s a striking piece, arguing that cyber-norms can’t emerge until the U.S. discloses or curbs its own offensive cyber-activities.

Beneath the text seems to be a concern that animates Goldsmith’s book about his time in the OLC, “The Terror Presidency.” While sympathetic to the Bush Administration, Goldsmith came away from the OLC worried about the legal cover that a potentially limitless war gave to executive seizure of power by fiat, rather than via “softer measures.” To quote from his article on the Obama Administration’s counterterrorism strategy: “Packaging, argumentation, symbol, and rhetoric, it turns out, are vitally important to the legitimacy of terrorism policies.” One can’t help but wonder whether some of the same concerns about power aren’t evident here.

More recently, Goldsmith co-authored a book that attempts, among other things, to dismantle claims that the Internet will undermine government power. But doesn’t the current Google-China dispute show that the question of norms is being actively pursued not by governments, but by non-governmental actors? Goldsmith highlights a possible hypocrisy in Secretary of State Clinton’s call for “norms of behavior among states,” but he neglects to note that her speech was prompted – or at least pre-empted – by actions taken by Google.

Certainly, patterns of similar cyberattacks had occurred previously, without a clear response from the U.S. government (which has, as Goldsmith notes, provided tacit support for “hacktivism” in other circumstances). The vacuum of government action to promote norms may well lead to a situation in which norms originate in the private sector – either consciously, or through business decisions driven by an environment of cyber-insecurity. To answer the question posed by the title of Goldsmith’s book, “Who Controls the Internet?” … well, I’m still not sure.

Written by Mark

February 1, 2010 at 11:42 pm

Perspective, Part II – Rethinking Google and China

The Net has enough commentary on the situation between Google and China, with the bulk of it focusing on whether this really amounts to Google walking the “don’t be evil” talk.

Over at The Atlantic, though, there’s a remarkable (and widely disseminated) post by Marc Ambinder that includes the following –

In the absence of an international treaty defining what cyber sovereignty consists of, it is hard to figure out the boundaries, much less police them effectively.

The geopolitics of cyber power suggests that centrally directed government espionage is…tolerated by U.S. officials.

…and…

There is no fear among U.S. officials that China would ever mount a crippling cyber attack against U.S. infrastructure, even though they have mapped our electrical grid and probably left behind some malware that could be triggerable at a later date. (For what it’s worth, the U.S. has also mapped China’s electrical grid.)

The entire post is remarkable, but these three excerpts point to the international norms that have developed organically around the use of cyberspace to project power. Ambinder’s post is yet more confirmation that every day, whatever governments or companies deny, information networks are subject to “attacks” – read: unauthorized penetration and potential tampering – at a volume that is only hinted at, is presumed to be stunning, and likely originates with governments as well as criminals. This happens largely out of sight, except to those directly involved – and it’s difficult to resist parallels with military activities in Afghanistan and elsewhere. We have come to accept, as a new norm, the unauthorized reconnaissance of networks that (presumably, but not always) exist within national boundaries – much as the international community already accepts, with a few glaring exceptions, that states will conduct unauthorized surveillance of other states’ activities.

The analogy doesn’t hold, though. Surveillance conducted in the physical world still presumes that sovereignty is respected – and there are still several escalatory steps between surveillance that a state perceives as “crossing the line” and outright conflict. If reconnaissance in an information network is accompanied by tampering – see Ambinder’s reference to malware that “could be triggerable,” above – the distance between reconnaissance and conflict is much, much shorter. If you accept the feasibility of the “Digital Pearl Harbor” threat (and I don’t), wouldn’t the placement of “triggerable malware” be the equivalent of finding, say, explosives rigged for remote detonation outside key infrastructure? Should the pattern of norms in cyberspace be fundamentally different from that governing states’ behavior in the physical world?

Ambinder’s post hints that the pattern is actually closer to a relationship of mutually assured destruction (see the third quote above, emphasis on the “for what it’s worth” part), like the one that existed between the U.S. and Soviet nuclear arsenals – with the implicit assumption that this represents a sort of stability. I’m not sure that holds. What made MAD work was transparency: the impossibility of a surprise “first strike” that would negate the “mutual” part of MAD. That transparency is completely lacking when it comes to the use of power in cyberspace. There is near-zero attribution (officially, anyway) of activities – of tracing cyberwar back to identifiable cyberwarriors. And there is a level of secrecy afforded to the cyber-environment that, I’d wager, tempts states to take more risks, producing greater instability over time.

Back to Google and China. Where this represents a landmark – or where it doesn’t – is in the transparency Google brought to the situation as it developed. Fundamentally, Google’s decision challenges the international norm that has allowed activities like China’s to continue and proliferate across global networks. The proposition implied by Google’s decision is that if international actors are to interact on the global internet, a set of acceptable behaviors must be defined through practice – and that current practice is unacceptable.

And it may be that only a non-state actor like Google, one with no stake in questions of international power, could do this. Whether the challenge gains momentum – or whether we give up on the idea of a global internet altogether – remains to be seen.

Written by Mark

January 26, 2010 at 10:13 pm

Perspective, Part I

Relayed by a friend who’d asked an ex-colleague what surprised him most about his Afghanistan deployment –

“I shot live rounds every day I was there. Every day.”

Written by Mark

January 19, 2010 at 11:46 pm

Posted in Military & Security

Military robots and ethics – more debate, but still missing some questions?

In the BBC’s top technology stories tonight: a University of Sheffield professor of artificial intelligence states that a military robot’s ability to reliably distinguish friend from foe is still 50 years away, meaning that the technology needs restraint while the ethics catch up.

Whether it’s fifteen years or fifty, Moore’s Law practically guarantees that the technology will outrace ethics and policy, absent a multinational commitment to constrain it. And there are questions beyond the rules of engagement exercised by a semi-autonomous or autonomous robot – for instance, whether controllers, safely ensconced hundreds or thousands of miles away, constitute legitimate military targets. All such questions point to a grave possibility: that the growing use of robots could encourage rather than inhibit war, and expand the battlefield to include more civilians.
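
To put the “fifteen or fifty years” point in rough numbers, assume (loosely) that the relevant capability doubles every two years, a stand-in for Moore’s Law and an assumption rather than a forecast. The compounding alone shows why policy written for today’s systems will lag badly:

```python
# Back-of-the-envelope compounding under an assumed two-year doubling period.
DOUBLING_PERIOD_YEARS = 2  # assumption; the "law" is an observation, not a guarantee

def growth_factor(years, doubling_period=DOUBLING_PERIOD_YEARS):
    """How many multiples of today's capability after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

for horizon in (15, 50):
    print(f"{horizon} years: ~{growth_factor(horizon):,.0f}x today's capability")

# Output:
# 15 years: ~181x today's capability
# 50 years: ~33,554,432x today's capability
```

Whatever the true doubling period turns out to be, the gap between that curve and the pace of treaty-making is the point.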

The same questions have been raised about cybersecurity, leading some to float the idea of an international convention. If one comes about, it might need to aim at a larger ambition – to understand, and then govern, automation as it advances and is applied to war.

Written by Mark

August 3, 2009 at 9:36 pm

Cyberwar and civil damage

From the front page of Sunday’s NY Times: the outlines of a continuing debate around the broader, unintended consequences of cyberwar.

This curious section appears about midway through the piece:

But some military strategists argue that these uncertainties have led to excess caution on the part of Pentagon planners.

“Policy makers are tremendously sensitive to collateral damage by virtual weapons, but not nearly sensitive enough to damage by kinetic” — conventional — “weapons,” said John Arquilla, an expert in military strategy at the Naval Postgraduate School in Monterey, Calif. “The cyberwarriors are held back by extremely restrictive rules of engagement.”

Despite analogies that have been drawn between biological weapons and cyberweapons, Mr. Arquilla argues that “cyberweapons are disruptive and not destructive.”

This seems odd, given Arquilla’s previously articulated concern over “a grave and growing capacity for crippling our tech-dependent society [which] has risen unchecked,” and his advocacy for arms control in this area. Granted, he’s careful to distinguish “mass disruption” from “mass destruction,” but the line between mass disruption and plain destruction seems blurry. It takes a great deal of nuance to advocate international controls on the one hand and less restrictive rules of engagement on the other.

He does raise a fair point about the different standards applied to conventional weapons and cyberweapons. Should there be a difference, especially if the full extent of collateral effects is unknown? The case study here might be electrical infrastructure – especially since it has featured so prominently in Department of Homeland Security arguments. As the LA Times has noted, the U.S. attack on Iraq in 2003 deliberately avoided strikes on electrical infrastructure – a significant change from the 1991 campaign and its second- and third-order effects. If an attack on information networks has the same effect on electrical infrastructure as a conventional attack, should it be governed by the same rules? And if it has the same second- and third-order effects as an attack on electrical infrastructure – whether or not that infrastructure is targeted – should it be governed the same way?

Underlying this debate is the simple trend toward a more integrated world, in material, communications, and social networks. It’s not a flat world by any measure, but the wiring continues to be put in place. As that happens, it will become even more difficult to separate warfare from its effects on civil society. So it’s important that this debate continue – to ensure, at a minimum, that using information systems to conduct war works to inhibit war rather than encourage it.

Written by Mark

August 2, 2009 at 7:01 am

Posted in Military & Security

Three more for Thursday

1. U.S. military considers banning social networking technologies, due to security concerns (Wired Danger Room). Watch out for that pendulum. Leads one to wonder how much of the security problem is purely due to the technologies, and how much is due to things like architecture, data structures, and organization.

2. About 70% of Nigeria’s bandwidth lost in an undersea cable cut (BBC).

3. The MS-Yahoo! deal doesn’t include Yahoo! China. Not that it matters, in terms of China’s search market (The Register).

Written by Mark

July 30, 2009 at 9:11 pm