Monday, June 25, 2012

Is It Possible to Wage a Just Cyberwar?

It's time to get serious about the moral questions resulting from our new class of weapons.

In the last week or so, cyberwarfare has made front-page news: the United States may have been behind the Stuxnet cyberattack on Iran; Iran may have suffered another digital attack with the Flame virus; and our military and industrial computer chips may or may not be compromised by backdoor switches implanted by China. These revelations suggest that the way we fight wars is changing, and so are the rules.

This digital evolution means that it is now less clear what kinds of events should reasonably trigger a war, as well as how and when new technologies may be used. With cyberweapons, a war theoretically could be waged without casualties or political risk, so their attractiveness is great -- perhaps so irresistible that nations are tempted to use them before such aggression is justified. This essay identifies some important ethical issues raised by these emerging digital weapons, which in turn help explain why national cyberdefense is such a difficult policy area.

Why Worry?

How we justify and prosecute a war matters. For instance, the previous U.S. administration proposed a doctrine of preventive or preemptive war, known as the "Bush doctrine," which asked: if a nation knows it will be attacked, why wait for the damage to be done before it retaliates? But this policy breaks from the just-war tradition, which historically gives moral permission for a nation to enter war only in self-defense. This tradition holds that waging war -- a terrible evil that is to be avoided when possible -- requires a nation to have the righteous reason of protecting itself from further unprovoked attacks.

With the Bush doctrine, the U.S. seeks to expand the triggers for war -- and this could backfire spectacularly. For instance, Iran has reportedly contemplated a preemptive attack on the U.S. and Israel, because it believes that one or both will attack Iran first. Because intentions between nations are easy to misread, especially between radically different cultures and during an election year, it could very well be that the U.S. and Israel are merely posturing as a gambit to pressure Iran to open its nuclear program to international inspection. However, if Iran were to attack first, it would seem hypocritical for the U.S. to complain, since the U.S. already endorsed the same policy of first strike.

A big problem with a first-strike policy is that there are few scenarios in which we can confidently and accurately say that an attack is imminent. Many threats or bluffs that were never intended to escalate into armed conflict can be mistaken for "imminent" attacks. This epistemic gap in the Bush doctrine introduces a potentially catastrophic risk: the nation delivering a preemptive or preventive first strike may turn out to be the unjustified aggressor and not the would-be victim, if the adversary really was not going to attack first.

Further, by not reserving war as a last resort -- after all negotiations have failed and after an actual attack, a clear act of war -- the Bush doctrine opens the possibility that the U.S. (and any other nation that adopts such a policy) may become ensnared in avoidable wars. At the least, this would cause harm to the warring parties that otherwise might not have occurred, and it may set up an overstretched military for failure, if battles are not chosen more wisely.

What does this have to do with cyberwarfare? Our world is increasingly wired, with new online channels for communication and services interwoven into our lives virtually every day. This also means new channels for warfare. Indeed, a target in cyberspace is more appealing than a conventional physical target, since the aggressor would not need to incur the expense and risk of transporting equipment and deploying troops across borders into enemy territory, not to mention the political risk of casualties. Cyberweapons could be used to attack anonymously and at a distance while still causing great mayhem, on targets ranging from banks to media to military organizations. Thus, cyberweapons would seem an excellent choice for an unprovoked surprise strike.

Today, many nations have the capability to strike in cyberspace -- but should they? International humanitarian laws, or the "laws of war," were not written with cyberspace in mind. So we face a large policy gap, which organizations such as the U.S. National Research Council have tried to address in recent years. But there is also a gap in developing the ethics behind policies. We describe below some key issues related to ethics that need attention.

1. Aggression

By the laws of war, there is historically only one "just cause" for war: defense against aggression, as previously mentioned. But since aggression is usually understood to mean that human lives are directly in jeopardy, it becomes difficult to justify a military response to a cyberattack that causes no kinetic or physical harm in the conventional or Clausewitzian sense -- for example, the disruption of a computer system or infrastructure that directly kills no one. Further, in cyberspace, it may be difficult to distinguish an attack from espionage or vandalism, neither of which historically is enough to trigger a military response. For instance, a clever cyberattack can be subtle and hard to distinguish from routine breakdowns and malfunctions.

If aggression in cyberspace is not tied to actual physical harm or threat to lives, it is unclear how we should understand it. Does it count as aggression when malicious software has been installed on a computer system that an adversary believes will be triggered? Or is the very act of installing malicious software an attack itself, much like planting a landmine? What about unsuccessful attempts to install malicious software? Do these count as war-triggering aggression -- or mere crimes, which do not fall under the laws of war? Traditional military ethics would answer all these questions negatively, but in the debate over the legitimacy of preemptive and preventive war, the answers are more complex and elusive.

Relatedly, insofar as most cyberattacks do not directly target lives, are they as serious as conventional attacks? Organized cybervandalism could be serious if it prevents a society from meeting basic human needs, like providing food. A lesser but still serious case was the 2008 denial-of-service attack on media-infrastructure websites in the country of Georgia, which prevented the government from communicating with its citizens.

2. Discrimination

The laws of war prohibit the targeting of noncombatants, since they do not pose a military threat. Most theorists accept a "double effect" in which some noncombatants could be unintentionally harmed, i.e., collateral damage, in pursuing important military objectives, though other scholars defend more stringent requirements and greater protections for noncombatants. Some challenge whether noncombatant immunity is really a preeminent value, but the issue undoubtedly has taken center stage in just-war theory and therefore the laws of war.

It is unclear how discriminating cyberwarfare can be. If victims use fixed Internet addresses for their key infrastructure systems, and these can be found by an adversary, then they can be targeted precisely. However, victims are unlikely to be so cooperative. Effective cyberattacks therefore need to search for targets and propagate, but as with biological viruses, this creates the risk of spreading to noncombatants: while noncombatants might not be targeted, there are also no safeguards to help avoid them. The Stuxnet worm in 2010 was intended to target Iranian nuclear processing facilities, but it spread far beyond its intended targets. Although its damage was highly constrained, its quick, broad infection through vulnerabilities in the Microsoft Windows operating system was noticed and required upgrades to antivirus software worldwide, incurring a cost to nearly everyone. The worm also inspired clever ideas for new exploits now in use, another cost to everyone. Arguably, then, Stuxnet did cause some collateral damage.

Cyberattackers could presumably appeal to the doctrine of double effect, arguing that harmful effects on noncombatants are acceptable because they are unintended, though foreseen. This may not be plausible, given how precise computers can be when we want them to be. Alternatively, cyberattackers could argue that their attacks were not directed against noncombatants themselves but against infrastructure. However, attacking the infrastructure a population depends on, much as the AIDS virus attacks the body's immune system rather than the body directly, can be worse than causing bodily harm directly. Details matter: for instance, if it knocks out the electricity and refrigeration necessary to protect the food supply, even a modest cyberattack could lead to starvation and the suffering of innocents.

3. Proportionality

Proportionality in just-war theory is the idea that it would be wrong to cause more harm in defending against an attack than the harm of the attack in the first place. This idea comes from utilitarian ethics and is also linked to the notion of fairness in war. For example, a cyberattack that causes little harm should not be answered by a conventional attack that kills hundreds. But as one U.S. official described the nation's cyberstrategy, "If you shut down our power grid, maybe we will put a missile down one of your smokestacks."

A challenge to proportionality is that certain cyberattacks, like biological viruses, might spiral out of control regardless of the attackers' intentions. While those consequences could be tolerated to prevent even worse ones, a lack of control means an attack might be impossible to call off after the victim surrenders, violating another key law of war. Another issue is that the target of a cyberattack may have difficulty assessing how much damage it has received. A single malfunction in software can cause widely varied symptoms; thus a victim may believe it has been harmed more than it actually has, motivating a disproportionate counterattack. Counterattack in cyberspace -- a key deterrent to unprovoked attacks -- is therefore fraught with ethical difficulties.

4. Attribution

Discrimination in just-war theory also requires that combatants be identifiable, to clarify which targets are legitimate; this is the principle of attribution of attackers and defenders. Terrorism ignores this requirement and therefore elicits moral condemnation. Likewise, a serious problem with cyberwarfare is that it is very easy to mask the identities of combatants; counterattack then risks hurting innocent parties if the responsible one is unknown. For example, the lack of attribution of Stuxnet raises ethical concerns because it denied Iran the ability to counterattack, encouraging it toward ever more extreme behavior.

Attribution is not only about moral responsibility but also criminal (or civil) liability: we need to know whom to blame and, conversely, who can be absolved of blame. To make attribution work, we need international agreements. We could agree, first, that cyberattacks should carry a digital signature of the attacking organization. Signatures are easy to compute, and their very presence can be concealed with the techniques of steganography, so there are no particular technical obstacles to using them. Nation-states could also agree to use networking protocols, such as IPv6, that make attribution easier, and they could cooperate better on international network monitoring to trace the sources of attacks. Economic incentives, such as the threat of trade sanctions, can make such agreements desirable.
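
To make the technical point concrete, here is a minimal sketch, in Python with the widely used cryptography package, of how an attack payload could carry a verifiable signature. The key names and payload are hypothetical; this illustrates the general technique, not any real cyberweapon's design.

    # Minimal sketch: signing a payload so its origin can later be proven.
    # Assumes the third-party "cryptography" package is installed;
    # all names and data here are hypothetical illustrations.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The attacking organization holds a long-term signing key; the public
    # half could be deposited with a neutral international body.
    signing_key = Ed25519PrivateKey.generate()
    public_key = signing_key.public_key()

    payload = b"hypothetical attack payload bytes"
    signature = signing_key.sign(payload)  # 64-byte Ed25519 signature

    # A victim or arbiter holding only the public key can check attribution.
    try:
        public_key.verify(signature, payload)
        print("Payload attributable to the key's registered owner.")
    except InvalidSignature:
        print("Attribution claim fails.")

The signature adds only 64 bytes and, as noted above, its presence could itself be hidden steganographically; the obstacles to such a scheme are diplomatic, not technical.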

5. Treacherous Deceit

Perfidy, or deception that abuses the mutual trust needed for fair conduct in warfare, is prohibited by both the Hague and Geneva Conventions. For instance, soldiers are not permitted to impersonate humanitarian workers or enemy soldiers. In contrast, some ruses, misinformation, false operations, camouflage, and ambush of combatants are explicitly permissible. Cyberattacks almost inevitably involve an element of deception, such as tricking a user into clicking on a malicious link. So, to what extent could cyberattacks count as perfidy and therefore be illegal under international humanitarian law? (Consider, for instance, a fake email addressed from the International Committee of the Red Cross to a military organization, but actually sent by a malicious nation-state along with a virus: how is this different from posing as a Red Cross worker?)

We don't get as angry when software betrays us as when people betray us. But maybe we should.

To understand why perfidy is prohibited, we can look at its twin concept of treachery: a prototypical example of a treacherous (and also illegal) act in war is to kill with poison. But why should poison be so categorically banned, given that some poisons can kill quickly and painlessly, much more humanely than a bullet to the head? This apparent paradox suggests that the concept of treachery -- and therefore perfidy -- is fuzzy and hard to apply. We don't get as angry when software betrays us as when people betray us. But maybe we should. Software would be better if users were less complacent.

6. A Lasting Peace

In just-war theory, recent attention has focused on the cessation of hostilities and establishment of a lasting peace, given problems with recent insurgencies. The consensus is that combatants have obligations after the conflict is over. For example, an attacking force might be obligated to provide police forces until the attacked state can stabilize, or attackers might have duties to rebuild the damage done by their weaponry.

This suggests that cyberattacks could be morally superior to traditional attacks insofar as they could be engineered to be reversible. When the damage done is to data or programs, the originals may be perfectly restorable from backup copies, something that has no analogue with guns and bombs. Sophisticated, responsible attacks could even use encryption, making their reversal a simple matter of decryption. Such restoration could be done quickly if the attack was narrowly targeted, and it could be done remotely. Mandating the reversal of cyberattacks after hostilities have ceased could therefore even become part of the laws of war. However, reversibility is not guaranteed when it is unclear what was damaged, or when so much was damaged that restoration would take an unacceptable amount of time. We need to establish ethical norms for reversibility and make them design requirements for cyberattack methods.
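
As a minimal sketch of that idea, the following Python fragment, using Fernet symmetric encryption from the same cryptography package as above, shows how an attack that encrypts data rather than destroying it can be undone exactly by releasing the key. The data and key handling are illustrative only, not a real attack design.

    # Minimal sketch: a "reversible" disruption that replaces usable data
    # with ciphertext, so post-conflict restoration is just decryption.
    # Assumes the third-party "cryptography" package; data is illustrative.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # attacker retains this to enable reversal
    cipher = Fernet(key)

    original = b"hypothetical infrastructure configuration data"

    # The "attack": data is rendered unusable but not destroyed.
    disabled = cipher.encrypt(original)

    # The "reversal": once the key is handed over, restoration is exact.
    restored = Fernet(key).decrypt(disabled)
    assert restored == original

Restoration here is bit-for-bit exact, something no kinetic weapon offers; but as the paragraph above cautions, it presumes the key survives and that the disabled systems were not needed to keep people safe in the interim.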

Conclusion

The issues sketched above are only some of the basic ethical questions we need to resolve if national cyberpolicies are to be supported by consistent and reasonable principles. The right time to investigate them is prior to the use of cyberweapons, not during an emotional and desperate conflict or after an international outcry. Thinking about ethics after the fact is much less effective. For instance, the Ottawa Treaty prohibits the use of indiscriminate landmines, but countless landmines still lie buried, until someone -- perhaps a child -- discovers one with his or her life; similarly, the Non-Proliferation Treaty has reduced but not stopped the spread and threat of nuclear weapons.

With cyberweapons, we have the chance to get it right from the start, as we do with other emerging technologies. We need not be helpless bystanders, merely watching events unfold and warfare evolve in the digital age. With hindsight and foresight, we have the power to be proactive. By building ethics into the design and use of cyberweapons, we can help ensure that war is not more cruel than it already is.

Original Page: http://m.theatlantic.com/technology/archive/2012/06/is-it-possible-to-wage-a-just-cyberwar/258106/
