The Cybersecurity Dilemma: Hacking, Trust, and Fear Between Nations organizes its arguments in a fairly no-nonsense premise → premise → conclusion manner that I thought would be good to summarize with a blog post. You can buy the book here.
Security dilemmas are dangerous because in seeking their own security, states often build capabilities and take actions that can directly threaten the security of other states, creating impressions of imminent offensive intent and prompting escalation. This is especially true for cybersecurity, for several reasons:
States that desire options for future cyber operations must make intrusions in advance.
States that desire options purely to defend themselves also have the incentive to intrude early as the defensive process of preparation, detection, data collection, analysis, containment, and decontamination benefit from making intrusions.
Cyber intrusions for intelligence gathering will be perceived as more threatening than past intelligence operations.
The traditional mitigations to the security dilemma are less effective for the cybersecurity dilemma.
The cybersecurity dilemma is also harder to solve than the ordinary security dilemma, since cyber capabilities are harder to assess than military power, and status quo behaviors are likely to shift because norms for mutually acceptable behavior have yet to be ironed out.
In response to these arguments establishing the “cybersecurity dilemma” as a serious problem in international relations, there are a few counter-arguments which the book seeks to address:
This question sounds like it assumes that cyber capabilities are simply built for defensive purposes and not for offensive purposes. I think all the bigger nations that have built capabilities currently use them for offensive purposes.
If a state actually cared more about defense, it could invest more resources into making its own systems safer.
The book does assume from the start that states want offensive options. I guess it is useful to break down the motivations for offensive capabilities. Though the motivations aren't fully distinct, it matters whether a state is intruding as the prelude to or an opening round of a conflict, or whether it is just trying to improve its ability to defend itself without necessarily trying to disrupt anything in the network being intruded into. There are entirely different motives too, like North Korea installing cryptocurrency miners on other countries' computers, but I guess you could analogize that to taxing territory from a foreign state without engaging its military.
The book basically argues that even if cybersecurity is your goal, a more cost-effective defense will almost always involve making intrusions for defensive purposes since it becomes prohibitively expensive to protect everything when the attacker can choose anywhere to strike.
I could see an argument that very small actors would do better to focus purely on defenses: if their networks are small enough, it may be easier to map them and to protect everything extremely well, while making useful intrusions into other networks could require more talent. The larger an actor is (like a state), the more complex its systems become and the harder they are to centrally control and monitor, so presumably the more effective going on the offensive becomes as a way to counter intruders. I think states do make this calculation, and that's why they often also have smaller air-gapped systems that are easier to defend.
For defending the public, though, it would be a nightmare to individually intervene in millions of online businesses, just as it would be a nightmare if the government had to post guards outside every business to prevent intrusion by foreign soldiers. When the landscape is like that, with far more vulnerabilities than adversaries, the potential adversaries themselves are a rational point of focus.
If you are a large country like the US, you don't need to intervene manually in millions of online businesses. At the policy level, you need to set up legal liability for people whose practices put users at risk.
Equifax should be liable in a way that bankrupts the company for what they did.
As a result of Julia Reda's work, the EU recently decided to pay for bug bounties for important open source projects that are widely used in its infrastructure.
We should move to a world where we don't have buffer overflows due to the problems of C, and use safer languages like Rust for the lower parts of our tech stack.
To the extent we have dependencies on twenty-year-old vulnerable C code, the government should spend a few billion to get it rewritten in Rust when it's widely used open-source code, or force companies, through liability for breaches, to rewrite their own closed-source stuff.