|It should have been titled Bytes, Bombes, and Spies. A lost opportunity for historical puns!|
In my opinion, this book proves the exact opposite of its thesis, and because of that it predicts, like the groundhog, another twenty years of cyber winter. I say that without yet having stated the book's overall thesis, which is that it is possible to think intelligently about cyber policy without a computer science degree or a clearance. In other words: can we carry the strategic policy frameworks we derived for the Cold War into a global war of disintermediation? You can judge this book on the coherence of its answers to the questions it manages to pose.
It's no mistake that the best chapter in the book, David Aucsmith's dissection of the entire landscape, is also its most radical. Everything is broken, he explains, and we might have to reset our entire understanding to begin to fix it. You can read his thoughts online here.
|Westphalia is no longer the strongest force, perhaps.|
Jason Healey also did some great work in his chapter, if for no other reason than that he delved into his own disillusionment more than usual.
|Yeah, about that...|
But those sorts of highlights are rare (in cyber policy writing in general but also in this book).
Read any Richard Haass article, or his book, and you will see personified the dead philosophy of the Cold War reaching up from its well-deserved grave: stability at any cost, at the price of justice or truth or innovation. At the cost of anything and everything. This is the old mind-killer speaking: the dread of species-ending nuclear annihilation.
What that generation of policy thinkers fears more than anything is destabilization. And that filters into this book as well.
|Is stability the same as control?|
Every policy thinker in the space now recognizes, if only to bemoan them, the vast differences between the old way and the new:
|The domain built out of exceptions...|
But then many of the chapters fade into incoherence.
|This is just bad.|
Are we making massive policy distinctions based on the supposed intent of code again? Yes, yes we are. Pages 180 and 270 of the book disagree even on the larger strategic intent of one of the most important historical cyber attacks, Shamoon, which is described alternately as a response to a wiper attack and as retaliation for Stuxnet. Both cannot be correct, and it's odd the editors didn't catch this.
|What are your rules of engagement if not code running at wire speed, perhaps in unknowable ways, as AI is wont to do? And even without AI, can you truly understand the emergent systems that are your codebase, or are you just fooling yourself?|
There are bad notes to the book as well: every chapter that walks through the imagined order of operations for an offensive cyber operation, and which US military units would do what, has a short shelf life, although this may currently be the only book where you'll find that kind of analysis at all.
But any time you see this definition of cyber weapons, you are reading nonsense, of the exact type that indicates the authors should go get that computer science degree they assume isn't needed, or at least start writing as if their audience has one:
|Why do people use this completely broken mental model?|
Likewise, one chapter focused entirely on how bad people felt when surveyed about their theoretical feelings about a cyber attack. Surveys are a bad way to do science in general, and the rest of the social science club has moved on from them, treating humans like the great apes we are.
That chapter does have one of my favorite bits though, when it examines how out of sorts the Tallinn manual is:
|"Our whole process is wrong but ... whatevs!"|
So here's the question for people who've also read the whole book: Did we move forward in any unit larger than a Planck length? And if not, what would it take to get us some forward motion?