Why vulnerability lifecycle management does not surpass export controls in cyberspace

Comment: This text is in English because it is a contribution to the discussions of the working group "Encryption Policy & Lawful Hacking" of the Transatlantic Cyber Forum, which I am kindly able to join.

A recent publication, "Countering the Proliferation of Malware" by Trey Herr of the Belfer Center for Science and International Affairs at the Harvard Kennedy School, identified the long lifetime of exploits and vulnerabilities as one of the main threats to securing IT systems. The report argues that the current policy approach of establishing broader international export controls for malware, vulnerabilities and exploits – as launched, for example, under the Wassenaar Arrangement in 2013 – cannot be sufficient to significantly reduce the proliferation of such tools. After identifying the main obstacles of this approach, the report proposes a multi-stakeholder regulatory framework for shortening the lifecycle of malware in order to achieve sustainable security of IT systems. Although this surely is an important goal, this text argues – in contrast to Trey Herr's proposal – for stronger export controls and tries to show why strict lifecycle management might not affect the increasing collection and use of malware by military forces and intelligence agencies.

Trey Herr's analysis argues that current export control approaches mainly draw analogies between the control of conventional arms or nuclear armament and cyber weapons, and that this approach will fail to deliver better IT security for the following reasons:

  • Malware and software can easily be "produced" (in comparison to nuclear weapons) and copied. Effective control of malware trading that is based on regulating the "trading channels" will therefore not work.
  • The regulation will most likely hinder civilian IT security research by imposing rules on "the good ones" that "the bad ones" will simply ignore by using hidden and shadow channels.
  • Whether a vulnerability is put to "malicious usage" cannot be determined before the vulnerability has actually been used; therefore this criterion cannot be applied to decide on the permission or prohibition of a specific malware trade request.

Based on this review, Trey Herr's paper proposes a different approach: a policy framework that aims to establish strong regulatory rules and processes from the discovery and disclosure of vulnerabilities to patch development and patch deployment. By regulating and enforcing limited time periods between these steps, the window in which vulnerabilities can be applied should shrink, and with it the incentive to trade and acquire them. The argument of this approach is not aimed specifically at the criminal usage of malware but at the broader security of IT systems. That includes restraining the acquisition, collection and non-disclosure of vulnerabilities by intelligence services and military forces too. Regarding their possibly justified need for malware, the paper states that these agencies might have to change their vulnerability management strategies, without further specifying this task.
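
To make the structure of such a lifecycle framework more tangible, the following minimal sketch models the steps and enforced time limits as data and checks whether a given vulnerability record exceeds them. The step names, the concrete deadlines and the LifecycleRecord class are illustrative assumptions for this text only, not taken from Herr's report.

    # Minimal sketch of a vulnerability lifecycle with enforced time limits.
    # Step names and deadlines are illustrative assumptions, not from Herr's report.
    from dataclasses import dataclass
    from datetime import date, timedelta
    from typing import Optional

    # Hypothetical maximum number of days allowed between consecutive steps.
    MAX_DAYS = {
        ("discovery", "disclosure"): 90,      # e.g. a 90-day disclosure deadline
        ("disclosure", "patch_release"): 30,  # patch development window
        ("patch_release", "deployment"): 60,  # roll-out window
    }

    @dataclass
    class LifecycleRecord:
        """Dates of one vulnerability's lifecycle steps (None = not reached yet)."""
        discovery: date
        disclosure: Optional[date] = None
        patch_release: Optional[date] = None
        deployment: Optional[date] = None

        def violations(self, today: date) -> list:
            """Return the transitions whose (assumed) time limit has been exceeded."""
            issues = []
            steps = ["discovery", "disclosure", "patch_release", "deployment"]
            for earlier, later in zip(steps, steps[1:]):
                start, end = getattr(self, earlier), getattr(self, later)
                if start is None:
                    break  # earlier step not reached yet, nothing further to check
                limit = timedelta(days=MAX_DAYS[(earlier, later)])
                # Violation if the next step happened too late, or has not
                # happened at all although the deadline has already passed.
                if (end is not None and end - start > limit) or (end is None and today - start > limit):
                    issues.append(f"{earlier} -> {later} exceeded {limit.days} days")
            return issues

    # Example: disclosed in time, but no patch released within the assumed window.
    record = LifecycleRecord(discovery=date(2017, 1, 10), disclosure=date(2017, 3, 1))
    print(record.violations(today=date(2017, 6, 1)))  # ['disclosure -> patch_release exceeded 30 days']

Whether such deadlines would be enforced by a regulator, agreed in a multi-stakeholder process or merely reported is exactly the kind of design decision such a framework would have to settle.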

As already stated, it is definitely important to foster IT security across governmental, civil and company systems. Any working approach to reduce the criminal exploitation of weak IT systems is very welcome. But when it comes to actors like intelligence agencies and especially military forces, it is doubtful that they are willing to commit to regulations that would reduce their "cyber weapon arsenals". It is more likely that the secrecy around exclusively owned vulnerabilities will grow instead of them being responsibly disclosed. So far the strategy of these actors – as far as it is possible to tell from the outside – has been to secretly stockpile vulnerabilities for later use or to use them to implant hidden backdoors into a wide range of IT systems. As seen during the last months with the WannaCry and Petya outbreaks – both based on the EternalBlue exploit from a cache of leaked NSA tools – this approach can endanger many IT systems at once if it involves vulnerabilities in popular software or hardware. Although the NSA informed Microsoft of the vulnerability once it realized the leak, and Microsoft immediately released a patch for current Windows operating systems, there are still a lot of older ("legacy") systems running outdated operating systems for which Microsoft only published a patch one day after the first WannaCry outbreak. Many of those systems have still not been patched and remain vulnerable. This points to a flaw in the concept of vulnerability lifecycle management: even if a discovered vulnerability is disclosed and patches for the bug are available, it cannot be guaranteed that all affected systems get patched, which leaves possible targets for attacks. This could be addressed with strong legislative regulation, but such regulation will (at best) only work on a national level. Furthermore, if patch development and deployment do not work perfectly, the obligation to disclose vulnerabilities can increase the number of known attack vectors – even if their possible application lifetime is short.

In addition to these points, Trey Herr's report strongly argues that export controls cannot be sufficient in the fight against the proliferation of malware. This argument has to be split up according to the actors and aims that are to be addressed. Herr's argument is valid for the criminal use of cyberspace by non-state actors, but it falls short for actors like intelligence agencies and military forces. Export control approaches do not target criminal markets and actors, because they are always based on agreements and transparency measures to verify compliance and are mostly instruments for building security and stability between states. They should therefore not be criticized for flaws as an approach to strengthen IT security, but should be recognized as an attempt to reduce the current cyber arms race between military forces. Although we have already seen cyber attacks during conflicts (as in Ukraine or in the fight against ISIS), it remains unclear and hard to assess what kind of cyber weapons the different military actors have or are developing and what the impact of cyber weapons can be. This often leads to national arguments that one needs defensive and (more and more often) offensive cyber capabilities as well in order to keep up with this international development. If current transparency measures like the Wassenaar Arrangement – which currently covers only 41 states – do not work well enough to curb this development, it should be discussed whether transparency measures have to be defined and applied more strictly (like the approach recently discussed in the EU) or at an international scale such as the UN.

Trey Herr also points out that regulating the trade in vulnerabilities is hard because their malicious impact results from their usage, which – in Herr's view – differs from, for example, the regulation of missiles. Whereas the first point is very true, and any regulation should be as sensitive as necessary to allow legitimate and necessary research, the comparison with missiles is improper. Even a missile consists of many different parts with possible non-military uses, such as special metal alloys, navigation and communication systems or special non-metallic screws – things that might be used in completely different devices too. So even missiles or their parts can be subject to military vs. non-military reasoning (the so-called dual-use character of goods). Current dual-use agreements already address this question, and specific trade requests are explicitly evaluated in terms of the intent and possible usage as well as the possible proliferation of the good. The problem with the current Wassenaar Arrangement is that each state can define the rules, processes and boundaries of its national trade regulation on its own, which produces no international comparability or reliability. As already pointed out, this problem could be tackled by a broader, more strongly regulated and standardized approach that is commonly shared between multiple states.

The main takeaway of these remarks should be that weak export controls are not the problem, or at least should not be the primary lever for fostering IT security. The real issue is that, besides the regular channels of vulnerability trading and their actors, there are many more actors and markets that do not commit themselves to any kind of regulation or control.

Finally, Trey Herr's paper leaves some interesting questions regarding the militarization of cyberspace: What could a meaningful extension of such a framework look like that includes intelligence agencies and military forces with their legal requirements? And what could be learned from such an approach for reducing the cyber armament of military forces and for the measures necessary to gain transparency in this field of international security?

Ideas and remarks are very welcome, and many thanks to Sven Herpig for his help.