I find it interesting how often insights come from outside perspectives. History tells us that applying knowledge and methodologies from one sector to another has produced Nobel prize winners, new products and new processes. We know that diversity of perspective is a key driver of innovation. Often, change comes about through a series of small steps over time. The evolution of technology tends to be incremental rather than revolutionary; we only recognise the revolutionary impact of a technology in hindsight.
These ideas got me thinking about this evolution in the context of cyber security. What do we see in the relatively short history of cyber security that would enable us to bring in outside perspectives, knowledge and methodologies to accelerate the speed of innovation and learning? And which outside perspectives might be helpful?
This blog explores that concept through my own thoughts and ideas, in the hope of triggering discussion and debate. There is no pride of authorship; rather, I hope it is a trigger for us all to collaborate in a voyage of discovery and, hopefully, innovation.
So, to begin with a simple analogy. The sea is essential to the health of the planet and the beings on it, but it can also be a threat to survival. Sailors are very aware that they need to treat the sea with respect and to learn the tradecraft of seamanship. Likewise, cyber space is a creation of man, but it is increasingly becoming a powerful force that is both crucial and a risk to survival for individuals, corporations and governments. So, is there an analogy here, with tradecraft as the common theme? The tradecraft of seamanship has been learned over millennia; the tradecraft of cyber security is only decades old. That said, it could be argued that the tradecraft of seamanship was pretty much static during the many centuries when sails, oars and tides were the only means of propulsion, and the stars and the compass the means of navigation. In relative terms, though, things have evolved rapidly: powered propulsion in the last two centuries, and radar and satellite navigation since the Second World War. In short, technical innovation has reduced risks like running aground, piracy and getting lost in the vast oceans, and the tradecraft has adapted to the new opportunities. But of course, dependency on these key technologies has increased the risk that they are disrupted by malevolent powers. If we think about cyber security, the technologies to attack and defend were developed relatively swiftly but have not evolved much in the past decades. What has changed is the exponential increase in the volume of opportunity and threat, and the development of methodologies to attack and defend. A big issue for cyber security is that many of the targets of malevolent activity have little beyond a rudimentary understanding of the tradecraft of cyber security.
So, the lack of real technical innovation puts a greater emphasis on the tradecraft of defensive cyber security, at a time when a population of the relatively “unprepared” is coming under attack. Just as we don’t put untrained people in charge of vessels at sea, we need to think about the governance of risk in the cyber domain. Indeed, we need to think about the very nature of risk in the cyber domain as mitigations like cyber insurance emerge. But I’ll come back to that.
I mentioned the word survivability earlier, which takes me to some perspectives from another sector. The military have a methodology for thinking about the most basic challenge of the battlefield, namely survivability. They define the survivability challenge as the ability to remain alive, to continue to exist, or to be “mission capable”. You can think of the concept as a set of survivability onion rings, each layer building on the next. The elements of survivability are: detectability – can you avoid being detected? Susceptibility – can you avoid being hit? Vulnerability – what are the longer-term post-hit effects, and what are the methods for restoring capability?
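For readers who think in code, the layered model can be sketched as a simple data structure. This is purely an illustration of the onion described above, not an established tool; the layer names come from the text, and the `assess` helper and its answers dictionary are my own invention.

```python
# A minimal sketch of the survivability "onion" as ordered layers,
# outermost first, each paired with the question it asks.
SURVIVABILITY_LAYERS = [
    ("detectability", "Can you avoid being detected?"),
    ("susceptibility", "Can you avoid being hit?"),
    ("vulnerability", "If hit, can you limit the effects and restore capability?"),
]

def assess(answers):
    """Walk the layers outside-in and report the first one that fails.

    `answers` maps a layer name to True if that layer holds for you.
    Returns (failed_layer, its_question), or (None, "mission capable")
    if every layer holds.
    """
    for name, question in SURVIVABILITY_LAYERS:
        if not answers.get(name, False):
            return name, question
    return None, "mission capable"

# In cyber space, detectability essentially always fails, so the
# assessment moves straight on to susceptibility and vulnerability.
print(assess({"detectability": False, "susceptibility": True}))
```

The point of the ordering is that each layer only matters once the one outside it has failed, which is exactly how the rest of this post walks through the model.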
So, let’s see how we could apply this to cyber security.
Starting with detectability: if you are on the web, you can be found. If you are an individual, you may not be targeted specifically, but simply sent fraudulent communications to see if you can be compromised – in other words, an opportunist stranger attack. If, however, you are a high-profile individual, a corporation or a government, it may well be that you will be attacked specifically, in an attempt to penetrate your defences to steal data or cause disruption. On this basis you might argue that it is not possible to be undetected in cyber space. It might be feasible in some instances to reduce your detectability by “disappearing in the noise” of cyber space, but that will deliver only marginal gains, if any.
The next layer is susceptibility: can you avoid being hit? Here the news is more positive. At the turn of the century, the UK government was struggling to raise business awareness of the risk of cyber-attacks; the susceptibility of those businesses, and indeed of the country’s infrastructure, was too high. After well-publicised attacks and business impacts, most businesses are now all too aware that they are at risk. The government has set up the National Cyber Security Centre with the strategic objective of making the UK a safe place to operate in cyber space. There is a big resource challenge because of the increased reliance on computers, networks, programs, social media and data globally; this reaches deep into the ecosystems of global supply chains and affects businesses of all sizes and sectors. A partnering arrangement and sharing of expertise with industry has been adopted. Many of the approaches seek to reduce susceptibility to attack. Whilst many attacks can be prevented by good basic tradecraft, that will not defend against so-called advanced persistent threats; this is where those who possess advanced tradecraft come into their own. So, for many, the primary goal is to reduce susceptibility to attack through tools, architectures and methodologies.
The final element of the model is vulnerability. If there is a successful attack against our systems, what is our ability to deal with it and restore our capability? There have now been enough high-profile outages for people to understand that once an attack has succeeded it can be very difficult to recover the situation and return capability to normal. It requires a rapid response from people with specialised expertise, and it can be very expensive, both in cash and in reputation. It is a serious mistake to address only susceptibility and not put enough effort into addressing vulnerability.
In summary, I find the survivability model useful in highlighting the reality that it is not feasible to be undetected, but it is feasible to reduce both your susceptibility and your vulnerability. The two are linked but separate challenges, and both rely on tradecraft as much as, if not more than, tools.
Some say that just as oil powered the twentieth century, so data powers the twenty-first. Data has become a powerful commodity that can be acquired for free, bought and sold, or stolen. It is a strategic asset that some nation states are happy to steal or compromise in other ways. An increase in the sophistication of attacks has been driven by nation states for their own ends, but criminals have increasingly seen the opportunities that cyber-crime presents, and the tools to equip them can be acquired relatively easily.
So, the challenge for business is how to respond to the threat. Businesses face many risks, and in addition to delivering the strategy for the business and ensuring compliance with laws and regulations, the role of a company board is to ensure that the executive is managing risk appropriately. Increasingly, businesses are enhancing their own capability to assess and manage cyber risks, but such resources are in limited supply and expensive.
Also, threats, tools and techniques are evolving, and technical currency, or tradecraft, needs to be maintained. One approach has been to insure against potential loss or disruption caused by a cyber-attack. This introduces another sector: risk management. Discussions of cyber insurance often state that cyber risks are systemic risks. We can define systemic risk broadly as the risk that arises out of a network (a digital network in our case) consisting of nodes that are interconnected and interdependent, which permits adverse events to cascade through the nodes, and in which such adverse events occur at such high speed that they cannot be contained in a timely manner. These features combine so that a risk to one node creates causally dependent risks at some or all of the other nodes in the network. Some cyber risks are always, or virtually always, systemic; some are never systemic; and some may or may not be systemic depending on circumstances. Not very helpful! It is not even possible, as a general rule, to say that systemic cyber risks are either more or less manageable than those that are not systemic. That said, risk analysis is a powerful tool to map and aid understanding of the risks and, more importantly, may aid mitigation of the risks through architectures, processes and procedures. Sound cyber risk analysis and mitigation requires specialist expertise, or tradecraft.
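The cascading behaviour in that definition of systemic risk is easy to make concrete. Here is a minimal sketch, assuming a toy network of one shared supplier feeding several firms; the network, the propagation probability and the function names are all illustrative, not drawn from any real insurance model.

```python
import random

def simulate_cascade(edges, initial_failures, spread_prob, seed=0):
    """Simple cascade: each failed node may propagate failure to each
    of its neighbours with probability spread_prob. Returns the set of
    all nodes that end up failed."""
    rng = random.Random(seed)
    neighbours = {}
    for a, b in edges:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    failed = set(initial_failures)
    frontier = list(initial_failures)
    while frontier:
        node = frontier.pop()
        for nxt in neighbours.get(node, ()):
            if nxt not in failed and rng.random() < spread_prob:
                failed.add(nxt)
                frontier.append(nxt)
    return failed

# A toy supply-chain-like network: five firms all depend on one supplier,
# so compromising the supplier puts every firm at risk.
edges = [("supplier", f"firm{i}") for i in range(5)]
print(simulate_cascade(edges, ["supplier"], spread_prob=1.0))
```

Even this toy shows why the interconnection matters: the risk to each firm is not independent, because a single upstream event can create correlated losses across all of them, which is precisely what makes systemic cyber risk awkward to insure.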
I want to introduce another sector that has something to offer: intelligence. Let me define it. Intelligence is an analysis of the available information to build a picture that may be incomplete but is of value. Perhaps we are most familiar with this idea when trying to understand what is happening in totalitarian states, where the press is not free: what is truth and what is a lie? We piece together the information we can gather and then analyse it to paint a picture. It may be our own information, or information shared by others whom we may or may not trust. The trap of confirmation bias – believing what we want to believe and disregarding the rest – must be avoided. We need to recognise that correlation does not prove causation, and avoid jumping to conclusions. Over time we can gather data, build heat maps and use other data visualisation tools to make an intelligence picture that can guide us towards improved cyber security. The important point here is that this is not raw attack data, but an analysis of patterns and trends that enables us to take an intelligence-led approach to defending our cyber domain. Increasingly we see the sharing of data as a key part of the process, and there are communities of common interest in the UK that do this.
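The step from raw attack data to a pattern we can act on can be sketched in a few lines. The event records below are entirely hypothetical, and the `heat_map` helper is my own illustration of the aggregation idea, not any particular intelligence tool.

```python
from collections import Counter
from datetime import datetime

# Hypothetical attack-event records: (ISO timestamp, category of activity).
events = [
    ("2024-03-04T09:15:00", "phishing"),
    ("2024-03-04T09:40:00", "phishing"),
    ("2024-03-04T14:05:00", "scanning"),
    ("2024-03-05T09:20:00", "phishing"),
]

def heat_map(events):
    """Aggregate raw events into counts per (hour-of-day, category) cell.

    The output is no longer raw data but a pattern: e.g. a cluster of
    phishing attempts in the mid-morning window that could inform when
    to run awareness reminders or tighten mail filtering.
    """
    cells = Counter()
    for ts, category in events:
        hour = datetime.fromisoformat(ts).hour
        cells[(hour, category)] += 1
    return cells

print(heat_map(events))  # counts per (hour, category) cell
```

Shared across a community of common interest, aggregates like these can be pooled without exposing each contributor’s raw logs, which is one reason intelligence-led sharing is easier to agree on than sharing the underlying data.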
In conclusion, I hope this has triggered your interest in pulling approaches from other sectors into the cyber security domain. If you have any further thoughts or ideas, we would be delighted to discuss them with you.