Hardening Needs to Be the First Line of Cyberdefense

Computer systems have been the target of cyberattackers for many decades now, with a rapid increase in attacks seen over the last decade. But why are computer systems the target of cyberattack? Or, more specifically, why do cyberattackers make the effort to try to compromise them? The answer may seem obvious, so why even ask the question? Because I think the solution to the cybersecurity problem is closely tied to the answer. If we follow a line of reasoning about cybersecurity from its basis, we may find a path to a solution.

The most obvious reason that cyberattackers attack computer systems is that those systems are vulnerable to attack. In other words, because they can. But why are computer systems so vulnerable to cyberattack? Because they were never designed to be cybersecure. They were designed to work well, to be cost effective, small, fast, and so on. They were not originally designed to be secure against cyberattack, because cybersecurity was not a major problem… until recently. This lack of consideration of cybersecurity in the initial and ongoing designs of computing systems leaves them vulnerable to cyberattack. That is, they are not inherently secure against cyberattack. If they were, cybersecurity would be a non-issue, and cyberattacks would not be a problem. The result of past choices not to address cybersecurity is that today a vast number of computer systems deployed around the world are vulnerable to cyberattack.

The realization that this vast array of computer systems is vulnerable to cyberattack, and that successful attacks are increasing in number and sophistication, has led to a rapid rise in efforts to secure these systems. However, the main methods of providing additional cybersecurity have involved relatively little effort to change the actual computer systems to address their inherent insecurity. The effort of making computing systems immune to cyberattack is called hardening. Instead, these efforts have involved adding additional systems, called, appropriately enough, cybersecurity systems, to increase the security of the protected system. These added cybersecurity systems operate by a number of methods, such as looking for known threats and for anomalous behavior resulting from cyberattack. They go by names like virus checkers, firewalls, and anomaly detectors, and often require large staffs of cybersecurity experts to operate them on a 24/7 basis.

The added cybersecurity systems do defend the protected systems... to some extent. Some do provide an element of hardening, that is, of making the protected system immune to cyberattack. For instance, firewalls limit network traffic to certain port numbers. But these efforts at hardening, while helpful, are far from comprehensive, so many potential vulnerabilities are left open for attackers to exploit. Current cybersecurity systems mainly work by adding surveillance monitoring to detect signs of cyberattack, responding to those signs by attempting to stop the attack, and then attempting to restore the system to a good state. In other words, the intrinsic insecurity of the computer system is not fixed; rather, defensive capabilities are wrapped around the system to add a layer of protection. This is the so-called “detect and respond” method that dominates cybersecurity today.
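The port limiting a firewall performs can be sketched in a few lines. This is a minimal illustration only; the allowed port set is an assumption for the example, not a real policy, and a real firewall operates on whole packets, not bare port numbers.

```python
# Minimal sketch of port-based filtering; the allowed set is an
# illustrative assumption (e.g. HTTPS and SSH only), not a real policy.
ALLOWED_TCP_PORTS = {443, 22}

def allow_packet(dst_port: int) -> bool:
    """Permit traffic only to explicitly allowed destination ports."""
    return dst_port in ALLOWED_TCP_PORTS

print(allow_packet(443))   # True: HTTPS is allowed
print(allow_packet(3389))  # False: remote desktop is blocked
```

Note that this is hardening by exclusion: everything not explicitly permitted is refused, rather than everything known-bad being detected.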

At first thought, this might seem an odd way of trying to solve the problem. Why not just address the core problem of computer system insecurity by making intrinsically secure computer systems? That is, finally go back and fix the problem that was ignored from the beginning. There is one very good reason. System owners are loath to change anything about their systems, particularly anything to do with the core functionality. The primary constraint on the methods and activities of cybersecurity is “do not make any changes to the existing system”! Any cybersecurity method that requires changes to an existing system will simply be rejected out of hand, as it will be deemed too risky to attempt. This leaves computer system security in an unfortunate state where no one feels compelled to do the hard work of building intrinsically secure computer systems. Instead of solving cybersecurity, everyone manages it with “detect and respond” cybersecurity systems.

These “detect and respond” cybersecurity systems do not provide comprehensive security. Indeed, they appear to address only the latest vulnerability to come along while leaving systems open to the next one. Stories and reports of cyberattacks and newly found vulnerabilities arrive on a nearly daily basis, and these incidents often involve not unprotected systems, but systems with a cybersecurity system in place that was expected to provide protection and failed. Perhaps these cybersecurity systems should really be called progressive cyber-insecurity systems, as they only protect against the last vulnerability, not the next. They battle threats in an endless cycle of tit-for-tat actions: first a vulnerability is discovered, next someone creates an exploit which is used to attack systems, then a patch is released to counter this latest threat. Rinse and repeat ad infinitum. There is no hope that such a method will ever lead to secure systems, as evidenced by the continuing state of insecurity despite ever-growing budgets dedicated to cybersecurity, with total spending estimated to top $124B/year by 2019 [1]. The future does not look good.

Cybersecurity methods based on “detect and respond” don’t scale. As the number and sophistication of both cyberattacks and cybersecurity systems increase, a corresponding increase in the cybersecurity workforce is needed to operate those systems. We are continually barraged with messages about a “cybersecurity workforce shortage,” along with predictions that the need will soon outstrip any chance of ever having enough workers. Only a small fraction of the alerts produced by these systems are ever investigated, so many real threats likely go unexamined.

This brings us back to the prime cause of the cybersecurity problem: computer systems are left intrinsically insecure by current cybersecurity techniques. In a sense, current methods simply kick the can down the road and hope that some future computer system designers will improve the inherent security of their systems.

Hardening has numerous benefits for cybersecurity. Hardening of a computer system is a permanent condition that continues to provide security into the future at no additional cost or effort. Hardening provides real-time cyberdefense rather than an indication after an attack has already succeeded. Attacks are defeated outright, so the system does not suffer adverse effects that need remediation. Hardening does not take a staff of cybersecurity personnel on a watchfloor operating 24/7. Intrinsically secure computer systems therefore dramatically reduce the cybersecurity threat compared to currently used methods. Shouldn’t that be the goal for computer systems? Intrinsic cybersecurity is far less cumbersome and expensive in the long term than maintaining the current “detect and respond” methods.

Hardened systems are intrinsically secure. Hardening needs to be built into systems from the start to be effective. Comprehensive security cannot be bolted on or wrapped around a system (unless it actually becomes a core part of the system by doing so). Trying to do so will inevitably miss protecting elements of the system’s design and operation that are known only to its developers, leaving behind a litany of unknown vulnerabilities.

Hardening has its critics. Indeed, the arguments against hardening seem to be the dominant ones driving current thinking about cybersecurity. The main argument is the perfectionist argument: critics claim that no system can ever be made perfectly cybersecure, and that any effort to harden systems is therefore a waste of time; the flaw that goes unknown and unaddressed will eventually be found and exploited by hackers. This “zero-day” attack is one for which the system will be completely unprepared, leading to a successful cyberattack that will affect all such systems.

Well, it is true that systems likely have defects. However, this is not the end of the world. Not all defects become vulnerabilities. Estimating the rate at which defects become vulnerabilities is a black art, but attempts at measuring it have been made; for example, Alhazmi and Malaiya find that about 1% of defects become vulnerabilities [2]. So, if the rate of defects in a system can be significantly reduced, the corresponding likelihood of vulnerabilities is similarly reduced. Systems with greatly reduced numbers of vulnerabilities are still vastly preferable to the situation we have today. Indeed, how could it be any worse? Reducing defects should be a primary goal of system developers. Defects can be reduced or eliminated through a number of methods and constraints, including mission-critical development methods, which can reduce defect rates to less than 1% of what is typical today.
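The arithmetic behind this argument is simple enough to sketch. The defect counts below are illustrative assumptions chosen only to show the proportionality; only the ~1% defect-to-vulnerability fraction comes from the cited work [2].

```python
# Back-of-the-envelope: if roughly 1% of defects become vulnerabilities,
# reducing defects reduces expected vulnerabilities proportionally.
# The defect counts are illustrative assumptions, not measurements.
VULN_FRACTION = 0.01  # ~1% of defects become vulnerabilities [2]

typical_defects = 10_000                   # assumed for a typical large system
hardened_defects = typical_defects // 100  # mission-critical methods: ~1% of typical

print(typical_defects * VULN_FRACTION)   # ~100 expected vulnerabilities
print(hardened_defects * VULN_FRACTION)  # ~1 expected vulnerability
```

The point is not the particular numbers but the scaling: every factor removed from the defect count is removed from the expected vulnerability count as well.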

The principles of cybersecurity are well known; see, for example, the “Rainbow Series” [REF]. Applying these well-known techniques when developing computer systems is an essential part of hardening. Systems designed with cybersecurity principles in mind will have dramatically fewer vulnerabilities than systems today. These principles include methods like constraining interfaces and types, and partitioning systems into well-contained elements with minimal interfaces.
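“Constraining interfaces and types” can be made concrete with a small sketch: give data crossing a boundary a fixed, validated shape, so malformed input is rejected on entry rather than propagating inward. The type, its fields, and its limits here are hypothetical, invented for illustration.

```python
# Hedged sketch of constraining a boundary type: a reading with a fixed,
# validated shape. Field names and limits are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class SensorReading:
    channel: int   # only channels 0-15 exist on this hypothetical device
    value: float   # bounded physical range

    def __post_init__(self) -> None:
        if not 0 <= self.channel <= 15:
            raise ValueError("channel out of range")
        if not -100.0 <= self.value <= 100.0:
            raise ValueError("value out of range")

ok = SensorReading(channel=3, value=21.5)  # accepted
# SensorReading(channel=99, value=21.5)    # would raise ValueError
```

Because the object cannot be constructed in an invalid state, every component that receives a `SensorReading` is relieved of re-validating it, shrinking the attack surface of the interior code.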

Software doesn’t last forever. Most software is in use for perhaps a decade. Any vulnerability must be found and exploited during that time to have an adverse impact on the system. So not only the existence of vulnerabilities, but also the rate at which they are discovered, is pertinent to the overall security of a system. A system for which a vulnerability is discovered monthly is far less secure than one for which a vulnerability is discovered once per decade. The discovery rate is determined by two main factors: the total number of vulnerabilities in the system, and the effort attackers expend to find them. If the number of vulnerabilities in a system is halved, the rate at which they are found is likely halved. Similarly, if the attackers’ effort proceeds at half the rate, the discovery rate is again halved. If the total number of vulnerabilities is reduced by a few orders of magnitude from what we have today, it becomes unlikely that any vulnerability will be discovered during the entire lifetime of the software. Such a system is effectively secure: the result is the same as if no vulnerabilities existed at all. We can’t prevent attackers from trying to compromise systems, but we can harden systems to make their jobs much more difficult. Maybe so much more difficult that they simply give up and look for other, more traditional, non-computer-based methods to achieve their goals, be it spying or sabotage.
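The rate argument above can be put into a worked example. All the numbers here are illustrative assumptions made for the sketch (a simple linear model of discovery, not measured data).

```python
# Sketch of the rate argument: expected discoveries over a system's life
# scale with both the vulnerability count and the attackers' find rate.
# All numbers below are illustrative assumptions.
def expected_discoveries(vuln_count: int,
                         find_rate_per_vuln_per_year: float,
                         lifetime_years: float) -> float:
    """Expected vulnerabilities found over the software's lifetime."""
    return vuln_count * find_rate_per_vuln_per_year * lifetime_years

# Typical system: 1,000 latent vulnerabilities over a 10-year life.
print(expected_discoveries(1000, 0.012, 10))  # ~120 discoveries

# Hardened system: three orders of magnitude fewer vulnerabilities.
print(expected_discoveries(1, 0.012, 10))     # ~0.12, i.e. likely none
```

With the same attacker effort, cutting the vulnerability count by three orders of magnitude moves the expectation from many discoveries to, most likely, none at all during the system’s life.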

A few orders of magnitude of hardening is not extraordinary given modern software development methods and sound principles of cybersecurity, which suggests that the effort to construct highly secure systems is both within reach and worth making.

So, yes, it is true that despite all the hardening built into a system, there may still be that one flaw in that one system. This is where the “detect and respond” monitoring systems used today come in: not as the first line, or principal method, of cyberdefense, but as the last line of defense, used only in the unlikely event that the hardening fails.

Analogies in the physical world show that hardening first is a reasonable approach for real-world systems. A look back in history shows that hardening of assets has always been the first, and primary, line of defense. Consider a medieval castle. The castle itself is typically a masonry structure, which makes it difficult, but not impossible, for adversaries to penetrate. Did the occupants of castles decide that hardening was a waste of time and effort because it would never be perfect, abandon all hardening, and switch to sentries, scouts, and vast standing armies (forms of labor-intensive monitoring) as their defensive solution? No. Of course not. Instead, they redoubled their efforts to harden their castles. They added tall, thick masonry walls, with a single, narrow, armored gate to admit outsiders. Some took this further by adding moats around the castle walls, with a drawbridge at the gate. Furthermore, the gate and drawbridge created a choke point where traffic through the wall could be carefully checked.

Only after hardening their castles did they add detection. Castle owners didn’t rely on walls, moats, and gates as the only security. They also added monitoring in the form of sentries and guards who kept watch for intruders and attacks. But the number of personnel involved was relatively small, because the hardening provided by the walls and moat was relied on as the chief means of protection. Had the castle been protected by guards alone, it would have required a large standing army that could respond at a moment’s notice to an incursion at any point along the perimeter of the castle grounds. The hardening elements provided the first line of defense and thereby reduced the number of personnel required on 24/7 watch.

If these hardening methods were not perfect, why did castle owners bother with them, and why did they continue to improve on and add to them? Because they were practical and effective. They constituted the most cost-effective and practical first line of defense. They also matched the goal of keeping an attacker out in the first place, rather than detecting that attackers were already inside the castle grounds and having to eject them after they had likely already caused harm.

Another example is homeowners who want secure homes. The first line of defense is to add stronger doors, such as ones made from solid wood or steel, with stronger locks. Next is to add security bars to windows and doors. These are all security hardening techniques. Only then are “detect and respond” style monitoring systems, such as home alarm systems, added. The goal is not to detect the burglar once he is in the home; the goal is to keep the burglar out in the first place.

The approach to cybersecurity needs to be based on the same principles as physical security. Harden first, because it is practical and effective, then add monitoring to handle any highly unlikely holes in the shield. Even if we cannot achieve perfect hardening, maximizing hardening should be our first line of defense. Hardening techniques based on sound principles of security will stop the vast majority of attacks and reduce the need for large numbers of personnel on watchfloors 24/7. Only after systems are hardened should “detect and respond” security systems be added to catch the exceptional case of an intruder getting past the hardened defense.

Hardening needs to be performed in a comprehensive, principles-based manner with the goal of achieving complete security. The problem is that most cybersecurity efforts today are performed piecemeal, with the result that system security is like a sieve that leaks in many places. Hardening doesn’t have to cover an entire system at once, but the elements chosen for securing must be secured completely to avoid unknown leaks. Any one functional aspect can be chosen, such as network access, memory access, or data formats. The chosen element must be analyzed rigorously, using known principles of security and logical argument, to achieve a complete understanding of its cybersecurity issues. That understanding must then serve as the basis for a comprehensive solution, developed using mission-critical methods, to create a secure and trustworthy result. These now-trustworthy elements can then be combined to create trustworthy systems.

For many systems, hardening will be difficult. General-purpose processing systems must handle a wide variety of protocols and data formats transferred over a wide variety of interfaces, all of which need to be defended. A better approach, where possible, is to use the specialized nature of a system to lock down protocols, interfaces, and data structures and so constrain the security problem. For instance, a billing system only needs to do billing. It doesn’t need to stream videos, so why support a video capability, interface, and data format on such a system?
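The billing-system example can be sketched as an interface locked down to an explicit allowlist. The message names below are hypothetical, invented for this illustration.

```python
# Illustrative sketch of locking down a specialized system's interface:
# the billing system implements only its own protocol and refuses every
# other message type. Message names are hypothetical.
ALLOWED_MESSAGES = {"invoice_create", "invoice_query", "payment_post"}

def handle(message_type: str) -> str:
    """Accept only the narrow, predeclared billing protocol."""
    if message_type not in ALLOWED_MESSAGES:
        return "rejected"  # no video streaming, no file transfer, nothing else
    return "accepted"

print(handle("invoice_query"))  # accepted
print(handle("video_stream"))   # rejected
```

Everything outside the declared protocol is simply not implemented, so an entire class of attack paths never exists on the system in the first place.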

At Cognoscenti Systems we have followed the methods described above to produce a comprehensive cybersecurity solution for control systems middleware and interfaces. We have developed effective hardening techniques against network cyberattack specifically for the class of systems that perform control operations, including automation, industrial controls, robotics, medical devices, surveillance systems, and combat systems. We have the techniques and technologies to build truly secure systems. Now we just need to make the choice to do so... our security depends on it.

At Cognoscenti Systems we’ve followed the approach that fundamentally secure systems can be built by focusing cybersecurity on a specific functional area and doing that exceptionally well. The penetration testers who evaluated our technology agree that this approach works: “We were unable to manipulate, observe, or in any way disrupt the message traffic between the two test devices in this scenario.” - Fractal Security Group

Download our whitepaper to learn about our novel approach, developed at Johns Hopkins Applied Physics Lab, to secure mission critical systems.


References:

[1] Gartner, “Gartner Forecasts Worldwide Information Security Spending to Exceed $124 Billion in 2019,” August 15, 2018.

[2] Alhazmi and Malaiya

David Viel, Founder and CEO

David is the founder of Cognoscenti Systems.

Website: http://www.cognoscentisystems.com