By now most of you have heard about the new kind of cyberattack on industrial control systems (ICS) known variously as Triton or Trisis. This threat goes beyond attacking the primary ICS and targets the backup safety systems that protect against industrial processes drifting dangerously outside their operating parameters, which can lead to injury or death of personnel. These so-called safety instrumented systems (SIS) are control systems that operate in parallel to the main distributed control system (DCS) or SCADA system. The role of the SIS is to monitor sensors on the control system for conditions that could present a danger, and then either bring the system back within proper parameters or shut it down to a safe state, which is known as failing safe.
This post is based on two analyses that were published shortly after the public disclosure of Triton-Trisis by parties that had access to attack information. The first is by the Mandiant branch of FireEye: https://www.fireeye.com/blog/threat-research/2017/12/attackers-deploy-new-ics-attack-framework-triton.html The second is by Dragos, a firm that specializes in ICS security: https://dragos.com/blog/trisis/TRISIS-01.pdf
A number of important points and lessons about ICS cybersecurity can be derived from this latest attack.
First, and most importantly, no existing cyberprotection mechanism prevented or detected this attack. This attack succeeded in compromising several critical components of the SIS. Indeed, all components of a SIS are critical to the functioning of the system, so any compromise will have an adverse effect. The lesson here is that current methods of cyberprotection are inadequate, and new mechanisms that can protect these specialized, critical systems are needed.
Second, the attack was detected by the SIS itself. From FireEye: “During the incident, some SIS controllers entered a failed safe state, which automatically shutdown the industrial process and prompted the asset owner to initiate an investigation. The investigation found that the SIS controllers initiated a safe shutdown when application code between redundant processing units failed a validation check -- resulting in an MP diagnostic failure message.” And from Dragos: “Many safety controllers offer redundancy, in the form of redundant processor modules. In the case of the Triconex system, the controller utilizes three separate processor modules. The modules all run the same logic, and each module is given a vote on the output of its logic function blocks on each cycle.” The SIS had multiple, parallel, identical components that determined the safety state of the system by voting. When one or more of the controllers disagreed in the vote, the system declared a fault and initiated a fail-safe shutdown. Kudos to the systems engineers for implementing a voting backup system to ensure robustness and reliability in the SIS. Typically, this kind of redundant system is designed to detect and overcome hardware failures, providing what is called hardware fault tolerance. In this case, the redundant nature of the system was able to detect that one or more components had been compromised by the cyberattack. The lesson here is that specialized, system-specific mechanisms that are built in to the system are needed to provide protection.
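The triple-redundant voting scheme Dragos describes can be sketched in a few lines. This is a minimal illustration of 2-out-of-3 majority voting with disagreement detection, similar in spirit to the Triconex behavior described above; the function names and the shutdown callback are illustrative assumptions, not vendor code.

```python
# Minimal sketch of triple-modular-redundancy (TMR) voting with
# disagreement detection. Illustrative only -- not Triconex code.
from collections import Counter

def vote(outputs):
    """Return (value, fault) for one logic-block cycle.

    outputs -- the three redundant processor modules' results.
    The majority value wins; any dissenting module is flagged
    as a fault so the system can raise a diagnostic.
    """
    counts = Counter(outputs)
    value, votes = counts.most_common(1)[0]
    fault = votes < len(outputs)  # any disagreement is a fault
    return value, fault

def cycle(outputs, shutdown):
    """Run one vote; on any disagreement, go to the safe state.

    In the Triton incident, a failed validation check between
    redundant units triggered exactly this kind of safe shutdown.
    """
    value, fault = vote(outputs)
    if fault:
        shutdown()  # fail safe: e.g. de-energize outputs
    return value
```

Note that production TMR systems typically tolerate a single dissenting module (voting it out while alarming) and only shut down on further degradation; the sketch shuts down on any disagreement to mirror the conservative behavior seen in this incident.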
Third, the initial attack was the compromise of the engineering workstation. From FireEye: “The attacker gained remote access to an SIS engineering workstation and deployed the TRITON attack framework to reprogram the SIS controllers.” This is the cyberattack; this is the stage that counts. All the rest of the details are interesting, but ultimately they are just the follow-up from a successful cyberattack on the engineering workstation. A multitude of other forms of attack could have followed from that point on. Once the attackers were in the engineering workstation they had complete control of the SIS. From FireEye: “An Engineering Workstation is a computer used for configuration, maintenance and diagnostics of the control system applications and other control system equipment.” That is, the purpose of the engineering workstation is specifically to program and configure the SIS controllers. In order to do its job the engineering workstation needs complete permissions and control of the SIS controllers. Once an engineering workstation is compromised, modifications of any kind may be made to the SIS, and those changes, precisely because they came from the engineering workstation, will be accepted and deemed legitimate.
How did the attackers effect the compromise of the engineering workstation? Both analyses are silent on this issue, likely because of a lack of evidence to make a determination. Dragos does speculate on a likely cause: “A common practice at many sites is to allow access to the process control network to engineers via the Remote Desktop Protocol. The engineer will most frequently use their corporate workstation to access an RDP jump box inside of the process control DMZ.” But a better explanation is that the owners and engineers who use the engineering workstation view it as a typical PC, much like a business or consumer computer, rather than as a specialized, dedicated piece of SIS configuration equipment. Here we speculate that they treat it like a common PC: connecting it to the Internet, using USB thumb drives, and perhaps surfing the web, all of which leave open many vectors of compromise. What if, instead, the engineering workstation was not a PC, but a custom, specialized piece of equipment built by the SIS vendor to program and configure the system? I suspect that it would be treated as a more critical component of the system and not be used for anything but its specific SIS task.
The engineering workstations, as the critical entry point into the SIS, need to be protected against compromise using effective mechanisms. Fortunately, we know of mechanisms that are highly effective in preventing compromise: those used by the DoD for classified systems. This entails a number of measures, many of which are now recommended by the vendor. These include: the SIS, including the engineering workstation, needs to be on a physically isolated network. Physical access to the SIS engineering workstations, and to all software and peripherals, needs to be limited to the engineering personnel, perhaps in the equivalent of a SCIF. The vendor needs a similar classified-level environment to develop and configure software and systems. A manufacturer-fresh machine should be configured at the vendor factory with the software necessary for the SIS engineering workstation tasks. The engineering workstation should be directly carried to, and installed on, the SIS network by a trusted engineer. The engineering workstation and all SIS components should never be connected to the Internet or other networks. No USB drives should be used to install or move programs or data. This is a point where we disagree with the vendor, which advises, as quoted in the Dragos brief: “CDs, USB drives, etc. should be scanned before use in the Tristation terminals” and “Laptops that have connected to any other network besides the safety network should never be allowed to connect to the safety network without proper sanitation. Proper sanitation includes checking for changes to the system not simply running anti-virus software…” We don’t believe that such “sanitation” is possible. Updates to vendor software should come from vendor-supplied and labeled CDs. System updates should be done through an intermediate package updating system rather than by connecting to the Internet, or preferably by the vendor supplying a complete updated image including the O/S.
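One way to enforce the offline-update discipline above is to verify every update package against a digest the vendor publishes through a separate channel, before anything touches the workstation. A minimal sketch, assuming the vendor distributes a SHA-256 digest alongside the labeled media; the function name and file layout are hypothetical:

```python
# Sketch: verify a vendor update package against a published SHA-256
# digest before installation. Names are illustrative assumptions.
import hashlib

def verify_update(package_path, expected_sha256):
    """Return True only if the package's SHA-256 matches the
    vendor-published digest; refuse the update otherwise."""
    h = hashlib.sha256()
    with open(package_path, "rb") as f:
        # Hash in chunks so large update images don't exhaust memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()
```

A hash check of this kind only proves the media matches what the vendor published; it is a complement to, not a substitute for, the air gap and physical controls described above, since a compromised vendor could publish a matching digest for a malicious image.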
Remote access should be prohibited, as it would require an outside network connection. N.B.: for this particular vendor’s SIS, an engineer is needed on site to program and configure the system, because each controller has a physical key that must be set to “program” mode before the program can be modified. This same engineer can do the engineering required on the engineering workstation. The fact that the SIS controller was left in “program” mode invites speculation as to why. Was it done inadvertently? Or was it done specifically so remote engineers could program the system?
What have we learned? A number of things: Exacting controls, similar to those for DoD classified systems, need to be put in place to ensure the security of the SIS, and indeed the entire ICS. Existing cybersecurity methods did not prevent or detect the Triton-Trisis threat and will not likely prevent or detect such threats in the future. Critical infrastructure, such as ICS, needs to be hardened to prevent cyberattacks from succeeding in the first place. Detection after the fact is too late, as the damage has likely already been done by that time. The price of having on-site engineers is small compared to the great risk of a successful cyberattack. Specialized, site-specific security and safety mechanisms, like the SIS voting scheme that itself detected this cyberattack, are the best mechanisms to ensure safe, secure, and reliable operation of these critical systems.
Once an ICS, or similar control system, is configured and running, its communications also need to be effectively secured against cyberattack. Here at Cognoscenti Systems we provide ControlMQ, the secure messaging middleware that ensures control system communications are protected against cyberattack. For more information see: www.cognoscentisystems.com or email at: email@example.com