Normal morning: I log into Twitter and start browsing through my stream. I quickly come across an article posted by CNN’s Security Clearance blog. In it, the author reveals that the Federal government is investigating whether an Illinois water treatment plant’s burned-out pump was caused by a cyberattack. “What the what?” says I, as I read further:
Joe Weiss, a noted cyber security expert, disclosed the possible cyber attack on his blog Thursday. Weiss said he had obtained a state government report, dated Nov. 10 and titled “Public Water District Cyber Intrusion,” which gave details of the alleged cyber attack culminating in the “burn out of a water pump.”
Such an attack would be noteworthy because, while cyber attacks on businesses are commonplace, attacks that penetrate industrial control systems and intentionally destroy equipment are virtually unknown in the U.S.
Well. This could be interesting. The protection of industrial control systems (ICS) has long been a worry of cybersecurity analysts. What makes this story even more fascinating? Weiss claims that the “attack” came from somewhere within the territory of the Russian Federation.
My initial assumption that the attack was funneled in through some external source seems to have been proven wrong by these Washington Post and CNET articles on the topic. It turns out the ICS in question was managed using software developed by a company that provides supervisory control and data acquisition (SCADA) services. This company, unnamed in the articles due to the nature of the report they were based on, was itself hacked several months earlier, with dozens of users’ names and passwords absconded with. The plant’s pump had then been powering up and down remotely at random, the overall effect of which was to burn it out.
So, first of all, whoa. Second, while this is an important event in the history of American cybersecurity, I don’t think it is quite on the level of bad that I’m sure many will assign to it. Comparisons to the Stuxnet virus that struck Iran, targeting the programmable logic controllers in its uranium-spinning centrifuges, are inevitable to be sure. But the level of sophistication displayed here is nowhere near that of Stuxnet. That attack was clearly designed for a specific purpose, with a specific goal. The Illinois case is much more likely the result of a hacker who obtained this information playing around with their new capabilities, leading to the burnout in question. If this were a state-based attack, I highly doubt that a single water station in Springfield would be the target.
Further, the two-step process displayed in this attack makes it all the more important, in my book, that proper cybersecurity measures are taken in the private sector. The intruder obtaining the passwords to the control systems certainly made the actual penetration of the system easier. Even with that advantage, though, the hacker should not have been able to gain the remote access that was required to utilize that data. Which brings me to the title of this piece.
In the mini-series that launched the revamped Battlestar Galactica, the Cylons manage to take out the entirety of the Colonial Fleet, save the titular Battlestar, by deploying a virus across the networked Colonial systems. What saved the Galactica, you ask? Commander Adama’s near-fanatical resistance to having any networks on his ship’s computers. Period. He knew that computers were necessary, but he’d be damned if they were allowed to talk to each other. It even went so far that in a situation where the Galactica was forced to network its computers together or face destruction, the Commander had to think long and hard on the subject before allowing it. To his credit, the Cylons immediately launched a cyberattack once the networking was completed, so there you go.
Edward James Olmos can teach us a lot through his steely glare. The vast majority of ICS networks are actually very secure, so long as they aren’t connected to the Internet. I understand that remote access is sometimes necessary for the monitoring and management of vital processes when nobody is available in person. But monitoring and actually being able to control and update those systems should be on different networks, the latter of which goes nowhere near the public interwebs. Even those plants that are segregated face danger not from clever ways to sneak in through the vastness of the intertubes, but from the mistakes of the humans charged with maintaining and operating these systems. An earlier Washington Times article concerned with hackers being able to open jail cells remotely was panned, but still holds some truths in its pages:
“But in our experience, there were often connections” to other networks or devices, which were in turn connected to the Internet, making them potentially accessible to hackers, [Teague Newman, Department of Homeland Security] said.
In some of the facilities the team visited for their research, guards had used the same computer that controls the prison’s security systems to check their personal email, exposing it directly to potential hackers, Mr. Newman said.
In many prisons, technical support staff would add connections to enable them to update the system’s software remotely after the ICS systems were installed by security specialists.
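The monitoring/control split I argued for above can be made concrete. Here is a minimal, entirely hypothetical Python sketch (none of these function names come from any real SCADA product): the relay on the plant side only ever pushes read-only telemetry outward, and there is deliberately no code path that accepts a command from the monitoring side, so stolen vendor passwords alone can’t cycle a pump.

```python
import json

def export_telemetry(readings):
    """Serialize sensor readings for one-way publication from the
    plant network to the business/monitoring network."""
    # The payload is explicitly flagged read-only; nothing in it
    # carries an instruction back to the control network.
    return json.dumps({"telemetry": readings, "writable": False})

def handle_inbound(message):
    """Anything arriving from the monitoring side is refused outright:
    control commands must originate on the isolated control network."""
    raise PermissionError("control commands are not accepted on this link")
```

The design choice is the point, not the code: remote operators can watch the pump’s RPM all day, but turning it on and off stays a job for someone standing on the air-gapped side.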
Also of concern: the use of flash drives and portable hard drives. We’ve all looked from our flash drive to a computer and thought “Eh, whatever” before plugging it in. Doing so with a system that controls vital elements of key infrastructure, though? That’s insanely risky, even if you’re the sort who runs ZoneAlarm on your personal PC. It’s highly likely that Stuxnet itself was first introduced into the Iranian nuclear plants not by breaking through a firewall in a case of extreme hackery, but by getting passed along until some schmuck stuck his thumbdrive somewhere it didn’t belong. If we’re actually serious about proving false Richard Clarke’s declaration that cyberwar is the biggest threat our country faces, we really should start acting like it. For our inspiration, I think we should look no further than the Old Man himself.
In that vein, Congress is looking for bipartisan solutions in troubled times, and I think I have one for them. This could be a simple insert into any of the pending cybersecurity legislation on the Hill, or a quick bill to pass. Congress: we should mandate that all workers who interact with ICS should be forced to wear wristbands that read “WWCAD?” or “What would Commander Adama Do?” The picture at left should also be hung in all Federally regulated sites that use ICS to manage their daily affairs. You can thank me later, Congress. You can thank me later.