Comment Of The Day: “Wait, WHAT? NOW They Tell There Are “Two Big Flaws” in Every Computer?”

The comments on this post about the sudden discovery that every computer extant was vulnerable to hacking thanks to two 20-year-old “flaws” were so detailed, informative and excellent that I had the unenviable choice of posting one representative Comment of the Day, or eight. Having just posted eight COTDs on another post last weekend, I opted for one, but anyone interested in the topic—or in need of education about the issues involved— should go to the original post and read all the comments. Forget the post itself—the comments are better.

Here is Extradimensional Cephalopod‘s Comment of the Day on the post, Wait, WHAT? NOW They Tell There Are “Two Big Flaws” in Every Computer?

This is not likely to be a popular opinion among professional programmers, but I feel it needs to be said.

The excuse that computers are complex and that testing to remove all of these flaws would take a prohibitive amount of time just doesn’t hold water. I understand that security vulnerabilities are different from outright bugs: security vulnerabilities are only problems because people deliberately manipulate the system in unanticipated ways. Bugs happen when people inadvertently manipulate the system in unanticipated ways. Some of these ways are incredibly sophisticated and may be infeasible to anticipate. However, having supported computers for the past few years, I’ve seen bugs that should have been anticipated, and anticipating them would have required zero testing.

The problem with testing is that the people testing usually understand the software well enough to know how it is supposed to work, or they are given a few basic things to try, but they don’t have time to test a program with heavy use. Luckily, testing is not the problem.

The problem is that in many cases I’ve seen (and I’ve come to suspect most cases across the software industry) the input and output footprints of code modules are not documented (and if your code contains comments laying out the pseudocode structure, I consider you very lucky). From an engineering standpoint, the input footprint of a system or subsystem describes the conditions the system assumes to be true in order to work effectively. The output footprint describes what effects (including side-effects) the system has or could have on its environment, including if the input footprint is not fulfilled. Those aren’t the official names; I’ve just been calling them that.
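These footprints resemble what software engineering elsewhere calls preconditions and postconditions (“design by contract”). A minimal sketch of what documenting them might look like, with hypothetical function and parameter names invented for illustration:

```c
#include <stddef.h>

/* average_cents:
 *   Input footprint (what the function assumes to be true):
 *     - prices points to at least n readable ints
 *     - n > 0 (the division below assumes this)
 *     - the running sum of the prices fits in a long
 *   Output footprint (effects and side effects):
 *     - returns the truncated mean of prices[0..n-1]
 *     - reads only; never writes through prices, allocates nothing
 *     - if n == 0, behavior is undefined (documented, not hidden)
 */
int average_cents(const int *prices, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += prices[i];
    return (int)(sum / (long)n);
}
```

A reader can now spot the n == 0 hazard and the overflow assumption from the comment alone, without running anything.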

The thing about input and output footprints is that you don’t have to know everything that could possibly happen, and you don’t have to test everything. The input footprint will tell you what could go wrong with a piece of code, and the output footprint will tell you what problems the code could cause. As a simple example, if the input footprint is “the computer is intact”, you don’t have to make a list of all the things that could physically break the computer. At most, all you’d need is the range of temperatures, pressures, and other environmental conditions which interact with the material properties of the computer to cause its structure to break down. If your output footprint is “returns the sum of two numbers, and leaves the memory it used for the computation locked,” then you don’t need to test it to know that it will result in a memory leak, eat up your RAM, and ultimately crash your computer if you use it too much without restarting. When you put multiple modules together, you can check a module’s output footprint against the input footprint of another module, and make sure they don’t interfere. It’s much easier than testing, and it simplifies complex interactions enough to avoid problems.
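The sum example above can be sketched directly. Reading the documented output footprint alone predicts the leak; no stress testing is required. The names and the fix are illustrative, not taken from any real code base, and error handling is omitted:

```c
#include <stdlib.h>

/* Output footprint: returns a + b, AND allocates a scratch buffer
 * it never frees ("leaves the memory locked"). The footprint alone
 * tells you that calling this in a loop eats RAM until the process
 * dies -- exactly the failure described in the text. */
int leaky_sum(int a, int b)
{
    int *scratch = malloc(sizeof *scratch);
    *scratch = a + b;
    return *scratch;            /* scratch is never freed */
}

/* The fix changes the output footprint, not the arithmetic:
 * "returns a + b, no side effects." */
int clean_sum(int a, int b)
{
    int *scratch = malloc(sizeof *scratch);
    int result = (*scratch = a + b);
    free(scratch);
    return result;
}
```

Checking clean_sum’s output footprint (“no side effects”) against any caller’s input footprint is a paper exercise, which is the point being made.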

Failure modes are also important. All programs should have error messages that at least describe what step or module failed, if not why it failed. That’s what input footprints are for.
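One way to get such messages, sketched here with hypothetical names, is to check the input footprint on entry and report which assumption was violated, rather than failing mysteriously later:

```c
#include <string.h>

/* Returns NULL on success, or a message naming the violated
 * assumption. The message identifies the module ("load_config")
 * and the step that failed, per the text's recommendation. */
const char *load_config(const char *path, size_t path_max)
{
    if (path == NULL)
        return "load_config: input footprint violated: path is NULL";
    if (strlen(path) >= path_max)
        return "load_config: input footprint violated: path too long";
    return NULL;  /* footprint satisfied; real work would go here */
}
```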

I know that optimized code often calls for modularity to be sacrificed, and I appreciate that. Even though it’s far more robust to have programs that keep track of each step of the way and let you go back to any previous step, that’s not always possible while maintaining efficiency. However, that is still no excuse for not documenting the code properly. If a flaw isn’t going to be fixed, fine, but it absolutely must be documented in such a way that the end user can avoid it, recognize it, or know what to do now that they’ve run into it.

All programmers should be starting with pseudocode, making things as modular as possible (while maintaining efficiency), commenting each section of code not just by what it does but by how it works, and by its input and output footprints (bonus points for commenting each line), and putting in clarifying error messages. Quality code contains its own documentation.

“But you don’t understand, software is really complicated,” software developers say.

“Yes,” I reply. “That’s why code hygiene is very important. If you practice it up front, it makes everything easier down the line. It saves a great deal of time and effort troubleshooting and patching problems, overhauling code bases, and having to tell customers that a use case they considered obvious would break the system because nobody bothered to clean up the variable in between uses. Then you can deal with the really complicated problems and not have to worry about the simple, stupid ones.” This applies to user-friendliness in addition to simple code functionality. Clarification mindset is indispensable.

Preventing unauthorized access and control is much tougher because people can be very skilled in the use of hacking mindset to exploit the overlooked properties of systems. Security mindset entails paying attention to those overlooked properties, and documenting their input and output footprints to predict what conditions could allow systems to be hacked. While the extra effectiveness of security mindset is indeed more expensive in terms of effort and resources, it becomes much easier if systems are documented clearly and comprehensively. Then people can make informed decisions about what efforts they are willing to go through to obtain greater security, instead of periodically having to patch flaws as they are discovered (and then patch the patches, and watch as the system gets more and more tangled and difficult to effectively support).

The world of software could be so much better than it is, but first we have to change the way people think. That’s why I decided to teach myself how to write and support paradigms: mental software is the most effective tool humanity will ever have.


Filed under Comment of the Day, Ethics Alarms Award Nominee, Science & Technology

5 responses to “Comment Of The Day: “Wait, WHAT? NOW They Tell There Are “Two Big Flaws” in Every Computer?”

  1. Glenn Logan

    This is a great comment. I have only one observation.

    No matter how carefully one codes, and backtraces buffer overflows and other security issues, the biggest concern for me is the software that makes the software – i.e., the compiler.

    Optimizing compilers are known to treat some security checks as unnecessary and silently remove them. This isn’t the fault of the programmer writing the software, but of the programmer writing the compiler, and many of these compiler issues are “features,” not bugs.
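    A classic instance of this effect (my illustration, not taken from the papers cited below) is dead-store elimination: under the standard’s “as-if” rule, a compiler may legally delete a memset that scrubs a secret, because the buffer is never read again, so the store is “useless.” The secret then lingers in memory even though the source code clears it:

    ```c
    #include <string.h>

    void handle_login(void)
    {
        char pw[64];
        /* ... read and check the password ... */
        memset(pw, 0, sizeof pw);  /* optimizer may remove this entirely:
                                      pw is dead after this point */
    }

    /* One common remedy: write through a volatile pointer, which the
     * optimizer is not permitted to discard. (C11 Annex K's memset_s,
     * where available, gives the same guarantee.) */
    void scrub(void *p, size_t n)
    {
        volatile unsigned char *vp = p;
        while (n--)
            *vp++ = 0;
    }
    ```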

    For those familiar with computer programming, check out these papers from MIT discussing the problems. For those doing the coding, it might not be your code, but your compiler making the programs insecure.

  2. Andrew Wakeling

    One of my ‘best ever jobs’ was as a computer systems analyst in an insurance company in the 1970s. There were four or five of us, and we knew how all the critical systems worked: how you ensured all the cash had been posted, how you reconciled bank records with policy records, how you checked that benefit fields hadn’t been tampered with, how you could track changes etc. etc. Those were heady days and you didn’t do anything with the systems without coming through us. We held the ‘structures’ and the ‘disciplines’ and we seriously believed we were the guardians against chaos. (We were.) Quite often we had to explain to senior users (chief accountant etc.) why we wouldn’t make certain developments they wanted, like moving to on-line updating. It was typically because with our files and systems, the new processes (e.g. involving destructive updating) would be insecure and uncontrollable. We had a long term development plan and we had a clear view as to how big changes would be made (like moving to real time posting) and the order in which they would be done. (You update the feasibility checks first etc. You don’t destroy anything. You work out how you can reverse out of trouble and reestablish last Friday’s position etc.)

    The key ‘joke’ that ultimately undermined us guardians, was that ‘the user knows best’. He didn’t. We did.

    30 years later I return to similar companies as a senior executive in a large international group. I need to check out their controls and ‘sign them off’. In corporate terms I have high level ‘clout’. I typically can’t find anyone to talk to; no modern version of my ‘guardians’. I only do simple stuff and I try very hard to be constructive. “How do you know all that cash has been posted (and not stolen)?” “How do you know all those claim payments are properly recorded on policy records, and that if I added them up they’d reconcile with bank records?” “If I broke in and fraudulently changed the maturity date on this policy, how would you detect me, and what would you do?”

    The terrifying / depressing aspect is that I am now typically referred around the houses. “I think internal audit must cover that”. “We use an accounting package and you should talk to them”. “I don’t think data could be changed like that; the computer system has lots of controls and I think you should talk to IBM”.

    I know towards the end of my career I must have appeared old, cranky and out of date. “Why would you want to check computer records against the paper records in the basement?” etc. I could find many errors: sometimes the paper records for policy number 123456 were for a completely different case!

    I wrote high level reports and rattled many chains.

    But sadly what I never worked out how to do, was to revive the guardians.

  3. EC,
    Great comment; I agree 100%.

    Speaking of commenting on code: I’ve been told by many people that my code looks like a textbook and is commented better than anything they’ve ever seen. On average about 90% of my code has comments; I comment even the simplest things so that anyone who accesses the raw code in the future knows exactly what the intent is. It makes modifying and debugging nearly effortless.

    I’m in the midst of a huge coding project right now; that’s why I’ve not been around here as often. I should be done relatively soon.
