From the New York Times:
Computer security experts have discovered two major security flaws in the microprocessors inside nearly all of the world’s computers. The two problems, called Meltdown and Spectre, could allow hackers to steal the entire memory contents of computers, including mobile devices, personal computers and servers running in so-called cloud computer networks.
There is no easy fix for Spectre, which could require redesigning the processors, according to researchers. As for Meltdown, the software patch needed to fix the issue could slow down computers by as much as 30 percent — an ugly situation for people used to fast downloads from their favorite online services. “What actually happens with these flaws is different and what you do about them is different,” said Paul Kocher, a researcher who was an integral member of a team of researchers at big tech companies like Google and Rambus and in academia that discovered the flaws.
Meltdown is a particular problem for the cloud computing services run by the likes of Amazon, Google and Microsoft. By Wednesday evening, Google and Microsoft said they had updated their systems to deal with the flaw.
Here’s the best part:
Amazon told customers of its Amazon Web Services cloud service that the vulnerability “has existed for more than 20 years in modern processor architectures.”
We trust the tech giants and computer manufacturers to give us secure devices. We then entrust our businesses and lives to these devices.
That there were such massive “flaws” in every computer, and that it took 20 years for those whom we trusted to discover them, is an unprecedented breach of competence, trust, and responsibility. Imagine auto manufacturers announcing that every car in the world had a “flaw” that might cause a fatal crash. I see no difference ethically.
And why is this story buried in the Times’ Business Section, and not on the front page, not just of the Times, but of every newspaper?
Jack,
As should be obvious from any question or reasoning I have ever put forward here, I am new to constraining my analysis of situations to ethics alone. Questions…
“That there were such massive “flaws” in every computer, and that it took 20 years for those whom we trusted to discover them, is an unprecedented breach of competence, trust, and responsibility.”
Would this conclusion be the same if they provided these systems with a disclaimer?
Is the flaw the result of negligent oversight, or is it so subtle that it took the smartest, most educated people 20 years to find? If it is that subtle, is this a breach of competence?
Or, is the severity of the situation what merits your conclusion?
Yes. I think so.
Let’s take law firms as an example. Law firms must protect clients’ confidences. If there is any appreciable chance that a storage or communications method is not secure, it is unethical to risk client confidences by using it.
Computer manufacturers have, in fact, assured the legal profession that their products are secure. Lawyers trusted them, and then placed their own trust on the line with clients. Now we learn that the computer-makers have been installing dangerously flawed chips for 20 years. There is no way that length of time using bad parts isn’t negligence per se. The chip-makers too. And the damages to lawyers and their clients could be incalculable.
In their defense, these must have been extremely subtle flaws. Security software designers are heavily invested in rooting out any and all vulnerabilities, and the payday for being the first to discover flaws like this, and providing a fix, would have been enormous. They expend huge sums of money and man-hours looking for things like this, and yet they missed it.
It was so subtle, in fact, that no hacker had exploited it until last year at the earliest, if at all.
But that’s just moral luck.
It’s a mistake to assume that computer systems can be permanently secure. Computer software is incredibly complex. Many modern large software systems are arguably much more complex than anything else humans have ever built. There are so many interacting parts with so many behavioral discontinuities that it’s amazing our large software systems work as well as they do. A great deal of effort has gone into making large software systems work as well as they do. Despite that, software is still released into production with tons of defects.
It’s a matter of diminishing returns. The first few bugs are really easy to find, especially if they are found in the earliest stages of development, where they are cheapest to fix. But as more and more bugs are found, it becomes more and more expensive to find the next bug. And except in some special cases, it’s impossible to prove that there aren’t any more bugs. The optimal quality approach is therefore to release the code when the estimated cost of finding the next bug (including the loss of use of the software) exceeds the estimated inclusive cost of releasing the bug (including cost of releasing bug fixes, loss of reputation, lawsuits, etc).
To put it another way, eventually it just makes sense to release the code so people can use it and then push out bug fixes as they are found. Exactly when it makes sense to do this depends on the balance between the benefits of releasing the code, the amount of damage caused by bugs, and the cost of pushing out bug fixes. Defects are then typically found at random by users making use of various features of the software. In the case of security defects, however, the defects are usually discovered by (hopefully) security researchers or (more commonly) malicious hackers. The process then becomes one of fixing security bugs quickly after they are discovered to mitigate the damage. Security is an ongoing process of keeping up with the bad guys.
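Here is a minimal sketch of that break-even rule in Python, with entirely made-up cost figures, just to make the tradeoff concrete. It is not anyone’s actual release model; the numbers and names are invented for illustration.

```python
def should_release(cost_to_find_next_bug, expected_cost_of_shipping_it):
    """Hypothetical break-even rule from the paragraph above: keep testing
    only while finding the next bug is cheaper than the expected damage of
    shipping it (patch releases, lost use, reputation, lawsuits)."""
    return cost_to_find_next_bug >= expected_cost_of_shipping_it

# Early in testing: bugs are still cheap to find relative to their damage.
print(should_release(1_000, 50_000))    # False -> keep testing
# Late in testing: the next bug costs more to find than it would cost to ship.
print(should_release(200_000, 50_000))  # True  -> release, then patch as found
```

The hard part in practice, of course, is estimating those two costs; the point is only that the decision is a balance, not a pursuit of zero defects.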
Complex computer processors like the ones in question are essentially collections of smaller processors running built-in software that executes the processing model, so they are subject to the same tradeoffs. However, since the software is built-in (rather than downloaded like everything else these days), vendors are typically very cautious about allowing bugs to be released because of the high cost (or impossibility) of replacing defective units in the field.
This was a bug so hard to find that it’s been running on billions of processors for decades without anyone stumbling across it (as far as we know). Contrary to what you assert in an earlier comment, however, this is not “moral luck.” Using these chips in the face of possible undiscovered security flaws was a calculated risk. And the chip was engineered to make such flaws difficult to find, thus reducing the risk. Of course, now that the risk has been realized, we are going to have to pay the cost. But in the meantime we have received God-knows-how-many trillions of dollars of economic benefit by taking that risk.
Very helpful post. But to take a calculated risk, you have to know the risk you’re calculating. I assure you, nobody in the non-technical community imagined that there could be two “flaws” in every computer that, if exploited, would allow a malign agent to take everything.
But that’s just at this time. Given centuries of analysis, we might determine that today’s computers have thousands of vulnerabilities. The metrics that should be evaluated are the number of known vulnerabilities, time to fix, and responsiveness to threats.
Two flaws? There were more than 18,000 entries in the National Vulnerability Database in 2017. These are security bugs that are serious enough for the government to register and track. There are already more than 150 logged since New Year’s.
The fix for this particular one had been rolling out for weeks before the public was notified.
The NSA/CIA often find these flaws long before some random researcher who would responsibly disclose them to the relevant parties, and they actively exploit them while knowingly leaving everyone at risk, sometimes for years. What about the ethics of that?
Thanks; this whole thread has been very enlightening.
Hey, everyone: this is Fred, the same Fred who supplied dozens of stories, many of them obscure or hard to find, for Ethics Alarms posts over the last few years. A round of applause, please. He earned it.
Thank you, Fred!
Thanks, Fred! Pleased to meet a top ethics scout.
Jack,
I don’t know what has been settled legally, so can you elucidate what the legal definition of “secure” is? I’m taking a moment to read up on Spectre, which is proving interesting, and I think there might be a reasonable case against chip manufacturers. I might comment more on it later. But “secure” is a very gray term in every usage I’ve seen in my years working with computers. In general, when someone tells me they want a perfectly secure computer, I tell them to encase it in lead, bury it 1,000 feet underground, and destroy all evidence of where it was buried. There is no such thing as a usable computer that is perfectly secure. Every bit of hardware and software has flaws. Some are impossibly hard to find; others are infeasible to exploit. Security is often at best “well, none of the experts have found any hacks… yet.”
Some problems are more egregious than others. Some cryptography in use (RSA, for example) depends on multiplying two large primes together and using the product as part of the key. This is in general considered secure because, as far as anyone knows, factoring such a product is very hard to do efficiently. If at some later point in time we develop a new, very fast method of factoring, all those means of cryptography depending on factoring become insecure. Is it negligent to be using those means of cryptography, given that it is possible that factoring is very easy, and no one currently knows how to do it?
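To make that concrete, here is a toy, textbook-style RSA sketch in Python (3.8+ for the modular inverse via pow). The primes are absurdly small precisely so the “fast factoring breaks everything” point is visible; real keys use primes hundreds of digits long, which nobody currently knows how to factor efficiently.

```python
# Toy RSA with tiny primes, for illustration only.
p, q = 61, 53
n = p * q                    # public modulus; security rests on factoring n being hard
e = 17                       # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent (requires Python 3.8+)

message = 42
cipher = pow(message, e, n)
print(pow(cipher, d, n))     # 42 -- legitimate decryption with the private key

# If factoring were easy, an attacker could recover the private key from n alone.
for candidate in range(2, n):
    if n % candidate == 0:
        p2, q2 = candidate, n // candidate
        d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
        print(pow(cipher, d_recovered, n))  # 42 -- broken by brute-force factoring
        break
```

With a 2048-bit modulus the brute-force loop at the bottom becomes hopeless with current methods, which is the entire basis of the scheme’s security.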
It’s like asking for a secure book, isn’t it? Anywhere information is stored so that one person can read it, there’s some way that another person can, too. Security mindset is an approach, not an accomplishment.
I’d expect people to say, “This system follows the practices made into law based on the foremost experts, in addition to some practices those experts recommend that aren’t required. We stopped short of destroying the information. That’s the best we’ve got, based on your cost and convenience requirements.”
Car manufacturers Bad! Computer manufacturers Good Geniuses on right side of history! Steve Jobs! Huzzah!
My favorite form of virtue signaling: An Apple sticker on the back windshield of a car.
Well, “they” got us right where they always wanted us.
For a more detailed summary of the problem (with links to even more details inside):
https://tech.slashdot.org/story/18/01/04/0524234/google-says-almost-all-cpus-since-1995-vulnerable-to-meltdown-and-spectre-flaws
Meltdown seems to be limited to Intel processors (which do make up a large majority of the market), but Spectre, while harder to exploit, is much more fundamental, having to do with the way the architectures themselves are designed (and have been designed for, you guessed it, 20+ years).
Mistakes will happen, and vulnerabilities will ALWAYS exist and be discovered. The real thing to pay attention to here is how OEMs such as Intel, AMD, and ARM (and their various licensed users) react and patch these issues, and more importantly: how quickly.
“We trust the tech giants and computer manufacturers to give us secure devices.”
Not sure who this “we” is as anyone capable of reading a book is capable of discerning the problems with ubiquitous computer technology (or is it technocracy?). The problem is the proverbial “we” have not taken the time to research these tech giants, their ethics, and their efficacy in a multitude of situations. We didn’t bother to notice how IBM’s creepy Watson is a relic from Nazi science. We didn’t bother to question giving Facebook every moment of our lives. We don’t ask if Alexa monitoring our homes every second could be a total loss of privacy.
We put blind trust in companies headed by megalomaniacs who literally want this stuff both on & in our bodies. We said okay to encroaching data collection, facial/body/voice recognition, constant tracking, and unconscious constant confession of our most personal first thoughts.
As Chris Rock once said “That tiger didn’t go crazy. That tiger went tiger!” Of course tech security failed because we blindly trusted it. We don’t research these companies, their major players, their aspirations, and their flaws. We ignore who these companies collaborate with and what other interests they fund/push.
We got what we paid for by paying attention to what these technologies could do for us, rather than paying attention to what they are doing to us. We didn’t ask if their products were secure because we assumed out of laziness and willful distraction that they were.
And why is the story being buried for the moment? I think that’s a question anyone using a computer today should ask. But will they?
By definition we trust them. Trillions of dollars of data and trade secrets rely on their products. If the devices are not secure, then the customer must be told, “We cannot vouch for the security of your data. We make mistakes and may have installed fatally flawed chips.”
I use this stuff but I have & will never trust it. I have read too much to not know better.
Have you ever verbally mentioned an interest in a product in a conversation, in the presence of your smartphone (or home interactive device) without using the device and then found that you get ads targeting that product?
Read the little disclaimer the next time an app wants the use of your microphone and/or camera: it never specifically states that this permission only extends to when you are using the app, does it?
How would you know that the mic was not in use at any point of the day (given that the NSA has admitted they can hack a device to do just that, and what they can do, any other organization could also figure out)? I am reminded of the picture of the Facebook owner in his office, with tape over the camera built into his laptop…
Why is the story being buried? Too big, too scary, too complex, to be allowed to fail, of course. As a society, we use computers such that their sudden failure would be more disrupting than an earthquake, or a flood, across our entire society.
You know why the story is being buried. All that you said and more.
It is sometimes good to be concerned.
I’m not going to pounce on you because you’re just following the media hysterics, and while this is bad it is not nuclear-reactor-going-to-blow-up bad.
Let me explain in as non-technical terms as possible.
First, let’s make clear that the problem is one of “information disclosure,” meaning that someone who should not be able to look at information in your computer’s memory can do so (these attacks only work on data in active RAM). This is a bad problem to have for cloud computing companies who share gigantic machines between multiple users (e.g. Google, Amazon) and less so for individual users. It would be more of a problem if you are being targeted – as in spear phishing – but for the generic computer user the chance of being targeted by these attacks is pretty low. Why? Because if you are attacked as a regular Internet user, there is an enormous number of other Internet users being attacked too. Also, either your memory is automatically analyzed in place (which antivirus software updates in the next couple of days should be able to catch) or the full contents have to be streamed to the attacker’s machine (which will trigger even more egregious alarms). This vulnerability does not allow an attacker to take control of your computer to make it part of a botnet or anything like that, so the value of going after individuals is very low (even on the off chance they can grab a bank account’s password or something, lots of extra work is needed to take advantage of that info).
Second, the issue has been present for twenty years. Yes, but it has only been found in the past few months. I’ll talk more about responsible disclosure later, but for now, know that the people who can fix the problem in software have been informed for a while and most major vendors had patches ready to go on short notice. An analogous problem would be asbestos used for pretty much everything in old houses. It was used for decades before we figured out breathing its particles caused cancer. Does that make asbestos makers and users criminal, liable or unethical? We can argue about that, and we can argue about the way to go fix it; but the answer is not as obvious as it seems. I will certainly keep watching to see how Intel deals with the problem (it will be educational both from a process and technical perspective), but I would not call them negligent on the basis that this specific issue exists.
Third, the root cause is in hardware. There is nothing wrong with the software, and the proposed “fixes” only disable the relevant hardware feature (speculative execution, for those who care about the gritty details). In essence, modern Intel processors try to guess what the next instruction’s result will be while the previous one is being processed. Say we have a test ‘A’ and an instruction that will read memory ‘B’. ‘A’ checks if the program should have access to ‘B’, but the processor starts executing ‘B’ while ‘A’ is being evaluated so it can have the result immediately rather than having to wait. ‘B’ in this case will complete almost immediately if access is allowed and take orders of magnitude more time if not. A clever attacker will chain ‘A’ and ‘B’ in such a way that execution of the next instruction (whatever it is, let’s call it ‘C’) is timed, and can determine whether ‘B’ was reaching out to valid memory or not. This set of instructions allows a very small amount of information (a couple of bits, really) to be determined. Repeated execution of this sequence with changing values for the memory to test allows reconstruction of all memory space in the target computer. This is fucking clever – and I don’t use profanity lightly. This is like someone figuring out they can track your location by repeatedly looking at the tire marks of only two-lane road intersections in a 10-mile radius.
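For the curious, here is a deliberately crude Python analogy of the timing trick described above. It is not Spectre or Meltdown (the real exploits work on cache behavior during speculative execution, in native code); it only shows the underlying principle that an attacker who can never read a secret directly can still reconstruct it, bit by bit, purely from how long an operation takes. All names and timings here are invented.

```python
import time

SECRET = 0b101101  # the value the "attacker" is never allowed to read directly

def guarded_operation(bit_index):
    """Stand-in for the victim ('A' then 'B' above): runs measurably slower
    or faster depending on one bit of the secret."""
    if (SECRET >> bit_index) & 1:
        time.sleep(0.002)    # "slow" path
    else:
        time.sleep(0.0005)   # "fast" path

def probe(bit_index, samples=5):
    """Attacker code: only measures elapsed time, never touches SECRET."""
    start = time.perf_counter()
    for _ in range(samples):
        guarded_operation(bit_index)
    return (time.perf_counter() - start) / samples

# Reconstruct the secret one bit at a time from timing alone.
recovered = 0
for i in range(6):
    if probe(i) > 0.001:     # threshold between the fast and slow paths
        recovered |= 1 << i

print(bin(recovered))        # 0b101101
```

Each probe leaks only a bit or two, exactly as described above; repetition over the whole address range is what turns that trickle into a full memory dump.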
Fourth, the targeted feature is there to make your computer faster, not slower. The fact that someone figured out how to exploit it does not invalidate the original design. Experts will have to figure out a way to make this attack statistically unreliable while not sacrificing the gains in processor speed. This is not a straightforward problem to solve. Had it been internally known at Intel years ago, there might be a case against them for not disclosing it (responsibly) to OS makers. Also, I would expect a good design that takes it into account to take years to be done properly.
Fifth, disclosure of the problem was borderline. By that I mean that the issue was disclosed early to parties that could fix it, and as far as I can tell the plan was to make a coordinated release of patches and detailed explanation of the vulnerability about ten days from now. For some reason the agreement was broken days early, but most of the work had been done, and with quick response from software makers the patches were released. Updates will be out in a few days if they’re not already there. On the hardware side there isn’t much to do except recommend that the speculative execution feature be disabled for sensitive systems (which after studying I conclude should be the default).
I’m happy I don’t have to deal with the fallout of this any more. In my line of work we just plain disable all fancy processor features because predictability trumps speed every day, but two years ago I would have been running like a headless chicken instead of armchair quarterbacking.
Sigh… Gell-Mann Amnesia, it’s a thing. 🙂
Cheers!
Wow, WordPress ate most of my paragraph breaks. Sorry, I hope you get a chance to fix it sometime later Jack.
I’m on it.
Much appreciated, and thanks for the formatting too. I need to learn how to do that (one of these days, maybe).
Thanks, Alex. You saved me a LOT of time and hated typing. Good job.
Great explanation, Alex. I do this for a living too, and I agree with your spot-on analysis (to include the “fucking clever” part).
It’s not that they knew about it for 20-ish years and kept it a secret all this time. It’s been there for 20-ish years and only recently been discovered.
Having been discovered, the temporary secrecy is an industry standard that is employed to prevent those who are not in the know but who have the skills from creating and using exploit code while the people in charge of fixing it are still fixing it.
–Dwayne
This is a well-formed explanation, and I’m satisfied with it. I don’t consider these security flaws particularly damning, since it’s impossible to design hardware in such a way that it does its job while maintaining security against ways of interacting with it that may not even have been conceived yet, let alone designed or implemented. It becomes prohibitive to secure every part of the inside of a box just in case somebody decides to upgrade their box by putting unsecured things inside it.
That said, my rant below stands in general. It was prompted by this post, but I didn’t intend for it to refer to these particular security flaws (and I probably should have made that explicit). (However, I still consider Heartbleed, for example, an indictment of security-consciousness in software design.) I do think that more conscientious practices would make it easier to foresee such problems and less onerous to fix them, even if it’s still infeasible to prevent them entirely (security almost always comes with a cost).
“Imagine auto manufacturers announcing that every car in the world had a “flaw” that might cause a fatal crash. I see no difference ethically.”
The analogy is flawed, try this:
Imagine auto manufacturers announcing that every car in the world had a “flaw” that could be exploited to cause a fatal crash.
Welcome to the discussion on self-driving cars, auto-pilot, connected cars, etc.
The guys at Bald Move, who review TV series via podcasts, discussed this topic once during their review of a Westworld episode.
They brought up an interesting point about human nature and evaluating risk and what level we’d tolerate hazards/consequences.
They posed a hypothetical: say 20,000 people die on the road each year. What if fully automated cars reduced those deaths to 5,000, but the 5,000 are entirely different kinds of deaths, caused by situations deriving from the driving algorithms… deaths that probably would have been avoidable IF there had been a human driver?
Would we, as the humans, cede control of our vehicles to automation given that situation?
The numbers would say yes we ought to…but our gut instincts, when presented with the new causes of deaths might not be so amenable.
That actually sparks a quite fascinating discussion. I’ll be interested to watch that, thanks!
Well, Westworld doesn’t have the discussion on that hypothetical. The guys on the podcast brought up that hypothetical to discuss some other aspect of the TV show.
That’s odd, my name is weird.
As far as I’ve read, both of these flaws require code to be running on the target computer to exploit (i.e. they can’t be exploited remotely by themselves), so perhaps this is an even more apt analogy:
Imagine auto manufacturers announcing that every car in the world had a “flaw” that could be exploited to cause a fatal crash, but only by someone who has the ignition key.
Don’t run software from unknown sources, and don’t let people you don’t trust have access to your computer or phone, and you drastically lower your risk from these (and most other) security flaws.
I mean, if it’s a physical security threat, then cars already have this fatal flaw as they carry a not-insubstantial amount of combustible fluid everywhere they go….
Re: “has existed for more than 20 years in modern processor architectures.”
However, computers have only recently had the power to have any chance of exploiting that flaw.
As Ryan Harkins has pointed out, at some time in the future we will have computers capable of hacking large-prime encryption; that doesn’t mean it is unethical to use the most secure system currently available in the meantime.
The biggest risk for your data is sneakernet; real live people with sneakers on their feet who already have authorised access to the data. Always has been, always will be.
Well, what you are saying is that the manufacture of technology is inherently unethical, because no piece of technology will ever be free of bugs or glitches or loopholes that could be exploited maliciously. You could test a million different scenarios, but there are 7 billion people on this planet, and one of them will do something not tested. 99% of the time, the software will respond as expected, but that 1% of the time, something unexpected will happen, and that may be repurposed for nefarious ends.
The facts of the situation here are that billions of machines have been doing trillions of unique tasks for twenty years. Only recently has a particular set of unanticipated instructions been able to access a part of memory that it was not supposed to.
unprecedented breach of competence, trust, and responsibility
The bald fact is that every computer system that ships has severe security flaws: the million-and-first and billion-and-first scenarios not tested. Every operating system ever written, and every one that ever will be written. Is this irresponsible?
This becomes the utilitarian balance: will the technology benefit more people than could be harmed by the anticipated, but virtually unknowable, flaws?
For example, the Lawyer knows that he has to keep his client’s confidences. He could keep a strictly paper office, but this reduces his utility to the client; even then paper copies are still vulnerable to physical theft. So he adopts a computer system.
Knowing that computers have inherent risks, he is obligated to manage the computer according to best practices. This includes avoiding known vectors for malicious software and behavior. Email, for instance, is sent unencrypted, because the protocol was designed around 1980, when intercepting network traffic was not considered a practical threat. My doctor gets around this problem by hosting a private “portal” that allows secure messages to be sent and received, with an email sent to inform me that a secure message is available. My privacy is protected, but with the caveat that I must log in to get the information.
The other major vector is downloading malicious software. These mostly come from sketchy websites and emails of unknown or fraudulent origin. Even then, if the software is downloaded, it still must be executed to harm the computer. Best practice is to stay away from these websites on a work computer, to download operating system patches and anti-virus updates (because, again, every system has unknown flaws and is constantly patched as they are discovered), and certainly not to open unverified files!
So far, for anyone following best practices and not downloading untrusted files, the previously unknown “Meltdown” and “Spectre” flaws could NOT harm the computer system, and thus the client. The lawyer would need to download a virus using one of the exploits, and open the file to run it. If he does not, his computer is safe. I have not had a virus detected on a computer I own for years, because I follow this practice faithfully. Virtually all major security breaches involve companies not updating their systems against the latest known exploits.
“Meltdown” and “Spectre” are particularly dangerous if activated, because most activated viruses can only access or corrupt information saved in files on the hard drive. An M&S virus, however, could access information not yet saved, by violating protections in how programs are kept separate by the operating system.
Amazon told customers of its Amazon Web Services cloud service that the vulnerability “has existed for more than 20 years in modern processor architectures.”
The operating system processes and the underlying chip control logic that M&S exploits dates from revisions made 20 years ago; the exploits, however, are of recent discovery. This is not itself all that profound, and is why it is “buried” in the business section. It is routine, if unfortunate.
Microsoft, for instance, issues patches for both current and previous versions of Windows, as much of the code is the same. New vulnerabilities in this old code base, however, are often found decades later; because a virus exploiting the hole is detected, or security researchers found it first. Once found, software companies will issue patches to block the exploit, and anti-virus software will be updated to find viruses with code known to use the exploit.
“Meltdown” and especially “Spectre” present unique challenges, in that the problem exists in how the software interacts with the hardware. “Meltdown” can be patched, but there is a performance hit that only new versions of the hardware itself can avoid. “Spectre” is a more fundamental problem, but can be held at bay by blocking specific viruses that use it as they become known. Best practices, updating the system and deleting unknown files, can still protect systems.
No. I’m saying that lawyers, who are bound to protect their clients’ confidences above all else, must not assume that when they are told a technology is safe for such data, it is. And I am saying that if the tech experts know that there might be such bugs, they have an obligation to say so in clear terms that everyone understands.
They didn’t.
What stored data is ever safe? Is the standard “completely impenetrably safe” or is it “reasonably safe”? The only safe data that I know of is data that has been 100% destroyed or has never been documented. Of course, you might think that data only you have in your head is safe, but that’s only safe until you’re tortured and your resolve gives way. Or how about data stored in your brain, when tomorrow someone invents a brain scanner that works even after you die, or can retrieve fragments from mutilated parts of the brain?
I think if the legal system, the bar associations, and you are using a strict definition for “secure” and “safe”, they should reconsider.
It’s “reasonably” safe, but also “as safe as the lawyer can reasonably make it.” In order to meet that standard, the lawyer has to know how safe it is. If the lawyer knows that major flaws are not being communicated, then it is unreasonable to assume the information is reasonably safe.
Jack, if the “as safe as the lawyer can reasonably make it” standard has to include “impervious to new forms of attack that have not been invented yet and won’t be for another 20 years” . . . then I think you can scratch out the word “reasonably”.
–Dwayne
If the new forms of attack are nonetheless inevitable, then it is unreasonable for a lawyer to tell a client that his secrets are secure. They aren’t.
“If the new forms of attack are nonetheless inevitable, then it is unreasonable for a lawyer to tell a client that his secrets are secure. They aren’t.”
Bingo.
They are inevitable, and the systems are not secure… ever. If you don’t want a secret known, tell it to your best friend, with whom you have implicit trust, then use high explosives to disintegrate his head. Electronic media will always be able to be read by someone in the right circumstances, by the nature of saving the data in the first place.
So never record such a thing at all.
Or use misdirection to make an attacker look in the wrong place, or for the wrong secret. This is the purloined letter method, or the honey pot technique.
Security is a mindset, not a destination. Technology changes the game (for instance, it is possible to read what is on a computer screen from a van parked outside, if you have the right equipment). You do the best you can and keep up with current events.
An example: a border guard observed Mike riding a bike through the border checkpoint every day, and noticed some ‘tells’ that Mike is nervous, as in “Don’t search me.” So he searches, every day… and never finds anything illegal. This goes on for years, with no result. One day, Mike fails to come through. Inquiries reveal Mike is retired. The guard seeks Mike out in a local bar, as they have something of a relationship after years of banter, and says “Mike, I know you were smuggling something, and now you are retired at a young age. I never could figure out what it was, even though I scanned and disassembled your bike many times. Between us, what were you smuggling?”
Mike smiles and says “Bicycles.”
The problem is that the particular exploits were unknowable when the chips were designed. Now that it is known, the firms involved have been swift to inform the public so that they know to install the patches (after a brief embargo period to allow major tech firms to develop those very patches and workarounds).
It’s interesting to note that when this problem was introduced in 1995, these security flaws wouldn’t have been much of an issue. That’s because in order for the Meltdown or Spectre exploit software to steal data from your computer, they actually have to be running on your computer.
But back in 1995, that would be unlikely. Most of the code running on a computer was intentionally obtained from commercial software vendors (or other trusted distributors) and put there by the computer’s owners. You wouldn’t normally have allowed total strangers to access your computer and install software on it. (Computer viruses did exactly that, of course, which is why they were such a security threat.)
In fact, prior to Windows 95 (or maybe even Windows 2000), non-server versions of Windows didn’t even provide strong memory isolation, meaning that any program running on your computer could read data from any other program. Nowadays, all major general-purpose operating systems offer strong isolation. Or at least they did. That’s what researchers broke with the Meltdown or Spectre exploits.
That’s much more of a problem nowadays than it was in 1995, because we have come to rely on isolation security barriers to allow us to intentionally allow total strangers to run code on the computers where we store data. We do this in at least three major ways: First, there’s your web browser, which is running code that you download from random strangers’ servers every time you visit a web page. Your browser is designed to allow you to do this while protecting your data from malicious code, but its ability to do so is based in part on the assumption that hardware isolation still works.
Second, a lot of things “in the cloud” involve letting multiple users share the same computer resources. Most commonly, if you use a hosting service for a small website with shared hosting, you are probably sharing the server with dozens or hundreds of other people. The operating system and services are designed to keep you isolated, but again, they depend on hardware isolation. (Containers and virtual servers also depend on memory isolation.) Other people sharing the same physical computer hardware might be able to read your data by using these exploits.
Finally, we now have all these smart phones and tablets that can download apps. This sounds similar to 1995, when we installed software from third parties, except that these devices were carefully designed so that apps can’t see or change each other’s data, and we’ve grown to depend on that, meaning that we’re used to assuming that we can fearlessly download apps without worrying that they might try to steal something from us. That may no longer be the case.
In other words, in 1995 the chip designers introduced a subtle bug in the design of a feature that not many people were using on personal computers. Over the next two decades, we came to rely on that feature a lot, but nobody noticed the flaw. Or if they suspected it, they didn’t believe it could be exploited as a practical matter. There’s probably a lesson in that.
The lesson is called Chaos, as you know. Inherently complex non-Newtonian systems are impossible to predict and manage, and unintended stuff happens. Lawyers don’t understand Chaos, and they don’t understand how complex the systems are that they rely on.
Yes. It is hard not to conclude we are all doomed in the longer term to some very bad experiences, some of which might well be existentially significant. Much of ‘risk management’ is highly unsatisfying. (I’ve spent much of my career supposedly doing it.) A truism in our awful maths is that a very small chance of a catastrophe tomorrow equates to a certainty of a catastrophe sometime. It is depressing that we blunder into so many exposures to risk without seeming to consider whether we really need to go there. And once exposed, we rarely seem to be able to back off. (Who decided to make our modern cars even conceivably vulnerable to hackers acting from a distance? Who decided to equip our militaries with weapons that could destroy the planet, with control systems that could conceivably be hacked? Who was consulted? What were the alternatives?). Our increasingly interconnected world often brings increased risk. If we were truly concerned about survival (of our great grandchildren and beyond) we’d surely agree that we must build more effective world governance. But we’re not going to do that, are we? Something to do with ‘freedom’ being more important? And by then we’ll probably all be dead anyway.
It sounds like you are concerned about the big picture and who is looking out for it, particularly pertaining to existential risk. There are quite a few existential risk non-profit organizations looking out for such things, but it would be nice if such things were incorporated into our collective decision-making process, instead of being relegated to an afterthought.
As it happens, I’ve been working on a plan for redesigning the government (keeping the three branches), to address this lack of nuanced thought about the big picture. There are four fundamental big-picture factors I’ve identified that everyone should be cognizant of, and that it would likely be beneficial for a government to be responsible for monitoring. These factors apply to any project or situation, but when we scale them up to society as a whole, it becomes easy to see why we need to pay attention to them and the tradeoffs of choosing to address any one over another. I call them the four apocalypses. They will be managed and weighed against each other by the Apocalyptic Departments.
“Famine” is the apocalypse pertaining to scarcity and prosperity. All resources–food, water, shelter, energy, transportation, preventative healthcare, information access… even money (though it’s just a fungible promise of goods and services)–fall under this category. These resources are not all necessary to the same degree, but anything that people need to maintain their standard of living is covered by this apocalypse. Dealing with famine requires keeping an eye on the distribution and use of resources, making sure that the system is sustainable in the long term and, ideally, engineering an increase in the resources available to people. In sum, the Department of Famine will push for making sure we don’t run out of stuff.
“Pestilence” is the apocalypse pertaining to disasters and safety. Accidents, catastrophes, and plagues, whether natural or anthropogenic, all fall under this category. It is impossible to prevent all undesirable events, even at great expense, but those who handle this apocalypse are charged with preparing for, mitigating, and responding to threats to society, people, and their works. Such measures may include safety regulations, emergency training, and response teams. It most obviously conflicts with famine due to the increased cost of reducing risks, but those risks include existential risk: the possibility that humanity will somehow be wiped out. In sum, the Department of Pestilence will push for making sure we don’t run into stuff.
“War” is the apocalypse pertaining to conflict and peace. Even in a prosperous, safe world, people’s desires will be at odds with each other. People who deal with this apocalypse are called to mediate, negotiate, and arbitrate disagreements between individuals, groups, and cultures. Proactively, they also mitigate conflicts by eliciting widespread participation in decisions, fostering understanding, and spurring creative solutions and compromises which better satisfy all involved parties. They would also do well to help people accept that they won’t always be able to get what they want, and to better understand and modulate their own feelings and communicate with others more effectively. In sum, the Department of War will push for making sure we don’t destroy each other.
“Age” is the apocalypse pertaining to complacency and freedom. It represents the stagnation that comes from within a society if we allow ourselves to become fixed in our ways on the one hand, or the degeneration that comes if we discard our self-restraint because we forget why it is important on the other. If society ossifies or breaks down and becomes a place that is not worth living in, the other departments will be pointless. People who deal with this apocalypse will be working to ensure that the freedom of people to live their lives as they choose is not casually curtailed by the other departments, but also that people learn the maturity to use that freedom responsibly. In sum, the Department of Age will push for making sure we don’t destroy ourselves.
Each of these departments (in whichever branch is relevant) will need to approve each law, policy, initiative, and expenditure. That way we know the decision has been considered from the point of view of each type of big-picture concern. Furthermore, the division of big-picture concerns into different categories will obviate the two-party system, which survives in part because an elected official is expected to vote on any type of issue.
Because of this broad mandate, candidates can afford to have destructive views on some issues as long as they agree with their constituents on the “important” ones. That’s how the Republicans and Democrats have accumulated bundles of unrelated or contradictory policy positions. If people are willing to pay attention to more offices and candidates, they will get to choose a candidate who agrees with them on famine, for instance, and not worry about their stance on age.
Of course, the principal-agent problem of representative democracy (ensuring it is really representative) can’t be effectively addressed without a more educated and capable populace. That’s another thing I’ve been working on.
Yes, it would indeed be ‘nice if such things could be incorporated ….’ and it is very good to know you are working on it.
Can the heads of your four apocalyptic departments be seen as ‘horsemen’ ….. ? I don’t know my Revelations well enough to spot the parallels you may be seeking to draw. (Will you also need seven seals and a Great Harlot?)
Yes, I am ‘concerned’, or at least ‘interested in’, systemic risk with potentially catastrophic consequences. I accept that the ‘boy’ has cried ‘wolf’ too many times: Malthus on population, Club of Rome, Silent Spring, peak oil, climate change, avian flu etc. But there might still be a ‘wolf’. And it is understandable that, with the doomsayers being so frequently wrong, there is a general confidence that we should just let the animal spirits rip and it will all sort itself out.
I am very depressed by the ongoing ‘climate’ debate. Not because I view climate change as a serious risk (I am as yet unpersuaded); but because the debate has been so appallingly bad mannered. If we do have a serious existential risk heading our way, it would be ‘nice’ to know our best and brightest scientists and economists could work together to develop an appropriate response. Most of them have been educated largely out of the public purse. And they owe us (as their ultimate client) better than a seat at ringside while they batter and insult each other.
I am most interested in economic systems. The chaos in macroeconomics is startling. There is no ‘consensus’ any more, Washington or otherwise. Does that matter? I like Hyman Minsky, who I think (if still alive) might be saying that we are better off knowing that we don’t know, rather than thinking that we do. If ‘stability is destablising’ then maybe the converse is also true?
An addendum.
https://arstechnica.com/information-technology/2018/01/intel-ceos-sale-of-stock-just-before-security-bug-reveal-raises-questions/
The headline says it all. Intel’s CEO dumped stock over this before it hit the headlines.
You know, when I think something will cause a drop in my stock values, I usually ask my financial adviser to dump those stocks. You don’t suppose that CEO might have thought bad news would cause stocks prices to drop do you?
We know for sure that whether it was unethical will depend on just how much he knew about the flaws beforehand.
But is dumping his stock inherently unethical?
I’m not so sure.
Doing it before the news becomes public? Yes actually, it’s called insider trading.
Is that technically insider trading? I mean, maybe it is. Doesn’t it have a lot to do with the specific timing of the sales, as well as whether or not the seller legally disclosed the sale?
I mean, maybe the article mentions it…I didn’t read it as I was only invited to consider the headline.
It has to do with what non-public info he had before the sale.
Stock sales can be tricky and I won’t pass judgment yet. What’s important here is that news sites have raised the concern and put the spotlight on it. Certainly it has the attention of the SEC right now, which can look at the evidence, open an investigation if necessary, and make a proper determination.
If he set up this sale years in advance on a schedule, probably not a problem. But from the articles I did read, yes, this stinks of Insider Trading and at a minimum warrants a closer look by the SEC.
This is not likely to be a popular opinion among professional programmers, but I feel it needs to be said.
The excuse that computers are complex and that testing to remove all of these flaws would take a prohibitive amount of time just doesn’t hold water. I understand that security vulnerabilities are different from outright bugs: security vulnerabilities are only problems because people deliberately manipulate the system in unanticipated ways. Bugs happen when people inadvertently manipulate the system in unanticipated ways. Some of these ways are incredibly sophisticated and may be infeasible to anticipate. However, having supported computers for the past few years, I’ve seen bugs that should have been anticipated, and zero testing would be required in order to do so.
The problem with testing is that the people testing usually understand the software well enough to know how it is supposed to work, or they are given a few basic things to try, but they don’t have time to test a program with heavy use. Luckily, testing is not the problem.
The problem is that in many cases I’ve seen (and I’ve come to suspect most cases across the software industry) the input and output footprints of code modules are not documented (and if your code contains comments laying out the pseudocode structure, I consider you very lucky). From an engineering standpoint, the input footprint of a system or subsystem describes the conditions the system assumes to be true in order to work effectively. The output footprint describes what effects (including side-effects) the system has or could have on its environment, including if the input footprint is not fulfilled. Those aren’t the official names; I’ve just been calling them that.
The thing about input and output footprints is that you don’t have to know everything that could possibly happen, and you don’t have to test everything. The input footprint will tell you what could go wrong with the code, and the output footprint will tell you what problems the code could cause. As a simple example, if the input footprint is “the computer is intact,” you don’t have to make a list of all the things that could physically break the computer. At most, all you’d need is the range of temperatures, pressures, and other environmental conditions which interact with the material properties of the computer to cause its structure to break down. If your output footprint is “returns the sum of two numbers, and leaves the memory it used for the computation locked,” then you don’t need to test it to know that it will result in a memory leak, eat up your RAM, and ultimately crash your computer if you use it too much without restarting. When you put multiple modules together, you can check a module’s output footprint against the input footprint of another module, and make sure they don’t interfere. It’s much easier than testing, and it simplifies complex interactions enough to avoid problems.
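Here is a hedged illustration of what that kind of documentation might look like in practice. “Input/output footprint” is the commenter’s own term, roughly a precondition/postcondition contract; the function and all names here are invented for the example.

```python
def transfer(ledger, src, dst, amount):
    """Move `amount` from account `src` to account `dst` in `ledger`.

    Input footprint (what this code assumes to be true):
      - `ledger` maps account names to non-negative integer balances
      - `src` and `dst` are existing keys in `ledger`
      - 0 <= amount <= ledger[src]

    Output footprint (everything this code can affect):
      - modifies exactly ledger[src] and ledger[dst], and nothing else
      - raises ValueError, leaving `ledger` unchanged, if the input
        footprint is not satisfied
    """
    if src not in ledger or dst not in ledger:
        raise ValueError(f"unknown account: {src!r} or {dst!r}")
    if not (0 <= amount <= ledger[src]):
        raise ValueError(f"invalid amount {amount} for balance {ledger[src]}")
    ledger[src] -= amount
    ledger[dst] += amount
```

A caller (or reviewer) can then check its own output footprint against these stated assumptions without ever running the code, which is the composition check described above.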
Failure modes are also important. All programs should have error messages that at least describe what step or module failed, if not why it failed. That’s what input footprints are for.
I know that optimized code often calls for modularity to be sacrificed, and I appreciate that. Even though it’s far more robust to have programs that keep track of each step of the way and let you go back to any previous step, that’s not always possible while maintaining efficiency. However, that is still no excuse for not documenting the code properly. If a flaw isn’t going to be fixed, fine, but it absolutely must be documented in such a way that the end user can avoid it, recognize it, or know what to do now that they’ve run into it.
All programmers should be starting with pseudocode, making things as modular as possible (while maintaining efficiency), commenting each section of code not just by what it does but by how it works, and by its input and output footprints (bonus points for commenting each line), and putting in clarifying error messages. Quality code contains its own documentation.
“But you don’t understand, software is really complicated,” software developers say.
“Yes,” I reply. “That’s why code hygiene is very important. If you practice it up front, it makes everything easier down the line. It saves a great deal of time and effort troubleshooting and patching problems, overhauling code bases, and having to tell customers that a use case they considered obvious would break the system because nobody bothered to clean up the variable in between uses. Then you can deal with the really complicated problems and not have to worry about the simple, stupid ones.” This applies to user-friendliness in addition to simple code functionality. Clarification mindset is indispensable.
Preventing unauthorized access and control is much tougher because people can be very skilled in the use of hacking mindset to exploit the overlooked properties of systems. Security mindset entails paying attention to those overlooked properties, and documenting their input and output footprints to predict what conditions could allow systems to be hacked. While the extra effectiveness of security mindset is indeed more expensive in terms of effort and resources, it becomes much easier if systems are documented clearly and comprehensively. Then people can make informed decisions about what efforts they are willing to go through to obtain greater security, instead of periodically having to patch flaws as they are discovered (and then patch the patches, and watch as the system gets more and more tangled and difficult to effectively support).
The world of software could be so much better than it is, but first we have to change the way people think. That’s why I decided to teach myself how to write and support paradigms: mental software is the most effective tool humanity will ever have.
A Comment of the Day!
There are solid reasons to keep critical computer systems 100% isolated from the internet.
What examples would you provide as critical?
Michael Ejercito wrote, “What examples would you provide as critical?”
Top Secret documents, personal records, engineering documents, medical records, financial information, in general anything that you do not want to become public.