Tag Archives: computers

The Wake-Up Call And The Power Cord

As you may have noticed, your host has been involuntarily separated from Ethics Alarms for about 24 hours. Several things occurred that under normal circumstances would have had me dashing off a post while waiting for flights or preparing to check out of my hotel—and there were definitely several comments that had me reaching for a phantom keyboard—but I was without my laptop, thanks to leaving the power cord behind at my previous hotel.

So I have a little story to tell. I stayed at a decent Boston hotel last night, not a 4-star hotel like the one I just left in Atlanta (The Four Seasons), but a nice one, professionally run, dependable. Yet this morning this was my wake-up call, via recording:

“It’s 7 AM. This is your wake-up call for March 8, 2018.”

Almost at the same time, David Hogg was on CNN, explaining how darned easy it was to create a system that would prevent school shootings forevermore.

Wrong. Systems break down, you experience-free, arrogant, disrespectful, know-nothing puppet. The belief that human beings can devise systems that will solve every problem, or any problem, and will do what they are designed to do without failing miserably at the worst possible times and in the worst imaginable ways is signature significance for a fool, or a child. O-rings fail. Police don’t act on warnings that a kid is violent. Obamacare raises health care premiums. Political parties end up nominating Hillary Clinton and Donald Trump. Jack Ruby breaks past police security. Communism ends up killing hundreds of millions rather than creating a workers’ paradise. The Titanic hits the wrong iceberg exactly where it’s weakest. Hitler takes a sleeping pill during the Normandy invasion.

The T-Rex gets loose. Continue reading


Filed under Business & Commercial, Character, Childhood and children, Ethics Alarms Award Nominee, Ethics Train Wrecks, Government & Politics, Marketing and Advertising, War and the Military, Workplace

Comment Of The Day: “Wait, WHAT? NOW They Tell There Are “Two Big Flaws” in Every Computer?”

The comments on this post about the sudden discovery that every computer extant was vulnerable to hacking thanks to two 20-year-old “flaws” were so detailed, informative and excellent that I had the unenviable choice of posting one representative Comment of the Day, or eight. Having just posted eight COTDs on another post last weekend, I opted for one, but anyone interested in the topic—or in need of education about the issues involved— should go to the original post and read all the comments. Forget the post itself—the comments are better.

Here is Extradimensional Cephalopod‘s Comment of the Day on the post, Wait, WHAT? NOW They Tell There Are “Two Big Flaws” in Every Computer?

This is not likely to be a popular opinion among professional programmers, but I feel it needs to be said.

The excuse that computers are complex and that testing to remove all of these flaws would take a prohibitive amount of time just doesn’t hold water. I understand that security vulnerabilities are different from outright bugs: security vulnerabilities are only problems because people deliberately manipulate the system in unanticipated ways. Bugs happen when people inadvertently manipulate the system in unanticipated ways. Some of these ways are incredibly sophisticated and may be infeasible to anticipate. However, having supported computers for the past few years, I’ve seen bugs that should have been anticipated, and anticipating them would have required no testing at all.

The problem with testing is that the people doing it usually either understand the software well enough to know how it is supposed to work, or are given a few basic things to try; either way, they don’t have time to test the program under heavy use. Luckily, testing is not the problem.

The problem is that in many cases I’ve seen (and I’ve come to suspect most cases across the software industry) the input and output footprints of code modules are not documented (and if your code contains comments laying out the pseudocode structure, I consider you very lucky). From an engineering standpoint, the input footprint of a system or subsystem describes the conditions the system assumes to be true in order to work effectively. The output footprint describes what effects (including side-effects) the system has or could have on its environment, including if the input footprint is not fulfilled. Those aren’t the official names; I’ve just been calling them that. Continue reading
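The “footprints” described above correspond roughly to the preconditions and effects of design-by-contract. A minimal sketch in Python of what such documentation might look like (the function and its names are illustrative, not taken from the comment):

```python
def transfer(accounts, src, dst, amount):
    """Move `amount` from accounts[src] to accounts[dst].

    Input footprint (conditions assumed true):
      - src and dst are existing keys in `accounts`
      - amount is positive and no greater than accounts[src]
    Output footprint (effects, including on violated assumptions):
      - accounts[src] decreases and accounts[dst] increases by `amount`
      - if an assumption fails, raises ValueError and changes nothing
    """
    if src not in accounts or dst not in accounts:
        raise ValueError("unknown account")
    if not (0 < amount <= accounts[src]):
        raise ValueError("invalid amount")
    accounts[src] -= amount
    accounts[dst] += amount
```

The point of writing both footprints down is that a maintainer (or tester) can check callers against the stated assumptions instead of reverse-engineering them from the code.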


Filed under Comment of the Day, Ethics Alarms Award Nominee, Science & Technology

Wait, WHAT? NOW They Tell There Are “Two Big Flaws” in Every Computer?

(That’s Meltdown on the left, Spectre on the right.)

From the New York Times:

Computer security experts have discovered two major security flaws in the microprocessors inside nearly all of the world’s computers. The two problems, called Meltdown and Spectre, could allow hackers to steal the entire memory contents of computers, including mobile devices, personal computers and servers running in so-called cloud computer networks.

There is no easy fix for Spectre, which could require redesigning the processors, according to researchers. As for Meltdown, the software patch needed to fix the issue could slow down computers by as much as 30 percent — an ugly situation for people used to fast downloads from their favorite online services. “What actually happens with these flaws is different and what you do about them is different,” said Paul Kocher, a researcher who was an integral member of a team of researchers at big tech companies like Google and Rambus and in academia that discovered the flaws.

Meltdown is a particular problem for the cloud computing services run by the likes of Amazon, Google and Microsoft. By Wednesday evening, Google and Microsoft said they had updated their systems to deal with the flaw.

Here’s the best part:

Amazon told customers of its Amazon Web Services cloud service that the vulnerability “has existed for more than 20 years in modern processor architectures.”

We trust the tech giants and computer manufacturers to give us secure devices. We then entrust our businesses and lives to these devices.

That there were such massive “flaws” in every computer, and that it took 20 years for those whom we trusted to discover them, is an unprecedented breach of competence, trust, and responsibility. Imagine auto manufacturers announcing that every car in the world had a “flaw” that might cause a fatal crash. I see no difference ethically.

And why is this story buried in the Times’ Business Section, and not on the front page, not just of the Times, but of every newspaper?



Filed under Around the World, Business & Commercial, Ethics Alarms Award Nominee, Journalism & Media, Science & Technology

The Unabomber, The Red Light, And Me [UPDATED!]

I ran a red light last night, and I’m feeling bad about it. Ted Kaczynski made me do it.

It was after midnight, and I was returning home after seeing the pre-Broadway production of the musical “Mean Girls,” based on the cult Lindsay Lohan comedy. I was late, my phone was dead, I knew my wife would be worried, and I was stopped at an intersection where I could see for many football fields in all directions. There were no cars to be seen anywhere.

Ted, aka “The Unabomber” or “Snookums” to his friends, cited my exact situation as an example of how we have become slaves to our technology. Why do we waste moments of our limited lifespan at a red light when there is no reason to be stopped other than that the signal says so? Admittedly, this had bothered me before I read Ted’s complaint. Stop lights should start blinking by midnight, allowing a motorist to proceed with caution, as with a stop sign. If one isn’t blinking, we should be allowed to treat it as if it is.

Last night, I ran the light. With my luck, there was a camera at the intersection, and I’ll get a ticket in the mail. But…

…whether I do or not doesn’t change the ethical or unethical character of my conduct. That’s just moral luck.

…it was still against the law to run the light, even if I was treating it as a blinking light, because it wasn’t,

…breaking the law is unethical, even when the law is stupid, and

…there was no legitimate emergency that could justify my running the light as a utilitarian act.

So I feel guilty. Not guilty enough to turn myself in, but still guilty, since I am guilty.

But Ted wasn’t wrong.

Update: Let me add this, which occurred to me in the shower.

On several occasions in the past, I have found myself stopped by a malfunctioning light that appeared determined to stay red forever. Is it ethical to go through the light then? The alternative, theoretically, is being stuck for the rest of my life. So we run such lights, on the theory that the frozen stop light is not meeting the intent of the law or the authorities who placed it there, and that remaining servile to the light under such circumstances is unreasonable. Yet running it is still breaking the law, and isn’t stopping for a light in the dead of night, with no cars to be seen, also inconsistent with the intent of the law and the light? What’s the distinction?


Filed under Citizenship, Daily Life, Government & Politics, Law & Law Enforcement, Science & Technology, U.S. Society

When “Ick!” Strikes Out Ethics: The Intensifying Robo-Umpire Controversy

[I see that I last wrote about this issue in April, and before that in June of 2016, and in 2012 before that. Well, it’s worth writing about again, and again, until ethics and common sense prevail.]

This weekend, Major League umpires held a silent protest, wearing armbands in support of colleague Angel Hernandez, whose competence was publicly questioned by Detroit Tigers player Ian Kinsler. In fact, Angel Hernandez is a terrible umpire, and terrible (indeed, even mildly fallible) umpires have a problem now that they never had to worry about in the good old days: their mistakes are obvious, and recorded for all to see.

Yesterday, Red Sox color man and former player Jerry Remy was reminiscing during the Red Sox-Yankees game broadcast about one of his few home runs. He said he had struck out, missing with his third swing by almost a foot, and was walking back to the dugout when the umpire called him back, saying he had foul-tipped the ball. “I know that was wrong, but I’m not going to argue I’m out when the ump says I’m not,” Remy said. He went back to the plate, and on the next pitch hit a home run. “Of course, they didn’t have replay then,” Jerry added.

Before every game was televised and before technology could show where each pitch crossed the plate, balls and strikes were called definitively by umpires, many of whom proudly had their own strike zones. “As long as they are consistent with it” was the rationalization you heard from players and managers. It was, however, a travesty. The strike zone isn’t a judgment call; it is defined, very specifically, in the rules. A pitch is either within the legal zone or it is not. A strike that is called a ball, or vice-versa, is simply a wrong call, and any time it happens it can affect the outcome of the at-bat and the game. If you watch a lot of baseball, you know that we are not just talking about strikeouts and walks. The on-base average when a batter is facing a 2-1 count, as opposed to a 1-2 count, is significantly higher. A wrongly called third pitch can change the result of the at-bat dramatically.
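Because the rulebook zone is geometric, the call reduces to a containment test. A simplified sketch (the 17-inch plate width is from the rules; the vertical limits are per-batter inputs, the ball radius is approximate, and real pitch-tracking systems model the full trajectory rather than a single point):

```python
PLATE_HALF_WIDTH = 17.0 / 2  # home plate is 17 inches wide (rulebook)
BALL_RADIUS = 1.45           # approximate radius of a baseball, in inches

def is_strike(x, z, zone_bottom, zone_top):
    """True if any part of the ball passes through the zone.

    x: horizontal distance of the ball's center from plate center (inches)
    z: height of the ball's center as it crosses the plate (inches)
    zone_bottom, zone_top: this batter's vertical zone limits (inches)
    """
    over_plate = abs(x) <= PLATE_HALF_WIDTH + BALL_RADIUS
    in_height = zone_bottom - BALL_RADIUS <= z <= zone_top + BALL_RADIUS
    return over_plate and in_height
```

The check is deterministic: given the same measured pitch location and the same batter's zone, it returns the same answer every time, which is exactly what no human umpire can promise.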

Since the technology is available to call strikes correctly 100% of the time, why isn’t the technology being used? Actually it is being used, in TV broadcasts. The fan can see exactly when the umpire misses a call, and the broadcasters talk about it all the time. “Where was that?” “That was a gift!”  “Wow, the pitcher was squeezed on that one.” Once, a missed call in a game was virtually undetectable, because one could assume that the umpire had a better and closer view than any fan or broadcaster could have. Now, there is no doubt.

Yet the players, sportswriters and broadcasters still overwhelmingly argue against the use of computer technology to call balls and strikes. It’s amazing. They know, and admit, that mistaken ball and strike calls warp game results; they complain about it when it happens, point it out, run the graphics repeatedly to show how badly a crucial call was botched, and yet argue that a completely fixable problem, with massive implications for the players, the games and the seasons, should be allowed to persist.

These are the rationalizations and desperate arguments they advance: Continue reading


Filed under Science & Technology, Sports

The Unabomber Had A Point. [UPDATED]

FX has a new limited series about the hunt for the Unabomber, Theodore John Kaczynski. I didn’t pay much attention to the story when it was going on; I just thought it was one more Harvard-grad-turns-serial-killer episode, and that was that. I certainly didn’t pay attention to his “manifesto.” The series, however, enlightened me. As I understand it, Ted believed that technology was destroying society, making us all slaves to it, and taking the joy out of life. I have yet to see how blowing people up addressed this problem, but then he doesn’t have to be right about everything. The evidence has been mounting since 1995, when he killed his final victim, that the Unabomber wasn’t quite as crazy as we thought.

I could bury you in links, but will not. We are slaves, for example, to passwords. I teach lawyers that their devices containing client confidences should, to be properly protective under ethics standards, have passwords of at least 18 random letters, characters and numbers, with the password for every such device being different, and all of them changed every month. Or you can go the John Podesta route: use “password,” get hacked, and eventually be disciplined by your bar association, once they decide to get serious.
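For what it’s worth, generating an 18-character random password of the sort described takes three lines with Python’s standard `secrets` module, which is designed for cryptographic randomness (this is a minimal sketch, not a substitute for a proper password manager):

```python
import secrets
import string

# Letters, digits, and punctuation, per the "letters, characters
# and numbers" advice above.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def make_password(length=18):
    """Return a random password drawn uniformly from ALPHABET."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

The catch, of course, is the one Ted would point out: nobody can memorize a dozen of these, which is why the passwords end up in yet another piece of software we then have to trust.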

[CORRECTION: In the original post, I relayed a link to a site where you can check your password to see if it’s been compromised. I had been forwarded the link by another tech-interested lawyer. But as a commenter just alerted me (Thank you, Brian!), it’s a potential trap and an unethical site, making you reveal your password in order to check it. I apologize for posting it. See how dangerous and tricky this stuff is? See? SEE? I fell for the trap of depending on technology to protect us from technology! Ted warned us about that, too.]
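For the record, there is a way to check a password against breach corpora without revealing it: the k-anonymity scheme used by Have I Been Pwned’s Pwned Passwords API, in which only the first five hex characters of the password’s SHA-1 hash are sent, and the matching hash suffixes come back for local comparison. A sketch of the client-side half (the network call itself is omitted):

```python
import hashlib

def hibp_query_parts(password):
    """Split SHA-1(password) into the 5-character prefix that would be
    sent to the API and the 35-character suffix compared locally
    against the returned list. The password never leaves the machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

# The prefix would be sent to https://api.pwnedpasswords.com/range/<prefix>;
# if the suffix appears in the response, the password is known-breached.
```

Even so, the lesson of the correction stands: before trusting any such checker, you have to understand *why* it is safe, which is its own form of servitude to the technology.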

Then there is this feature in The Atlantic. An excerpt: Continue reading


Filed under Business & Commercial, Childhood and children, Daily Life, Finance, Science & Technology, The Internet, U.S. Society

THREE Comments Of The Day (Really Useful Ones): “Tech Dirt’s Mike Masnick On The Internet Privacy Bill”

There were not one but three excellent, informative, detailed comments, one after the other, in response to the post about the GOP’s elimination of the recent Obama FCC regulations of Big Data gathering by broadband providers. Technology competence is, I believe, the greatest looming ethics issue for the professions, and it is important for the general public as well. All three of these Comments of the Day are educational. If only the news media and elected officials were as well-informed as Alex, John Billingsley, and Slick Willy.

I am very proud of the level of the discourse on Ethics Alarms, and these three Comments of the Day on the post Ethics Quote Of The Month: Tech Dirt’s Mike Masnick On The Internet Privacy Bill are prime examples.

First, here’s slickwilly:

How to be safe with electronic data

First rule: anything online is vulnerable, no matter who secures it. It follows that any computer/device connected online is also vulnerable.

Second rule: Public WiFi is hackable, and hacking it is not that difficult; someone just has to want to. Using it for playing games could make you vulnerable, and using it to access your financial information (banks, brokers, etc.) is stoopid.

Third rule: Anything you do electronically is forever. Any tweet, Snapchat, Facebook post, cell phone text or conversation, email, web post, browsing activity, and anything else may be saved by someone. Some of those are harder to get than others: browsing activity takes a snooper on the data line, or a court order to set a snooper up at your ISP. For instance, all cell phone activity is now saved by the NSA, including where the phone was, and when. No, no one looks at it, not until they have a reason to research a person, perhaps years later. ‘Smart’ TVs can record you in your own home, without your knowledge, unless you take steps to stop it (electrical tape over cameras/microphones is a start, but still not enough).

Fourth rule: Any public activity can be recorded today. Besides CCTV cameras everywhere and license plate readers on many roads, facial metrics can track you in most urban and many rural areas. Even going into the desert or mountains, you could be spotted via satellite, should the motivation be enough to look your way.

So don’t leave your computer connected to the Internet 24/7 (a power strip that stops electricity from reaching the computer helps cut connectivity when ‘off’), do nullify the ability of other devices to spy on you in your home, and never say anything electronically you do not want going public. Use complex passwords, and never the same for multiple sites. Password safes are better than written notes (and Apple Notes are silly to use for this.) How much you protect yourself depends on your level of paranoia.

Do you have something to hide? A secret you would rather not be made public? Do not document it electronically! Or use the method below.

Now, how to be safe with electronic information: Place it exclusively on an air-gapped (no network connection at all) computer. Place that computer in a heavy steel safe. Encase that safe in concrete, take it out to a deep ocean trench, and drop it overboard. Forget the coordinates where you dropped it.

The point is, nothing is fool-proof.

You can take steps to lower the probability that your information gets out, but even paper and quill pen were only as good as the physical security the document was kept under. Learn some simple steps and you will remove yourself from the radar of most predators. People are careless, apathetic, and just plain dumb, so anything you do helps keep you safer.

I keep such information on a secure, encrypted flash drive that is not left sitting in a computer USB slot. Could someone break the encryption, should they find the drive and wish to spend the effort? Sure. But if they want me that badly they will get me, one way or another. Why would they? I do not have any deep dark secrets or hidden crimes in my past. Even so, why should my business be available for anyone to browse through?

Your mileage may vary, but for me, doing nothing would be a breach of my responsibilities to my family.

Now, John Billingsley’s contribution:

Continue reading


Filed under Business & Commercial, Comment of the Day, Ethics Alarms Award Nominee, Health and Medicine, Professions, Rights, Science & Technology, The Internet