Tag Archives: computers

Morning Ethics Warm-Up, 4/3/2018: Hypocrisy, Exploitation, Fake Definitions And Fake News

Good Morning…

…and believe me, it takes a super-human effort for me to say that right now…

1.  Good. Rep. Esty is not running for re-election. We discussed her hypocrisy in a post two days ago. Now she says, “Too many women have been harmed by harassment in the workplace. In the terrible situation in my office, I could have and should have done better.” This would have been a meaningful and productive statement if she hadn’t previously insisted that she handled the matter correctly and refused to be accountable. She did insist exactly that, however, and mouthing platitudes now should not alter the verdict that she was a cynical and grandstanding #MeToo performer who, when the time came to act according to the standards she was demanding of others, failed miserably.

2. Anybody know of an ethical computer protection service? I now have two ghost services torturing me with pop-up ads, slowing down my computer, and generally behaving like a virus because I cancelled them. When I cancel a service I allowed onto my computer, I expect them to say good-bye and leave. I do not recall agreeing in my original contracts that “the undersigned hereby agrees that if for any reason he chooses to end his relationship with ____________, the service will continue to hound him with warnings, special offers, unrequested scans and other harassment until he dies or throws his computer out the window.”

The two companies at issue are AVG and McAfee. I will chew off my foot before I engage either of them again.

3. Big Brother’s way of winning a debate: change the meaning of the terms so you can’t lose.  After the repeated misuses of the term “assault rifle” as a disinformation and fear-mongering tactic by the anti-gun mob were flagged by Second Amendment supporters, to the embarrassment of the zealots, Merriam-Webster rode to the rescue, changing its online dictionary entry for the term so its ignorant ideological allies could now cite authority:

On March 31, 2018, the following definition was published:

noun: any of various intermediate-range, magazine-fed military rifles (such as the AK-47) that can be set for automatic or semiautomatic fire; also a rifle that resembles a military assault rifle but is designed to allow only semiautomatic fire

Translation: “This is what the term really means, but it also means what ignorant politicians, journalists and activists erroneously refer to as the same thing even though it’s not, because we support them and this will make it easier for them to mislead others without looking dishonest and foolish.”

[UPDATE: There is some question of whether that definition was added before or after Parkland. Reader Steve Langton reports that he read the current version a couple of days after the shooting.]

Continue reading


The Wake-Up Call And The Power Cord

As you may have noticed, your host has been involuntarily separated from Ethics Alarms for about 24 hours. Several things occurred that under normal circumstances would have had me dashing off a post while waiting for flights or preparing to check out of my hotel—and there were definitely several comments that had me reaching for a phantom keyboard—but I was without a usable laptop, thanks to leaving the power cord behind in my previous hotel.

So I have a little story to tell. I stayed at a decent Boston hotel last night, not a four-star hotel like the one I just left in Atlanta (The Four Seasons), but a nice one, professionally run, dependable. Yet this morning, this was my wake-up call, via recording:

“It’s 7 AM. This is your wake-up call for March 8, 2018.”

Almost at the same time, David Hogg was on CNN, explaining how darned easy it was to create a system that would prevent school shootings forevermore.

Wrong. Systems break down, you experience-free, arrogant, disrespectful, know-nothing puppet.  The belief that human beings can devise systems that will solve every problem, or any problem, and do what they are designed to do without failing miserably at the worst possible times and in the worst imaginable ways is signature significance for a fool, or a child. O-Rings fail. Police don’t act on warnings that a kid is violent. Obamacare raises health care premiums.  Political parties end up nominating Hillary Clinton and Donald Trump. Jack Ruby breaks past police security. Communism ends up killing hundreds of millions rather than creating a worker paradise. The Titanic hits the wrong iceberg exactly where it’s weakest. Hitler takes a sleeping pill during the Normandy invasion.

The T-Rex gets loose. Continue reading


Comment Of The Day: “Wait, WHAT? NOW They Tell There Are “Two Big Flaws” in Every Computer?”

The comments on this post about the sudden discovery that every computer extant was vulnerable to hacking thanks to two 20-year-old “flaws” were so detailed, informative and excellent that I had the unenviable choice of posting one representative Comment of the Day, or eight. Having just posted eight COTDs on another post last weekend, I opted for one, but anyone interested in the topic—or in need of education about the issues involved— should go to the original post and read all the comments. Forget the post itself—the comments are better.

Here is Extradimensional Cephalopod‘s Comment of the Day on the post, Wait, WHAT? NOW They Tell There Are “Two Big Flaws” in Every Computer?

This is not likely to be a popular opinion among professional programmers, but I feel it needs to be said.

The excuse that computers are complex and that testing to remove all of these flaws would take a prohibitive amount of time just doesn’t hold water. I understand that security vulnerabilities are different from outright bugs: security vulnerabilities are only problems because people deliberately manipulate the system in unanticipated ways. Bugs happen when people inadvertently manipulate the system in unanticipated ways. Some of these ways are incredibly sophisticated and may be infeasible to anticipate. However, having supported computers for the past few years, I’ve seen bugs that should have been anticipated, and anticipating them would have required zero testing.

The problem with testing is that the people testing usually understand the software well enough to know how it is supposed to work, or they are given a few basic things to try, but they don’t have time to test a program with heavy use. Luckily, testing is not the problem.

The problem is that in many cases I’ve seen (and I’ve come to suspect most cases across the software industry) the input and output footprints of code modules are not documented (and if your code contains comments laying out the pseudocode structure, I consider you very lucky). From an engineering standpoint, the input footprint of a system or subsystem describes the conditions the system assumes to be true in order to work effectively. The output footprint describes what effects (including side-effects) the system has or could have on its environment, including if the input footprint is not fulfilled. Those aren’t the official names; I’ve just been calling them that. Continue reading
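[A concrete picture may help here. Below is a minimal sketch in Python of what documenting and checking a module’s input and output footprints might look like; the function and its names are hypothetical, invented for illustration rather than taken from the comment.]

```python
def apply_discount(order: dict, rate: float) -> dict:
    """Apply a percentage discount to an order.

    Input footprint (conditions this code assumes to be true):
      - order carries a numeric, non-negative "total" entry
      - rate is between 0.0 and 1.0
    Output footprint (effects, including side effects):
      - returns a NEW dict with "total" reduced by rate; the caller's dict is untouched
      - raises ValueError if the input footprint is not met
    """
    if "total" not in order or order["total"] < 0:
        raise ValueError("order must carry a non-negative 'total'")
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0.0 and 1.0")
    discounted = dict(order)  # shallow copy, so the caller's order is not modified
    discounted["total"] = order["total"] * (1.0 - rate)
    return discounted
```

When the assumptions and effects are written down this way, a tester (or the next programmer) does not have to reverse-engineer what the module expects before probing what happens when those expectations are violated.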


Wait, WHAT? NOW They Tell There Are “Two Big Flaws” in Every Computer?

(That’s Meltdown on the left, Spectre on the right.)

From the New York Times:

Computer security experts have discovered two major security flaws in the microprocessors inside nearly all of the world’s computers. The two problems, called Meltdown and Spectre, could allow hackers to steal the entire memory contents of computers, including mobile devices, personal computers and servers running in so-called cloud computer networks.

There is no easy fix for Spectre, which could require redesigning the processors, according to researchers. As for Meltdown, the software patch needed to fix the issue could slow down computers by as much as 30 percent — an ugly situation for people used to fast downloads from their favorite online services. “What actually happens with these flaws is different and what you do about them is different,” said Paul Kocher, a researcher who was an integral member of a team of researchers at big tech companies like Google and Rambus and in academia that discovered the flaws.

Meltdown is a particular problem for the cloud computing services run by the likes of Amazon, Google and Microsoft. By Wednesday evening, Google and Microsoft said they had updated their systems to deal with the flaw.

Here’s the best part:

Amazon told customers of its Amazon Web Services cloud service that the vulnerability “has existed for more than 20 years in modern processor architectures.”

We trust the tech giants and computer manufacturers to give us secure devices. We then entrust our businesses and lives to these devices.

That there were such massive “flaws” in every computer, and that it took 20 years for those whom we trusted to discover them, is an unprecedented breach of competence, trust and responsibility. Imagine auto manufacturers announcing that every car in the world had a “flaw” that might cause a fatal crash. I see no difference ethically.

And why is this story buried in the Times’ Business Section, and not on the front page, not just of the Times, but of every newspaper?

 


The Unabomber, The Red Light, And Me [UPDATED!]

I ran a red light last night, and I’m feeling bad about it. Ted Kaczynski made me do it.

It was after midnight, and I was returning home after seeing the pre-Broadway production of the musical “Mean Girls,” based on the cult Lindsay Lohan comedy. I was late, my phone was dead, I knew my wife would be worried, and I was stopped at an intersection where I could see for many football fields in all directions. There were no cars to be seen anywhere.

Ted, aka “The Unabomber” or “Snookums” to his friends, cited my exact situation as an example of how we have become slaves to our technology. Why do we waste moments of our limited lifespan because of a red light, when there is no reason to be stopped other than because the signal says to? Admittedly, this had bothered me before I read Ted’s complaint. Stop lights should start blinking by midnight, allowing a motorist to proceed with caution, as with a stop sign. If one isn’t blinking, we should be allowed to treat it as if it is.

Last night, I ran the light. With my luck, there was a camera at the intersection, and I’ll get a ticket in the mail. But…

…whether I do or not doesn’t change the ethical or unethical character of my conduct. That’s just moral luck.

…it was still against the law to run the light, even if I was treating it as a blinking light, because it wasn’t,

…breaking the law is unethical, even when the law is stupid, and

…there was no legitimate emergency that could justify my running the light as a utilitarian act.

So I feel guilty. Not guilty enough to turn myself in, but still guilty, since I am guilty.

But Ted wasn’t wrong.

Update: Let me add this thought, which came to me in the shower.

On several occasions in the past, I have found myself stopped by a malfunctioning light that appeared to be determined to stay red forever. Is it ethical to go through the light then? The alternative is, theoretically, being stuck there for the rest of my life. So we run such lights, on the theory that the frozen stop light is not meeting the intent of the law or the authorities who placed it there, and that remaining servile to the light under such circumstances is unreasonable. Yet running it is still breaking the law, and isn’t stopping for a light in the dead of night, with no cars to be seen, also inconsistent with the intent of the law and the light? What’s the distinction?


When “Ick!” Strikes Out Ethics: The Intensifying Robo-Umpire Controversy

[I see that I last wrote about this issue in April, and before that in June of 2016, and in 2012 before that. Well, it’s worth writing about again, and again, until ethics and common sense prevail.]

This weekend, Major League umpires held a silent protest, wearing armbands in support of colleague Angel Hernandez, whose competence was publicly questioned by Detroit Tigers player Ian Kinsler. In fact, Angel Hernandez is a terrible umpire, and terrible umpires (indeed, even mildly fallible ones) have a problem now that they never had to worry about in the good old days: their mistakes are obvious and recorded for all to see.

Yesterday Red Sox color man and former player Jerry Remy was reminiscing during the Red Sox-Yankees game broadcast about one of his few home runs. He said he had struck out, missing with his third swing by almost a foot, and was walking back to the dugout when the umpire called him back, saying he had foul-tipped the ball. “I know that was wrong, but I’m not going to argue I’m out when the ump says I’m not,” Remy said. He went back to the plate, and on the next pitch hit a home run. “Of course, they didn’t have replay then,” Jerry added.

Before every game was televised and before technology could show where each pitch crossed the plate, balls and strikes were called definitively by umpires, many of whom proudly had their own strike zones. “As long as they are consistent with it” was the rationalization you heard from players and managers. It was, however, a travesty. The strike zone isn’t a judgment call; it is defined, very specifically, in the rules. A pitch is either within the legal zone or it is not. A strike that is called a ball, or a ball that is called a strike, is simply a wrong call, and any time it happens it can affect the outcome of the at-bat and the game. If you watch a lot of baseball, you know that we are not just talking about strikeouts and walks. The on-base average when a batter is facing a two-ball, one-strike count as opposed to a 1-2 count is significantly higher. A wrongly called third pitch can change the result of the at-bat dramatically.
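[To show how mechanical the determination is, here is a toy sketch in Python. The plate width is real (17 inches), but the zone limits and pitch coordinates are invented for illustration; actual pitch-tracking systems, and the rule book’s batter-specific zone, are far more involved.]

```python
# Toy illustration only; not an actual pitch-tracking system.
PLATE_HALF_WIDTH_FT = 8.5 / 12  # home plate is 17 inches wide, measured from its center

def is_strike(horizontal_ft, height_ft, zone_bottom_ft, zone_top_ft):
    """A pitch is a strike if it crosses over the plate inside the defined zone."""
    over_plate = abs(horizontal_ft) <= PLATE_HALF_WIDTH_FT
    within_height = zone_bottom_ft <= height_ft <= zone_top_ft
    return over_plate and within_height

print(is_strike(0.2, 2.5, zone_bottom_ft=1.6, zone_top_ft=3.4))  # True: middle of the zone
print(is_strike(1.1, 2.5, zone_bottom_ft=1.6, zone_top_ft=3.4))  # False: several inches outside
```

The point is simply that, given the measured location of the pitch and the defined zone, the call is a yes-or-no computation, not an opinion.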

Since the technology is available to call strikes correctly 100% of the time, why isn’t the technology being used? Actually it is being used, in TV broadcasts. The fan can see exactly when the umpire misses a call, and the broadcasters talk about it all the time. “Where was that?” “That was a gift!”  “Wow, the pitcher was squeezed on that one.” Once, a missed call in a game was virtually undetectable, because one could assume that the umpire had a better and closer view than any fan or broadcaster could have. Now, there is no doubt.

Yet the players, sportswriters and broadcasters still overwhelmingly argue against the use of computer technology to call balls and strikes. It’s amazing. They know, and admit, that mistaken ball and strike calls warp game results; they complain about it when it happens, point it out, run the graphics repeatedly to show how badly a crucial call was botched, and yet argue that a completely fixable problem with massive implications for the players, the games and the seasons should be allowed to persist.

These are the rationalizations and desperate arguments they advance: Continue reading


The Unabomber Had A Point. [UPDATED]

FX has a new limited series about the hunt for the Unabomber, Theodore John Kaczynski. I didn’t pay much attention to the story when it was going on; I just thought it was one more Harvard-grad-turns-serial-killer episode, and that was that. I certainly didn’t pay attention to his “manifesto.” The series, however, enlightened me. As I understand it, Ted believed that technology was destroying society, making us all slaves to it, and taking the joy out of life. I have yet to see how blowing people up addressed this problem, but then, he doesn’t have to be right about everything. The evidence has been mounting since 1995, when he killed his final victim, that the Unabomber wasn’t quite as crazy as we thought.

I could bury you in links, but will not. We are slaves, for example, to passwords. I teach lawyers that their devices containing client confidences should, to be properly protective of them under ethics standards, have passwords of at least 18 random letters, characters and numbers, with the password for every such device being different, and all of them changed every month. Or you can go the John Podesta route, use “password,” get hacked, and eventually be disciplined by your bar association, once they decide to get serious.
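[For the curious: here is a minimal sketch, using Python’s standard secrets module, of generating the kind of long random password described above. The 18-character length and mixed character set reflect the recommendation in this post, not any official bar standard.]

```python
import secrets
import string

def random_password(length: int = 18) -> str:
    """Return a password of random letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # different every run, e.g. 'k#9Qz$...' (illustrative)
```

Of course, nobody can remember eighteen of these per device, freshly scrambled every month, which is rather Ted’s point.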

[CORRECTION: In the original post, I relayed a link to a site where you can check your password to see if it’s been compromised. I had been forwarded the link by another tech-interested lawyer. But as I was just alerted by a commenter (Thank you, Brian!), it’s a potential trap and an unethical site, making you reveal your password in order to check it. I apologize for posting it. See how dangerous and tricky this stuff is? See? SEE? I fell for the trap of depending on technology to protect us from technology! Ted warned us about that, too.]

Then there is this feature in The Atlantic. An excerpt: Continue reading
