Tag Archives: computers

Comment Of The Day: “Look! Computer Professionals Have An Ethics Code!”

There were eight comments on the July 18 post about the Association for Computing Machinery (ACM) ethics code, and four of them were Comment Of The Day-worthy. In addition to Alex’s comment honored here, I highly recommend the related comments by Glenn Logan, mariedowd, and Windypundit.

This is an Ethics Alarms record, and speaks volumes about the quality of commentary here.

This is Alex’s Comment of the Day on the post, Look! Computer Professionals Have An Ethics Code!

As a member of the ACM for the past 18 years, I did review earlier drafts and submitted comments. I was especially critical of the vagueness, but in general welcomed the update, as the old code had become quite outdated.

I did not think about the enforcement mechanism, but that is because I still don’t see Software Engineering/Programming as a profession. This has been a very contentious point for years. On the one hand, “hackers” (I use this in the original sense of the word, as it describes a very common ethos in the occupation) are terribly skeptical of any authority, and pride themselves that you can become a proficient programmer without formal training. Funny enough, programmers subscribing to this point of view are very supportive of apprenticeships and mentoring… go figure.

On the other hand, corporations will *strongly* resist any sort of licensing, and use the current, informal certification system as a first filter only. Formal requirements would make software engineers more expensive and possibly lead to some system to deal with liability. Much better to keep the current system, with the ability to outsource to the lowest bidder. Continue reading

4 Comments

Filed under Comment of the Day, Education, Ethics Alarms Award Nominee, Professions, Science & Technology, U.S. Society

Look! Computer Professionals Have An Ethics Code!

A new Code of Ethics was recently released by the Association for Computing Machinery (ACM), a professional organization for programmers and technology companies that has aimed to set the tone for ethics in the industry for decades. Its previous ethics code was last updated in 1992, before social media, e-commerce, widespread GPS tracking, the epidemic of network hacking, bots, trolls, artificial intelligence, and the proliferation of wired cameras on store fronts, house entryways, and family cars, just to name a few of the ominous new developments that have made expanding technology the single greatest ethical challenge in the history of mankind. Most professional codes of ethics have not kept pace with technology, but for a computer organization to be so far behind is embarrassing.

The ACM committee surveyed the international association’s approximately 100,000 members as part of its process. The result is a list of principles and guidelines rather than rules: there is no enforcement mechanism. Nor is there any way to force members to read the thing, much less use it. I’ll say this: the code is ambitious. For example, the Code addresses “The Terminator’s” Skynet scenario, urging members to take “extraordinary” care to avoid the perils of artificial intelligence, and robots that learn from experience and modify their own actions without the need for re-programming by a human being.

The new code addresses the Big Data ethics issue, and holds that tech companies should collect only the minimum amount of personal information necessary for a task, protect it from unauthorized use, and give users the opportunity to give informed consent regarding their data’s use. This and other provisions in the Code I would mark as “aspirational,” or perhaps “cover” or even “pie in the sky.” Without enforcement, such “rules” amount to lip service at best, deception at worst.

As with most ethics codes, this one indulges in convenient vagaries that purport to give guidance, but really don’t. For example, the Code’s “first principle” states that the primary obligation of all computer professionals is “to use their skills for the benefit of society, its members, and the environment surrounding them.” And who determines THAT, pray tell? The technicians who made Skynet thought that it would be a boon to humanity, and it ended up destroying humanity. “Benefits” is the most subjective of concepts. Similarly, the code exhorts the technical community to mitigate the negative effects of technologies they are responsible for, and if that can’t be done, perhaps even to refrain from marketing some products.

Sure.

To help companies and tech workers apply the ethical code’s principles, ACM is launching an “Integrity Project,” which will produce case studies about particular ethical dilemmas, and an “Ask an Ethicist” advice column.

I’m available.

 

8 Comments

Filed under Business & Commercial, Science & Technology, The Internet, U.S. Society, Workplace

Morning Ethics Warm-Up, 4/3/2018: Hypocrisy, Exploitation, Fake Definitions And Fake News

Good Morning…

…and believe me, it takes a super-human effort for me to say that right now…

1. Good. Rep. Esty is not running for re-election. We discussed her hypocrisy in a post two days ago. Now she says, “Too many women have been harmed by harassment in the workplace. In the terrible situation in my office, I could have and should have done better.” This would have been a meaningful and productive statement if she hadn’t previously insisted that she handled the matter correctly and refused to be accountable. She did insist on that, however, and mouthing platitudes now should not alter the verdict that she was a cynical and grandstanding #MeToo performer who, when the time came to act according to the standards she was demanding of others, failed miserably.

2. Anybody know of an ethical computer protection service? I now have two ghost services torturing me with pop-up ads, slowing down my computer, and generally behaving like a virus because I cancelled them. When I cancel a service I allowed onto my computer, I expect them to say good-bye and leave. I do not recall agreeing in my original contracts that “the undersigned hereby agrees that if for any reason he chooses to end his relationship with ____________, the service will continue to hound him with warnings, special offers, unrequested scans and other harassment until he dies or throws his computer out the window.”

The two companies at issue are AVG and McAfee. I will chew off my foot before I engage either of them again.

3. Big Brother’s way of winning a debate: change the meaning of the terms so you can’t lose. After the repeated misuses of the term “assault rifle” as a disinformation and fear-mongering tactic by the anti-gun mob were flagged by Second Amendment supporters, to the embarrassment of the zealots, Merriam-Webster rode to the rescue, changing its online dictionary entry for the term so its ignorant ideological allies could now cite authority:

On March 31, 2018, the following definition was published:

noun: any of various intermediate-range, magazine-fed military rifles (such as the AK-47) that can be set for automatic or semiautomatic fire; also a rifle that resembles a military assault rifle but is designed to allow only semiautomatic fire

Translation: “This is what the term really means, but it also means what ignorant politicians, journalists and activists refer to erroneously as the same thing even though it’s not, because we support them and this will make it easier for them to mislead others without looking dishonest and foolish.”

[UPDATE: There is some question of whether that definition was added before or after Parkland. Reader Steve Langton reports that he read the current version a couple of days after the shooting.]

Continue reading

46 Comments

Filed under Arts & Entertainment, Business & Commercial, Character, Citizenship, Ethics Dunces, Gender and Sex, Government & Politics, Journalism & Media, language, Marketing and Advertising, Workplace

The Wake-Up Call And The Power Cord

As you may have noticed, your host has been involuntarily separated from Ethics Alarms for about 24 hours. Several things occurred that under normal circumstances would have had me dashing off a post while waiting for flights or preparing to check out of my hotel—and there were definitely several comments that had me reaching for a phantom keyboard—but I was without laptop, thanks to leaving the power cord behind in my previous hotel.

So I have a little story to tell. I stayed at a decent Boston hotel last night, not a 4-star hotel like the one I just left in Atlanta (The Four Seasons), but a nice one, professionally run, dependable. Yet this morning this was my wake-up call, via recording:

“It’s 7 AM. This is your wake-up call for March 8, 2018.”

Almost at the same time, David Hogg was on CNN, explaining how darned easy it was to create a system that would prevent school shootings forevermore.

Wrong. Systems break down, you experience-free, arrogant, disrespectful, know-nothing puppet.  The belief that human beings can devise systems that will solve every problem, or any problem, and do what they are designed to do without failing miserably at the worst possible times and in the worst imaginable ways is signature significance for a fool, or a child. O-Rings fail. Police don’t act on warnings that a kid is violent. Obamacare raises health care premiums.  Political parties end up nominating Hillary Clinton and Donald Trump. Jack Ruby breaks past police security. Communism ends up killing hundreds of millions rather than creating a worker paradise. The Titanic hits the wrong iceberg exactly where it’s weakest. Hitler takes a sleeping pill during the Normandy invasion.

The T-Rex gets loose. Continue reading

14 Comments

Filed under Business & Commercial, Character, Childhood and children, Ethics Alarms Award Nominee, Ethics Train Wrecks, Government & Politics, Marketing and Advertising, War and the Military, Workplace

Comment Of The Day: “Wait, WHAT? NOW They Tell Us There Are “Two Big Flaws” in Every Computer?”

The comments on this post about the sudden discovery that every computer extant was vulnerable to hacking thanks to two 20-year-old “flaws” were so detailed, informative and excellent that I had the unenviable choice of posting one representative Comment of the Day, or eight. Having just posted eight COTDs on another post last weekend, I opted for one, but anyone interested in the topic—or in need of education about the issues involved— should go to the original post and read all the comments. Forget the post itself—the comments are better.

Here is Extradimensional Cephalopod’s Comment of the Day on the post, Wait, WHAT? NOW They Tell Us There Are “Two Big Flaws” in Every Computer?

This is not likely to be a popular opinion among professional programmers, but I feel it needs to be said.

The excuse that computers are complex and that testing to remove all of these flaws would take a prohibitive amount of time just doesn’t hold water. I understand that security vulnerabilities are different from outright bugs: security vulnerabilities are only problems because people deliberately manipulate the system in unanticipated ways. Bugs happen when people inadvertently manipulate the system in unanticipated ways. Some of these ways are incredibly sophisticated and may be infeasible to anticipate. However, having supported computers for the past few years, I’ve seen bugs that should have been anticipated, and zero testing would be required in order to do so.

The problem with testing is that the people testing usually understand the software well enough to know how it is supposed to work, or they are given a few basic things to try, but they don’t have time to test a program with heavy use. Luckily, testing is not the problem.

The problem is that in many cases I’ve seen (and I’ve come to suspect most cases across the software industry) the input and output footprints of code modules are not documented (and if your code contains comments laying out the pseudocode structure, I consider you very lucky). From an engineering standpoint, the input footprint of a system or subsystem describes the conditions the system assumes to be true in order to work effectively. The output footprint describes what effects (including side-effects) the system has or could have on its environment, including if the input footprint is not fulfilled. Those aren’t the official names; I’ve just been calling them that. Continue reading
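The commenter’s “footprints” correspond closely to what software engineers call preconditions and postconditions (“design by contract”). A minimal sketch in Python of what such documentation might look like — the function and its rules are invented here purely for illustration, not taken from the comment:

```python
def normalize_scores(scores):
    """Scale a list of numbers so they sum to 1.

    Input footprint (preconditions):
      - `scores` is a non-empty list of non-negative numbers
      - at least one entry is positive, so the total is nonzero

    Output footprint (effects / postconditions):
      - returns a NEW list; `scores` itself is never modified
      - the returned values sum to 1 (within floating-point error)
      - raises ValueError if the input footprint is not met,
        rather than silently producing garbage
    """
    if not scores or any(s < 0 for s in scores):
        raise ValueError("scores must be non-empty and non-negative")
    total = sum(scores)
    if total == 0:
        raise ValueError("at least one score must be positive")
    return [s / total for s in scores]
```

With the footprint written down, a tester or the next programmer can check the module’s assumptions directly instead of reverse-engineering them from the code — which is the commenter’s point about why undocumented footprints, not testing, are the real problem.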

5 Comments

Filed under Comment of the Day, Ethics Alarms Award Nominee, Science & Technology

Wait, WHAT? NOW They Tell Us There Are “Two Big Flaws” in Every Computer?

(That’s Meltdown on the left, Spectre on the right.)

From the New York Times:

Computer security experts have discovered two major security flaws in the microprocessors inside nearly all of the world’s computers. The two problems, called Meltdown and Spectre, could allow hackers to steal the entire memory contents of computers, including mobile devices, personal computers and servers running in so-called cloud computer networks.

There is no easy fix for Spectre, which could require redesigning the processors, according to researchers. As for Meltdown, the software patch needed to fix the issue could slow down computers by as much as 30 percent — an ugly situation for people used to fast downloads from their favorite online services. “What actually happens with these flaws is different and what you do about them is different,” said Paul Kocher, a researcher who was an integral member of a team of researchers at big tech companies like Google and Rambus and in academia that discovered the flaws.

Meltdown is a particular problem for the cloud computing services run by the likes of Amazon, Google and Microsoft. By Wednesday evening, Google and Microsoft said they had updated their systems to deal with the flaw.

Here’s the best part:

Amazon told customers of its Amazon Web Services cloud service that the vulnerability “has existed for more than 20 years in modern processor architectures.”

We trust the tech giants and computer manufacturers to give us secure devices. We then entrust our businesses and lives to these devices.

That there were such massive “flaws” in every computer, and that it took 20 years for those whom we trusted to discover them, is an unprecedented breach of competence, trust and responsibility. Imagine auto manufacturers announcing that every car in the world had a “flaw” that might cause a fatal crash. I see no difference ethically.

And why is this story buried in the Times’ Business Section, and not on the front page, not just of the Times, but of every newspaper?

 

61 Comments

Filed under Around the World, Business & Commercial, Ethics Alarms Award Nominee, Journalism & Media, Science & Technology

The Unabomber, The Red Light, And Me [UPDATED!]

I ran a red light last night, and I’m feeling bad about it. Ted Kaczynski made me do it.

It was after midnight, and I was returning home after seeing the pre-Broadway production of the musical “Mean Girls,” based on the cult Lindsay Lohan comedy. I was late, my phone was dead, I knew my wife would be worried, and I was stopped at an intersection where I could see for many football fields in all directions. There were no cars to be seen anywhere.

Ted, aka “The Unabomber” or “Snookums” to his friends, cited my exact situation as an example of how we have become slaves to our technology. Why do we waste moments of our limited lifespan because of a red light, when there is no reason to be stopped other than because the signal says to? Admittedly, this had bothered me before I read Ted’s complaint. Stop lights should start blinking by midnight, allowing a motorist to proceed with caution, as with a stop sign. If one isn’t blinking, we should be allowed to treat it as if it is.

Last night, I ran the light. With my luck, there was a camera at the intersection, and I’ll get a ticket in the mail. But..

…whether I do or not doesn’t change the ethical or unethical character of my conduct. That’s just moral luck.

…it was still against the law to run the light, even if I was treating it as a blinking light, because it wasn’t,

…breaking the law is unethical, even when the law is stupid, and

…there was no legitimate emergency that could justify my running the light as a utilitarian act.

So I feel guilty. Not guilty enough to turn myself in, but still guilty, since I am guilty.

But Ted wasn’t wrong.

Update: Let me add this; I was thinking about it in the shower.

On several occasions in the past, I have found myself stopped by a malfunctioning light that appeared determined to stay red forever. Is it ethical to go through the light then? The alternative is, theoretically, being stuck for the rest of my life. So we run such lights, on the theory that the frozen stop light is not meeting the intent of the law or the authorities who placed it there, and that remaining servile to the light under such circumstances is unreasonable. Yet running it is still breaking the law, and isn’t stopping for a light in the dead of night with no cars in sight also inconsistent with the intent of the law and the light? What’s the distinction?

41 Comments

Filed under Citizenship, Daily Life, Government & Politics, Law & Law Enforcement, Science & Technology, U.S. Society