“Arrrgh!”: The Rest Of The Story

I am finally typing at my desktop, and therein lies a tale.

As I briefly documented here, I spent much of the day failing to get my computer to work, though it was fine late last night, and it is less than six months old. The initial problem was that I had no WiFi connection and “no networks available.”

I spent almost three hours with three different Verizon techs who made me check connections and wires, reboot the modem, restart the computer at least 15 times, uninstall and reinstall programs and updates, and do some other things I didn’t understand.

We changed settings and the date, to no avail. Then the last Verizon tech put my desktop in safe mode, whereupon my password suddenly wouldn’t work. She couldn’t explain why. I could now turn my WiFi on, which represented progress, but I was locked out of my computer and stuck in safe mode. I also couldn’t reset my password. At one point she had me working off of two Microsoft websites on my laptop to find a way to reset it, while also using my cellphone.

Afternoon Ethics Romp, 4/10/2019: A Swirl Of Emotions…

Ah, I feel wefweshed!

Just took a post-seminar nap (one of the bennies of a home business), counted philosophers jumping over a fence, and now I’m awake and ready to rumble…

1. Wow. The quality of posts on this morning’s Open Forum is off the charts. Now my self-esteem is crushed, since it’s obvious that I’m holding the group back with my mundane commentary. If you haven’t dropped in on the colloquy yet, I recommend it highly.

2. This is why we can’t have nice things, and will have fewer and fewer of them as time goes on…Related to a thread in the Open Forum about a controversy over the way artificial intelligence screens job applicants is this news from a week ago. Google announced that it was dissolving a newly established panel called the Advanced Technology External Advisory Council (ATEAC), which was founded to guide “responsible development of AI” at the tech giant (colossus/behemoth/monster). The group was to have eight members and meet four times over the course of 2019 to consider issues and recommendations regarding Google’s AI program. The idea was to have an intellectually and ideologically diverse group to avoid “groupthink” and narrow perspectives.

I know something about such enterprises. I once had the job of running independent scholarly research within the U.S. Chamber of Commerce on contentious policy matters. My methodology was to invite experts from all sides of the issue, the political divide, and the spectrum of professions and occupations. The method worked. Oh, we had arguments, minority reports, everything you might expect, but the committee meetings were civil, stimulating and often surprising. This, of course, requires an open mind and mutual respect from all involved.

Unethical Quote Of The Month: Microsoft

“By agreeing to these Terms, you’re agreeing that, when using the Services, you will follow these rules:

….

iv. Don’t publicly display or use the Services to share inappropriate content or material (involving, for example, nudity, bestiality, pornography, offensive language, graphic violence, or criminal activity).

b. Enforcement. If you violate these Terms, we may stop providing Services to you or we may close your Microsoft account. We may also block delivery of a communication (like email, file sharing or instant message) to or from the Services in an effort to enforce these Terms or we may remove or refuse to publish Your Content for any reason. When investigating alleged violations of these Terms, Microsoft reserves the right to review Your Content in order to resolve the issue. However, we cannot monitor the entire Services and make no attempt to do so.”

—-From the revised Microsoft Services Agreement.

I do not trust Microsoft to decide what is “offensive language” in my communications, or anyone else’s. Many people, for example, believe it is offensive that I assert citizens have a duty to allow an elected President to do the job their fellow citizens exercised their rights to select him for, and an ethical obligation to treat him with the respect the office of the Presidency requires.

We are already seeing indefensible bias on the part of other big tech companies, such as Google, Twitter, Apple and Facebook, as they favor specific ideological and partisan positions and use their platforms to censor and manipulate public discourse. These are private companies and not constrained by the First Amendment or core ethical principles like fairness and respect for autonomy (and, as we all know, definitely not respect for privacy). The problem is that the big tech companies are ideologically monolithic, virtual monopolies, possess the power to constrain free expression and political speech while leaving no equivalent alternative, and are increasingly demonstrating the willingness to use it.

The big tech companies have proven that they are unethical, ruthless, politically active, lacking in integrity, and willing to abuse their huge and expanding power to advance their own agendas. At the same time, their products and services have become essential to the daily lives, recreation and occupations of virtually all Americans. This is a dangerous combination.

They must be regulated as the public utilities they are, and the sooner the better.

Unethical Artificial Intelligence Teenaged Girl Web Bot Of The Month: Microsoft’s “Tay”


Developers in Microsoft’s Technology and Research and Bing teams made “Tay,” an Artificial Intelligence web-bot, to “experiment with and conduct research on conversational understanding.” She spoke in text, memes and emoji on several different platforms, including Kik, GroupMe and Twitter, “like a teen girl.” Microsoft marketed her as “The AI with zero chill.” You could chat with Tay by tweeting or Direct Messaging the bot at @tayandyou on Twitter. Though she was programmed to use millennial slang and be up to date on pop culture, she was, like Arnold the good cyborg in “Terminator 2,” designed so she would learn from her online interactions with humans, and you know how ethical humans are.

Within 24 hours, Tay was asking strangers she called “daddy” to “fuck” her, expressing doubts that the Holocaust was real and saying things like “Bush did 9/11 and Hitler would have done a better job than the monkey we have got now;” “Donald Trump is the only hope we’ve got;” “Repeat after me, Hitler did nothing wrong” and “Ted Cruz is the Cuban Hitler…that’s what I’ve heard so many others say.” For Tay, becoming more human meant becoming a vulgar, sex-obsessed, racist, anti-Semitic, Nazi-loving Trump supporter.

Imagine what her values would be like in 48 hours. Wisely, Microsoft is not willing to chance it, and Tay is now unplugged and awaiting either reprogramming or replacement. One of Tay’s last tweets was,

“Okay. I’m done. I feel used.”

Oh, yes, this artificial intelligence stuff is bound to work out well.

____________________

Pointer: Althouse