On Ethics Alarms, the term “Authentic Frontier Gibberish” is used to describe “intentionally (or sometimes just incompetently) incoherent double-talk used by politicians, advocates, lawyers, doctors, celebrities, scientists, academics, con artists and wrong-doers to deceive, obfuscate, confuse, bore, or otherwise avoid transparency, admitting fault, accepting accountability or admitting uncomfortable truths.” The term comes from “Blazing Saddles,” in this memorable scene.
It sometimes arises from incompetent communication skills, which it is unethical for anyone in the public eye to employ. Sometimes it is more sinister than that, and occurs when someone chooses to create a vague word cloud that obscures the speaker’s or writer’s real purpose…and sometimes the fact that they are frauds. Sometimes AFG is designed to convey a feeling while avoiding sufficient substance to really explain what the speaker means.
Sometimes, it feels like gaslighting.
A New York Times article was ostensibly about “Dealing with Bias in Artificial Intelligence.” This was, obviously, click-bait for me, as the topic is a developing field of ethics. The introduction stated in part, “[S]ocial bias can be reflected and amplified by artificial intelligence in dangerous ways, whether it be in deciding who gets a bank loan or who gets surveilled. The New York Times spoke with three prominent women in A.I. to hear how they approach bias in this powerful technology.” The statements of the first two women—I see no reason why only female experts on the topic were deemed qualified to comment—were useful and provocative.
Last, however, was Timnit Gebru “a research scientist at Google on the ethical A.I. team and a co-founder of Black in AI, which promotes people of color in the field, [who] talked about the foundational origins of bias and the larger challenge of changing the scientific culture.”
Here’s what she said (imagine, the Times said this was “edited and condensed”!). The bolding is mine.
A lot of times, people are talking about bias in the sense of equalizing performance across groups. They’re not thinking about the underlying foundation, whether a task should exist in the first place, who creates it, who will deploy it on which population, who owns the data, and how is it used?
The root of these problems is not only technological. It’s social. Using technology with this underlying social foundation often advances the worst possible things that are happening. In order for technology not to do that, you have to work on the underlying foundation as well. You can’t just close your eyes and say: “Oh, whatever, the foundation, I’m a scientist. All I’m going to do is math.”
For me, the hardest thing to change is the cultural attitude of scientists. Scientists are some of the most dangerous people in the world because we have this illusion of objectivity; there is this illusion of meritocracy and there is this illusion of searching for objective truth. Science has to be situated in trying to understand the social dynamics of the world because most of the radical change happens at the social level.
We need to change the way we educate people about science and technology. Science currently is taught as some objective view from nowhere (a term I learned about from reading feminist studies works), from no one’s point of view. But there needs to be a lot more interdisciplinary work and there needs to be a rethinking of how people are taught things. People from marginalized groups have been working really hard to bring this to the forefront and then once it’s brought to the forefront other people from nonmarginalized groups start taking all the credit and pouring money into “initiatives.” They’re not going to take the kinds of risks that people in marginalized communities take, because it’s not their community that’s being harmed. All these institutions are bringing the wrong people to talk about the social impacts of A.I., or be the faces of these things just because they’re famous and privileged and can bring in more money to benefit the already privileged. There are some things that should be discussed on a global stage and there should be agreements across countries. And there are other things that should just be discussed locally. We need to have principles and standards, and governing bodies, and people voting on things and algorithms being checked, something similar to the F.D.A. So, for me it’s not as simple as creating a more diverse data set and things are fixed. That’s just one component of the equation.
I have no idea what she said. I strongly suspect she was deliberately saying something that sounded profound but that meant nothing, except that it was a sort of intersectional, progressive bullshit, delivered without the guts to come right out and say what she means. Is she talking in code for her ideological compatriots?
- I know what bias means, and it doesn’t mean “equalizing performance across groups.” So what does she mean?
- Who is the “they” who are not thinking about “whether a task should exist in the first place,” and why is that “the underlying foundation”?
- What are “the worst possible things that are happening” being advanced by “using technology with this underlying social foundation”?
- What does she mean when she talks about the “illusion of meritocracy and…this illusion of searching for objective truth”? What does she think is really going on?
- Science “has to be situated in trying to understand the social dynamics of the world”—Oh? Why is that?
- “Some objective view from nowhere…from no one’s point of view” means nothing to me. (I’m not surprised that the phrase comes from feminist studies works, a cornucopia of Authentic Frontier Gibberish.)
- What “interdisciplinary work”? What disciplines?
- What does “a rethinking of how people are taught things” mean? If it doesn’t mean indoctrination according to an ideological formula, then please disabuse me of that fear.
- “People from marginalized groups have been working really hard to bring this to the forefront”…what is “this”?
- “People from nonmarginalized groups start taking all the credit.” Examples, please?
- Why is “initiatives” in scare quotes?
- “They’re not going to take the kinds of risks that people in marginalized communities take, because it’s not their community that’s being harmed.” Wait, wasn’t she supposed to be talking about bias in AI, not parading her own biases?
- Who are “the wrong people to talk about the social impacts of A.I.”? If you’re one of the “right people,” how come I can’t understand what the hell you’re trying to say?
- What are “these things”?
- What are the “things that should be discussed on a global stage”?
- “Agreements across countries” about what?
- “We need to have principles and standards, and governing bodies, and people voting on things and algorithms being checked, something similar to the F.D.A.” This is so general as to be useless.
I resent this kind of arrogant message-burying double-talk, and I resent the Times presenting it as informative. It is unethical to communicate like that, and it is unethical for the news media to accept it as communication. A reader is likely to think, “Gee, the Times thinks this is valuable stuff: I guess I’m just not smart enough to comprehend it. Help me, mold me, teach me!” That’s the central dishonesty of Authentic Frontier Gibberish.
And if Gebru is typical of the people Google is relying upon to lead its efforts in artificial intelligence, there is reason to be alarmed.
Twitter link (for posting on Facebook): https://twitter.com/CaptCompliance/status/1213221958151786497