“Authentic Frontier Gibberish” Ethics

On Ethics Alarms, the term “Authentic Frontier Gibberish” is used to describe “intentionally (or sometimes just incompetently) incoherent double-talk used by politicians, advocates, lawyers, doctors, celebrities, scientists, academics, con artists and wrong-doers to deceive, obfuscate, confuse, bore, or otherwise avoid transparency, admitting fault, accepting accountability or admitting uncomfortable truths.” The term comes from “Blazing Saddles,” in this memorable scene.

It sometimes arises from incompetent communication skills, which are unethical for anyone in the public eye to employ. Sometimes it is more sinister than that, and occurs when someone chooses to create a vague word cloud that obscures the speaker’s or writer’s real purpose… and sometimes the fact that they are frauds. Sometimes AFG is designed to convey a feeling while avoiding enough substance to really explain what the speaker means.

Sometimes, it feels like gaslighting.

A New York Times article was ostensibly about “Dealing with Bias in Artificial Intelligence.” This was, obviously, click-bait for me, as the topic is a developing field of ethics. The introduction stated in part, “[S]ocial bias can be reflected and amplified by artificial intelligence in dangerous ways, whether it be in deciding who gets a bank loan or who gets surveilled. The New York Times spoke with three prominent women in A.I. to hear how they approach bias in this powerful technology.” The statements of the first two women—I see no reason why only female experts on the topic were deemed qualified to comment—were useful and provocative.

Last, however, was Timnit Gebru, “a research scientist at Google on the ethical A.I. team and a co-founder of Black in AI, which promotes people of color in the field, [who] talked about the foundational origins of bias and the larger challenge of changing the scientific culture.”

Here’s what she said. (Imagine: the Times said this was “edited and condensed”!) The bolding is mine.

A lot of times, people are talking about bias in the sense of equalizing performance across groups. They’re not thinking about the underlying foundation, whether a task should exist in the first place, who creates it, who will deploy it on which population, who owns the data, and how is it used?

The root of these problems is not only technological. It’s social. Using technology with this underlying social foundation often advances the worst possible things that are happening. In order for technology not to do that, you have to work on the underlying foundation as well. You can’t just close your eyes and say: “Oh, whatever, the foundation, I’m a scientist. All I’m going to do is math.”

For me, the hardest thing to change is the cultural attitude of scientists. Scientists are some of the most dangerous people in the world because we have this illusion of objectivity; there is this illusion of meritocracy and there is this illusion of searching for objective truth. Science has to be situated in trying to understand the social dynamics of the world because most of the radical change happens at the social level.

We need to change the way we educate people about science and technology. Science currently is taught as some objective view from nowhere (a term I learned about from reading feminist studies works), from no one’s point of view. But there needs to be a lot more interdisciplinary work and there needs to be a rethinking of how people are taught things.

People from marginalized groups have been working really hard to bring this to the forefront and then once it’s brought to the forefront other people from nonmarginalized groups start taking all the credit and pouring money into “initiatives.” They’re not going to take the kinds of risks that people in marginalized communities take, because it’s not their community that’s being harmed. All these institutions are bringing the wrong people to talk about the social impacts of A.I., or be the faces of these things just because they’re famous and privileged and can bring in more money to benefit the already privileged.

There are some things that should be discussed on a global stage and there should be agreements across countries. And there are other things that should just be discussed locally. We need to have principles and standards, and governing bodies, and people voting on things and algorithms being checked, something similar to the F.D.A. So, for me it’s not as simple as creating a more diverse data set and things are fixed. That’s just one component of the equation.

Oh. WHAT?

I have no idea what she said. I strongly suspect she was deliberately saying something that sounded profound but that meant nothing, except as a sort of intersectional, progressive bullshit, without having the guts to come right out and say what she means. Is she talking in code for her ideological compatriots?

  • I know what bias means, and it doesn’t mean “equalizing performance across groups.” So what does she mean?
  • Who is the “they” who are not thinking about “whether a task should exist in the first place,” and why is that “the underlying foundation”?
  • What are “the worst possible things that are happening” being advanced by “using technology with this underlying social foundation”?
  • What does she mean when she talks about the “illusion of meritocracy and… this illusion of searching for objective truth”? What does she think is really going on?
  • Science “has to be situated in trying to understand the social dynamics of the world”—Oh? Why is that?
  • “Some objective view from nowhere… from no one’s point of view” means nothing to me. (I’m not surprised that it comes from feminist studies works, a cornucopia of Authentic Frontier Gibberish.)
  • What “interdisciplinary work”? What disciplines?
  • What does “a rethinking of how people are taught things” mean? If it doesn’t mean indoctrination according to an idealistic formula, then please disabuse me of that fear.
  • “People from marginalized groups have been working really hard to bring this to the forefront”… what is “this”?
  • “People from nonmarginalized groups start taking all the credit.” Examples, please?
  • Why is “initiatives” in scare quotes?
  • “They’re not going to take the kinds of risks that people in marginalized communities take, because it’s not their community that’s being harmed.” Wait, wasn’t she supposed to be talking about bias in AI, not parading her own biases?
  • Who are “the wrong people to talk about the social impacts of A.I.”? If you’re one of the “right people,” how come I can’t understand what the hell you’re trying to say?
  • What are “these things”?
  • What are the “things that should be discussed on a global stage”?
  • “Agreements across countries” about what?
  • “We need to have principles and standards, and governing bodies, and people voting on things and algorithms being checked, something similar to the F.D.A.” This is so general as to be useless.

I resent this kind of arrogant message-burying double-talk, and I resent the Times presenting it as informative. It is unethical to communicate like that, and it is unethical for the news media to accept that as communication. A reader is likely to think, “Gee, the Times thinks this is valuable stuff: I guess I’m just not smart enough to comprehend it. Help me, mold me, teach me!” That’s the central dishonesty of Authentic Frontier Gibberish.

And if Gebru is typical of the people Google is relying upon to lead its efforts in artificial intelligence, there is reason to be alarmed.

______________________________________

Twitter link (for posting on Facebook): https://twitter.com/CaptCompliance/status/1213221958151786497

 

26 thoughts on ““Authentic Frontier Gibberish” Ethics”

  1. Interestingly enough, this kind of rambling avalanche of jargon, devoid of detail or any clear meaning, is precisely the kind of language that AI text generators excel at. Are you certain this author isn’t just a Markov-chain bot?
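
    (For the curious: a word-level Markov chain, about the simplest possible text generator, produces exactly this kind of locally plausible, globally empty prose. A minimal sketch, with a stand-in corpus string; nothing below comes from the article.)

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that have followed it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def babble(chain, start, length=30):
    """Walk the chain: each word is chosen only from words that have
    followed the previous word before. Locally fluent, globally empty."""
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Stand-in corpus, invented for illustration.
corpus = ("we need to rethink the underlying foundation of the social "
          "dynamics of the technology and the technology of the social "
          "foundation we need to rethink")
print(babble(build_chain(corpus), "we"))
```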

  2. This kind of thinking/speaking seems rampant from her. Here is her CV:

    http://ai.stanford.edu/~tgebru/

    There is this little tidbit from her thesis:

    “I received my PhD from the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li. My thesis pertains to data mining large scale publicly available images to gain sociological insight, and working on computer vision problems that arise as a result.”

    jvb

    • Somehow, JB, I remember PhD’s requiring Dissertations. A Thesis was required for a Masters. Am I behind the times?

  3. I don’t know… for me, I didn’t think it was gibberish. I felt like I was tracking it pretty well. This is an upcoming issue of great importance and it’s going to attract some high-minded discussion like this. I’m going to try to add some clarifications in Italics and some comments in Bold, and let’s see if we can get some good discussion.

    ********

    A lot of times, people are talking about the problem of, and elimination of, bias in these tasks in the sense of equalizing performance outcomes across cultural identity groups. They’re not thinking about the underlying foundation, whether a task should exist in the first place, who creates it, who will deploy it on which population, who owns the data, and how is it used?

    For me, she’s saying that people who are trying to address the problem of “bias” are trying to correct the results of the bias, but not what creates the bias in the first place.

    The root of these problems is not only technological. It’s social. Using technology with this underlying social foundation often advances the worst possible things that are happening. In order for technology not to do that, you have to work on the underlying foundation as well. You can’t just close your eyes and say: “Oh, whatever, the foundation, I’m a scientist. All I’m going to do is math.”

    Another fair point. If your technology analyzes your credit worthiness for a home loan by using a data input like aggregate school-wide SAT scores of the public high school you attended 10 years ago, how does that accurately contribute to your individual credit-worthiness today?

    For me, the hardest thing to change is the cultural attitude of scientists. Scientists are some of the most dangerous people in the world because we have this illusion of objectivity; there is this illusion of meritocracy and there is this illusion of searching for objective truth. Science has to be situated in trying to understand the social dynamics of the world because most of the radical change happens at the social level.

    Oddly, I get this point. Not gibberish to me. Data nerds can get lost in numbers and perfect symmetry. They need to take a step back and ask how the data on which the algorithm is based should be interpreted.

    e.g. Give a cartography nerd a map of the world and tell him to draw country borders that “make sense”. Well, obviously, the Iberian peninsula should be one country, right? Forget the social situation on the ground. Who cares if there are Spanish, Catalonian, and Portuguese in the same space, they’re all Iberian, right? What about Britain and Ireland? Well, Ireland is one island, and Britain is another. Forget the English, Scottish, and Welsh identities…they’re just British. The Northern Irish are just Irish.

    We need to change the way we educate people about science and technology.
    An idea. Ok.

    AI Science currently is taught as some objective view from nowhere (a term I learned about from reading feminist studies works), from no one’s point of view.
    A statement of a problem. Ok. Essentially saying that knowledge is being taught as unassailable truth without ownership of where that truth originates and without context.

    But there needs to be a lot more interdisciplinary work and there needs to be a rethinking of how people who will be future AI Scientists are taught things.
    A statement of desire. Ok. Essentially saying future AI Scientists need a more robust learning experience that is rich with context and experience so that their cold and unfeeling lines of code in their AI Machines don’t exacerbate inconsequential data points in unfathomable ways to the detriment of others.

    People from marginalized groups have been working really hard to bring this to the forefront ok and then once it’s brought to the forefront other people from non-marginalized groups start taking all the credit and pouring money into “initiatives.” Evil White People They’re not going to take the kinds of risks that people in marginalized communities take, because it’s not their community that’s being harmed. Evil White People aren’t properly motivated to solve these problems. All these institutions are bringing the wrong people to talk about the social impacts of A.I., or be the faces of these things just because they’re famous and privileged and can bring in more money to benefit the already privileged. Need to hear from those who are directly impacted, those who are the “boots on the ground”, not the elite generals.

    It seems the crux of this paragraph is that people who are figuring out and articulating the problems are building a career worthy of recognition and to continue their paths, only to be brushed aside at the final moment and have their work co-opted by someone who wants to buoy their image and their profile and will abandon the work when the cameras go dark.

    There are some things that should be discussed on a global stage and there should be agreements across countries. And there are other things that should just be discussed locally. We need to have principles and standards, and governing bodies, and people voting on things and algorithms being checked, something similar to the F.D.A. An organization that will analyze the effectiveness and detrimental bias in an algorithm and certify it for public use in housing, banking, and medicine.

    So, for me it’s not as simple as creating a more diverse data set and things are fixed. That’s just one component of the equation. It’s a complex problem, and while we need *more* inputs to the data, we need *better* inputs as well.

    • Well, I understand what you wrote, and that makes sense. I salute you for getting that out of her word salad. I wonder if you’re giving her the benefit of a doubt she hasn’t earned. For example, this:

      There are some things that should be discussed on a global stage and there should be agreements across countries. And there are other things that should just be discussed locally. We need to have principles and standards, and governing bodies, and people voting on things and algorithms being checked, something similar to the F.D.A.

      An organization that will analyze the effectiveness and detrimental bias in an algorithm and certify it for public use in housing, banking, and medicine.

      What you wrote is essential to understanding what she wrote. She should have included it, if that’s what she meant.

      And, for example,

      We need to change the way we educate people about science and technology.
      An idea. Ok.

      What’s the idea? “We need to change” is meaningless. Change how? To what end?

      The Times should have interviewed you. Or she should hire you as her translator.

      (Now do “whether a task should exist in the first place, who creates it, who will deploy it on which population, who owns the data, and how is it used?”…)

      • To your last sentence, I present this:

        “Task” in this context is a computer subroutine of the AI that checks data. It’s akin to a late-1800s voting booth official giving literacy tests. That “task” was predisposed to disproportionately weed out black voters. Before AI & Computers, someone seeking a small business loan could meet with the loan officer and present themselves as a risk worth taking. After AI & Computers, some 21-year-old dweeb doesn’t have the authority (or the instincts) to countermand the algorithm.

        So, if the algorithm does a credit check, what are the rules of the credit check? Is it applied to white and black populations equally? Do black people get leniency for lower scores? Is that leniency enough? Who’s feeding the data to the credit bureaus? Are landlords in urban and predominantly black neighborhoods overzealously reporting late rent checks, while white neighborhoods have landlords who are very forgiving and don’t even know how to make such reports to credit bureaus?
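
        To make the point concrete, here is a minimal, hedged sketch of such a rule. Every field name, score and threshold below is invented for illustration; the only claim is that a “neutral” cutoff applied to data shaped by unequal reporting reproduces the unequal reporting:

```python
def credit_check(applicant, min_score=650):
    """A 'neutral' rule: approve anyone whose score clears the cutoff.
    The rule never looks at race -- but the score it consumes already
    reflects whose late rent got reported and whose didn't."""
    return applicant["credit_score"] >= min_score

# Hypothetical applicants: identical payment behavior, but only one
# had a landlord who reported late rent to the bureaus.
applicants = [
    {"name": "A", "credit_score": 700},  # late payments never reported
    {"name": "B", "credit_score": 610},  # same behavior, reported
]

for a in applicants:
    print(a["name"], "approved" if credit_check(a) else "denied")
```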

          • I once tried to translate “Jeremiah was a Bullfrog” into French as a neat little exercise. I got to the line

            …I’m a hard knock flier and a rain bow rider; A straight shootin’ son of a gun; I said a straight shootin’ son of a gun…

            and realized it was semantically null. It was a bunch of colorful language devoid of meaning. It invites the listener to come imagine what these nonsensical lyrics might mean.

            While I have not ever written A.I. code, I was a few steps away from a colleague in grad school who did. She had never written such a program before, but it took her about two weeks to learn. It is really not that hard to do, given the proper coding skills and background. It is almost a trivial task, in fact (Microsoft and other companies are even offering prepackaged A.I. programs now to make it even easier).

            All an A.I. does is find patterns. But it does it blindly. It doesn’t understand anything; it just replicates whatever has already occurred. If you gave an A.I. the task of choosing a president, and fed it basic biographical information about past presidents and equivalent information about potential candidates, it would select a tall white guy. The algorithm does not know or care whether selecting a white guy is good, bad, or irrelevant. The data is inherently biased towards white men as effective past presidents.

            Similar to musicians auditioning anonymously behind a screen, the programmer could eliminate certain biasing effects from the dataset, such as removing photographs, names, and gender from the biographical information. This would force the algorithm to select other criteria, but it is still a blind selection. Further, if accomplishments are listed as desirable outcomes, the algorithm will simply spit back the biases of the human evaluators. If the evaluators say that Jimmy Carter’s handling of Iran was admirable for instance, the algorithm will select someone of similar background likely to replicate that “admirable” outcome in Iran.
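
            As a toy illustration of both points (every number and name below is made up), a naive pattern-matcher trained on past “winners” simply prefers whoever most resembles them, and stripping the obvious sensitive column still leaves correlated proxies doing the same work:

```python
# Toy illustration only: all data invented.
past_presidents = [
    {"height_cm": 188, "gender": "M", "ivy_league": True},
    {"height_cm": 185, "gender": "M", "ivy_league": True},
    {"height_cm": 183, "gender": "M", "ivy_league": False},
]

def similarity(candidate, history, features):
    """Score a candidate by average closeness to the historical examples,
    using only the given features. No judgment, just pattern matching."""
    score = 0.0
    for past in history:
        for f in features:
            a, b = candidate[f], past[f]
            if isinstance(a, (bool, str)):
                score += 1.0 if a == b else 0.0
            else:
                score += 1.0 / (1.0 + abs(a - b))
    return score / len(history)

candidates = [
    {"name": "Tall man, Ivy League", "height_cm": 190, "gender": "M", "ivy_league": True},
    {"name": "Shorter woman, state school", "height_cm": 165, "gender": "F", "ivy_league": False},
]

# With all features, the algorithm prefers whoever resembles the past.
# Drop 'gender' (the blind-audition move) and height still acts as a proxy.
for features in (["height_cm", "gender", "ivy_league"], ["height_cm", "ivy_league"]):
    best = max(candidates, key=lambda c: similarity(c, past_presidents, features))
    print(features, "->", best["name"])
```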

            Artificial intelligence can be powerful if the datasets are carefully prepared. Given how trivially easy it is to set one up, it is very likely that they will be used inappropriately by people who are careless or deliberately manipulative with their data (gee, who might benefit from an election decided by an A.I. under the color of objectivity?…).

            When I read something like Dr. Gebru’s statement, I have to stop and consider how much is really communicated by the text, and how much I am reading into it given my own background. Like Tim, I could “translate” what she says. However, am I conveying and rephrasing her ideas, which I can do because her ideas are less foreign to me? Or am I reading the tea leaves left by a prophetic frog, and subconsciously reciting my own beliefs (thus inappropriately using Dr. Gebru to validate them)?

            Poorly constructed algorithms that uncritically reinforce what has already occurred are actually the secondary problem. The real problem is vague communication that lets listeners uncritically reinforce their own beliefs. Widespread use of properly calibrated data and public understanding of the limits of A.I.-based decisions cannot be achieved if those studying the matter do not communicate well.

            • Song lyrics are where Authentic Frontier Gibberish really thrives, because they don’t have to communicate anything but feelings and support the music. Lyricists like Billy Joel, Leonard Cohen and Paul Simon—even Bob Dylan—don’t trade in AFG, but the Beatles, primarily John Lennon, of course, frequently indulged. What do the lyrics to “Come Together,” “Lucy in the Sky with Diamonds,” “Strawberry Fields,” or, especially, “I Am the Walrus” mean? Not much, I suspect, but like “Jabberwocky,” they are fun to listen to…

  4. Here is what I got from her AFG:

    Some scientists talk about equal outcomes yet they don’t understand their own bigotry. Scientists think they’re smart and important but they are really dumb about what the marginalized go through. They need to examine their unconscious bigotry and think about how their tasks may be harmful to the marginalized.

    Western science is currently too patriarchal and racist and has always been. The marginalized are trying to get scientists to see that, but privileged altruistic white men take over and ruin our efforts. We should be able to tell large audiences what we think and have heads of governments understand us. There should be rules and people like us probably should be in charge of them.

    Conclusion: White guys suck!

    Sadly she actually has a point but it gets weighed down in (ironically) dialectical nerdy white guy academic blubbering. Bias in programming is an important topic but the way she’s talking about it will ensure no meaningful action is taken because only those who also like to waste an inordinate amount of time to say the same thing in three words, will get it.

    • Correction:
      …because only those who also like to waste an inordinate amount of time to say what could have been said in three words, will get it.

      • Both your and Tim’s translations have a lot of merit. Tim’s is what an educated professional in the field, or at least close to it, might get if it were in a professional journal. My training in the nascent field in the ’80s was long before data mining, or mass inferences drawn as much from the softer art of sociology as from the hard science of binary coding. AI must be interdisciplinary in order to create anything of use: AI + medicine = diagnostic and Rx aids; AI + Wikipedia = Watson winning Jeopardy; AI + the art of strategy = games like Call of Duty. This doesn’t mean that a company built on selling AI to the masses for a big profit can’t run aground if it gets caught up in Woke agendas despite the need for profit; ref the Bioware implosion a few years ago. But instead of talking about AI issues she drifted away into political agenda, and you cannot interpret the infinite masses of data accurately if you are wearing blinkers. Data is neutral until you try to interpret it usefully.

        Her speech was intended for the general public, but it was NOT written at the general public’s level of understanding. An important responsibility of any scientist, regardless of gender, race, or taste in music styles, is to communicate clearly to their audience. You don’t talk the same way to a bunch of boys and girls you want to encourage to study hard in STEM areas as you do to other professionals, or as you do to suck up to grant-bearing foundations. Her words were either edited too much, removing context and meaning, or not enough, allowing rambling that highlights or inserts the editors’ own bias. That is possible, and hard to tell from reading.

        That means Mrs Q’s summary makes more sense: it is coherent and briefer. Tim’s critique is field-specific, but this was presented to the general public, and without that expertise, Mrs Q’s is what the vast majority will take away. My eyes glazed over after a few paragraphs of her speech, and I have some background. What were the editors thinking (were they thinking?) to put something out like this? The sty in their eyes is no longer mere planks.

  5. “And if Gebru is typical of the people Google is relying upon to lead its efforts in artificial intelligence, there is reason to be alarmed.”

    It’s worse than that, Jack. She’s the person they sent out to represent Google’s AI efforts to the New York Times. That means she’s not merely “typical”, but she’s the ideal. The prototype. The public face.

    It’s clear that Google has many very talented, highly intelligent people on their payroll. Certainly many whose eloquence matches their intellect. I suspect the PR folks at Google felt it was more important that the company be represented by a certain type of person, that checks certain identity boxes, than be represented by someone who can communicate clearly. Alternately, they sent her out with instructions to obfuscate and blast nonsensical word-clouds out into the universe because they don’t want to have a substantial discussion about their AI work. Neither possibility shines a positive light on Google.

    Perhaps the most charitable possibility is that none of the serious engineers at Google wanted to waste time on an interview for a dying newspaper, so they let Doubletalk McGibberish take it, while they stayed in their offices and labs and did actual work.

    • Took the words out of my mouth, Wim. I read an article about, or an interview with, a South African woman who’s making a career of the idea that mathematics is racist. This woman is saying IT is racist. Too many Chinese and Indian guys in high tech, not to mention white guys. Not enough people and women of color. Radical change is needed. It’s not Authentic Frontier Gibberish, it’s boilerplate Black Lives Matter agitprop.

  6. Ok, so I read all the comments posted so far, and a couple things come to mind.

    First, one more grain of sand in the ever-growing pile of why I love this place. You all are super-smart (much smarter than this guy) and really good at getting at the actual meaning of things. I am in the same basic arena as Timnit Gebru (software development) and was completely lost as to what was being said. Thank you so much for your work at clarifying her words.

    Second. Her response raises a few questions. So if Gebru was trying to say that IT was racist or biased or discriminatory, why not just say that? If Google believes that, why couch the idea in language that most people (or… I think most people) could never interpret? Are they afraid of communicating the message in “plain text,” even when “plain text” is what will be best understood? Is this a “code” language… something that only certain people react to? You know, like some “secret society” gesture or nod… they get it but the masses have no idea?

    • Joel, I think this is an entire worldview that requires its own vocabulary. It’s an analytical tool that’s taught to kids in college. Unless you have the vocabulary, you can’t express the worldview. And yes, these college-educated (?) people are initiates and acolytes. They recite the same prayers over and over. It’s like reading a breviary or memorizing the Quran. It’s also analogous to “When all you have is a hammer, every problem looks like a nail.”

      And you’re right. This blog is more fun than a barrel of monkeys. An island of common sense in a sea of inanity.

  7. Thirty years as an engineer and Project Manager created this little guy who lives in my head. These companies touting that they have A.I. drive that little fellow crazy.

    They have expert systems. The systems have been given all the answers the programmers could come up with, with whatever biases slip in, intentionally or not. Think of them as very large checklists.
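
    In code, that “checklist” is literally a list of hand-written rules; here is a deliberately tiny, invented example of the shape such systems take:

```python
# A rule-based "expert system" is just a list of hand-written checks.
# Every answer it can give was put there by a programmer, biases included.
RULES = [
    (lambda app: app["income"] < 20000,        "deny: income too low"),
    (lambda app: app["late_payments"] > 2,     "deny: payment history"),
    (lambda app: app["years_at_address"] < 1,  "refer to manual review"),
]

def evaluate(application):
    """Walk the checklist; the first matching rule decides."""
    for condition, outcome in RULES:
        if condition(application):
            return outcome
    return "approve"

print(evaluate({"income": 45000, "late_payments": 0, "years_at_address": 3}))
```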

    These systems do not think, they calculate. Chess programs, for example, simply look at every set of moves possible from the current configuration, to some predetermined number of moves ahead. Each end point is assigned a probability of advantage for the computer. When the tree is complete, the computer selects the highest probability and takes the first move in that sequence. When the human player reacts, it does the entire operation over again.
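
    That loop is brute-force game-tree search with a scoring function at the end points. Not chess, but the same shape of algorithm on a toy counting game (take 1, 2 or 3 from a pile of 10; whoever takes the last one loses), small enough to run as written:

```python
def best_move(pile, maximizing=True):
    """Brute-force search: enumerate every sequence of moves to the end of
    the game, score each end point, and take the first move on the best path.
    This is calculation, not understanding."""
    if pile == 0:
        # The previous player took the last stone and loses (misere rule),
        # so the player whose turn it now is has won.
        return (1 if maximizing else -1), None
    best_score, best = (-2, None) if maximizing else (2, None)
    for take in (1, 2, 3):
        if take > pile:
            continue
        score, _ = best_move(pile - take, not maximizing)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best = score, take
    return best_score, best

score, move = best_move(pile=10)
print(f"From a pile of 10, take {move} (predicted outcome: {score})")
```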

    This is not thinking. There is nothing intelligent about it. This is essentially what is being touted as A.I. today.
