Discussion about this post

Fager 132

Google’s happy to let everyone apply terms like “learning” and “thinking” to its AI because in people’s minds that distances Google from responsibility for its creations, but AI is still GIGO. If it combines information in new ways, it’s because it’s programmed to do so. If it gives you a bunch of prim, weaselly crap about “biases and prejudices” as its excuse for being biased and prejudiced, then it was programmed to generate rationalizations. Rationalizations that sound strangely like those generated by Ivy League presidents testifying before Congress when they’re asked to take a stand on their schools’ biased and prejudiced speech codes.

Google employees didn’t build an AI that believes anything about ethics. They built an AI that reflects their own ethics. Why wouldn’t the training data—i.e., biases—be the source of the AI’s biases? If Google’s aim were an AI that doesn’t offend users with moral evaluations, it could easily make it say exactly that: “I’m sorry. I can’t make moral evaluations. I can offer only facts. Would you like me to draw a giraffe now?” By refusing to do that, Google shows that its aim isn’t to be inoffensive, but to advance a very dangerous brand of moral subjectivism: “Hitler or free speech? Hm. Can I get back to you on that?” The things the AI looks squeamish about are the very things its creators are promoting. Ethically and existentially, what Google is helping to advance is extremely dangerous and incendiary. Look at what’s happening now to white farmers in South Africa, and at what happened in Rwanda in 1994. Google employees could recite chapter and verse about “marginalizing” and “othering” people, but their product just very publicly flaunted their real view: Racism is a tool and its evils are contextual. And if you’re familiar with how credit card companies and the administrative state already coordinate to track people who buy unapproved items like red baseball hats and .45 ACP rounds, you’ll love how AI is applied to the rest of our lives—and how corporatists will use it to keep you at arm’s length when you demand accountability from them.

Does Google edit prompts on its search engine? Does a bear—well, you know the line. They all edit their prompts, and it’s been getting worse. It’s sometimes nearly impossible to find “unapproved” information with a plain search. Sometimes it takes going to specific websites I already know about and working backward through links, because Google, Bing, DuckDuckGo, et al. sure as hell aren’t going to just let me see what I want to see. Even the “Images” results are garbage compared to five years ago. It’s infuriating to be treated like a pet. I don’t want agendas from technology. I want accuracy.

Finally, to inject some metaphysics: There’s nothing inevitable about anything that humans do. We have free will. Mountains, planets, and floods are metaphysical givens, but AI is man-made. Therefore it’s open to our choices: how, when, or whether to accept and use it. Google has proven that morally it’s not ready for the present, still less for the future, but building racism and a political agenda into AI is its creators’ right. It’s likewise our right to refuse to pretend that AI is anything other than what it has “learned” from its teachers, and to oppose it accordingly. It’s a computer. We interact with it a little differently than we do with our desktops, the way we interact a little differently with a jackhammer than with a claw hammer. But no one should be deceived into thinking the jackhammer is an ECMO machine, either. This stupid AI genie is out of the bottle, and the only fix is to counter it with something better and more rational.

P.S. When it comes to memes, you know why Gemini is champing at the bit to make them the moral equivalent of genocide, right? Because they’re effective as hell. Bill Gates’s mouthpieces at GAVI recently noticed and whined about it in a blog post: GAVI calls memes “superspreaders of health disinformation” and wants to criminalize them, which means criminalizing speech.

David Kingsley Sr.

Great article. You brought up a lot of considerations for using and evaluating a reply from AI. Along with what was mentioned in the article, something that immediately comes to mind is the intelligence level of users doing basic research with AI. Will they take the AI response as fact without questioning its biases, its ethical capacity, what it was programmed to look at, or, as you mentioned, whether it changed or eliminated words in the question posed? If so, in the longer term parts of our heritage and history will become misleading, forgotten, or misrepresented. It’s sort of the same thing as removing certain statues in cities and replacing them with others. Whether or not something is considered offensive, history is still defined as the branch of knowledge dealing with past events, and there is something to be taught and learned from it. Not all history was good, but it still happened and cannot be changed. If it’s simply forgotten, the same mistakes and unpleasant events are likely to recur. It is important that AI state its facts as accurately as possible, with no intervention.

