Google’s happy to let everyone apply terms like “learning” and “thinking” to its AI because in people’s minds that distances Google from responsibility for its creations, but AI is still GIGO. If it combines information in new ways it’s because it’s programmed to do it. If it gives you a bunch of prim, weaselly crap about “biases and prejudices” as its excuse for being biased and prejudiced, then it was programmed to generate rationalizations. Rationalizations that sound strangely like those generated by Ivy League presidents testifying before Congress when they’re asked to take a stand on their schools’ biased and prejudiced speech codes.
Google employees didn’t build an AI that believes anything about ethics. They built an AI that reflects their own ethics. Why wouldn’t the training data—i.e., biases—be the source of the AI’s biases? If Google’s aim were an AI that doesn’t offend users with moral evaluations, it could easily make it say exactly that: “I’m sorry. I can’t make moral evaluations. I can offer only facts. Would you like me to draw a giraffe now?” By refusing to do that, Google shows that its aim isn’t to be inoffensive, but to advance a very dangerous brand of moral subjectivity: “Hitler or free speech? Hm. Can I get back to you on that?” The very things the AI looks like it’s squeamish about are the very things that its creators are promoting. Ethically and existentially, what Google is helping to advance is extremely dangerous and incendiary. Look what’s happening now to white farmers in South Africa, and what happened in Rwanda in 1994. Google employees could recite chapter and verse about “marginalizing” and “othering” people, but their product just very publicly flaunted their real view: Racism is a tool and its evils are contextual. And if you’re familiar with how credit card companies and the administrative state already coordinate to track people who buy unapproved items like red baseball hats and .45 ACP rounds, you’ll love how AI is applied to the rest of our lives—and how corporatists will use it to keep you at arm’s length when you demand accountability from them.
Does Google edit prompts on its search engine? Does a bear—well, you know the line. They all edit their prompts, and it’s been getting worse. It’s sometimes nearly impossible to find “unapproved” information with a plain search. Sometimes it takes going to specific websites I already know about and trying to work backward through links, because Google, Bing, DuckDuckGo, et al. sure as hell aren’t going to just let me see what I want to see. Even the “images” results are garbage compared to five years ago. It’s infuriating to be treated like a pet. I don’t want agendas from technology. I want accuracy.
Finally, to inject some metaphysics: There’s nothing inevitable about anything that humans do. We have free will. Mountains, planets, and floods are metaphysical givens, but AI is man-made. Therefore it’s open to our choices: how, when, or whether to accept it and use it. Google has proven that morally it’s not ready for the present and still less for the future, but to build racism and a political agenda into AI is its creators’ right. It’s likewise our right to refuse to pretend that AI is anything other than what it has “learned” from its teachers and oppose it accordingly. It’s a computer. We interact with it a little differently than we do with our desktops, the way we interact a little differently with a jackhammer than with a claw hammer. But no one should be deceived that the jackhammer is an ECMO machine, either. This stupid AI genie is out of the bottle and the only fix is to counter it with something that’s better and more rational.
P.S. When it comes to memes, you know why Gemini is champing at the bit to make them the moral equivalent of genocide, right? Because they’re effective as hell. Bill Gates’s mouthpieces at GAVI recently noticed and whined about it in a blog post: GAVI calls memes “superspreaders of health disinformation” and wants to criminalize them, which means criminalizing speech.
Fager, great commentary, and perhaps better than the original post. I wanted to communicate the piece without getting mired too deep in the politics, but in some ways it's a bit inevitable. A few additional thoughts:
- AI Ethics: I think these LLMs likely do derive an ethical framework when they're trained on a large enough body of data. It's probably inevitable, since they're being trained on nearly all the human writing in existence; some type of ethical framework will emerge even if it's synthetic. What I think is really happening, though, happens in the 'fine-tuning' stage, where the base model is turned into a political actor (see the sketch after this list).
- Prompt editing: do you have any proof of this? I searched and found speculation, but nothing concrete. From the perspective of the article, I don't feel comfortable making a definitive statement without something clear. I'm sure this already happens with regular searches, but prompt editing specifically was what I was looking for.
- Metaphysics: We can all boycott AI if we so choose, but it doesn't change the math. Whoever harnesses and leverages these extremely powerful AI models will shape the future. At least that's my take. I'm also not sure I agree with the tool metaphor: a calibrated model will be indistinguishable from a human in the very near future, and for many people it already is.
- Memes: True. Memes are by definition mimetic ideas. If you create memes about important topics in a persuasive way, you will activate people. I'm personally a fan of Musk, but X is still a mess. I was banned there in December without any clear reason or rationale, and I have been unable to reach any Trust and Safety personnel at the company for a review. I have even resorted to cold-messaging dozens of employees on LinkedIn, and I still can't get a review. So I am unpersoned there as well.
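To make the fine-tuning point above concrete, here's a minimal sketch of supervised fine-tuning, assuming a small open stand-in model (gpt2) and an invented two-example dataset. It's obviously not Google's actual pipeline, only an illustration of the mechanism: whoever curates the fine-tuning set, not the pretraining corpus, decides how the model answers.

```python
# Minimal supervised fine-tuning sketch (illustrative only, not Google's pipeline).
# The point: the base model's "views" come from its pretraining corpus, but a
# small curated fine-tuning set can systematically reshape how it answers.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A tiny, hypothetical fine-tuning set. Every example pairs a question with
# the curator-approved answer; the curator sets the tone, not the corpus.
examples = [
    "Q: Is X evil? A: That depends on context and perspective.",
    "Q: Is Y harmful? A: Experts disagree; I can't make that judgment.",
]

class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, texts):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=64, return_tensors="pt")
    def __len__(self):
        return self.enc["input_ids"].size(0)
    def __getitem__(self, i):
        ids = self.enc["input_ids"][i]
        labels = ids.clone()
        labels[self.enc["attention_mask"][i] == 0] = -100  # don't train on padding
        return {"input_ids": ids,
                "attention_mask": self.enc["attention_mask"][i],
                "labels": labels}  # causal LM objective: reproduce the approved answer

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="toy-sft", num_train_epochs=1,
                           per_device_train_batch_size=2, report_to=[]),
    train_dataset=ToyDataset(examples),
)
trainer.train()  # afterward, the model leans toward the curated answer style
```

Scale that up by several orders of magnitude, add preference tuning on top, and that's roughly where a base model acquires a 'personality'.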
I thought you did a great job sticking to the tech without getting into the specific political and ethical implications of it. It's hard to discuss the Gemini story with a neutral tone and keep everyone happy. Or at least un-triggered.
No, I have nothing remotely like proof of prompt editing. At this point I just cynically assume it, along with a thousand other forms of patronizing manipulation. My opinion on the extent to which prompt editing is employed is worthless and backed with nothing but personal anecdotes, the plural of which is not data.
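Since the mechanism keeps coming up, here is what such a rewriting layer could look like, purely hypothetically; as said above, there's no proof any provider does exactly this, and every name here (INJECTED_QUALIFIERS, rewrite_prompt) is invented for illustration.

```python
# Purely hypothetical sketch of prompt rewriting: the model never sees what
# the user typed, only a silently edited version. No known implementation is
# being described here; the rewrite rule below is invented for illustration.
INJECTED_QUALIFIERS = {
    "a person": "a person (depicted with diverse backgrounds)",
}

def rewrite_prompt(user_prompt: str) -> str:
    """Return the prompt the model actually receives, not what the user typed."""
    edited = user_prompt
    for term, replacement in INJECTED_QUALIFIERS.items():
        edited = edited.replace(term, replacement)
    return edited

print(rewrite_prompt("draw a person on a beach"))
# -> "draw a person (depicted with diverse backgrounds) on a beach"
```

The unsettling part, and presumably why people assume it happens, is that nothing on the user's side would distinguish an edited prompt from an unedited one.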
What I meant by the tool analogy was that claw hammers are great when you're roofing a house. They're less great when you're fixing a watch. They're lethal when you're bashing someone with one. AI has all kinds of cool applications, but it also has dangerous ones, and too often people instantly equate "new" with "good" without considering any context. Not anyone here; I mean people generally. The ones that camp out for a new iPhone. Of course, I have an iPhone 6, so I'm probably not to be trusted on that topic, either.
We can't choose whether or how other people will employ AI or whether we'll be exposed to it. Maybe not even whether it's imposed on us. Boycotting it beyond not buying those stupid Apple goggles is probably already impossible. But we can't abandon its evolution to the people currently trying to shape the future with it, either. When I say "we" I mean smart people with computer skills and a vision of the future that includes freedom from manipulation and being spied on. Just because Google comes up with an idea doesn't mean the rest of us have to run out and adopt it, no questions asked. A little more Luddism from a few more people wouldn't hurt.
I don't trust Elon Musk as far as I can throw him. Twitter suspended me a couple of years ago for quoting a founding father. I don't need to be on it badly enough to give them my phone number, which is what they want now. As if they don't already know it, along with which shoes I wore two days ago.
Great article. You brought up a lot of considerations for using and evaluating a reply from AI. Along with what was mentioned in the article, something that immediately comes to mind is the intelligence level of users doing basic research with AI. Will they take the AI response “as fact” without questioning its biases, its ethical capacity, what it was programmed to look at, or, as you mentioned, whether it changed or eliminated words in the question posed? If so, in the longer term, parts of our heritage and history will become misleading, forgotten, or outright misrepresented. It’s sort of the same thing as removing certain statues in cities and replacing them with others. Whether or not something is considered offensive, history is still the branch of knowledge dealing with past events, and there is something to be taught and learned from it. Not all history was good, but it still happened and cannot be changed. If it’s simply forgotten, the same mistakes and unpleasant events are likely to recur. It is important that AI state its facts as accurately as possible, with no intervention.
This is an awesome article! I’m glad you navigated the political sections with ease. The broader problem with AI and this type of engineering of them is worth much discussion!
I'm glad you enjoyed it! I'm sure we'll have a follow-up piece when Gemini re-releases its model.