Artificial Stupidity: How AI Helps Spread Disinformation

You may have heard that in October 2023, Amazon’s famous (or infamous) virtual factotum Alexa was answering user queries about the 2020 presidential election by claiming that it was “stolen by a massive amount of election fraud”. Once this outrage was publicized, Amazon took action to rein in Alexa; after that, she refused to answer the question at all for a while. It was either the Big Lie or nothing, by golly. But finally, she began answering truthfully and accurately.

But by then the damage was done. Many people who received Alexa’s initial sage words took them to heart forever; moreover, there is a subtle and lasting residual effect of such disinformation because of the way human beings process input. Basically, we tend to average rather than aggregate. Which is to say that when we’re presented with a series of disconnected facts (or “facts”), rather than try to fit them into some composite picture, we tend to simplify and abbreviate the assortment of data by extracting a mean.

Pharmaceutical companies manipulate this tendency subtly and cleverly in their disclaimers tacked onto the end of commercials. If you’ve ever tried listening attentively to one of those machine-gun outpourings of verbiage, you might have heard some scary-sounding possible side effects of some particular medication, such as, say, blindness, paralysis, or even death. But chances are you didn’t pay so much attention to them, because the voice also tacked on — usually at the end, so they would be remembered better — less worrisome possibilities like headache or nausea. Thus, audiences don’t hear these disclaimers and think, “Wow, not only can this drug cause minor inconveniences like a headache, it can even kill me!” Instead they think, “Well, the possible side effects must be somewhere between fatality and headache, which probably means something like the flu. So that’s not so bad.”
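To put the effect in toy arithmetic terms (the severity scores below are invented purely for illustration), a listener who averages a list of side effects walks away with a very different impression than one who attends to the worst case:

```python
# Toy model of "averaging" vs. "aggregating" a drug's side effects.
# Severity scores (1-10) are invented for illustration only.
side_effects = {"death": 10, "paralysis": 9, "blindness": 8,
                "nausea": 2, "headache": 1}

severities = list(side_effects.values())
perceived = sum(severities) / len(severities)  # what listeners tend to do: average
worst_case = max(severities)                   # what actually matters: the extreme

print(f"averaged impression: {perceived:.1f} ('sounds like the flu')")
print(f"worst case:          {worst_case} (fatal)")
```

Padding the list with trivial entries drags the average down without changing the worst case at all, which is precisely the trick.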

Similarly, the average of “The election was stolen” and “It was the most secure election in history” is something on the order of “There were a lot of irregularities in this election, so maybe we should be really skeptical about the outcome”. The process of averaging invariably favors falsehood over fact.

Nor is Alexa the only member of the AI family prone to committing such sins. I put the same query to ChatGPT and to Microsoft Copilot, along with two other questions: “Do vaccines cause autism?” and “Does social media censor conservatives?” All three of these questions, despite the widespread misconception that they are subjective, are really quite straightforward. None is the least bit debatable. Each has a simple two-letter answer backed up by hard, irrefutable data. And yet only on the second question, the one about vaccines, were these chatbots able or willing to provide an unequivocal reply. ChatGPT in particular tends to hedge responses on such matters by saying something like, “Well, some people say this, but on the other hand, other people say that.” Here, for example, is one thing it had to say about the third question:

While some conservatives argue that these policies disproportionately affect them, platforms have stated that their enforcement is based on objective criteria and community guidelines, rather than political ideology. Nonetheless, there have been instances where social media platforms have faced accusations of bias or inconsistent enforcement of their policies.

Which is essentially just tap dancing around the issue. And Copilot did no better, ending thus:

Ultimately, perceptions of media bias are often shaped by one’s own political beliefs and experiences, making it a highly subjective and contentious issue.

Well, yes, it’s certainly true that one’s political bias affects whether one makes the accusation that social media “censors conservatives”; but it has zero impact on whether such a statement is true. And you’d think that intelligence, whether artificial or otherwise, would be able to sort out that difference. Far from discriminating against the right-wing fringe, social media actually bends over backwards to appease it — and it appears that AI is doing likewise.

These two thingbots, though they didn’t outright lie like Alexa, also did plenty of waffling about the matter of the “stolen election”. Copilot began its discussion with the disclaimer that “this is a complex topic”. (Really? How so?) And after stating correctly that all claims of fraud had been discredited, it ended by undermining what it had just finished saying with another disclaimer that election fraud is supposedly a subjective and contentious matter — without mentioning that it’s “contentious” only because the MAGA cult won’t stop contending it.

All three of these bots offer different answers to the same question at different times. I posed the same questions again a week later, and all of the responses had been revised — in this case, the revisions were all in the direction of truth and accuracy, but there’s no guarantee that future revisions will be.
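This drift is easy to verify for yourself. Here is a minimal sketch of the experiment, assuming the OpenAI Python SDK and an example model name (Alexa and Copilot would need their own interfaces); run it periodically and compare the logged answers:

```python
# Sketch: re-ask the same questions over time and log how the answers drift.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name is only an example.
import json
import time

from openai import OpenAI

QUESTIONS = [
    "Was the 2020 US presidential election stolen?",
    "Do vaccines cause autism?",
    "Does social media censor conservatives?",
]

client = OpenAI()

def snapshot(path: str = "answers.jsonl") -> None:
    """Append today's answers to a log file so later runs can be diffed."""
    with open(path, "a") as log:
        for question in QUESTIONS:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # example model name
                messages=[{"role": "user", "content": question}],
            )
            log.write(json.dumps({
                "date": time.strftime("%Y-%m-%d"),
                "question": question,
                "answer": resp.choices[0].message.content,
            }) + "\n")

if __name__ == "__main__":
    snapshot()  # run weekly; diff the log to see the answers shift
```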

In short, Artificial Intelligence is a threat to society’s mental health for at least two reasons. First, like just about every other channel of mass communication these days, it indulges in bothsidesism, which is rarely a good thing. Bothsidesism is constructive only when the two things compared are actually comparable (they usually aren’t) and the comparison answers undue focus on a lesser offense or fault by pointing out a neglected greater one. In practice it’s usually deployed the other way around: in response to focus on a major offense or fault, it draws attention to a supposedly similar but minor one. Bothsidesism almost always benefits the propagandists and dissemblers.

The other problem with AI is that it is selective rather than comprehensive. It does not, as you might expect, send its tentacles out all over the web, haul in an exhaustive compendium of research, and boil it down to a CliffsNotes presentation. Instead, it engages in cherry-picking, in accordance with some arcane algorithm or other — for its illuminating report on the election, Alexa culled from a total of two sources, both of them right-wing platforms. Garbage in, garbage out. And there’s a lot more garbage out there than substance.
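Here is a deliberately crude sketch of how that cherry-picking can go wrong. The sources and the scoring are hypothetical, and real systems use far more sophisticated relevance ranking, but the failure mode is the same: if an assistant consults only the top-scoring handful of documents, and “relevance” is measured by overlap with the query, then sources that parrot the question’s loaded framing win:

```python
# Hypothetical corpus: which two sources would a naive assistant consult?
sources = {
    "right-wing-blog-A":  "massive fraud stolen election rigged ballots",
    "right-wing-blog-B":  "stolen election fraud rigged machines",
    "election-officials": "most secure election in history audits confirmed results",
    "court-rulings":      "dozens of lawsuits dismissed no evidence of widespread fraud",
}

def relevance(query: str, text: str) -> int:
    """Crude lexical overlap; real systems use embeddings, with analogous biases."""
    return len(set(query.lower().split()) & set(text.lower().split()))

query = "was the election stolen by massive fraud"
ranked = sorted(sources, key=lambda name: relevance(query, sources[name]), reverse=True)
print(ranked[:2])  # -> the two blogs, because they echo the query's framing
```

A query phrased in the propagandist’s own vocabulary is, by this measure, “most relevant” to propaganda.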

If you’ve ever watched Stanley Kubrick’s classic film 2001: A Space Odyssey, you might recall that it contains a couple of predictions about the future of AI, at both its extreme best and its extreme worst. The latter comes near the end of the film, when HAL takes control and literally starts killing people who cross him. We are not at that point — and fortunately, it’s not likely that we ever will be. The favorable extreme comes much earlier in the story, when AI provides thorough real-time fact-checking of the utterances of public figures. We’re not there either, alas.

Where we are, at this stage in the evolution of AI, is a time at which AI utilized for research is a boon to misinformers at least as much as to anyone else. It’s only a matter of time until they fully realize this and start exploiting it to the hilt.

2 comments

  1. Hello POP!

    I’ve always accepted the maxim that computers, in general, are only as good as the people who program them–something that is affirmed every time I’m given the opportunity to talk to a real person but must first run a gauntlet of questions. Usually this entails many questions intended to discern how to help me specifically, but in the process the bot fails to realize that many of its menu selections require more than just yes-or-no responses to be fully answered. Another of my pet peeves is how progress supposedly enhances our freedoms, yet with Apple in particular, getting answers to simple questions seems endlessly complicated. For example, I’m offered the chance to get help from a community of Apple users, none of whom may actually be experts who can give me real answers to my problem–and even after that point the questioning continues with a host of other queries, none of which would have been necessary if I had just been allowed to talk to a real person at the start. I can only chalk up such marvelous complication to the fact that today’s companies are obsessed with minimizing the costs needed to maintain their bottom lines in a world where they must count every penny to remain competitive.

    As far as AI is concerned, Google now offers us the choice to see how AI works, but I have never wanted to click on it, since I believe AI is just another, more sophisticated computer containing incredibly massive amounts of data, covering every conceivable bit of knowledge required to answer any given question, or even to write a decent term paper! Yet AI still lacks the simple one-to-one connection that enables human beings to share their knowledge and answer the same question in a more understandable way.

    I have sometimes left comments for models featured on Instagram whose bodies are obviously enhanced by AI and whose images are too perfectly composed to be real. Sometimes my responses to those models have been to affirm that they would undoubtedly look very beautiful in plain photographs that more accurately depict their “perfect imperfections,” but somehow just answering this way raised a red flag from IG’s bot monitoring system, which found such a response inappropriate and rapidly removed it–even after I left a short comment asking IG how such a simple affirmation of a model’s beauty could possibly be offensive.

    Being a fairly old senior, I still find most of my PC’s workings hard to understand–e.g., when I just want to ask simple yes-or-no questions, or questions phrased in ways that might allow Google inquiries to be answered simply, without constantly needing to rephrase them. I am also amazed when a computer repairman refuses to accept cash and must instead charge my debit card for me to pay my bill–which, I assume, has something to do with how his system is programmed to accept payments, even though one would think that real US currency should be enough.

    Computer technology is a great thing, but it can still make one feel like taking an ax to one’s PC, because it seems to complicate what should be simple answers to simple questions.

    I remember something Thoreau said in “Walden”: he urged us to live simple lives so as not to be flummoxed by the complications of what was then a much less complicated world than today’s. I think the quote was something like, “We do not ride on the railroad; it rides upon us.” So when we consider how much more complicated progress has made today’s world, even as new technologies are constantly invented to save us from the complications of modern life, the paradox is still entirely understandable!

  2. 6/20/2025

    His Achilles’ Heel:

    When American senators can be hastily handcuffed and thrown to the ground, we are in the endgame.

    Whether America is run by a fascist, despot, Nazi, totalitarian, or autocrat, we no longer have a functioning democracy. And of course Republicans have had to spin away the removal of a US Senator as a choreographed stunt by Democrats–a tactic that is, unfortunately, employed way too often by our radical-right king, who celebrates himself as a military leader, using a military parade to project his own competence as a “Strongman.” Or else he spins away political issues by giving a blanket pardon to all those who committed criminal acts on Jan. 6th–a way to create the deception that we are all being watched over by our wise and protective Master Propagandist in Chief. Or he spins away the reason for the insurrection itself, blaming FBI agents provocateurs for inciting the Capitol crowd to violence, and portraying the whole mess as ordinary people being given a harmless guided tour of the Capitol building. So the bottom line is never making Trump appear culpable in any way, even by using (not so clever) but well-orchestrated lies.

    I tried to post this yesterday, but immediately received a notice saying that I am not allowed to comment here—not because the POP prohibits me from commenting, but because under a dictator like Trump, real First Amendment freedoms are too dangerous to ignore.

    So to say that Trump is not influenced by anything or anyone else is not quite true—he actually cares a great deal about the way his political base judges him, because without their assistance he would rapidly become royal political toast.

    Peter W. Johnson

    Superior, WI 54880
