Why The News Media Are Spreading Fake News In The Name Of Fighting It
Why are The Guardian, Le Monde, and Die Zeit so concerned about irrelevant bots?
This morning at 5:20 am, I received an email from an investigative journalist with Le Monde in Paris. In it, the reporter said that apparently fake Twitter accounts were promoting an agenda that “closely matches” my views on nuclear energy and renewables. Some of them mentioned me by name and replied to my tweets. “Did you, directly or indirectly, retain the services of an online marketing company that could be responsible for this online campaign?” the reporter, Damien Leloup, asked me.
I was appalled by the implication that I was receiving help from the “disinformation-for-hire industry,” which Leloup said he was investigating “along with colleagues from over 20 international newsrooms, including The Guardian, Zeit, Der Spiegel, and El Pais.” I was immediately concerned that the newspapers were preparing to falsely accuse me of being involved in some way with the accounts. I replied that neither I nor anyone associated with me had retained any such service and that I had no idea why such tweets existed.
I then called Leloup and interviewed him by phone. I asked him point blank if he had any evidence of my wrongdoing and if he or his colleagues were planning on making an accusation. He said no. With my main concern addressed, I asked him additional questions. Leloup was polite and forthcoming and agreed to send me two of the Twitter accounts that somehow engaged with my content, which he did.
Both tweets appear to be from obviously fake or “bot” accounts. One had 111 followers, and the other one had just 9. One had zero likes or retweets, and the other had a single like. The one named “Ruby Thompson” follows me and, when I clicked on the account’s direct message, I discovered that the operator of the account had messaged me last year asking me to sign a petition, which I hadn’t seen at the time.
One tweet mentioned the closure of a road due to a battery fire, something I reported on at the time; the other mentioned an L.A. Times article about potential groundwater contamination from solar panels. While the accounts appear to be fake, the information in both tweets is accurate, and nobody has disputed it. As such, even if both accounts are indeed bots, neither tweet was a case of “disinformation.”
On the phone, I asked Leloup how many bot accounts were in the network he was studying.
“A few hundred,” he said.
“A few hundred?” I repeated, surprised by the low number.
“Between 100 and 300,” he clarified.
“So 100-300 accounts mentioned me?” I asked.
“The number that mentioned you was lower,” he said. “Some mentioned you, and others hashtagged your name. You probably wouldn’t have noticed it.”
In his email and on our call, Leloup suggested that perhaps PG&E, the owner of Diablo Canyon, hired a “disinformation-for-hire” Internet firm to support me.
“My working theory is that it’s more of a service that you can hire for a limited time,” said Leloup. “Anybody could be the client, including industrial interests or, in the California cluster, it could be Diablo Canyon management.”
But it is highly unlikely that Diablo Canyon management was behind the accounts. First, Governor Gavin Newsom had already made the decision to keep the plant operating in the spring of 2022; both tweets were dated September 27, 2022. Second, our successful effort to save Diablo Canyon had been aggressively opposed by the owner of the plant, PG&E, since 2016, when it caved in to pressure from Newsom and others to shut down the plant.
Leloup acknowledged that his speculation about Diablo Canyon was probably wrong, noting that the “messaging [of the tweets] doesn’t seem to fit with that.”
“But,” he added, “anything is possible if it is a fee for service, and anybody could hire those bots. There are companies that specialize in influence operations; we have some in France and the UK. You just need to have contact with that kind of company, and you have a lobbying firm that will push a point for a client or a piece of legislation. I don’t want to say it’s common.”
But it’s not just uncommon; there’s no evidence of it happening at all. I have been in a large and complicated debate on Twitter since 2014, when I joined the platform, on a huge range of topics relating to climate change, nuclear energy, and renewables. The people on my side and those on the other side of those arguments are all real people.
In other words, bots, fake accounts, and even anonymous accounts have no significant “influence” whatsoever, at least in the fields we cover. My political opponents and critics are elected officials, climate activists, and, to a lesser extent, other journalists. They’re not bots, on my side or on my opponents’.
Leloup insisted, “You can use bots and influence operations more generally to try and shift the narrative about a person or party or company, and it isn’t surprising that people would try to take advantage of that for profit and political gain.” But he provided zero evidence that “bots and influence operations” had, in fact, influenced “the narrative about a person or party or company,” either in California or anywhere else.
Why would Le Monde, along with three of Europe’s largest newspapers and at least 17 other newsrooms, be investigating the “disinformation-for-hire industry” if the reporter it assigned to the story can’t point to a single instance of disinformation, much less to industry backing or influence on the public?