
KUALA LUMPUR (Sept 1): The elections in America, India, Indonesia, Mexico and Taiwan will be among the first in the era of widely available generative AI, leading to growing fears of a supercharged spread of propaganda and disinformation.

In a recent guest essay for The Economist, Yuval Noah Harari — an Israeli author, public intellectual, historian and professor — argued that new generative-AI tools will change the course of history by, among other things, convincing the electorate to “vote for particular politicians”.

The magazine acknowledged that AI will probably increase the amount of political disinformation and make it easier to personalise propaganda messages.

The magazine said it is important to be precise about what generative-AI tools like ChatGPT do and do not change. Before they came along, disinformation was already a problem in democracies.

It wrote that the corrosive idea that America’s presidential election in 2020 was rigged brought rioters to the Capitol on Jan 6 — but it was spread by Donald Trump, Republican elites and conservative mass-media outlets using conventional means.

It added that activists for the BJP in India spread rumours via WhatsApp threads.

Meanwhile, propagandists for the Chinese Communist Party transmit talking points to Taiwan through seemingly legitimate news outfits.

All of this was done without using generative-AI tools.

What could large language models change in 2024?

The magazine said one thing is the quantity of disinformation: if the volume of nonsense were multiplied by 1,000 or 100,000, it might persuade people to vote differently.

A second concerns quality. Hyper-realistic deepfakes could sway voters before false audio, photos and videos could be debunked.

A third is microtargeting. With AI, voters may be inundated with highly personalised propaganda at scale.

Networks of propaganda bots could be made harder to detect than existing disinformation efforts are. Voters' trust in their fellow citizens may well suffer as people begin to doubt everything.

The Economist said while this is worrying, there are reasons to believe AI is not about to wreck humanity’s 2,500-year-old experiment with democracy.

It said many people think that others are more gullible than they themselves are.

The magazine wrote that in fact, voters are hard to persuade, especially on salient political issues such as whom they want to be president.

It highlighted that social media platforms, where misinformation spreads, and AI firms say they are focused on the risks.

It wrote that OpenAI, the company behind ChatGPT, says it will monitor usage to try to detect political-influence operations.

Big-tech platforms, criticised both for propagating disinformation in the 2016 election and taking down too much in 2020, have become better at identifying suspicious accounts (though they have become loath to arbitrate the truthfulness of content generated by real people).

Alphabet and Meta ban the use of manipulated media in political advertising and say they are quick to respond to deepfakes.

Other companies are trying to craft a technological standard establishing the provenance of real images and videos.

The Economist said technological determinism, which pins all the foibles of people on the tools they use, is tempting, but it is also wrong.

The magazine said that although it is important to be mindful of the potential of generative AI to disrupt democracies, panic is unwarranted.

Before the technological advances of the past two years, people were quite capable of transmitting all manner of destructive and terrible ideas to one another, it said.
