
This article first appeared in Digital Edge, The Edge Malaysia Weekly on September 23, 2024 - September 29, 2024

Gone are the days when scammers relied on basic editing software like Adobe Photoshop. That era is fast giving way to deepfakes — hyper-realistic videos generated using artificial intelligence (AI) that can deceive even the most discerning eyes. While these convincing forgeries pose a significant threat to trust in online content, are deepfakes truly the enemy?

According to the World Economic Forum Global Risks Perception Survey 2023-2024, AI-generated misinformation and disinformation have emerged as the most serious global risk anticipated in the next two years. The combination of deepfakes and advanced AI chatbots will create a frightening new reality where manipulation becomes personal, targeting specific vulnerabilities.

However, some say deepfake technology is just a tool, neither good nor bad. It comes down to how one uses it, says Peerapong Jongvibool, senior director for Southeast Asia at cybersecurity firm Fortinet. “Deepfake is not a new technology, but recently because of social media and how the new generation is consuming information [it has become a severe issue that should be addressed].”

Deepfakes are appearing more frequently, especially in politically motivated videos in the run-up to elections. They can also be used in more targeted cyberattacks, potentially leading to large-scale fraud.

“It happens more often on social media, where there is a lot of clickbait on videos that may link to a scam or trick people into clicking through,” says Peerapong.

For social media platforms, the sheer volume and speed of content uploaded daily make it difficult for them to monitor and verify each piece of media in real time. Developing and implementing advanced detection algorithms also requires significant resources.

These technical challenges notwithstanding, there needs to be a serious discussion on the responsibilities of social media platforms.

“For example, how should social media and content platforms handle deepfake content, including detection, removal and user reporting processes? Perhaps they can require mandatory disclosure, with all deepfake content clearly labelled as AI-generated or manipulated? This would help viewers identify synthetic media,” says Vignesa Moorthy, founder and CEO of broadband telco ViewQwest.
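In practice, labelling can be as simple as attaching a machine-readable disclosure to the file itself. The snippet below is a minimal sketch in Python using the Pillow imaging library; the tag names are illustrative assumptions rather than any platform's standard, and tags like these are easy to strip, which is one reason the industry is also pursuing stronger provenance schemes.

```python
# Minimal sketch: attach an "AI-generated" disclosure to a PNG's text metadata.
# The tag names ("ai_generated", "generator") are illustrative only; a real
# platform would follow a shared provenance standard rather than ad hoc tags.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src: str, dst: str, generator: str = "unknown") -> None:
    """Copy an image to dst (a .png path), embedding tags that flag it as synthetic."""
    img = Image.open(src)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    img.save(dst, pnginfo=meta)

def read_disclosure(path: str) -> dict:
    """Return any disclosure tags found in a PNG's text metadata."""
    info = Image.open(path).text  # populated from the PNG's text chunks
    return {k: v for k, v in info.items() if k in ("ai_generated", "generator")}
```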

Public awareness is crucial in combating scams. People may panic the first time they receive a deepfake scam call. However, by sharing their experiences on social media, individuals can collectively raise awareness and educate others about these scams.

“Obviously, there is a lack of awareness. That’s why deepfakes are so successful. But then again, the tactics can be quite convincing. I think first of all, we have to be self-aware and a lot of education is required,” says Peerapong.

This ongoing effort, though requiring constant vigilance, can empower the public to identify such deceptive tactics and avoid falling prey to scams.

Fighting fire with fire

On the flip side, AI can be used to combat these threats. AI models are progressively improving in the detection of deepfakes by analysing subtle inconsistencies in facial movements, light reflections and other minute details.

“However, the accuracy can vary depending on the quality and sophistication of the deepfake, and effective training of AI models requires large datasets of both real and fake media, which can be challenging to compile,” says Vignesa.

An additional limitation is that AI models can be biased based on the data they are trained on, potentially leading to false positives or negatives, especially with new or less common deepfake techniques.
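To make the detection approach described above concrete, here is a minimal sketch of frame-level scoring: sample frames from a video, run each through a binary real-versus-fake classifier and average the "fake" probabilities. It assumes Python with OpenCV, PyTorch and torchvision, plus a hypothetical pretrained face-forgery classifier; real systems would typically detect and crop faces first and aggregate scores more carefully.

```python
# Minimal sketch: frame-level deepfake scoring with a pretrained binary classifier.
# "model" is assumed to be any classifier trained on real vs fake faces
# (class index 1 = "fake"); the checkpoint and class order are assumptions.
import cv2
import torch
import torch.nn.functional as F
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToTensor(),                       # HWC uint8 -> CHW float in [0, 1]
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def score_video(path: str, model: torch.nn.Module, every_n: int = 30) -> float:
    """Return the mean 'fake' probability across sampled frames."""
    model.eval()
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:                   # sample roughly one frame per second
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = F.softmax(model(x), dim=1)
            scores.append(probs[0, 1].item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0
```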

“Many of these detection strategies and technologies are in the early stages of development and are far from 100% in terms of success rate. So, the best advice for most organisations is to establish very clear protocols and processes for critical tasks like the release of funds or sensitive company information,” he says.

Out-of-band verification — such as a direct call to a known number — can serve as a fail-safe to confirm the identity of the requestor. Organisations should also train staff to be sceptical of videos and phone calls that carry unexpected requests and to take additional verification steps.
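As a rough illustration, such a protocol can be encoded directly into an internal approval workflow. The sketch below is assumption-laden: the action names and the monetary threshold are invented for illustration, not drawn from any particular organisation's rules.

```python
# Minimal sketch: flag high-risk requests for out-of-band confirmation before
# they are executed. Action names and the amount threshold are illustrative.
from dataclasses import dataclass

HIGH_RISK_ACTIONS = {"release_funds", "share_credentials", "change_payee_details"}

@dataclass
class Request:
    requester: str   # who appears to be asking
    action: str      # e.g. "release_funds"
    amount: float    # monetary value, if any
    channel: str     # channel the request arrived on, e.g. "video_call", "email"

def needs_out_of_band_check(req: Request, threshold: float = 10_000.0) -> bool:
    """True if the request must be re-confirmed on an independent channel,
    for example a direct call to a number already on file for the requester."""
    return req.action in HIGH_RISK_ACTIONS or req.amount >= threshold

# Example: a fund-release request that arrived over a video call is held until
# the requester is called back on a known number.
if needs_out_of_band_check(Request("cfo@example.com", "release_funds", 250_000.0, "video_call")):
    print("Hold request: confirm via callback to a number on file.")
```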

“Diverse datasets that include various demographics and types of media are key to training AI models. There should be continuous learning mechanisms that allow AI models to adapt to new deepfake techniques as they emerge,” says Vignesa.

Transparency and clear documentation of the decision-making process, coupled with regular audits, should also be implemented to help identify and mitigate biases and errors.

“Most importantly, and probably the most challenging as this type of technology advances, is adherence to ethical standards in AI development. Public, private and international bodies must ensure that detection technologies are used responsibly and do not compound the problem,” he says.

Finding the weakest link

Cybersecurity has never been more important. It is crucial that companies advocate for a more active approach to addressing AI-generated threats and misinformation that use deepfake technology.

“In my personal point of view, the defensive strategies can be quite passive. We never identify who is the weakest link. We just say there is ransomware, but we never identify who [the targeted groups are],” says Peerapong.

The focus should shift towards account profiling, particularly of those who engage heavily with social media. Senior citizens or youths who are constantly immersed in online content, for example, may be more susceptible to deepfake scams.

“A lot of people are talking about the platforms and AI, but at which level are they actually engaging with the platform and AI, right? I think if anybody is truly serious about putting effort into integrating AI into their platforms and using it to prevent the risk of cyberthreats, we need to have more people developing the technology and integration for it. Not just because they are hopping on the hype of the platform or the topic of AI,” says Peerapong.

For instance, the European Union has been proactive in tackling disinformation, including deepfakes, by publishing strategies that emphasise public engagement and the creation of independent fact-checking networks.

International bodies can also advocate for legal frameworks, such as the proposed Deepfakes Accountability Act in the US, to regulate the creation and distribution of deepfakes. The bill would require creators to digitally watermark deepfake content and would criminalise the failure to label malicious deepfakes, including those that depict sexual content, involve criminal activities, incite violence or interfere with elections through foreign involvement.

“As a start, ahead of more comprehensive frameworks, standards can be set for specific prohibited uses of deepfakes, such as in political campaigns and during elections, to avoid spreading deliberate misinformation,” says Vignesa.

South Korea banned politically motivated AI-generated content for the 90 days leading up to its general election in April. Meanwhile, Singapore is discussing a temporary deepfake ban as a way to tackle AI falsehoods during its general elections.
