Tuesday 15 Oct 2024

The threat of deepfake content has been a prevalent issue since 2017, when a user sparked a viral phenomenon by using machine learning software to superimpose the faces of famous celebrities onto inappropriate content. Deepfakes, which use a form of artificial intelligence called deep learning to manipulate and produce falsified content, are the 21st century’s answer to Photoshop.

As the technology continues to develop and spread, deepfakes have become a growing public concern. The World Intellectual Property Organisation notes that deepfakes can violate human rights, the right to privacy and personal data protection rights.

Because the technology is relatively new, the public has not yet acquainted itself with its dangers. According to a survey conducted by iProov in 2022, only 29% of people worldwide know what a deepfake is, and 43% admit they would not be able to tell the difference between authentic content and a deepfake.

Lack of a Legal Framework

Arguably, deepfake content is not technically illegal; the same technology is commonly used in the film industry to superimpose an actor’s face onto a stunt performer. Used properly, deepfake technology can protect the identities of individuals who would otherwise be putting themselves in danger, which could be useful in witness protection programmes where key testimony is needed in a court of law. It can also serve as a form of self-expression, allowing users to convey their purpose, ideas and beliefs online.

However, without a specific legal framework in place, false events could be fabricated, existing videos could be manipulated to present a false narrative, and a person’s reputation could be damaged by a video of something that never happened. Fake content created with malicious intent, such as altering the words or actions of a politician, could even trigger nations to act against one another.

Currently, the Communications and Multimedia Act (CMA) 1998 and common law defamation are the only forms of legal mitigation Malaysia has against deepfakes. Under the CMA 1998, content, defined as sounds, text and images that can be stored or communicated electronically, can be acted against if it is deemed inappropriate or offensive, while common law defamation entails penalties for libel and slander that damage the reputation of the targeted party.

In 2019, the Malaysian public had one of its first tastes of deepfake content when a cabinet minister was implicated in an alleged sex video scandal. To this day, the scandal remains speculation, with no definitive proof of whether the videos are genuine.

A Potential Solution

The government is pursuing various initiatives to fight the spread of deepfakes and misinformation, such as the Sebenarnya.my initiative and penalties for WhatsApp group admins who allow false information to spread. However, their effectiveness varies, and many cases still slip through the cracks owing to a lack of public education.

At Taylor’s University, the Law School and School of Computer Science understand the vulnerabilities faced by the public and are in the early stages of developing a Deepfake Mitigation Model. The model experiments with having users vote on the authenticity of sample content and, in turn, rewards them with digital tokens.

The School of Computer Science research team proposes a trust index model that combines the judgement of users with artificial intelligence algorithms to detect and mitigate deepfake content. Through this multidisciplinary learning model, Taylor’s Law School students will be able to scrutinise the legal frameworks surrounding deepfakes and propose solutions by studying existing cases and legal initiatives for curbing deepfakes in Malaysia.
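The article does not describe the model’s internal workings, but as a rough illustration only, a trust index of this kind could blend an AI detector’s score with the share of user votes and reward voters whose judgement matches the consensus. Every name, weight and threshold in the sketch below is a hypothetical assumption, not part of the Taylor’s University model:

```python
# Hypothetical sketch of a "trust index" for flagged content. Nothing here is
# taken from the Taylor's University model; the weighting, threshold and
# token reward are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Submission:
    content_id: str
    ai_fake_score: float  # AI detector's estimate, 0.0 (real) to 1.0 (fake)
    votes: list = field(default_factory=list)  # (user_id, voted_fake: bool)

def trust_index(sub: Submission, ai_weight: float = 0.5) -> float:
    """Blend the AI score with the share of users who voted 'fake'.

    Returns a value in [0, 1]; higher means more likely a deepfake.
    """
    if not sub.votes:
        return sub.ai_fake_score  # no votes yet: fall back to the detector alone
    crowd_share = sum(1 for _, fake in sub.votes if fake) / len(sub.votes)
    return ai_weight * sub.ai_fake_score + (1 - ai_weight) * crowd_share

def reward_accurate_voters(sub: Submission, threshold: float = 0.5,
                           tokens: int = 10) -> dict:
    """Pay digital tokens to users whose vote matched the final consensus."""
    consensus_fake = trust_index(sub) >= threshold
    return {user: tokens for user, fake in sub.votes if fake == consensus_fake}

# Example: a detector is 80% sure a clip is fake, and three of four users agree.
clip = Submission("clip-001", ai_fake_score=0.8,
                  votes=[("u1", True), ("u2", True), ("u3", True), ("u4", False)])
print(f"trust index: {trust_index(clip):.2f}")  # 0.5*0.8 + 0.5*0.75 = 0.78
print(reward_accurate_voters(clip))             # u1, u2 and u3 each earn tokens
```

Weighting the detector and the crowd equally is an arbitrary choice here; in practice such a system would need to calibrate that balance and guard against coordinated voting.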

“It was a fruitful collaborative experience with our peers in the School of Computer Science as we worked together to develop the Deepfake Mitigation Model. As deepfakes are a relatively new concept in Malaysia, much research was conducted to help improve the Model’s prototype. While it was challenging to understand the terminologies used with our peers on the technical development of the project, the results were eye-opening when combining two very different disciplines in law and computing,” said Poong Yuan, a Taylor’s Law School graduate currently pursuing her master’s in International Commercial Law at the University of Warwick in the United Kingdom.

So far, the model has been tested on cases involving individuals who claim to be victims of deepfakes, with plans to continue developing the project in March 2023. To further strengthen the model’s efficacy, the research team also aims to collaborate with several industry partners to solidify the model as a viable legal framework for tackling rising deepfake cases.
