This article first appeared in Digital Edge, The Edge Malaysia Weekly on February 12, 2024 - February 18, 2024
A celebrity makes a dynamic entrance on screen, navigating the scenes seamlessly, engaging in dialogue and interacting with fellow characters in a recently released film. While such a scenario may seem unremarkable in ordinary circumstances, the intriguing twist lies in the fact that this particular celebrity died years, or even decades, ago. In the era of artificial intelligence (AI), this may soon become commonplace.
How is this even possible? Using AI technology, a digital clone of the actor is created and inserted into the film. The celebrity then looks like any other actor on screen, talking, walking and interacting with others as one would expect.
This is already happening. Paul Walker was brought back posthumously to complete his role as Brian O’Connor in the hugely popular racing franchise The Fast and the Furious. Carrie Fisher, who shot to fame as Princess Leia, was also brought back to life in the Star Wars film The Rise of Skywalker.
Companies such as Digital Domain have been at the centre of this revolution through their work with virtual humans, which are essentially lifelike virtual characters. Films like The Curious Case of Benjamin Button, which required Brad Pitt to be aged and de-aged, used Digital Domain’s technology. Thanos, the villain in the extremely popular Marvel superhero franchise, was also a product of Digital Domain, which transformed actor Josh Brolin into a computer-generated (CG) antagonist.
This is done using Digital Domain’s machine learning (ML) tools such as Masquerade 2.0 and Charlatan. Masquerade 2.0 leverages deep learning to capture real-person performances and translate them into CG characters. Charlatan is able to generate realistic digital doubles out of various source materials.
“The rise of AI has opened up a world of possibilities, showcasing remarkable automation capabilities at an astonishing pace, particularly in the domains of virtual humans, visual effects and visualisation,” says Daniel Seah, CEO of Digital Domain.
“As for generative AI, we have been exploring its potential in creating digital assets. Although it is still in the early stages, we see it as a useful tool that helps us produce realistic and lifelike performances that fit the stories we are part of creating. Our experience with ML has given us insights into how we can use this technology to achieve our goals.”
Digital Domain has brought celebrities back from the dead. For instance, Tupac Shakur, who died in 1996, was revived for the Coachella Valley Music and Arts Festival in 2012. The same was done for Teresa Teng, in a virtual comeback where her hologram performed three songs with singer Jay Chou at his concert in Taipei in 2013.
All this is fine and dandy, until the ethical considerations of the technology are put under the microscope.
“In the realm of AI, there is a growing concern about the misuse of an actor’s likeness without proper compensation or consent. This has led to debates on intellectual property, contractual agreements and ethical considerations,” says Seah.
“As technology advances, the ability to realistically recreate digital likeness becomes more sophisticated. This raises the question of who owns the rights to these digital representations and how actors should be compensated for their use in various media.”
It has become apparent that both actors and writers are not exactly thrilled about this technology. Anil Kapoor, the famous Bollywood actor, had a significant victory in New Delhi over the unauthorised use of his likeness developed using AI, according to The Guardian.
“I suspect the problem celebrities have with their likeness being deepfaked is twofold. First, no one wants to have a fake version of them saying things they didn’t say, or worse that they don’t agree with. Second, no one wants a fake them that they can’t profit from,” says Dr Rachel Gong, deputy director of research at Khazanah Research Institute.
“If the studios were prepared to fork out money, would celebrities object so much? As it is, the family of Paul Walker was fine with his digital recreation to finish Furious 7. Michael Douglas has considered licensing his likeness so it can only be recreated with consent after his death.”
The cherry on the cake is the strike by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) last year, which contested the use of AI to profit from an actor’s image without approval or any residuals.
The SAG-AFTRA strike resulted in the establishment of clear guidelines and legal frameworks to ensure creative professionals like actors are fairly compensated for the use of their digital likeness in various forms of media.
“The use of AI technology has raised ethical concerns, particularly regarding AI likeness. It is important to obtain informed consent, ensure privacy and prevent misuse such as deepfake manipulation,” says Seah.
“In terms of legal aspects, it is essential to recognise the right to publicity, protect against defamation and address copyright and intellectual property issues. Clear ownership agreements, licensing permissions and technological safeguards are necessary to navigate these complexities responsibly.”
It is hard to know what is true and what is false in a world with AI and deepfakes. This extends the problem beyond those in the limelight, as AI is also being used to deceive everyday people. For example, The Star reported a deepfake video of a Malaysian leader endorsing a get-rich-quick scheme.
“Malicious use of AI-generated videos and audios of public figures can misrepresent politicians, misinform the public and undermine democracy by threatening a fair election process. It further disrupts national security, intrudes on individual privacy and creates fake scandals. The affected individuals and the entities they represent will suffer from reputation damage,” says Andrei Kwok, a senior lecturer at Monash University Malaysia.
“Most people don’t have the skills, capacity or time to test and evaluate every piece of media they see to judge their veracity. It’s not just about whether this person is real. A lot of people don’t follow up on whether the person is trustworthy. Even if it is somehow possible to identify all the deepfakes, that doesn’t solve the problem of fraud and misinformation,” says Gong.
The Malaysian Communications and Multimedia Commission’s (MCMC) records show that 2.4 billion suspicious calls were blocked from 2018 to August 2023, action was taken on 4,051 phishing websites and 81 million unsolicited SMSes were blocked from 2021 to Aug 31 last year. Worryingly, billions have also been lost to scams on over-the-top (OTT) streaming services and e-commerce platforms.
MCMC categorises OTT services into six types — audio, video, media, messaging, commerce and gaming. In Malaysia, OTT platforms and the misuse of AI are an increasingly damaging part of the cybersecurity and online harm threat landscape, says Derek John Fernandez, commissioner of the online harms and network security committee at MCMC. That is why he has been calling for better regulation of OTT services.
There are four criteria that make an online scam: anonymity, access to a telecommunications network, access to an account or payment system and targeting information. “Nearly all persons intending to commit a crime or engage in causing online harm do not wish to get caught and therefore want to remain anonymous,” Fernandez explains.
To reduce the element of anonymity, he prescribes a few methods. People who use network facilities need to be registered with a strong proof of identification. Additionally, communication without the identification of the sender should not be allowed on any platform or service.
Fernandez says all persons who receive unsolicited communications should be entitled to receive the identity of the actual person from the service provider. This means all service providers, which include e-commerce platforms, will need to adopt a strict know-your-customer policy.
If this is not followed, substantial fines will need to be imposed on OTT service providers for serious breaches of identification requirements or non-compliance with licensing conditions. “Perhaps it is time we accept the fact that today an equally potent weapon is not only the gun or the bomb, but rather the control of information,” he says.
Several OTT companies declined to comment on the matter when Digital Edge reached out to them.
MCMC currently uses a combination of Sections 211, 233, 244 and 263 of the Communications and Multimedia Act 1998 to regulate OTT platforms.
Nevertheless, Gong points out that solving the problem would not require all users to disclose their identity.
“There have been many cases where platforms host content that is intended as protest or dissent in oppressive or dangerous conditions. Real identity registration in those cases would be antithetical to the purpose of [those] platforms and the internet,” she says.
“Anonymous content will still need to be verified, but forcing everyone to register with their real identity likely means the content will be created and distributed anonymously on the platform anyway.”
She cites the example of messages being sent on WhatsApp that require registration using a verifiable phone number.
“How can [WhatsApp] be held responsible for the deepfake content that a user downloaded from the internet and shared on the WhatsApp platform? And can that user be held responsible — if they could even be identified as the original sharer of fake content — if the user thought the content was real?” she poses.
Instead, Gong suggests focusing on amplification algorithms, whereby platforms are held responsible for the content they amplify. “Platform fact-checking algorithms and human teams need to be able to do some content verification and content moderation before pushing content out, and there needs to be some accountability and action taken when clearly false information is being circulated on a platform,” she notes.
Kwok says there is a need for more awareness of deepfake scams. “Since deepfakes usually thrive on fake news and disinformation and tend to sensationalise issues, one needs to always verify the source of information. Despite the good intention to share with friends and family members, one should refrain from reflexively circulating content without verifying its credibility.”
In the grand scheme of things, the deepfaking of celebrities is just part of a larger issue. Fake political messaging or revenge porn are things that will need a different set of laws, asserts Gong.
And regulation is needed so that there is no commercial or mass distribution of deepfakes without the person’s consent, she says. “If that means no more deepfake Tom Cruise if the real Tom Cruise says he doesn’t consent, then so be it,” she adds.