08 Dec, 2023

Deepfake explained: Misuses, impact on victims and precautions needed


As deepfakes continue to make headlines, we speak to experts to decode the unsettling technology, the risks it carries and possible precautions against it

The use of AI and deepfake videos is expected to increase

The late theoretical physicist, cosmologist and author Stephen Hawking once said, “Success in creating effective AI [Artificial Intelligence] could be the greatest event in the history of our civilisation. Or the worst.” The recent furore over deepfakes was sparked by targeted celebrity fakes: first actress Rashmika Mandanna, whose face was morphed onto the body of another woman wearing a dress with a plunging neckline; then Katrina Kaif, through edited clips from her recent movie Tiger 3; and then Kajol, in a deepfake ‘Get Ready With Me’ video in which she appears to change clothes. India’s Prime Minister Narendra Modi hasn’t been spared either; a viral video shows a man who looks exactly like him performing garba, prompting the Prime Minister to recently raise concerns about the misuse of AI on public platforms. Amid the flood of conflicting opinions and advice, three experts break down the technical terms, the extent of the misuse, the psychological impact on victims and caregivers, and possible precautionary measures.

Katrina Kaif, Rashmika Mandanna and Kajol have had deepfake videos made of them. Pics Courtesy/Instagram; Representation pic


Onus on intermediaries


Ritesh Bhatia, cyber-crime investigator and founder of V4WEB Cybersecurity, has been spreading awareness about deepfakes and deepnudes since 2018. He tells us that this is just the beginning. “Deepfakes and deepnudes have been around for a while. Take the recent AI scam [in July this year] involving a man from Kerala: his supposed childhood friend video-called him, they spoke about the good old days, and the man transferred Rs 40,000 on request, only to later realise that the caller was a deepfake. Such cases are common and will rise. The media is picking this up now because it involves celebrities and, more recently, the Prime Minister.”

Ritesh Bhatia

While he foresees deepfakes being used most in defence and politics in the near future, he points out that they will also be a major concern for a generation that is only warming up to a tech-savvy lifestyle. “Imagine if someone morphed my son’s face and video-called my father to inform him that the boy is badly bruised. My father would believe him and transfer any amount of money for the treatment of someone he thinks is his grandson, but who is actually an AI creation.”

Spot the fake

So, how do we gauge whether the person on a video call is real or a deepfake? Bhatia has a warning for all of us. “Many posts currently doing the rounds on social media suggest looking at the angles, edges, lighting and so on. It is not possible for the common man to tell the difference, especially in an emergency. Deepfakes are made so realistically that verifying them requires an expert with the right training and tools.” The only way he thinks one can steer clear of such scams is by asking personal questions.

A moment from the viral deepfake video of Prime Minister Narendra Modi performing garba. Pic Courtesy/Twitter

“The first step is to not react. If it’s your son or family, ask them what they ate last night, or what you last talked about. Be particularly cautious in cases that involve money, and remember to take everything with a pinch of salt,” he suggests, adding that when deepfake videos or pictures go viral, the ultimate responsibility falls on social media intermediaries. “They have the tools to recognise AI content. If they can put out disclaimers before sensitive videos, it is their responsibility to also add tags that warn viewers of possibly AI-generated posts.”

Why me?

Nirali Bhatia, cyber psychologist and psychotherapist, cites the example of a client who was the victim of a deepnude. “She received a message on social media in which the perpetrator sent her a picture with her face morphed onto a blurred, nude body. They threatened to release her ‘sex tape’, and when they sent her the link, she did not open it. Instead of blocking them, she deleted the app. Now, she is traumatised and fearful of society. Her deleting the app [and not being able to use it again] is a case of phobia. It is similar to how a person with aquaphobia believes they will drown the moment they step into water, despite knowing it is irrational, or how people who are scared of spiders will spot one from miles away and feel as though spiders are crawling all over their body.”

Nirali Bhatia

Comparing these with cases where someone’s real sex tapes or nude pictures are shared online, Bhatia tells us that recovery tends to be easier for those victims than for victims of deepfakes and deepnudes. “In the latter case, it’s a question of ‘Why me?’ Those whose [real] sex tapes or nudes go viral might know that they were in some way responsible for it. They accept it. For them, the question is ‘What next?’” she says, adding that the trauma is greater when a person becomes a victim through no fault of their own, and that the journey from ‘Why me?’ to ‘What next?’ takes a long time. But with the right kind of support and reaction, recovery can be quicker. “In most cases, having even one person who tells you they believe you, and who is unafraid to stand by you in public, can make all the difference.

These could be parents, caregivers or friends. The shock factor will be there. But in a generation where we all understand so little about technology, always remember to give people you know the benefit of the doubt. React calmly and don’t be afraid of seeking help, whether it’s for the victim, the caregivers or friends,” she advises.

Boredom maketh the bully

Despite the frequent cases of misuse that Bhatia encounters, she is positive that technology can never be entirely bad. “Boredom is one of the major factors that creates a bully; in this case, the motive can be either entertainment or money. But there is no denying that AI and deepfakes have their advantages as well. Saying they are here only for the worse would be like the early days when people were unsure about the introduction of computers,” she says.

Manoj Omre

Visual/AI designer Manoj Omre offers a peek into the beneficial side of the technology. “Earlier, making deepfakes would take a lot of time. Now, you can generate a video of nearly 90 per cent accuracy within 20 minutes, and an image within a few clicks! This can be useful, especially in filmmaking and editing. If you want to screen-test, say, Amitabh Bachchan as a lawyer or a Punjabi character, you can quickly generate that look using AI. If it doesn’t fit, move on to the next actor. This saves the time and effort of manually creating the look. The same applies to stunt doubles: instead of hiring people to perform a stunt sequence for an actor, filmmakers can simply create deepfakes. It saves money and time.”
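To make the face-substitution idea Omre describes concrete, here is a minimal, illustrative sketch in Python using classical OpenCV building blocks, not a full deep-learning deepfake pipeline; the file names actor.jpg and screen_test.jpg are placeholders, not real assets.

```python
import cv2
import numpy as np

# Haar cascade face detector bundled with opencv-python
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(img):
    """Return (x, y, w, h) of the first face detected in the image."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found")
    return faces[0]

source = cv2.imread("actor.jpg")        # placeholder: face to paste in
target = cv2.imread("screen_test.jpg")  # placeholder: frame to paste onto

sx, sy, sw, sh = first_face(source)
tx, ty, tw, th = first_face(target)

# Resize the source face to fit the target's face region
face_patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))

# Poisson (seamless) blending matches the pasted patch to the target's
# lighting, which is one reason "check the edges" advice often fails
mask = 255 * np.ones(face_patch.shape, face_patch.dtype)
centre = (tx + tw // 2, ty + th // 2)
result = cv2.seamlessClone(face_patch, target, mask, centre, cv2.NORMAL_CLONE)

cv2.imwrite("swapped.jpg", result)
```

Production deepfake tools replace this crude cut-and-paste with a neural network trained on many frames of both faces, which is what produces the near-seamless, fast results Omre describes.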

While there are currently no laws governing who can or cannot use the technology, he says prominent people can opt for copyright protection, as Bachchan recently did for his voice. “Copyrighting is a good solution until law enforcers figure out a way to deal with the misuse. As for those experimenting with the technology, they should be made aware of the risks and keep good intentions as their priority,” he signs off.
