InvestigateTV – The technology used to create realistic videos of an internationally acclaimed actor, a former US president, and even a world leader in the midst of violent conflict is being used on ordinary Americans.
The reason: a familiar face makes it easy to trick the people who know you into falling for a scam.
These bizarre counterfeits are created with advanced computing designed to mimic the human brain.
In the artificial intelligence (AI) community, these videos are known as “deepfakes,” a term used for audio, images or video that have been manipulated to appear real.
Deepfakes use a form of AI called “deep learning,” a technology that attempts to mimic the way humans think and learn; that is where the “deep” in “deepfake” comes from.
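At its simplest, the “learning” in deep learning means a model adjusting internal numbers to reduce its error on example data. The toy sketch below shows that idea with a single artificial neuron; real deepfake models chain together millions of such units, and all the numbers here are purely illustrative.

```python
# A minimal sketch of the "learning" in deep learning: one artificial
# neuron repeatedly adjusts its weight to shrink its error on examples.
# Deepfake generators stack millions of these units, but the core loop
# of guess -> measure error -> adjust is the same.

def train_neuron(examples, steps=1000, lr=0.01):
    """Fit y = w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y          # how wrong the neuron is right now
            w -= lr * 2 * error * x    # nudge the weight to reduce the error
    return w

# Teach the neuron that outputs should be double the inputs.
weight = train_neuron([(1, 2), (2, 4), (3, 6)])
print(round(weight, 2))  # converges close to 2.0
```

With enough layers of such units and enough example footage of a face, the same adjust-to-reduce-error loop can learn to reproduce that face convincingly.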
Recently, TikTok user Chris Ume went viral with his deepfakes of actor Tom Cruise. In 2018, director Jordan Peele and BuzzFeed circulated a deepfake of former President Barack Obama to warn people about advances in the technology and how its misuse could spread misinformation.
When it comes to tracking these potentially problematic videos, one of the few organizations with deepfake data is Sensity, an Amsterdam-based company that uses deep learning and computational technologies to detect deepfakes.
According to Sensity, deepfakes surfaced in late 2017 and online numbers grew rapidly. In 2018, the company tracked over 7,000 deepfake videos online. In December 2020, their report shows the number skyrocketed to over 85,000 deepfakes online.
Sensity’s data only tracks incidents involving public figures; it does not include incidents targeting private individuals. But scammers impersonate more than just celebrities and politicians.
Hacked and deepfaked
Kyle Hawkins knows this all too well. He unwittingly entered the world of deepfakes when his social media accounts were hacked in February 2022.
Hawkins is an insurance agent specializing in health insurance and retirement planning in Richmond, Virginia.
He said that one day he opened Instagram and saw a message from an old friend. Hawkins assumed the friend was seeking his services and looking for help.
“I got a message via Instagram from someone I was friends with there and I assumed the same thing had happened to them, but I had no idea,” Hawkins said.
Turns out that friend was hacked. When Hawkins clicked on a link in the post, he said he quickly lost control of his account.
“I didn’t think anything about it,” Hawkins said. “And then I was able to kind of get Instagram that morning, and then by the time I checked, it was lunchtime, it was all gone.”
Hawkins said his Instagram and linked Facebook account were hacked, opening up his followers to similar attacks.
This is where he said the deepfake started. Hawkins said a 16-second deepfake video was sent to his friends and followers encouraging them to invest in bitcoin mining. He confirmed that the video looks and sounds like him.
“It looks real, but they send it to people. They’ve done others, I think,” Hawkins said.
He said the video has been posted to Instagram stories every day since the initial hack. In it, he said the “fake Hawkins” shared how much money he made from bitcoin. The thing is, Hawkins said he has never invested in cryptocurrency.
“I don’t have Bitcoin, so I didn’t do that,” Hawkins said.
Hawkins said he contacted both social media platforms in hopes of having the hijacked accounts shut down, but his Instagram and Facebook accounts are still active.
Expansion and regulation of deepfakes
Ben Coleman, CEO of Reality Defender, works with organizations and government agencies to analyze audio, images and video to protect the privacy of individuals, as well as to fight against fraud, inappropriate content and to seek a solution to the rise of deepfakes.
“The face swaps are deepfakes,” Coleman said. “Some of them are funny, and some of them are used for fraud.”
He said the videos could also be potentially dangerous.
On March 16, during the Russian military action in Ukraine, a deepfake surfaced on Ukrainian President Volodymyr Zelensky’s social media. The video showed Zelensky giving a speech, but his head appeared pixelated and his voice was deeper than usual. After the video was branded a deepfake, Meta – Facebook’s parent company – quickly removed it from all of its platforms and posted a statement saying that the company “promptly reviewed and removed this video for violating our policy against misleading manipulated media, and notified our peers on other platforms.”
This wasn’t the first time Meta had tackled deepfakes. Ahead of the 2020 presidential election, the company banned deepfakes and other manipulated videos citing dangerous tactics that could mislead the public.
In a 2020 Facebook press release, the company said it was working on the issue and would “strengthen their policy against deceptive manipulated videos.” Facebook’s manipulated media policy indicates that videos edited to mislead people, or videos that use AI to appear authentic, will be removed unless they are parody or satire.
There are no public figures on how many deepfake videos Facebook has removed, but in a statement the company said it was “working with others in this area to find solutions with real impact.”
In September 2019, the company launched the “Deepfake Detection Challenge,” which asked experts in the field to help create open-source tools to detect deepfakes.
Meta has also partnered with outlets like Reuters to help identify deepfakes and offer free online training on how to identify manipulated visuals.
Ben Coleman said that while companies and social media organizations are trying to tackle the problem, significant hurdles remain.
“A lot of times these companies have big challenges because they have human moderators, and human moderators just can’t tell the difference between what’s real and what’s fake anymore,” Coleman said.
Senator Rob Portman (R-OH) introduced a bill in Congress last year asking the Department of Homeland Security and the White House Office of Science and Technology Policy to create a temporary national deepfake provenance task force. The bill was referred to the Homeland Security and Governmental Affairs Committee and was “ordered to be reported favorably without amendment.”
Coleman said there are no current policies in the United States that require companies to report synthetic and fake media in the same way they currently report underage nudity and violence.
“For the majority, [companies are] asking users to report things,” Coleman said. “They expect users to be experts, and if they see something, they have to say something, and then it gets sent to a team of human moderators.”
Public and private deepfake solutions
According to Coleman, Reality Defender is currently working on creating a browser extension and website to help consumers spot deepfakes from their personal computers.
But Reality Defender is not alone in the fight against deepfakes.
At the University of Virginia, a team of third-year students is developing a website for the public, where one day consumers could upload questionable videos and photos to check if they’re fake.
Two of these students, Ahmed Hussain and Sam Buxbaum, are studying computer science and physics. The pair won first prize in the Innovative Discovery Science Platform (iDISPLA) competition. Their proposal, which targeted the fight against deepfakes using AI, came about after the duo saw a rise in deepfake videos surfacing on the internet.
“It’s certainly possible that deepfakes over the next five years will be nearly indistinguishable from real people in some cases,” Hussain said. “They’re getting to the point where it’s quite difficult to tell them apart.”
Hussain said he believes the solution is not to fight fire with fire, but to use blockchain, a system of record-keeping designed to make information difficult to alter or forge.
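The core idea behind blockchain provenance is that each record stores a cryptographic fingerprint (a hash) of the record before it, so altering any entry breaks every later link. The sketch below illustrates that general principle in a few lines of Python; it is a generic illustration, not the UVA team’s actual design, and the record contents are invented for the example.

```python
import hashlib
import json

# Generic sketch of a hash chain, the idea underlying blockchain
# provenance: each block embeds the hash of the previous block, so
# tampering with any earlier record invalidates the chain.
# Illustrative only; not any specific product's implementation.

def make_block(data, prev_hash):
    """Create a record that commits to its contents and its predecessor."""
    block = {"data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify_chain(chain):
    """Return True only if every block still matches its stored hash
    and still points at an unaltered predecessor."""
    for prev, block in zip(chain, chain[1:]):
        expected = {"data": block["data"], "prev": prev["hash"]}
        digest = hashlib.sha256(
            json.dumps(expected, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev["hash"] or block["hash"] != digest:
            return False
    return True

chain = [make_block("video registered by creator", "genesis")]
chain.append(make_block("video shared on platform A", chain[-1]["hash"]))
chain.append(make_block("video re-shared on platform B", chain[-1]["hash"]))
print(verify_chain(chain))  # intact chain verifies

chain[1]["data"] = "tampered video"  # an attacker edits the history...
print(verify_chain(chain))  # ...and the broken hash link exposes it
```

Applied to media, a scheme like this could record a video’s origin when it is first published, so a later copy that does not match the registered fingerprint stands out as altered.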
Buxbaum said the website would allow people to upload a video, and an algorithm would determine whether the video is fake.
“Some of the things that are different between a deepfake and a real video are only detectable by a computer, but they still make you feel weird when you watch it,” Buxbaum said.
Protect your account and detect deepfakes
As online solutions and lawmakers catch up with technology, Coleman suggested several steps to prevent a hacker from using your photos and videos to create deepfakes:
- Secure all your social accounts and have a different password for each
- Enable two-factor authentication
- If a video seems off, report it to the platform you’re using, pick up your phone, and call the person
When it comes to spotting deepfakes, researchers from the Massachusetts Institute of Technology suggest watching the facial features in the video:
- Watch how the eyes and lips move
- Look to see if the skin is too smooth or too wrinkled
- Check for abnormal shadows in video or photo
- Don’t click on any link associated with a video that makes you feel uncomfortable
Kyle Hawkins said his experience made him wary of social media and this new style of cyber scam.
“Just be very careful these days about anything you put out there, post to, reply to, or click on.”
Copyright 2022 Gray Media Group, Inc. All rights reserved.