New technology can provide ample opportunities to improve our marketing and branding. Just as often, though, it seems to spawn new threats. Deepfakes – fake videos generated by machine learning algorithms – can now convincingly imitate real people, from CEOs to politicians to celebrities. The software takes real videos and uses them to create convincing fakes, allowing anyone to record a video using someone else’s face and voice.
Where did Deepfakes come from?
Deepfakes originally appeared outside of mainstream websites, where users employed the technology to insert ex-girlfriends and celebrities into pornography. Already, there are clear implications for how this could damage someone’s reputation.
While some internet users put the technology to more harmless use, such as inserting the face of Nicolas Cage into movies and news clips, comedian and filmmaker Jordan Peele (director of the critically acclaimed horror film Get Out) teamed up with Buzzfeed to demonstrate just how dangerous videos like these could be.
The resulting video supposedly shows Obama mocking Trump, but in reality, Obama’s face and voice were superimposed onto footage of Peele to create a convincing fake. Watch below, but bear in mind it contains some foul language:
What do Deepfakes mean for everyone else?
Deepfakes may pose a greater threat to public figures than fake news or Photoshopped images ever have. Because the videos appear so realistic, fakes are difficult to spot – a problem which grows worse by the day as the technology develops. This means more people will believe the videos are real.
Worse still, the prevalence of fake videos could undermine public trust in every source of information, because we could never tell what is real anymore.
While Deepfakes require some technical skill at the time of this writing, they are getting easier to make by the day, as demonstrated by a growing number of apps built to create them. Soon, ordinary people will have the ability to convincingly fake someone’s face and voice on video.
That means managers, public speakers, and employees of controversial companies. It means anyone who makes decisions that leave someone unhappy in the process. Anyone who once said the wrong thing to the wrong person. Even anyone who has ever been romantically involved with another person.
If someone doesn’t like how events played out, or disagrees with your choices, they now have the ability to create a new reality using doctored video.
What are people doing about Deepfakes? And what can you do?
Currently, the Department of Defense is developing tools to combat the threat of Deepfakes. Additionally, tech companies like YouTube, Reddit, and Facebook have banned Deepfakes from their platforms and claim to be working to protect their community members. However, none have offered concrete explanations of how they plan to do so.
Unfortunately, there isn’t much you can do directly to defend yourself against Deepfakes, aside from making the extra effort to consider how your decisions affect those around you. Technologies to counter Deepfakes will slowly roll out, but for now, any public figure should be mindful and vigilant about the types of attacks this technology allows.
Deepfakes also create a tricky situation regarding public presence. On one hand, the more videos of your face exist online, the easier it becomes for someone to build a realistic recreation. However, this does not mean we should all hide from the web; just the opposite, in fact. If we build a strong online presence where we consistently establish our personality and values on video, we can more easily discredit attacks made by those who wish to fraudulently tarnish our reputations.
If you wish to learn more about using video to build your brand online, follow the link below.
– Drew Estes, August 2018 –