In your social circle, friends circulating an obviously Photoshopped image of your face swapped onto an action movie star's body is pretty harmless. But what if someone could *digitally* strip you naked?
The latter is real, and very scary. Deepfake tools have become so accessible, and their misuse so malicious, that someone could take a non-nude photo of you, digitally remove your clothes, and turn that innocent, ugly Christmas sweater into an explicit image.
The website publishing these altered photos, which Wired magazine did not name to avoid amplification, had more than 50 million visits in the first 10 months of 2021.
There is no telling what professional, legal, and mental health ramifications these images carry, or what corners of the web they lurk in.
This is a problem because reporters are taught to “trust, but verify,” meaning even the simplest fact or piece of information needs to be vetted.
Deepfakes, at their core, are media manipulation. But there is a significant difference between a humorous or unflattering Photoshop edit and the more sophisticated fabrications the National Security Agency (NSA) calls a “threat to national security,” especially when you consider that mass-producing these fakes requires little technical skill.
Beyond the obvious disinformation and malicious intent, a 2019 report from the Brookings Institution notes that deepfakes breed uncertainty, even making people hesitant to share accurate information. Moreover, a bad actor could deflect an accusation by claiming the source material is faked.
From a supposed deepfake confession video in Myanmar to an AI-generated recreation of the late Anthony Bourdain’s voice, deepfakes masquerade as fact, but they are fiction. That fiction is now so convincing that it is drawing the attention of Hollywood, where filmmakers consider it a “goldmine.”
Thankfully, there’s tech to combat this. Companies including Amazon Web Services, Microsoft, and Facebook helped spur the development of deepfake detection tools last year. Journalists should keep these tools in mind when vetting user-generated content (UGC), especially if it’s used in the reporting process. TV newsrooms, for example, often pay for breaking news footage if they’re unable to get to a scene.
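One cheap first-pass check when vetting UGC is simply inspecting an image’s metadata before it enters the reporting workflow. The sketch below is a minimal illustration, not a detection tool: it assumes Python with the Pillow library installed, and the file name `submitted_photo.jpg` is hypothetical. Missing or inconsistent EXIF data does not prove manipulation, but it is a cue to dig deeper with dedicated verification tools.

```python
# Minimal sketch: read an image's EXIF metadata as a first-pass
# verification step for user-generated content. Assumes Pillow is
# installed (pip install Pillow). Absence of metadata is common in
# AI-generated or re-encoded images; treat it as a flag for closer
# review, not as proof of manipulation.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> dict:
    """Return human-readable EXIF tags for an image, or an empty dict."""
    image = Image.open(path)
    exif = image.getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = summarize_exif("submitted_photo.jpg")  # hypothetical file name
    if not tags:
        print("No EXIF metadata found; scrutinize this image further.")
    else:
        for name, value in tags.items():
            print(f"{name}: {value}")
```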
Journalists need to be mindful of deepfakes in their newsgathering process, especially in the context of elections. As the electoral process becomes more hypersensitive to outside factors, bad actors could use deepfakes to sow more doubt in an already contentious election cycle.
In the end, if something sounds too good to be true, it probably is. Deepfakes are no exception.
This blog was co-authored by Anthony Cave, the Craig Newmark Journalist Scholar at the Global Cyber Alliance, and Thomas Jung, a former GCA Craig Newmark Veteran Scholar.