A report from The Next Web points out that the tech used is familiar: it uses the GAN-based Reface AI, akin to the Reflect face-swapping application. While the ramifications of the tech appear a little less serious – just a bit of fun – it does lower the barrier to entry for anyone wanting to experiment with the trickery.
Deepfake videos are one of many tech boogeymen promising to haunt humanity in the new decade. In fact, they're so scary that even the Freddy Krueger of the tech world, Facebook, is dropping the ban hammer.

So far, deepfake videos have been used to make it appear as if people are saying or – in the worst cases – doing things they haven't, but the tech has been mostly limited to those with the requisite know-how to use it. However, a new app called Doublicat will enable the average smartphone owner to superimpose their own face onto any GIF they can lay their paws on, in just a matter of seconds.

The app, available for both iOS and Android, requires users to simply take a selfie and then choose a GIF in order to magically replace a face in the original image with their own. You can make yourself Chris Pratt's 'surprised' face, Jennifer Lawrence's 'OK' face, or even Leo toasting with a martini glass.

Given all this, what could plausibly be done to minimise deepfake apps' misuse? One approach could involve the creation of an app safety framework for developers, including measures such as threat assessments, limited access without user authentication, or even moratoria on releasing new capabilities that lack harm-mitigation strategies. If such a framework were enforced by app stores and other stakeholders critical to an app's success, it could help create a safety standard for deepfake apps that all developers would have to follow in order to be published.
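To make the framework idea concrete, a checklist like the one described could be expressed as a machine-checkable policy that app stores run at submission time. The sketch below is purely illustrative Python: the measure names and the `check_compliance` helper are hypothetical, not part of any existing standard.

```python
# Hypothetical app-store safety checklist for deepfake apps.
# Measure names mirror the ones suggested above; this is not a real standard.

REQUIRED_MEASURES = {
    "threat_assessment",    # documented assessment of likely misuse
    "user_authentication",  # no anonymous access to generation features
    "harm_mitigation_plan", # e.g. watermarking, content moderation
}

def check_compliance(declared_measures):
    """Return the set of required safety measures an app has not declared."""
    return REQUIRED_MEASURES - set(declared_measures)

# Example: an app that authenticates users but skips the other measures.
missing = check_compliance(["user_authentication"])
print(sorted(missing))  # the store could reject the app until these are addressed
```

The point of the sketch is that enforcement becomes mechanical: a submission declaring all required measures yields an empty `missing` set and can proceed to review.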
Preemptively detecting and blocking malicious content would also prove difficult, given the wide range of possible harms that could be wrought through this budding technology. Capturing the near-limitless variety of malicious uses is currently impossible to automate, while manual moderation would be unfeasible given the volume of content being generated online.
As the technology becomes more powerful and pre-training less restrictive, developers might see a competitive advantage in opening up their apps to user-uploaded content in an "off-rails" approach.

While developers' readiness to address misuse of their apps is promising, deploying these security features poses several challenges. One is how developers roll them out in the first place. For detection tools to be effective at stopping malicious deepfakes, they would need to be widely adopted by the social media platforms and messaging apps – but no social media platform currently has deepfake detection in its media upload pipeline, and implementing detection on messaging apps like WhatsApp or Telegram would require monitoring users' conversations, a significant change to these services' current privacy-focused model.

Another is how reliable these security measures would be. A watermark would notify viewers that a video is fake, but developers might be reluctant to place one where it would obstruct the image entirely, meaning it could simply be cropped out of frame.
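The cropping weakness is easy to demonstrate. The following toy sketch (pure Python, our own illustrative example rather than any app's actual watermarking code) blends a semi-transparent white band into only the bottom rows of a grayscale image, represented as a list of rows; trimming those rows off yields frames indistinguishable from the unmarked original.

```python
def add_watermark(image, depth=1, strength=0.5):
    """Blend a flat white watermark band into the bottom `depth` rows.

    `image` is a list of rows of grayscale values in [0, 255].
    Returns a new image; the original is left untouched.
    """
    marked = [row[:] for row in image]
    for row in marked[-depth:]:
        for x in range(len(row)):
            # Linear blend toward white, like a semi-transparent overlay.
            row[x] = int((1 - strength) * row[x] + strength * 255)
    return marked

image = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
marked = add_watermark(image, depth=1, strength=0.5)

# Cropping off the watermarked band restores the unmarked frames exactly.
cropped = marked[:-1]
print(cropped == image[:-1])  # True
```

This is why a watermark confined to a corner or edge offers weak protection: only a mark that covers (and thus partially obstructs) the whole frame would survive cropping.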
Glimpses of this misuse are already visible. One of this article's co-authors recently discovered a "deepfake pornography bot" on the messaging app Telegram, which allowed users to upload pictures of clothed women and "strip" them by generating deepfake nude images. Over 100,000 deepfake images of women and minors were shared on Telegram channels counting over 100,000 members.

Fears that deepfake apps could fuel the problem of political disinformation and deceptive content online were also sparked in April 2020, when Donald Trump retweeted a crudely manipulated video of Joe Biden lolling his tongue and twitching his eyebrows. Although the video wasn't realistic, similar scenarios in the future may be more convincing. Both examples point to a worrying future where deepfake apps could create harmful fakes on a massive scale, threatening anyone whose images are online.

Many deepfake apps address these concerns by being "on rails", or restricted: users can only swap faces into a selection of scenes from pre-approved films or shows. But these restrictions are often the outcome of technological limitations rather than a deliberate security choice. In order to quickly generate high-quality face-swaps from one or a few user images, apps "pre-train" their generative models on a number of popular movie scenes, such as the twins from The Shining or Sean Bean's "one does not simply walk into Mordor" meme from The Lord of the Rings.