Year in Review: 2021 in Disinformation and Deepfakes

Disinformation, or deliberately misleading or biased information, is nothing new. But 2021 demonstrated just how fast and far it can travel: from online conspiracy theories to false claims about the pandemic and the presidential election, and everything in between, disinformation is as dangerous as ever. Deepfakes, or hyper-convincing AI-manipulated media, represent a particularly powerful subset of this problem and often only add fuel to the fire.


This year, deepfakes continued to disproportionately and violently impact women in increasingly sophisticated schemes. For example, criminals in India extorted women on Instagram by threatening to release deepfake pornography to their friends and families unless they paid; a new “nudifying” website, similar to the now-deleted DeepNude app, emerged to digitally undress women in hundreds of thousands of photos daily; and incidents of deepfake sexual harassment continued to test the courts’ ability to respond.


These ongoing threats to safety (and trust) online also caught the attention of a variety of companies over the past 12 months. Meta (formerly Facebook) reported the development of a deepfake identification algorithm and, six months later, announced support for the UK Revenge Porn Helpline’s StopNCII.org to end the sharing of revenge porn on the platform. Additionally, Microsoft and Adobe rallied behind controlled-capture image verification startup Truepic with $26 million in funding. Of course, these are just a few of the many innovators developing real solutions to disinformation and malicious deepfakes.


The government, regulatory, and public sectors have also stepped up in new ways since 2020. For instance, the US Army developed its own deepfake detection tool, DefakeHop, which it believes is far more “robust, scalable, and portable” than traditional detection methods; the US Senate Homeland Security Committee formed a Deepfake Task Force to curb the dissemination and impact of malicious deepfakes; and the FBI issued a stark warning that “foreign actors are already using synthetic content in influence campaigns” and to “attack the private sector.” We expect interest in, and collaboration between, the private and public sectors to only increase in the new year and beyond.


However, it’s important to note that deepfake technology presents positive use cases, too. This year, we’ve seen deepfakes train employees and teach languages, unveil the latest trends in high fashion, flawlessly dub iconic films to reach new audiences, and bring loved ones back to life through animation. The possibilities really are endless.


There’s no limit to human creativity, and only bad behavior can hold us back from realizing the full potential of this technology. With thoughtful policy intervention and united stakeholders, DeepTrust Alliance will continue working to counter that bad behavior in 2022.

