DeepTrust: Censored

Misinformation is often perceived as a boring Cold War relic, so I work hard to get people’s attention. In May, I got more than my audience’s attention. I got the attention of YouTube. In fact, YouTube censored my live presentation on the threat of deepfakes during the Ethereal Summit. How did I find myself squarely in the middle of the deepfake and content moderation controversies currently ensnaring the social media platforms?

I’ve learned that a picture of a couple embracing in bed (with no actual nudity) is just the ticket to capture people’s interest, especially during full-day virtual conferences. I use a REAL stock photo (purchased from Shutterstock) and pose the question, “What would you do if this photo showed up on your Facebook/Instagram/Twitter feed with your mother’s/sister’s/daughter’s/wife’s face?” I aim to push the audience to realize that “entertainment” can have a tangible impact on the people in their lives. This message is particularly important, given that 96% of deepfakes are non-consensual pornography.

During the Ethereal conference, it seems this image may have caused my YouTube livestream to be cut off. A simple message appeared where my presentation should have been: “This video has been removed for violating YouTube’s Terms of Service.” No official reason was given, but most likely the combination of the word “deepfake” and the image’s color palette triggered automated filters. Automated content moderation is incredibly difficult. As a private company, YouTube has the right to remove any content it deems inappropriate, and given the volume of content that must be reviewed, it is inevitable that automated moderation will frequently get it wrong. It may even be a good business decision to err on the side of overzealousness.

Unfortunately for me, it meant DeepTrust Alliance’s message was shut down. I lost an important platform for educating the public on the risks of deepfakes and potential solutions. Ironic, huh?  In this instance, though, it was mostly an inconvenience; the conference continued to livestream on its own website. Yet for human rights activists, labor organizers and whistleblowers, especially outside the United States, social media may offer the only platform to disseminate unpopular or politically inconvenient information in authoritarian societies. The missteps of automated content moderation can silence important voices in major political and economic events. One particularly poignant example is the role Facebook played in shutting down the posts of human rights activists calling out the violence against the Rohingya in 2017 and 2018. 

Google, Twitter, TikTok and many others have created new policies, made significant investments and rolled out innovative technology in an attempt to solve these problems at scale. Regrettably, minor snafus like the one I experienced still happen in the midst of large corporate conferences; you can only imagine what happens to marginalized individuals and their messages. Most existing platforms rely on keywords, color palettes, image recognition or human moderators to identify problematic, unsuitable or harmful content. Content moderation policies are a source of debate and contention, and a topic for a much more extensive post - but even when those policies are fair and well-calibrated, existing tools and technology make it easy for content to be missed or over-moderated amid the extremely nuanced communication of social media.
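To make that failure mode concrete, here is a minimal, hypothetical sketch of how a naive automated filter might combine a keyword match with a crude skin-tone color heuristic. The names, thresholds and logic are illustrative assumptions of mine, not any platform’s actual system.

```python
# Hypothetical sketch of a naive moderation filter, assuming a platform
# combines keyword matching with a skin-tone color heuristic.
# Names and thresholds are illustrative, not any platform's real pipeline.

from dataclasses import dataclass

# Illustrative blocklist: the word "deepfake" alone is treated as a risk signal.
RISKY_KEYWORDS = {"deepfake", "nude", "explicit"}


@dataclass
class Frame:
    """A video frame summarized by the fraction of pixels in a skin-tone range."""
    skin_tone_ratio: float  # 0.0 - 1.0


def keyword_risk(title: str, description: str) -> bool:
    """Flag if any risky keyword appears in the metadata, with no context awareness."""
    text = f"{title} {description}".lower()
    return any(word in text for word in RISKY_KEYWORDS)


def palette_risk(frames: list[Frame], threshold: float = 0.4) -> bool:
    """Flag if many frames are dominated by skin-tone colors. A clothed couple
    embracing in bed can easily exceed this, producing a false positive."""
    flagged = sum(1 for f in frames if f.skin_tone_ratio > threshold)
    return flagged / max(len(frames), 1) > 0.5


def should_remove(title: str, description: str, frames: list[Frame]) -> bool:
    """Remove when both weak signals co-occur, the combination described above."""
    return keyword_risk(title, description) and palette_risk(frames)


if __name__ == "__main__":
    # An educational talk about deepfakes with a stock photo of a couple
    # gets removed, even though nothing in it violates the rules.
    talk_frames = [Frame(skin_tone_ratio=r) for r in (0.55, 0.6, 0.5, 0.45, 0.2)]
    print(should_remove("The threat of deepfakes", "Ethereal Summit talk", talk_frames))  # True
```

Even in this toy version, two weak signals, neither of which understands context or intent, compound into a confident removal decision.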

Determining how to strike the right balance between limiting harmful media and still permitting difficult stories is a matter of critical importance for our entire society. This includes not just politicians, governments, civil society and activists, but every economic actor who relies on social media and its advertising platforms to get their message out. It is clear that information consumers and providers need more tools and capabilities to determine what is real and what is manipulated. These tools are fundamental to humanity’s ability to determine truth in the digital age.

One of the core objectives of the DeepTrust Alliance is to develop solutions that tackle the problems of manipulated media. Over the course of 2019’s FixFake Symposia, our community identified and discussed over 150 different solutions and proposals to fight disinformation. We invite you to engage, add your voice to the discussion, and learn more about potential solutions here.

 
