Sunday, March 18, 2018

Just As Everyone's Starting To Worry About 'Deepfake' Porn Videos, SESTA Will Make The Problem Worse

Over the last few months, unless you've been hiding under a tech news rock, you've probably heard something about the growing concern over so-called "deepfakes": digitally altered videos, usually pornographic, with famous people's faces edited in. Last month, Reddit banned its deepfakes subreddit. You can't throw a stone without hitting a mainstream media outlet freaking out about the threat of deepfakes. And, yes, politicians are getting into the game, warning that the technology will be used to create fake scandals or influence elections.

But at the same time, many of the same politicians suddenly concerned about deepfakes are still pushing forward with SESTA. As Charles Duan notes, however, if SESTA becomes law, it will make it much more difficult for platforms to block or filter deepfakes:

Under it, websites that have “knowledge” that some posted material relates to illegal sex-trafficking can be deemed legally responsible for that material. What it means for a website to have “knowledge” remains an open question, especially if the site uses automatic or artificial intelligence systems to review user posts. Therefore, this language opens the door to a potentially wide range of lawsuits and prosecution.

The worst case scenario is that, to avoid having “knowledge” of sex trafficking, Internet services will stop content-moderation entirely. This scenario, which some experts call the “moderator’s dilemma,” would most likely affect smaller websites—including message boards and forums that serve special interests—that can’t afford the advanced filtering systems or armies of content editors that the big sites use. These smaller sites have already faced difficult problems with content moderation, and would be even less likely to spend resources on cleaning up after their users if doing so might lead to a lawsuit.

Duan points out that the problem goes beyond the moderator's dilemma. Even if sites do decide to moderate, Congress is making it clear that whatever moral panic excites it next may lead to new laws demanding action. And if platforms are desperately chasing the last problem, they have even less time to deal with the next one.

One of the good things about CDA 230 is that it allows platforms to experiment with different ways of moderating content. If they fail (as they often do!), they hear about it from their users, from the press, or from politicians. In short, they're allowed to experiment, but they have incentives to find the right balance. If Congress enacts carve-outs that make any failure to properly filter a crime, that experimentation becomes almost impossible, and the incentive is simply to avoid doing anything at all. That's not at all healthy.

