Somebody has to get rid of it. Live streams of mass shootings, videos of children being beaten and graphic images of bestiality are all too easy to find on the internet, even though most people do not want to look at or even think about such stuff. It is little wonder that social media platforms employ armies of people to review and remove this material from their networks.
Maya Amerson is one of these people. She began working as a content moderator for Reddit in 2018, reviewing violent and disturbing content and identifying posts that violated the company’s terms of use. After three years on the job, she began suffering from panic attacks and symptoms of post-traumatic stress disorder. She repeatedly sought support from her employer but didn’t get it. Now she is taking Reddit to court.
Currently pending before the San Francisco Superior Court, Amerson’s lawsuit alleges that Reddit’s management ignored her requests to move to a different position with less exposure to this kind of material, even after she returned from a 10-week medical leave following her PTSD diagnosis. The lawsuit also claims that Amerson’s supervisors belittled her after she came back to the office, leading to her resignation in September 2022. She filed the suit three months later. When we asked Reddit for its side of the story, we were told that we could attribute the following to a company spokesperson: “At Reddit, our employees’ well-being is a top priority and we offer a robust set of resources for content moderators.”
The allegations in Amerson’s lawsuit are specific to Reddit, a social media giant valued at $6.6 billion that has garnered praise for its unique and decentralized approach to content moderation. But they tell a story that goes well beyond Reddit. A growing number of legal actions are taking aim at tech platforms over their treatment of content moderators and the psychological hazards of the work. These cases highlight the tension inherent in a job that cannot yet be automated and is routinely exported to low-wage contractors overseas.
They also raise an essential and unresolved set of questions: Is vicarious trauma inevitable in the work of content moderation? Or can the tech giants that hire scores of moderators around the world do more to meet the psychological needs of this workforce? If so, are any of the major platforms doing it right?
A brief history of the “front page of the internet”
Founded in 2005 as the aspirational “front page of the internet,” Reddit allows people to create discussion groups, known as subreddits, on topics that interest them. Each of the over 100,000 subreddits across the platform has its own ethos and culture. And each subreddit has its own set of rules that are enforced in addition to the company’s own content policy.
What makes all this work? At the heart of Reddit lies an army of thousands of volunteers who carry out the grueling work of content moderation. Known on the platform as mods, these volunteers oversee each subreddit, removing content and banning accounts that violate their established community rules and norms. Some of the platform’s largest subreddits, like r/politics, have more than 1,000 volunteer mods. The company also employs a paid staff of moderators, like Amerson, to ensure that the content posted on the site does not run afoul of the law or of the company’s own content policy.
Across Reddit, it is these volunteer mods who conduct most of the content moderation on the site. According to Reddit’s most recent transparency report, volunteer moderators were behind 59% of all content removals in 2021, while paid content moderators handled most other removals.
Reddit characterizes its moderation approach as “akin to a democracy, wherein everyone has the ability to vote and self-organize, follow a set of common rules, and ultimately share some responsibility for how the platform works” and maintains that its bottom-up system “continues to be the most scalable solution we’ve seen to the challenges of moderating content online.”
In doing this work, however, volunteer mods provide essential quality control services for Reddit without getting paid a dime. A 2022 study from Northwestern University attempted to quantify the value of the labor performed by Reddit’s estimated 21,500 volunteer moderators and concluded that it is worth at least $3.4 million to the company annually.
Not only do volunteer moderators work for free, but many are also unaware of the monetary value of the labor they perform for the company. “We wanted to introduce some transparency into this transaction to show that volunteer moderators are doing a very valuable and complex job for Reddit,” said Hanlin Li, a co-author of the study.
Volunteer moderators often describe their work as a “labor of love,” Li explained, but they also have been forthcoming about the need for more institutional support to combat hate speech and harassment. “Over the years we’ve seen a lot of protests by moderators against Reddit, essentially because they thought that they were not supported adequately to do this volunteer job,” she explained.
This has been a longstanding tension at Reddit, which developed a reputation early on for taking a laissez-faire approach to content moderation and hosting subreddits rife with bigotry and hate speech. The tension came to a head with a misogynistic incel community that formed on the platform in 2013 and became ground zero for the 2014 Gamergate online harassment campaign that targeted women in the video game industry. The following year, when Reddit’s then-CEO Ellen Pao announced that the platform would ban five subreddits for harassment, she was inundated with online abuse and resigned shortly afterward. The incel subreddit itself wasn’t banned until 2017.
Despite its distinctive reliance on a volunteer-driven content moderation model, the site has flown under the radar of researchers and reporters, drawing less scrutiny than other social networks. But Amerson’s lawsuit could change that.
Lawsuits target tech giants over moderator trauma, mistreatment
Maya Amerson’s case against Reddit is a reminder that the company, despite its unique approach to moderation, is not immune to the same kinds of allegations of worker harm that have formed the basis of lawsuits against the world’s largest tech platforms.
That includes TikTok, Facebook and YouTube. TikTok was sued by two former content moderators last spring, who alleged that the company failed to create a safe and supportive working environment as they reviewed disturbing material, including videos of bestiality, necrophilia and violence against children. Facebook agreed to a $52 million settlement in 2020 to compensate former content moderators who claimed their work caused psychological harm. And YouTube recently agreed to pay $4.3 million to settle a lawsuit filed by former moderators who said they developed mental health issues on the job.
These suits all focus on a relatively new labor force and speak to an industry-wide concern about the toll of moderators’ work. “Generally speaking, content moderation is kind of the new frontier,” Nicholas De Blouw, one of Amerson’s attorneys, told me. “In 1992, there wasn’t such a thing as content moderation. I think that’s the interesting aspect of these cases, is seeing how the law protects people that are in roles that never really existed.”
Many social media platforms have shifted away from employing moderators in-house and now outsource their jobs to third-party contractors. Vendors like Accenture, CPL and Majorel employ people all over the world, from India to Latvia to the Philippines, typically expecting fast turnaround times on content review but offering relatively low hourly wages in return. In lawsuits, employees allege they’ve been exploited, made to work in substandard conditions and denied adequate mental health support for the psychological effects of their work.
After TIME magazine showed how moderators in Kenya — working for Sama, a subcontractor of Meta — were paid as little as $2.20 per hour to review violent material while operating in a “workplace culture characterized by trauma,” one worker took both Sama and Meta to court. Sama has since canceled its remaining contracts with Meta. A separate investigation by the Bureau of Investigative Journalism highlighted the plight of a “legion of traumatized” and underpaid Colombian moderators working under grueling conditions for a TikTok contractor. “You have to just work like a computer,” a moderator said of the job in Colombia. “Don’t say anything, don’t go to bed, don’t go to the restroom, don’t make a coffee, nothing.”
Some social media researchers have suggested that eliminating the outsourced labor model could be a step toward improving conditions for workers. New York University professor Paul M. Barrett, who authored a 2020 report on outsourced content moderation, has called on companies to stop farming out the work and instead employ moderators in-house, where they would theoretically have more access to mental health support, supervision and proper training.
But moving the job in-house won’t by itself resolve the problems alleged in this line of work, as Amerson’s lawsuit against Reddit makes clear. After all, she was not an outsourced moderator but a direct employee of the company. As Barrett explained, directly employing moderators does not guarantee that they will get the training and support the job demands.
“There’s an irreducible aspect to this work that makes it hazardous,” he said. “If you’re doing this work, there’s a danger you’re going to run into very difficult, offensive, and unsettling content. You can’t really completely avoid that. The question is: How well-trained are you, how well-supervised are you, and are there options for you to get the kind of mental and emotional support that would seem to be common sense in connection with this kind of work?”
Are any companies doing this well? Barrett said he was “unaware of a major platform doing content moderation well in terms of directly employing moderators who receive proper training, supervision and mental health support.”
What would incentivize platforms to implement these changes? While they could reform themselves, put an end to outsourced moderation and provide in-house employees with better benefits, compensation, oversight and training, there’s no reason to expect this to happen. Under Elon Musk’s leadership, Twitter has moved in the opposite direction, gutting content moderation teams. Layoffs at YouTube have left the company with just one employee overseeing global misinformation policy.
In an industry that shows no appetite for self-regulation, outside oversight may be the only solution. Barrett has proposed expanding the consumer protection authority of the Federal Trade Commission to cover the social media industry. Such an expansion, he explained, could allow the FTC to ask companies outright about their content moderation practices and to verify whether they live up to the commitments to consumers laid out in their terms of service and community standards guidelines.
“They make promises just like other companies,” he said. “My argument is that the FTC could ask: So how many moderators do you employ? Do you have enough people to get this done? How do they interact with the automated system? And is all of that adequate, really, to fulfill the promises you make?”
A policy shift like this would also require Congressional action, which is hard to imagine in our gridlocked era of governance. But without major changes to the industry, it is equally hard to imagine lawsuits like Amerson’s, and the concerns that undergird them, going away anytime soon.