What Facebook Can Learn from Twitter About Combating Hate Speech
With a nationwide reckoning over systemic racism and a presidential election looming, Facebook and Twitter are facing pressure to address hate speech and misinformation on their platforms.
Both Facebook and Twitter faced controversy in 2016 over the spread of “fake news” on their platforms in the lead-up to the presidential election. A study conducted by misinformation researchers found that more than 25 percent of voting-age adults visited a fake news website supporting either Clinton or Trump prior to the 2016 election. While it is unclear how much misinformation actually affected the results of the election, it did cause people to reconsider how much they could really trust what they read on social media.
With a little over a month until the November 2020 presidential election, Facebook and Twitter are once again in the spotlight for how they will address misinformation on their platforms, especially as more and more people rely on mobile devices for their political news. This year, the stakes are even higher. Months of protests against racial injustice have led to a call for social media giants to do a better job at policing hate speech on their platforms.
How Twitter took a harder stance against hate speech than Facebook
Facebook has faced more controversy than Twitter because of its more relaxed approach to combating hate speech. This gap between the two platforms first emerged in May, when Twitter labeled a Trump tweet about mail-in ballots with a note to “get the facts about mail-in ballots.” Also in May, Twitter placed a public interest notice on a Trump tweet about the protests in Minneapolis that violated its policies on glorifying violence.
These actions marked the first time Twitter had policed Trump’s speech—an unprecedented move for a platform that has historically been the president’s favorite. The move was applauded by civil rights groups. Facebook, however, left the same two Trump posts untouched. Following several more misleading Trump tweets about voting this fall, Twitter also recently announced that it would “label or remove false or misleading information intended to undermine public confidence in an election or other civic process.”
How Facebook became the center of a major boycott
Facebook’s failure to act on misleading Trump posts led to the creation of the Stop Hate for Profit Coalition by civil rights organizers. This campaign called on major companies to halt their advertising spending on Facebook until the platform took a harder stance against hate speech. Many of Facebook’s top advertisers followed through, including Disney. A few pulled their spending from Twitter, too, signaling that both companies still had a lot of work to do.
In July, nearly a month after the Stop Hate for Profit boycott’s beginning, Facebook executives met with civil rights organizers to discuss how Facebook could better address hate speech on its platform. While it seemed like this meeting might be the end of the battle, the Stop Hate for Profit Coalition released a statement shortly afterward saying that “it was abundantly clear…that Mark Zuckerberg and the Facebook team is not yet ready to address the vitriolic hate on their platform.”
Hate speech on Facebook contributed to violence in Kenosha, Wisconsin
Facebook came under fire again in August following protests over the shooting of Jacob Blake in Kenosha, Wisconsin. A few hours before a scheduled protest in Kenosha, a militia group calling itself the “Kenosha Guard” posted on its page “Any patriots willing to take up arms and defend our city tonight from evil thugs?” A few hours later, a 17-year-old White man allegedly killed two people and injured another during the protest.
Despite receiving several complaints about the Kenosha Guard page before the protest began, Facebook did not remove the page until after the shooting occurred. In a video posted on his Facebook page, Facebook CEO Mark Zuckerberg called this failure an “operational mistake” and said that the page should have been taken down for violating Facebook’s policies against encouraging violence.
Twitter has not been targeted by the Stop Hate for Profit Coalition. It also did not face controversy following the Kenosha protests. In fact, posts on Twitter were used to track down the Kenosha shooter, who has since been charged with first-degree intentional homicide, attempted intentional homicide, and a misdemeanor weapons charge. As Facebook tries to repair its image and combat hate speech more effectively, there are some things it could learn from Twitter’s approach.
It all starts at the top
The differing views of the two companies’ CEOs may be one reason Twitter has pulled ahead of Facebook in the race to combat hate speech and misinformation. Facebook COO Sheryl Sandberg recently told MSNBC that “when the president violates our hate speech standards or gives false information about voter suppression or coronavirus, it comes down.” However, CEO Mark Zuckerberg has stated that content that would otherwise be taken down for violating community policies will be left up when it comes from a noteworthy politician, because of the content’s newsworthiness.
Zuckerberg has consistently argued that Facebook should not be an “arbiter of truth” online, specifically calling out Twitter for fact-checking and labeling Trump’s tweets in May. Twitter’s CEO Jack Dorsey responded to this comment in a series of tweets shortly afterward, arguing that Twitter’s actions did not make it “an ‘arbiter of truth,’” but that the company was attempting to “show the information in dispute so people can judge for themselves.” Maybe if Zuckerberg were willing to compromise a little on his views and learn from Dorsey’s belief in providing all the facts, Facebook would have fewer problems with misinformation and hate speech.
Learn from users
Addressing hate speech is a recent endeavor for both Facebook and Twitter. Twitter began updating its content policy in 2018. In the same year, Facebook expanded its content review team. However, Twitter actually asked users for help developing its new content guidelines. It opened a feedback form that received 8,000 responses. On the other hand, Facebook relied on its internal team to update its content policy, in addition to hiring external civil rights auditors. If Facebook were to listen to its users the way Twitter has, it could find more success in creating a safe and accepting community online.
Take swifter action
Facebook made headlines in August for banning hundreds of pages and groups associated with the conspiracy group QAnon, a pro-Trump faction that believes “the world is run by a cabal of Satan-worshiping pedophiles who are plotting against Mr. Trump while operating a global child sex-trafficking ring,” according to the New York Times.
Facebook’s crackdown came late: Twitter began taking down thousands of QAnon accounts in July, a full month earlier, even though the group had aggressively spread misinformation and even committed acts of violence. Facebook needs to learn to act as quickly as Twitter does, so that dangerous hate speech comes down before it leads to violence.
Listen to civil rights leaders
Facebook has failed to reach an agreement with the Stop Hate for Profit Coalition. On Wednesday, the group organized another boycott of Facebook and the company’s subsidiary Instagram. For 24 hours, a group of celebrities including Kim Kardashian West, Sacha Baron Cohen, and Mark Ruffalo did not post on their Facebook and Instagram accounts to show their solidarity with the movement to combat hate speech and misinformation on these platforms. Facebook will continue to be the target of boycotts like this one until it can meet the Stop Hate for Profit Coalition’s demands, and take a stand against hate speech the way Twitter has.
Things will heat up for Facebook and Twitter this November
As November approaches, all eyes will be on Facebook and Twitter to prevent the spread of hate speech and misinformation on their platforms as Americans prepare to get out and vote. Americans can hope for these platforms to do a better job than they did in 2016. But the responsibility of parsing out the truth and condemning hateful and racist content may ultimately fall on each individual.