As a business, Facebook is more successful than ever. On Wednesday afternoon, it reported another quarter of huge growth, with nearly 2 billion people actively using the service and revenue up 49 percent in the first quarter compared with a year ago.
But with the company’s vast reach has come another kind of problem: Facebook is becoming too big for its computer algorithms and relatively small team of employees and contractors to manage the trillions of posts on its social network.
Earlier Wednesday, Mark Zuckerberg, the company’s chief executive, acknowledged the problem. In a Facebook post, he said that over the next year, the company would add 3,000 people to the team that polices the site for inappropriate or offensive content, especially in the live videos the company is encouraging users to broadcast.
The announcement comes after Facebook Live, the company’s popular video-streaming service, was used to broadcast a series of horrible acts to viewers, including a man boasting about his apparently random killing of a Cleveland man and the murder of an infant in Thailand.
More broadly, the company has been criticized for doing a poor job weeding out content that violates its rules, including the sharing of nude photographs of female Marines without their consent and illegal gun sales.
Facebook is also grappling with the limitations of its automated algorithms on other fronts, from the prevalence of fake news on the service to a News Feed that tends to show people information that reinforces their views rather than challenges them.
Despite Mr. Zuckerberg’s pledge to do a better job in screening content, many Facebook users did not seem to believe that much would change. Hundreds of commenters on Mr. Zuckerberg’s post related personal experiences of reporting inappropriate content to Facebook that the company declined to remove.
Most of the company’s reviewers are low-paid contractors overseas who spend an average of just a few seconds on each post. A National Public Radio investigation last year found that they inconsistently apply Facebook’s standards, echoing previous research by other outlets.
Zeynep Tufekci, an associate professor at the University of North Carolina who studies online speech issues, said that Facebook designed Live to notify your friends automatically about a live feed — something guaranteed to appeal to publicity seekers of all sorts.
“It was pretty clear to me that this would lead to on-camera suicides, murder, abuse, torture,” she said. “The F.B.I. did a pretty extensive study of school shooters: The infamy part is a pretty heavy motivator.”
Facebook has no intention of dialing back its promotion of video, including Live, telling investors on a conference call Wednesday that it would continue to rank it high in users’ news feeds and add more advertising within live videos and clips.
Advertising is Facebook’s lifeblood, accounting for most of the company’s revenue and profit. In the first quarter, the company earned $3.1 billion, up 76 percent from the previous year.
Debra Aho Williamson, an analyst with the research firm eMarketer, said that all the negative publicity about Facebook’s problems with horrific content and fake news appeared to have hurt user satisfaction levels. Adding more content monitors is aimed at reassuring Facebook’s 1.94 billion users, she said.
“If people feel safe on Facebook, they will be more engaged and will use it more often,” Ms. Williamson said. “And if they use it more often, there will be more inventory for advertising.”
The company is trying to strike a balance between censorship and free speech. Facebook video has been used to share millions of personal stories and to document events of immense public interest, such as a series of police shootings of unarmed black men.
Although there is little question that live-streamed murder does not belong on the service, the company has come under fire when it has stopped violent broadcasts like Korryn Gaines’s fatal standoff with police in Maryland last year.
“All policies need to recognize that distressing speech is sometimes the most important to a public conversation,” said Lee Rowland, a senior staff attorney at the American Civil Liberties Union who works on free speech issues.
She said that the decision to hire more moderators can only help the company make better judgments, especially about live events where fast decisions can be critical. “Humans tend to have more nuance and context than an algorithm,” Ms. Rowland said.
But Ms. Rowland said Facebook must also be clearer with the public about the rules it applies in making those calls.
Mr. Zuckerberg called the recent episodes of violence “heartbreaking” and said the company wanted to make it simpler and faster for reviewers to spot problems and call in law enforcement when needed.
In the conference call with investors, he said that artificial intelligence tools would eventually allow reviewers to do a better job of reviewing content. “No matter how many people we have on the team, we’ll never be able to look at everything,” he said.
Facebook is not the only internet company to wrestle with these problems. Google has struggled with similar issues involving its YouTube video service and an automated advertising system that sometimes places marketers’ ads next to questionable content.
Philipp Schindler, Google’s chief business officer, said in an interview this week that like Facebook, his company believed the internet was so vast that machine learning had to work hand-in-hand with human reviewers to improve vetting.
“We don’t think the problem over time should involve humans, because of the scale of the problem,” he said. “But we are definitely using humans. We have invested pretty heavily in humans because they are training the machine learning.”
Source: New York Times