Facebook staff have warned for years that as the company raced to become a global service, it was failing to police abusive content in countries where such speech was likely to cause the most harm, according to interviews with five former employees and internal company documents seen by Reuters.
For more than a decade, Facebook has pushed to become the world’s dominant online platform.
It currently operates in more than 190 countries and counts more than 2.8 billion monthly users who post content in more than 160 languages.
But its efforts to prevent its products from becoming conduits for hate speech, inflammatory rhetoric and misinformation - some of which has been blamed for inciting violence - have not kept pace with its global expansion.
Internal documents seen by Reuters show Facebook has known that it has not hired enough workers with both the language skills and the knowledge of local events needed to identify objectionable posts from users in a number of developing countries.
The documents also show that the artificial intelligence systems Facebook employs to root out such content frequently are not up to the task, and that the company has not made it easy for its global users themselves to flag posts that violate the site’s rules.
Those shortcomings, employees warned in the documents, could limit the company’s ability to make good on its promise to block hate speech and other rule-breaking posts in places from Afghanistan to Yemen.
In a review posted to Facebook’s internal message board last year about the ways the company identifies abuses on its site, one employee reported “significant gaps” in certain countries at risk of real-world violence, especially Myanmar and Ethiopia.
The documents were among a cache of disclosures made to the U.S. Securities and Exchange Commission and Congress by Facebook whistleblower Frances Haugen, a former product manager who left the company in May.
Facebook spokesperson Mavis Jones said the company has native speakers worldwide reviewing content in more than 70 languages, as well as experts in humanitarian and human rights issues.
She said these teams work to stop abuse on Facebook’s platform in places where there is a heightened risk of conflict and violence.
Jones said Facebook bases its decisions on where to deploy AI on the size of the market and an assessment of the countries’ risks.
The company declined to say in how many countries it lacked functioning hate speech classifiers.
Facebook also said it has 15,000 content moderators reviewing material from its global users.
“Adding more language expertise has been a key focus for us,” Jones said.