Facebook reports increased posts of graphic violence in Q1 2018

Source: Xinhua    2018-05-16 05:30:39

SAN FRANCISCO, May 15 (Xinhua) -- Facebook on Tuesday unveiled for the first time a transparency report showing an increase in posts identified as containing graphic violence in the first quarter of 2018.

"Of every 10,000 content views, an estimate of 22 to 27 contained graphic violence, compared to an estimate of 16 to 19 last quarter," the report said.

It said the growth was possibly the result of a higher volume of graphically violent content being shared on Facebook in the first three months of this year.

Facebook defines graphic violence as content that glorifies violence or celebrates the suffering or humiliation of others; such content, it says, may be covered with a warning and kept from being shown to underage viewers.

The report said Facebook removed or placed a warning screen over 3.4 million pieces of content containing graphic violence in the first quarter, nearly triple the 1.2 million a quarter earlier.

Facebook, the world's largest social media company, said it had recently developed metrics for reviewing content shared on its platform, and the transparency report covered content posted from October 2017 through March 2018.

The content audited included graphic violence, hate speech, adult nudity and sexual activity, spam, terrorist propaganda (ISIS, al-Qaeda and affiliates), and fake accounts.

Facebook took action against 2.5 million pieces of hate speech content in the first quarter, up 56 percent over the previous quarter.

It also took action on 837 million pieces of content for spam, 21 million for adult nudity or sexual activity, and 1.9 million for promoting terrorism.

A total of 583 million fake accounts were disabled in Q1 2018, down from 694 million in the fourth quarter of 2017, according to the report.

"We estimate that fake accounts represented approximately 3% to 4% of monthly active users on Facebook during Q1 2018 and Q4 2017," the report said.

Facebook CEO Mark Zuckerberg said in a post, also on Tuesday, that his company is employing artificial intelligence tools to remove spam before users report it.

"Most of the fake accounts were removed within minutes of being registered," he said, adding that his top priorities this year are to keeping people safe and developing new ways to improve governance.

Editor: Mu Xuequan