Wednesday, August 22, 2018

Facebook continues to fight fake news with user ‘trust scores’

Facebook is assessing whether its users are trustworthy.

A recent article in The Washington Post reported that Facebook rates users on a scale from zero to one based on how trustworthy their actions appear.

The Post’s Elizabeth Dwoskin interviewed Tessa Lyons, a Facebook product manager leading the fight against misinformation on the platform, who said the system helps Facebook stop the spread of fake news.

Facebook assigns each user a score by flagging particular behaviors, one of which is the user’s history of reporting content.

PC Mag reported:

In 2015, Facebook began rolling out an option to let people flag false news stories over the platform; go to the "…" icon on a Facebook post and click the "Give feedback on this post" option.

Unfortunately, content flagging systems can also be gamed. All it takes is a mob of online users to report a post is fake news or hate speech to trigger the company to investigate and potentially misinterpret the complaints as legit.

The system not only helps Facebook fight the spread of misinformation and fake news, but also makes it harder for users to get content removed simply because they disagree with it or don’t like it.


The Washington Post reported:

It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Lyons said.

A user’s trustworthiness score isn’t meant to be an absolute indicator of a person’s credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk. Facebook is also monitoring which users have a propensity to flag content published by others as problematic and which publishers are considered trustworthy by users.

The Next Web’s Rachel Kaser wrote:

So if a person consistently flags a news source as fake when Facebook itself doesn’t judge the source to be untrustworthy, then it judges that person to be untrustworthy. Lyons implied the company takes this to mean the person reported the site out of an ideological disagreement: “I like to make the joke that, if people only reported things that were [actually] false, this job would be so easy! People often report things that they just disagree with.”
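Neither the article nor Facebook discloses an actual formula, but the mechanism Kaser describes, scoring a reporter by how often their flags agree with fact-checkers’ verdicts, is easy to illustrate. The Python below is a minimal hypothetical sketch: the function name, the smoothing prior and its weight are all invented for illustration and do not describe Facebook’s real implementation.

```python
# Purely illustrative sketch of the mechanism described above -- not
# Facebook's actual algorithm. All names and parameters are invented.

def reporter_reliability(confirmed_false: int, total_reports: int,
                         prior: float = 0.5, prior_weight: int = 10) -> float:
    """Score a user's reporting history on a zero-to-one scale.

    Uses a smoothed agreement rate: the share of the user's past
    "this is false" reports that fact-checkers actually confirmed,
    pulled toward a neutral prior so a handful of reports cannot
    swing the score to either extreme.
    """
    if not 0 <= confirmed_false <= total_reports:
        raise ValueError("invalid report counts")
    return (confirmed_false + prior * prior_weight) / (total_reports + prior_weight)


# A user whose flags rarely check out scores low ...
print(reporter_reliability(confirmed_false=2, total_reports=40))   # 0.14
# ... while a user whose flags usually check out scores high.
print(reporter_reliability(confirmed_false=35, total_reports=40))  # 0.8
```

On this reading, a score near one marks a reporter whose flags reliably surface genuinely false content, which is exactly the distinction Lyons says the company needs to make.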

Though the idea of Facebook assigning its users a credibility score can sound disturbing, flagging certain behaviors and the users who frequently exhibit them can also help the platform better allocate the resources it has brought on to fight fake news.

Vox reported:

Social media companies rating people might feel a little Black Mirror-esque, and the lack of transparency about how the system works is a little unsettling. But the more Facebook discloses about the trustworthiness rankings, the easier it becomes for bad actors to game the system and for activist groups to successfully silence content they don’t like.

Facebook seems to be using the rating system more as a way to direct resources and deploy fact-checkers to review content. Just because a post is flagged as potentially false — or because a user has a high or low trustworthiness rating — doesn’t mean it will automatically be left up or taken down.
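Vox’s framing suggests the ratings feed a triage queue for human reviewers rather than an automatic takedown switch. Continuing the hypothetical sketch above, and again inventing every name and number rather than describing Facebook’s code, reports could be weighted by reporter reliability so that mass flagging by low-reliability accounts carries little weight:

```python
# Hypothetical continuation of the sketch above: reliability-weighted
# reports set *review priority* for human fact-checkers; nothing here
# removes content automatically. All values are invented.

def review_priority(reporter_scores: list[float]) -> float:
    """Sum reliability-weighted reports into a queue-ordering score.

    Each report counts for its reporter's zero-to-one reliability, so
    a mob of low-reliability reporters adds little weight and brigading
    alone cannot push a post to the top of the review queue.
    """
    return sum(reporter_scores)


# Twenty coordinated flags from low-reliability accounts ...
brigade = [0.1] * 20
# ... carry less weight than three flags from highly reliable reporters.
credible = [0.9] * 3
print(review_priority(brigade))   # ~2.0
print(review_priority(credible))  # ~2.7 -> reviewed first
```

Under this design, a flood of flags never deletes a post by itself; it only changes where fact-checkers look first, which matches the article’s point that a flagged post isn’t automatically left up or taken down.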

Facebook also pushed back against The Post’s headline, disputing that it doles out a standardized score to each user.

Quartz reported:

A Facebook spokesperson responded with a statement. “The idea that we have a centralized ‘reputation’ score for people that use Facebook is just plain wrong and the headline in the Washington Post is misleading,” the statement reads. “What we’re actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system. The reason we do this is to make sure that our fight against misinformation is as effective as possible.”

Overall, the approach can help Facebook live up to its promise to curb the spread of false reports and misinformation on its platform, a crisis it has been battling since the 2016 presidential election.

Engadget reported:

… [T]here are reasons for Facebook to use reputation rankings. Far right groups have regularly used false reporting for harassment, and investigations into reporting can draw attention to content that runs afoul of Facebook's policies. The firm's executives were skeptical when they saw a surge of activists reporting Alex Jones and InfoWars for promoting hate speech and false conspiracies, but that still drew attention that ultimately led to Facebook banning Jones and InfoWars for policy violations. As nebulous as the rating system is, it might curb abuse and bolster legitimate complaints.

Even so, some users will still have concerns that Facebook is monitoring their actions.

“The thought that Facebook might actually quash your speech because of an internal metric you don’t know anything about is more alarming than the idea they’re judging you in the first place,” Kaser wrote.




from PR Daily News Feed https://ift.tt/2LjEGBd
