Friday, September 15, 2017

Facebook: ‘We know we have more work to do’

Technological advancements have been a boon to marketers and social platforms, but relying too much on automated systems can lead to trouble.

Such is the lesson Facebook is learning after it recently made headlines for allowing brand managers to target users based on anti-Semitic phrases.

On Thursday, ProPublica reported:

Until this week, when we asked Facebook about it, the world’s largest social network enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of “Jew hater,” “How to burn jews,” or, “History of ‘why jews ruin the world.’”

To test if these ad categories were real, we paid $30 to target those groups with three “promoted posts” — in which a ProPublica article or post was displayed in their news feeds. Facebook approved all three ads within 15 minutes.

Forbes reported:

ProPublica noted that the anti-Semitic categories it reviewed represented too few Facebook users to enable an ad campaign on their own. However, Facebook’s ad platform automatically recommended additional categories to ProPublica, such as “Second Amendment,” in order for the audience to meet reach requirements. The recommendation suggests a correlation between anti-Semites and people interested in guns. ProPublica also targeted its ads among categories such as “Nazi Party” and “the SS.” When Facebook approved ProPublica’s ads, its system automatically changed the category “Jew hater” to “Antysemityzm,” the Polish word for anti-Semitism.
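For illustration only, here is a simplified sketch of how a minimum-reach check with automatic category suggestions might behave. The threshold, the category names, and the suggestion rule below are assumptions made for this sketch, not a description of Facebook's actual ad system.

```python
# Hypothetical illustration of a minimum-audience check with automatic category
# suggestions. The threshold, catalog, and suggestion rule are invented for this
# sketch and do not describe Facebook's real ad platform.

MIN_AUDIENCE = 5000  # assumed minimum estimated reach before a campaign can run

# Assumed catalog mapping each targeting category to an estimated audience size.
CATALOG = {
    "Niche interest A": 1200,
    "Niche interest B": 800,
    "Broad interest C": 2_500_000,
}

def audience_size(categories):
    """Naively sum the estimated sizes of the selected categories (ignores overlap)."""
    return sum(CATALOG.get(c, 0) for c in categories)

def suggest_categories(selected, shortfall):
    """Suggest the largest unused categories until the estimated shortfall is covered."""
    suggestions = []
    for name, size in sorted(CATALOG.items(), key=lambda kv: -kv[1]):
        if name in selected:
            continue
        suggestions.append(name)
        shortfall -= size
        if shortfall <= 0:
            break
    return suggestions

selected = ["Niche interest A", "Niche interest B"]
gap = MIN_AUDIENCE - audience_size(selected)
if gap > 0:
    print("Audience too small; suggested additions:", suggest_categories(selected, gap))
```

In a scheme like this, whatever happens to sit in the catalog gets recommended once a niche audience falls short, which is how an unrelated category could end up padding out an offensive one.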

Re/code reported:

The way this works is that advertisers using Facebook’s automated ad buying software can target users based on specific information that they’ve added to their profile. Users can enter whatever they want on their profile under categories like field of study, school, job title or company. Facebook’s algorithm then surfaces these labels when ad buyers (or journalists) go looking for them.

In this case, users were entering things like “Jew hater” under “field of study,” which meant it showed up in the targeting search results, and was an actual option for ad buyers.
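To make the mechanism Re/code describes concrete, here is a minimal sketch of how free-text profile fields could end up as searchable targeting labels. The data structures and function names are assumptions made for illustration, not Facebook's actual implementation.

```python
# Hypothetical sketch of user-entered profile fields becoming ad-targeting labels.
# The profile data, index structure, and search function are invented for
# illustration; this is not Facebook's actual code or API.

from collections import defaultdict

# Assumed user profiles with free-text fields ("field of study", "employer", etc.).
profiles = [
    {"user_id": 1, "field_of_study": "Economics"},
    {"user_id": 2, "field_of_study": "Economics"},
    {"user_id": 3, "field_of_study": "Art History"},
]

# Build an index from each literal field value to the users who entered it.
# Because the values are free text, anything a user types becomes a label.
label_index = defaultdict(set)
for p in profiles:
    label_index[p["field_of_study"].strip().lower()].add(p["user_id"])

def search_targeting_labels(query):
    """Return labels (and audience sizes) whose text matches the ad buyer's search."""
    q = query.strip().lower()
    return {label: len(users) for label, users in label_index.items() if q in label}

print(search_targeting_labels("econ"))  # {'economics': 2}
```

The point of the sketch is simply that free-text input becomes a searchable targeting label with no human review in between, which is the gap ProPublica's test exposed.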

Facebook took down the targeting options after ProPublica contacted the company, but the problem didn’t stop there.

Slate reported:

Contacted about the anti-Semitic ad categories by ProPublica, Facebook removed them, explaining that they had been generated algorithmically. The company added that it would explore ways to prevent similarly offensive ad targeting categories from appearing in the future.

Yet when Slate tried something similar Thursday, our ad targeting “Kill Muslimic Radicals,” “Ku-Klux-Klan,” and more than a dozen other plainly hateful groups was similarly approved. In our case, it took Facebook’s system just one minute to give the green light.

Facebook’s product management director, Rob Leathern, gave Re/code the following statement:

We don't allow hate speech on Facebook. Our community standards strictly prohibit attacking people based on their protected characteristics, including religion, and we prohibit advertisers from discriminating against people based on religion and other attributes. However, there are times where content is surfaced on our platform that violates our standards. In this case, we've removed the associated targeting fields in question. We know we have more work to do, so we're also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.


Facebook has already faced mounting pressure to fix its algorithms and advertising practices around the promotion of content deemed fake news, and it came under fire when it admitted to selling political ads to accounts that were probably Russian.

Forbes reported:

The potential for Facebook’s advertising platform, which reaches 2 billion people, to be misused by bad actors has come under heightened scrutiny in the aftermath of the U.S. presidential election. Last month, Facebook told federal investigators that it sold about $100,000 in political ads during the 2016 election season to “inauthentic” accounts, likely affiliated with and operated in Russia.

The platform is working on fixing its content problems, however.

On Thursday, TechCrunch reported:

Facebook established formal rules for what kinds of content can’t be monetized with Branded Content, Instant Articles, and mid-roll video Ad Breaks. These include depictions of death or incendiary social issues even as part of news or an awareness campaign.

This is a big deal because it could shape the styles of content created for Facebook Watch, the new original programming hub it’s launched, where publishers earn 55% of ad revenue.

However, Facebook would do well to extend its work with publishers and influencers by bolstering its staffing to police areas where its algorithms might cross the line.

The Atlantic reported:

To Jonathan Zittrain, a professor of law at Harvard University, that story suggests the entire way that tech companies currently sell ads online might need an overhaul.

… His point: Algorithms can seem ingenious at making money, or making T-shirts, or doing any task, until they suddenly don’t. But Facebook is more important than T-shirts: After all, the average American spends 50 minutes of their time there every day. Facebook’s algorithms do more than make the platform possible. They also serve as the country’s daily school, town crier, and newspaper editor. With great scale comes great responsibility.


from PR Daily News Feed http://ift.tt/2x7cw8E
