As tragic events unfold, people clamor for facts.
Unfortunately, as credible news outlets take time to source information for their stories, unscrupulous online parties can use Google’s algorithm to reach massive numbers of readers quickly.
Just weeks after the mass shooting in Las Vegas, a gunman opened fire in a Texas church, killing 26. Google’s algorithms had not been adjusted since the Las Vegas massacre, despite calls for an overhaul, and misinformation again ran rampant.
Google has once again been called out for algorithmically encouraging the spread of dubious, politically charged speculation and misinformation around a topical news event.

In the latest instance of the algorithmic amplification of misinformation, the news event in question is a shooting in a Texas church on Sunday. Authorities have identified 26-year-old Devin Patrick Kelley as the perpetrator.
Users of Google’s search engine who search for queries such as “who is Devin Patrick Kelley?” — or simply for his name — can be exposed to tweets claiming the shooter was a Muslim convert, a member of Antifa, or a Democrat supporter.
Part of the problem is the Google feature “Trending on Twitter,” which highlights popular or heavily retweeted statements.
All of these links appeared high up in the search results, in the “Trending on Twitter” box just below the “Top Stories” modules. To Google’s credit, as the hours have gone by, the less-reliable information has been replaced by reputable sites doing actual journalism. But the damage has been done. Despite the lack of any real evidence about the ideology behind the attack, a search for the shooter’s name now suggests you might want to append “antifa” to your search.
In a statement to BuzzFeed News, Google took responsibility for the search results, even though the content was sourced from Twitter. The company said the results are based on its own internal algorithms, not Twitter's search function.
"The search results appearing from Twitter, which surface based on our ranking algorithms, are changing second by second and represent a dynamic conversation that is going on in near real-time," Google said. "We’ll continue to look at ways to improve how we rank tweets that appear in search."
Fake news, especially when spread through search algorithms, has become a thorny reputational problem for Google, which has long been protective of its proprietary code and secretive about how it polices the internet.
The spread of unverified or deliberately falsified information from gutter-level sources in the wake of crises, aided by venues like Google and Twitter, has become a real problem with real consequences. In the hours after the shooting, Texas Rep. Vicente Gonzalez fell for a recurring far-right social media meme claiming comedian Sam Hyde was responsible for the shooting and repeated that claim during a live CNN broadcast.

Google, Twitter, and Facebook have all regularly shifted the blame to algorithms when this happens, but those companies write the algorithms, which makes them responsible for what the algorithms churn out.
Google sent a spokesman to talk to Gizmodo. In the phone interview, his messaging was at times regretful but never apologetic.
Gizmodo reported:
[…] Google’s public liaison for search, Danny Sullivan, told Gizmodo in a phone interview that the company wants to ensure the Twitter module is not pulling misinformation. He added Google is not simply relaying tweets suggested by Twitter but ranking them itself, though fine-tuning the process was a “moving target” they are trying to hit—and that the Twitter results function returned more trustworthy results over time as more reliable sources reported on search terms like the shooter’s name.

“It’s simply not a case of we’re taking in exactly what Twitter’s putting out,” Sullivan said. “... And it’s important because on the one hand you might say, it would be great if we could show whatever Twitter’s doing and it’s not our fault, but that’s not what’s happening nor is that sort of something we want to reach for. The concern here is, is there is something on our search results page that needs to be improved—we want to improve it.”
Once again, Google pledged to fix the problem.
Gizmodo continued:
[Google spokesman] Sullivan added Google staff had been monitoring results in real time and will continue to tweak what appears on those pages in the future.

“You can try to deconstruct how the Twitter results are showing up on the page,” Sullivan said. “... We weren’t happy that those tweets that people were pointing out to us were showing up that way. We’re like, okay, we may need to make some changes here. For whatever reason those are getting there, it wasn’t by intent, it wasn’t by design, and it wasn’t something we’re striving to keep.”
Some feel Google’s response falls short:
You say Big Tech is starting to really fight fake news. I say... https://t.co/7ptuSvHJka
— Noah Shachtman (@NoahShachtman) November 6, 2017
we should relentlessly call out @facebook @twitter @google when they enable bots and fake news to float to the top. every time.
— Oliver Willis (@owillis) November 6, 2017
@Google @AdSense QUIT SPONSORING FAKE NEWS. GET. THIS. FIXED.
— Laurie Cassell OCT (@cassell001) November 7, 2017
Michael Fineman of Fineman PR addresses the difficulty of adopting the proper tone when communicating after a tragedy.
In his blog he writes:
The most crucial communications rule is to show compassion and concern for human life and those personally affected, and show it genuinely, both in content and—often more important—tone. It is critical to stick to the facts or what is known, avoid speculation and correct misinformation.
He continues:
Inhuman events require a human response, disciplined approach, and evidence of collaboration for the greater good.
PR Daily readers, how might Google and the internet community at large mitigate the torrent of fake news?