POP QUIZ

Google is finally admitting it has a filter-bubble problem

Trapped in our echo chambers.
Image: Kim Kyung-Hoon/Reuters

Try this exercise: Open Google in two tabs and search “are reptiles good pets” in one and “are reptiles bad pets” in the other. In both instances, Google will faithfully attempt to advise you on your reptilian pet quandaries, but you’ll notice that the answers are different.

In the first instance, Google will feature a snippet of text from “10 Great Reasons Why Reptiles Make Good Pets,” a nod to your enthusiasm. In the second, it will feature a snippet from “It’s Cruel To Keep Reptiles As Pets,” in line with your hesitation. Essentially, the search engine reinforces your preconceived biases as a side effect of the way its algorithm parses your question.

Image: Screenshot/Google

In a blog post published Jan. 30, Google acknowledged the issue and said it is actively working on solutions. The details have yet to be worked out, but the company will likely surface multiple snippets in the future to offer varying points of view. “There are often legitimate diverse perspectives offered by publishers, and we want to provide users visibility and access into those perspectives from multiple sources,” said Matthew Gray, the software engineer who leads the team that works on featured snippets.

The way snippets currently work, a Google spokesperson told Quartz, is by surfacing text from the top result of a search query. Results are algorithmically ranked by a balance of authoritativeness (how often a site is linked to by other sites) and relevancy (how well the content matches the query). When a user asks “are reptiles bad pets,” the algorithm uses the query’s exact text to assess content relevancy. As a result, a site that uses the words “reptiles,” “bad,” and “pets” more often would naturally be ranked above another site that uses “reptiles,” “good,” and “pets.”
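To make the effect concrete, here is a deliberately oversimplified sketch of keyword-match relevancy in Python. The pages, scoring function, and names are invented for illustration; Google’s actual ranker weighs many more signals and is not public.

    import re
    from collections import Counter

    def relevancy(query: str, page_text: str) -> int:
        """Score a page by how often it repeats the query's exact words."""
        page_words = Counter(re.findall(r"[a-z']+", page_text.lower()))
        return sum(page_words[term] for term in re.findall(r"[a-z']+", query.lower()))

    # Invented stand-ins for the two articles Google surfaces.
    PRO_PAGE = "10 great reasons why reptiles make good pets: reptiles make calm, clean, good pets"
    CON_PAGE = "it is cruel to keep reptiles as pets: reptiles make bad pets, and bad pets suffer"

    for query in ("are reptiles good pets", "are reptiles bad pets"):
        scores = {"pro": relevancy(query, PRO_PAGE), "con": relevancy(query, CON_PAGE)}
        print(query, "->", max(scores, key=scores.get))
    # are reptiles good pets -> pro  (the pro page repeats "good")
    # are reptiles bad pets -> con  (the con page repeats "bad")

Because the query’s own slanted word is part of the match, the page that agrees with the asker wins the top spot, and with it, the snippet.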

Google wants to combat this effect by training its search engine to recognize a question’s intent rather than just its literal syntax. That way, regardless of whether a user typed “are reptiles good pets” or “are reptiles bad pets,” the search engine would interpret the question as “how do reptiles rate as pets” and proceed to surface two different snippets—each presenting one side of the argument.
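In rough terms, that amounts to adding a normalization step before snippet selection. The sketch below is hypothetical: the mapping table, the snippet store, and the function names are all invented to illustrate the idea, since Google hasn’t published how its system will work.

    # Hypothetical sketch; Google has not published its actual approach.

    # Collapse opposite phrasings of the same question into one neutral intent.
    INTENTS = {
        "are reptiles good pets": "how do reptiles rate as pets",
        "are reptiles bad pets": "how do reptiles rate as pets",
    }

    # One snippet per perspective, keyed by neutral intent.
    SNIPPETS = {
        "how do reptiles rate as pets": [
            "Pro: 10 Great Reasons Why Reptiles Make Good Pets",
            "Con: It's Cruel To Keep Reptiles As Pets",
        ],
    }

    def featured_snippets(query: str) -> list[str]:
        """Normalize the query to its intent, then return every viewpoint."""
        intent = INTENTS.get(query.lower(), query.lower())
        return SNIPPETS.get(intent, [])

    # Opposite phrasings now surface the same balanced pair of snippets.
    assert featured_snippets("are reptiles good pets") == featured_snippets("are reptiles bad pets")

The hard part, of course, is not the lookup but building the normalization itself, deciding which queries mean the same thing and which perspectives deserve a slot.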

This approach, relatively straightforward for mundane questions about pets, gets stickier once the questions turn to more controversial topics. The blog post neatly avoided mention of how this would affect search results with political bias, amid increasing concern that platforms like Facebook and Twitter are deepening political divides. Google did not directly address a Quartz question on the matter. But the spokesperson did engage with climate change as an example.

Under Google’s proposed system, if a user typed “is climate change real,” she would see snippets presenting arguments for and against the existence of climate change. The company spokesperson agreed that this is indeed how the system would theoretically work, emphasizing that Google wouldn’t want to be the arbiter of truth but rather a purveyor of knowledge to help inform its users’ decisions. Google said it’s too early to conclude how it will actually handle controversial topics, given that the feature hasn’t yet rolled out.

The company plans to introduce the feature in stages, targeting rollout of the initial phase for the second quarter. The first update will cover easier, safer questions, like “what is New York City’s tax rate,” which would surface snippets for both sales and income tax. Trickier questions that involve bias and controversial opinions won’t be addressed until a later stage.

Google hasn’t fully solved the problem yet, the company spokesperson said, but it is making progress in grappling with filter bubbles and the reinforcement of existing bias. The first step, after all, is admitting you have a problem.