If you’re under the impression that Google is piping the raw, unfiltered world to your computer monitor or smartphone every day, then you’re mistaken. And that mistake could cost you — especially if enough of us make it, in which case it could start to cost all of society.
Like hundreds of thousands of other people, I was intrigued when I saw the video of Eli Pariser’s TED talk, “Beware online ‘filter bubbles’”, and was eager to read the book Pariser wrote on the same topic, The Filter Bubble: What the Internet Is Hiding from You. Though the book is well-written and an interesting read throughout, Pariser’s central premise — that “because the filter bubble distorts our perception of what’s important, true, and real, it’s critically important to render it visible” — could easily have been dealt with in a feature-length article.
But ultimately, that’s not the strongest argument in The Filter Bubble anyway.
In the book, Pariser exposes a much larger problem: the abdication by technologists of their moral and ethical responsibilities while reshaping our public discourse. Pariser’s critique of personalization would have been more powerful as supporting material for his broader observations about the role technology is playing — or not playing — in our democracy.
Even after reading The Filter Bubble, I was left wondering: Is personalized filtering really that grave or widespread an issue? How many people get the majority of their news from the internet, anyway? Of those, how many people rely primarily on Google or Facebook, to the exclusion of any other source where they might get a more balanced view? Is it possible to game the filters by clicking on links from liberal and conservative sources?
I still don’t know, because Pariser doesn’t really endeavor to find out for us. I was hoping he would dig deep into media analysis and prove that filtering of online news sources by Google and Facebook was somehow more insidious than the filtering we all do every day when we choose to turn on CNN or the BBC instead of FOX News, or when we get Mother Jones delivered to our mailbox instead of tuning into Rush Limbaugh’s radio show. But there is no such deep analysis.
In his introduction, Pariser offers a few reasons why he believes the filter bubble is different from the filtering we all do on a daily basis (you’re alone in your own bubble on the internet, that bubble is invisible, and you don’t get to choose whether or not you want to be in it), but he offers mostly anecdotal evidence to support his assertions. Anyone who can only be convinced by someone who has crunched the numbers might be less than satisfied by the case Pariser makes here.
My other lingering misgiving was that there seemed to be a very easy solution: Surely Google and Facebook could implement a “don’t filter my results” button if enough of us demanded one. If Pariser’s objective was to render the filter visible, maybe a well-placed series of articles and interviews coupled with a petition (perhaps on SignOn.org…?) would have been a better strategy. And of course, it’s incredibly easy to pop over to the Drudge Report or World Net Daily or FOX News any time I want to see how the other side is spinning things. It seems like overkill to devote a whole book to examining the problem of personalized filters.
But, like I said, this was never Pariser’s most powerful critique of the ways technologists are reshaping our society and how we engage with it. And that’s what ultimately saved this book for me.
Pariser’s examination of Mark Zuckerberg’s libertarian leanings and views on personal identity is especially revealing. Zuckerberg apparently believes we all have one single identity, and that to pretend otherwise is to be fake — hence Facebook only allows you to have one identity. (Which really puts Google+’s “Circles” into context, no?)
Zuckerberg’s views on identity are shallow, but not necessarily harmful. More troubling is the apolitical stance taken by many technologists. Here Pariser uses anecdotal evidence to great effect, relating his troubles in even getting a Google spokesperson to understand what he was talking about when asked, “What [is] the code of ethics… that Google uses to determine what to show to whom?” He never got a clear answer, and that’s incredibly worrisome.
Does Google have a responsibility to return search results that prioritize credible sources about 9/11 instead of conspiracy theories or dangerously reductive arguments like “They hate our freedom”? Should someone searching “Obama birth certificate” get results heavy on birther nonsense, or the copy of Obama’s birth certificate provided by the White House? I don’t necessarily know the answer to those questions, but I’d always assumed the folks at Google spend a lot of time thinking about them.
Instead, as Pariser states: “Too often, the executives of Facebook, Google, and other socially important companies play it coy: They’re social revolutionaries when it suits them and neutral, amoral businessmen when it doesn’t.” These technologists, he says, often display “a willful blindness to how their design decisions affect the daily lives of millions.” For the people proudly reshaping our society to ignore the ethical and moral implications of the ways in which they’re doing so is profoundly troubling, and Pariser is right to call them out for it.
Within this larger context, Pariser’s assertions about the perils of personalization gain new relevance. It’s just one more way in which technology is skewing our window on the world while the folks designing that window ignore the skewing, or pretend it’s not happening.
In the end, I suspect The Filter Bubble has accomplished exactly what Eli Pariser hoped it would: dragging these little-discussed issues into the light and forcing a dialogue on the moral and ethical imperatives inherent in the technologies that are shaping an increasingly large portion of our lives. I may quibble with his presentation of the issues, but overall I’m very glad Pariser wrote this book. It was high time someone did.