When you type a query into Google, say, "Mitt Romney," the search engine uses a complicated algorithm that takes into account what it knows about your surfing and clicking habits to spit out personalized search results. Other search engines and social media networks use similar algorithms to filter media streams. But how is this personalized system affecting our exposure to perspectives outside the ones we currently hold? Is the Internet spinning us into an information cocoon?
Not according to New America Schwartz Fellow Evgeny Morozov. On Tuesday evening, Morozov will argue at an Intelligence Squared U.S. debate in New York that the internet’s many filters (the Facebook newsfeed, Google’s algorithmic search results) are expanding – not narrowing – our political horizons. The motion up for debate: “When it comes to politics, the Internet is closing our minds.” The debaters on the other side, arguing for the motion, will contend that personalized filters reinforce our existing beliefs, imprisoning us in an idea echo chamber and inhibiting our discovery of new viewpoints. Delve caught up with Morozov last week to learn more about the origins of the debate, and how he thinks filters really impact political thought and discourse today. The email conversation is excerpted below.
Delve: When did this debate over web filters and personalized search results begin?
There have always been concerns about filters and gatekeepers; as such, this is not a problem new to the Internet. As far as the popular debate about the Internet is concerned, we can look back to 1995 when Andrew Shapiro wrote an influential piece for The Nation, where he worried about the rise of filters and information cocoons. Then came Cass Sunstein's book Republic.com. A decade later we got Eli Pariser's The Filter Bubble. Some of these concerns are very similar, but technological innovations, of course, add some new twists.
Delve: What did you think about the theory then, and has your perspective on this changed at all since the earlier days of the Web?
I don't really have a simple answer - it's complicated. The big question - is the Internet making us more extremist in our views? - remains unanswered to this date, and I'm increasingly unsure if it's even the right question to ask. At the same time, I think we shouldn't treat customization of search results or messages on social networking sites as either extremely useful or harmful – [it] all depends on the context. If you have 5,000 friends on Facebook and each of them posts a message, it's inevitable that some filter would be required if you want meaningful engagement. This is where the debate gets tricky: How will these filters be designed and implemented? What should the defaults be? How much transparency? These are all tricky questions, but it's very hard for me to accept that someone can reasonably defend the position that no, we'd be better off without filters, because this is the only way to know everything that matters. I don't think it's a realistic position to take.
Delve: If “Is the Internet making us more extremist in our views?” isn’t the right question to ask, what is? Or how should it be tweaked?
In my own research, I've become very suspicious of the very idea of "the Internet." This is such a loaded term, and it means so many things to so many people, that I think we need a moratorium on using it in public debate. Much better would be to limit our discussion to particular technologies - social networking sites, blogs, search engines - and then break those down into even smaller areas of inquiry: the News Ticker feature on Facebook, the Flipboard app on your iPad, the Search, plus Your World functionality on Google. All of these have different effects on how we consume information. To think that we can somehow sum them all up under the rubric of "the Internet" seems to me highly unlikely to work - and even counterproductive.
Delve: Without using the dirty I-word, then, how have all of these social networking sites, blogs, and search engines impacted political discourse?
The message of Eli Pariser's Filter Bubble book is that Google's and Facebook's embrace of personalization filtering makes it less likely that liberals will be exposed to conservative ideas and vice versa. I doubt this is happening - and, furthermore, I think there are some benefits (to be spelled out in the debate - I won't be "previewing" this argument now) to personalization that might actually decrease polarization and be a good thing for public debate. My main point is that filters can be good for the political process - we shouldn't reject them outright for fear of a "filter bubble."