When social media was first popularized, not long after the rise of the Internet itself, it seemed like a great equalizer. In the physical world, you rarely rub shoulders with people who look and think differently from you–online, there are endless opportunities to expose yourself to new lines of thought. The ability to connect with people spanning nations and continents could, in theory, broaden the horizons of everyone with a router. So why hasn't it?
Theory is always rosier than practice, and in practice, social media has become a bubble more insular than we could have imagined. Rather than pushing boundaries, our social feeds generally serve to validate our pre-existing outlooks. This happens without our even knowing it, and because it's so comfortable, it flew under the radar for a while.
Of course, the makers of Facebook and Google probably didn't set out to create echo chambers over the course of their sites' evolution. Social media bubbles came with good intentions: technology became smart enough to learn what users like, and thus serve them more of it. It works like this: you enter your information, your interests, and your friends, and begin to follow and like pages that appeal to you. Algorithms take note of these signals and filter your feed to reinforce your ideal world.
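To make the mechanism concrete, here is a deliberately minimal sketch of that kind of filtering–a toy, not any platform's actual algorithm. Everything in it (the post fields, the topic sets) is invented for illustration: posts are simply ranked by how much they overlap with what you already like, so unfamiliar material sinks to the bottom.

```python
# Toy feed filter: score each post by overlap with the user's liked topics.
# This is purely illustrative, not a real platform's ranking system.

def rank_feed(posts, liked_topics):
    """Return posts sorted so that familiar topics come first."""
    def score(post):
        # Count how many of the post's topics the user already likes.
        return len(set(post["topics"]) & liked_topics)
    # Highest-overlap posts first; posts on unfamiliar topics sink.
    return sorted(posts, key=score, reverse=True)

posts = [
    {"title": "Local election recap",   "topics": {"politics", "local"}},
    {"title": "New hiking trails open", "topics": {"outdoors"}},
    {"title": "Campaign fundraising",   "topics": {"politics"}},
]
feed = rank_feed(posts, liked_topics={"politics"})
print([p["title"] for p in feed])
# ['Local election recap', 'Campaign fundraising', 'New hiking trails open']
```

Even in this crude form, the self-reinforcing loop is visible: the outdoors story ranks last for a politics-focused user, so it gets seen less, liked less, and ranked lower still next time.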
To a great extent, this is practical. If the computer knows you're married, it's not going to target engagement-ring ads at you; if you're over 60, BuzzFeed probably isn't for you. Algorithms help match people with the pages, services, and brands most relevant to them specifically. In doing so, they block out the things people don't like: certain public figures, news articles, and sponsored advertisements, for example. From a business perspective, content is wasted if the wrong demographic consumes it.
The real problem emerges when it comes to politics, diversity of thought, and news, all of which are interconnected. Social media bubbles first came into the spotlight after Brexit, when Britain voted to leave the EU to the surprise of many. After Donald Trump's unexpected victory here in the US, it became even clearer that social media bubbles polarize voters, which helps explain why most left-of-center voters, and even the pundits, thought the president-elect didn't have a chance. Democrats were getting only one half of the conversation, just as Republicans were. This polarizes each side and creates an inflated sense of confidence in one's own outlook, a pattern known as confirmation bias.
It's hard to imagine what social media would look like without these bubbles, especially since they are self-perpetuating. But if technology could create polarizing content filters, surely it could also offer a solution. For those of us willing to get uncomfortable, manual fixes do exist. On Facebook, liking pages and news sources with opposing politics can be a first step. To see more of that content, you'll have to like things you disagree with, so you may want to change your settings to make your 'likes' private if you're worried about giving off the wrong impression.
Perhaps the use of "like" as the mechanism for following something is one of Facebook's biggest flaws when it comes to filter bubbles. It's counterintuitive–and maybe a little embarrassing–for a self-proclaimed conservative to 'like' Hillary Clinton on Facebook. But isn't it important to know what you're opposing inside and out, if you want to be truly informed? Shouldn't there be a way to do this without having it publicly define your taste?
Taking things a step further, a service developed at MIT bursts the filter bubble by moving the conversation offline. The app matches you with a platonic lunch date: someone you're likely to get along with personally but differ from ideologically, socioeconomically, or demographically. The app's creators realized that the key to stimulation and interest is not always comfort–sometimes it can, and should, be the exact opposite.
Eduardo Graells-Garrido of the Universitat Pompeu Fabra in Barcelona, along with Mounia Lalmas and Daniel Quercia, both at Yahoo Labs, created another solution: a recommendation engine that points ideologically opposed Twitter users at one another based on their non-political preferences.
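The core idea of that recommender can be sketched in a few lines. This is a hypothetical simplification of the concept, not the researchers' actual system; the names, interests, and political labels below are invented for illustration. It pairs users who disagree politically but share at least one everyday interest, the common ground the recommendation would lead with.

```python
# Hypothetical sketch of a bubble-bursting recommender: pair users who
# disagree politically but share non-political interests.

def recommend_pairs(users):
    """Return (name_a, name_b, shared_interests) for politically opposed pairs."""
    pairs = []
    for i, a in enumerate(users):
        for b in users[i + 1:]:
            shared = a["interests"] & b["interests"]
            # Only recommend across the political divide, and only when
            # there is some neutral common ground to start from.
            if a["politics"] != b["politics"] and shared:
                pairs.append((a["name"], b["name"], shared))
    return pairs

users = [
    {"name": "Ana",  "politics": "left",  "interests": {"cooking", "cycling"}},
    {"name": "Ben",  "politics": "right", "interests": {"cycling", "chess"}},
    {"name": "Cara", "politics": "left",  "interests": {"chess"}},
]
print(recommend_pairs(users))
# [('Ana', 'Ben', {'cycling'}), ('Ben', 'Cara', {'chess'})]
```

The design choice mirrors the research insight: leading with a shared hobby rather than the disagreement itself makes the opposing viewpoint feel like it comes from a person, not an enemy camp.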
Perhaps social media websites will take these methods into consideration by developing algorithms that expose and humanize opposing viewpoints, or maybe we'll have to take it into our own hands. Whatever the case, realizing that there is a bubble is a bit like realizing there is a Matrix. Once we know we're in it, escape becomes that much more likely.
Originally published on nathansproul.com.