Could have been a prophet

Back in 2013 I started learning about the “filter bubble” – a natural result of the behavior- and preference-driven algorithms that power major search engines. Between your search history, the links you click on, and the sites you visit that are tagged with Google Analytics (which is a lot of websites nowadays), search engines like Google can make a reasonable approximation of what results might interest you.
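To make that concrete, here's a toy sketch of what preference-weighted re-ranking could look like. To be clear, this is invented purely for illustration, not anything Google has published: the profile weights, topic labels, and scoring formula are all assumptions.

```python
# A toy sketch of preference-weighted re-ranking. Purely illustrative:
# the profile, topic labels, and scoring formula are invented for this
# example and are not Google's actual algorithm.

def rerank(results, profile):
    """Order results by base relevance, boosted by how strongly each
    result's topics match the user's inferred interests."""
    def score(result):
        boost = sum(profile.get(topic, 0.0) for topic in result["topics"])
        return result["relevance"] * (1.0 + boost)
    return sorted(results, key=score, reverse=True)

# A click history that skews heavily toward one viewpoint...
profile = {"right_wing_news": 0.9, "left_wing_news": 0.05}

results = [
    {"url": "neutral.example", "relevance": 0.80, "topics": ["left_wing_news"]},
    {"url": "partisan.example", "relevance": 0.75, "topics": ["right_wing_news"]},
]

# ...pushes the preference-matching result above the more relevant one.
print([r["url"] for r in rerank(results, profile)])
# ['partisan.example', 'neutral.example']
```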

There’s nothing inherently nefarious about this. Google’s interest is in getting you to the right website as quickly as possible, and they’ve done a phenomenal job at it. The better results you get from Google, the more you use them – which means you’re more likely to click the ads that get served alongside search results.

The problem, though, is that it sacrifices diversity of ideas. If you’re a habitual Fox News reader, follow Donald Trump, and routinely watch his speeches on YouTube, then the next time you search for something like “abortion” or “gun rights”, the filter bubble will serve you results from sites it thinks you want to see, and you’ll get a very right-wing view of the situation.

The search engine will dutifully give you the information it thinks you want to see, but not necessarily the information you need, and that’s where the problem comes in. If you’re in a really bad situation (say, an unwanted pregnancy in a conservative and oppressive religious family) and you need level-headed information on whether abortion is safe and legal, and where to get one, Google won’t know to give you that.

That’s the inadvertent side-effect of the filter bubble, and it got me thinking – what would happen if it were made deliberate? A corporation with that much power could, theoretically, start deliberately adjusting their algorithms to subtly affect the worldview of the people using their service.

Facebook’s hilarious miscarriage aside, Google is now doing exactly this. Wired Magazine reports that a Google subsidiary is going to deliberately attempt to feed misinformation to potential ISIS recruits.

Much has been written about cyber warfare, and what it might look like – hackers, viruses, trojans, groups of dangerous people taking down power plants and military bases. Much less has been written about the more insidious form of information warfare that’s crept up on us over the last few years, and practically nothing about calling large search engines to account.

Today, Jigsaw (a Google subsidiary) is trying to identify potential ISIS recruits and change the results they get, feeding them anti-propaganda to dissuade them from signing up. Maybe it won’t work – I imagine that most recruiting is done peer-to-peer in any case – but maybe it will.

And if it does work, it sets a very worrying precedent. Up until this point, it’s been in Google’s best interests to vacuum up as much of the Internet as possible, and optimize it relentlessly to get you where you’re going. But what if Google decides that, for whatever reason, they’re a national security asset now, and they have a responsibility to tailor search results away from dangerous ideas?

That’s a slippery slope of note, because it opens the door for people to start redefining what those dangerous ideas are. To any reasonable person, a dangerous idea is one that could result in physical harm or a loss of property.

To a militaristic dictatorship, a dangerous idea is any one that teaches common people to arm and defend themselves. In a police state, a dangerous idea is one that reminds people of their rights under the law. In a communist dictatorship, a dangerous idea is that people are entitled to the fruits of their own labor, and that being constantly stripped of their wealth is not the best way to run a country.

Anything that upsets the balance of power could be considered dangerous, whether or not that power is being wielded fairly or equitably. And with the sheer amount of power we’re giving search engines over our lives, I think it’s worth asking whether we’re actually being shown a fair representation of ideas, or only the ones deemed “acceptable”.

In the past, the news media were the gatekeepers of that, and they have rightly been criticized for withholding information of vital public interest. The internet has always acted as a bulwark against that, creating a forum where all speech is equal. And now it seems we’re slowly sliding back towards a world where there are gatekeepers again, and fringe speech is marginalized at the behest of the powerful.

So anyway, the moral of the story is that I regret not writing the short story I had in mind in 2013. It dealt with more or less exactly this: what would it look like if companies started shaping information that we thought was being ranked on technical merit alone? Would people even notice that their ideas were being deliberately adjusted on a network they thought was free and open? Who would line up to pull the strings, to use information as the next theater for cyber warfare?

Had I written that then, it would have been topical now. Next time I’ll have to do better.
