The Mother of All Demos

This post is more than a year old. The information, claims or views in this post may be out of date.

Put on your tinfoil hats, we’re going for a ride.

I think James Bridle said it best:

What concerns me is that this is just one aspect of a kind of infrastructural violence being done to all of us, all of the time, and we’re still struggling to find a way to even talk about it, to describe its mechanisms and its actions and its effects.

James Bridle, Something is wrong on the internet

There are a good few posts I could write about social media and hyper-connectedness, and how they’re having an erosive effect on our collective identities and well-being, but right now I specifically want to talk about serverless architecture.

I’m an active participant on the Indie Hackers forums, and the recent conversation about serverless websites was one I pretty much had to get involved in.

It’s a problem I keep running into – a dynamic that makes me uneasy, in a way that’s hard to pin down. That’s what I’m trying to do in this post right now: give an outline of a problem that I’m sure most people aren’t yet aware of as a problem. With the possible exception of Cory Doctorow and everyone in his orbit, and I’d hate to end up in a world where he was right all along.

Serverless architecture is the hot new thing, haven’t you heard? Absolute magic: You only need to write the bare minimum code to make your business logic work, then upload it into a magic cloud that takes care of putting it on the internet for you. And you don’t even have to pay for the server!

I have the feeling that a lot of relative newcomers to web development over the last few years are excited about this, mostly because it lets you build APIs without actually having to know anything about how APIs work. So long as you can define a NodeJS HTTP Response, you’re good to go – and that’s the sort of stuff that can be taught inside of a day, assuming you never explain the OSI Model, TCP/IP, how browsers actually connect to servers, what a network stack is, what ports are, what networks even are, and so on.

In short, what serverless encourages is bare-minimum ability: learning only enough to make a vendor’s specific implementation work, instead of learning any of the principles that underpin it, or whether or not they’re even good designs.

Personally, I’m in favor of learning as much as I can about any given system. I like being able to do new things, and understanding how and why things work, and trying to discern the governing principles beneath them. That’s why I’ve read most of Paul Graham’s essays, I know what a Twelve Factor App is, I understand MVPs in relation to a sales cycle, and I understand the difference between programming and engineering (among other things).

On public forums, this is usually the point where someone will argue that all developers are equal, and that it doesn’t matter if you can only do frontend pages in Javascript and other people can program embedded devices directly in Assembly, everyone deserves to be taken seriously.

Then the conversation turns into a semantic nightmare where the saner voices might suggest that a person’s value as an individual is not tied to their skill in any given field, which then generally collapses into a bunch of name-calling.

The sanest people don’t engage in these conversations in the first place.

As far as I’m concerned, the people who have spent literal decades in the trenches – writing code, making mistakes, and designing the systems that underpin everything else we do (things like the Linux kernel, the HTML spec, TCP/IP, or software team principles) – generally do have more valuable things to say about how software should be built than someone who just got hired at their first frontend job.

That’s generally the point where I get booted out of the conversation for being antagonistic. Try as I might, I cannot fairly equate the skills and experience of a first-timer to someone who literally helped define the field they’re working in.

I’m learning to move on from these situations though.

My approach of late is to try looking at the bigger picture, which right now is best summed up like this:

Microsoft, Google, Facebook and Amazon own and manage a substantial portion of the information that flows through the internet, and by extension, the foundations of communication, productivity and trade in the 21st century.

Note: For the purposes of this article I’m ignoring Apple – while they have an enormous market cap and built genre-defining hardware, their walled-garden approach to developing software for those devices might be the thing that ultimately exonerates them.

Each of them has built up an enormous, complicated set of technologies that makes their core businesses work – all of which has been exported to the public market:

  • Microsoft: .NET and Azure
  • Google: Angular and the Google Cloud Platform
  • Facebook: ReactJS and the social graph
  • Amazon: Amazon Web Services

Each tech giant, being American, also exports a particular set of cultural norms. In a high-performance culture that worships productivity above all else, anything done in service of greater productivity passes without question, including:

  • Keeping people connected and online all the time
  • Encouraging people to form family bonds in the workplace
  • Defining and projecting values (not just mission or vision) – Google in particular has explicit “Googley Values”
  • Constantly aligning, encouraging and nudging people in their employ to live up to a vision that is ultimately exploitative in the wider market (with the likely exception of Microsoft), and
  • Acting with the default assumption that they’re entitled to do whatever they like with the data entrusted to them by billions of people. Move fast, break things – right, Facebook?

Then there’s the happy tech-skill halo effect, first unearthed by Microsoft. If you export the tools you use within your business and make them available on the open market, anyone who learns how to use them becomes an excellent technical recruit – with no significant investment in training or upskilling needed to get them productive on your systems.

They would also use your tools and products in their own work, further embedding the reach of your paradigms well beyond your own company. Individual open-source developers understand the pull of this all too well (it’s a sure route to power).

Microsoft built .NET to have a framework for Microsoft to build Microsoft products, but by making it available to businesses and developers in general, they got to look like visionaries for making a big set of free tools available – in return for the near-invisible effect of re-shaping developers into capable Microsoft candidates.

Note: Gamers will recognize this. It’s the plot of Mass Effect: An ancient civilization left behind “mass relays”, devices that enable interstellar transport so long as you build your ships to interface with them – which made it much easier to round everyone up at harvest time, since all technology had been shaped in the image of that prior race.

That’s been amplified (almost inadvertently) over the last 10 years, with people shifting more of their social interactions online. They’re not just tools anymore – they’re communities, with a set of shared personal values that dictate whether or not your conduct is acceptable.

If that sounds crazy, consider that the Contributor Covenant has been adopted by over 100,000 open source projects and has been publicly endorsed by major organizations. While the content of the covenant itself is progressive and relatively harmless, the mechanism of establishing a centralized monoculture across the internet is not.

So as of 2018, if you’re a “React Developer”, that’s not really a job title. It’s also not just an expression of which languages or tools you use. It potentially signals membership in, and acceptance of, a certain way of doing things – almost certainly underpinned by the value systems and worldview of a corporation that’s absolutely not acting in your best interests.

If that sounds a bit tinfoil-hatty to you, don’t worry, I hear you. It sounds that way to me too, believe me. Thing is, I’ve been building websites since before Facebook, and something has just started feeling wrong over the last few years.

And I suspect it’s got something to do with the unexpected impact of having the world’s largest technology companies also

  • establish and enforce behavior and personal values among employees and community members while also
  • owning the technical direction of the tools we use to make and share things while also
  • having enormous financial and political power in the largest market on Earth while also
  • not acknowledging any of this impact, instead stating all of the metrics in universally-friendly terms while also
  • making decisions that have widespread negative impacts on people that we’re only now just beginning to understand.

And these are all the companies selling you serverless technology today: A paradigm that further divorces the developer from the computer, abstracting away all the vital parts that, if understood, could be improved on for everyone’s benefit – like how the web has been built up until this point.

Instead, by coercing new generations of developers into paradigms that divorce them from the (relatively) simple act of bringing a basic website online, these companies are pushing us into a future where software development is not a thing you do on your computer in your home: It’s a thing you do on their services, in their languages, with their tools.

At this point I sound exactly like Cory Doctorow, and I wish this nightmare would end.

This exact paradigm has already proven problematic, with the Electronic Frontier Foundation taking manufacturers to task over things like not being able to update the firmware on your own car. That’s a domain that’s foreign to me, but web development is not, and I’m already dreading the day we wake up to realize that the internet itself exists in the hands of a few large companies.

Maybe we’re already there, I don’t know. I just like to think there’s still enough variety in hosting, languages, tools and approaches, that there will continue to be individual innovation that advances things for all of us.


The title is a reference to the work of Douglas Engelbart. The 9th of December this year marked the 50-year anniversary of The Mother of All Demos – a 1h40m collaborative presentation in which he and a team of engineers demonstrated hardware and software that effectively showcased the future.

The video is long and somewhat boring, so it’s more interesting to list out the sort of stuff they covered:

The live demonstration featured the introduction of the computer mouse, video conferencing, teleconferencing, hypertext, word processing, hypermedia, object addressing and dynamic file linking, bootstrapping, and a collaborative real-time editor.

Fifty years ago, a bunch of engineers had thought through the problems that we’re still dealing with in modern-day computing. Even if they didn’t solve those problems as effectively as we have today, they were at least able to articulate and navigate them.

It’s the example I keep coming back to in conversations about “new problems” in computing: There barely are any, at least not in the domains of communication or productivity. It really helps knowing the history of this stuff, too – you might save yourself a lot of trouble if you learn from the people that tried solving these problems before you.

But then that’s “old”, so it’s automatically discarded as irrelevant, and people move on to the next new shiny thing. A perpetual disregard for history and obsession over disrupting the status quo – an impulse firmly embedded in tech culture by two decades of reckless Silicon Valley gospel.

It’s time to talk about charts


Have you ever felt annoyed that someone tried using a world map chart to visualize country-level data?

Or is that just me?

(It’s probably just me.)

Over the last few weeks I’ve been picking up more books to read (as part of my drive to write more), one of them being Content Inc – recommended to me as a good introduction to content marketing, and how powerful it can be.

The book vacillates between content production at the individual level, and the corporate level. Some of the stories focus on single-person startups, and how they tested ideas and built businesses off the back of content production. The rest of it, haphazardly, deals with how to maintain that within a larger organization (team structures, responsibilities, and so on).

What struck me about the individual stories though was the relative simplicity of the focus. One person wrote about writing – another, about real estate. A third simply wrote about how to get more value out of your camera. All of those, over time, became profitable businesses – the key ingredients being effort, and no small amount of passion for the target subject.

It got me thinking about an idea I had years ago, when first starting to work with Domo. Without going into too much detail, one of the things that intuitively clicked for me during the first few weeks was the bright-line relationship between business management and data visualization.

Borderline-buzzword sentence, I know.

The practice seemed to hit at the intersection of a few of my interest areas – complex systems, data and numbers, and visual communication – and it wasn’t very long before I was already planning out an enormous series of content on how to get the best value out of different data visualization options.

That content never materialized. I had put it on my internal roadmap: to develop “added value” in the form of training content that our consultants could use to help plan best-practice dashboards.

In the end, Domo themselves reached a new level of maturity in their operating models, and that filtered through to the training we got. For that (and quite a few other reasons), the content was never built.

Reading Content Inc made me dust that idea off again. I know for a fact that I can produce useful, actionable content on this topic, having done it before. I’ve also learned, somewhat accidentally, that this is a passion of mine.

In retrospect it might be obvious, but the revelation really came to me on a recent customer project. We were planning out a series of dashboards, and someone wanted to include a world map chart where there didn’t need to be one. That led to a long (and I want to use the word “vibrant”) discussion on whether or not we should include it.

Afterwards, reflecting on that conversation, I realized how deeply I had internalized the principles I had been learning since 2013 – and how naturally they seemed to fit in with the rest of my thinking.

So between that, and my desire to write and publish content more frequently, I’ve decided to take a stab at maintaining a data visualization blog, with a specific focus on practicality: There are amazing interactive visualizations out there (Jer Thorp in particular will always be in my pantheon of data deities), but most of the visualizations we use in daily life are much more basic.

Software has, I think, tricked too many people into thinking charts are easy. I’ve seen so many presentations, Excel workbooks, and “professional”-level reports that end up being hard to extract any real understanding from.

Simple rule: If your chart is accompanied by a “how to read this chart” helper, you haven’t built a good chart.

And I think this might be the thing that I tackle next: A bit of theory, but mostly practical advice on how to construct good charts. And there are a lot of scenarios to consider – more than enough to build a solid resource for the “everyman” visualization work.

So I’ll be building out plans and content for this over the next few weeks, and hope to launch a new site before the year is out. If there’s one thing I’ve learned so far, it’s that good data visualization is timeless.

In particular, we’ve had stacked bar charts since as far back as 1780.

Once it goes live, I’ll be posting about the site here. If you want to be alerted when that happens, consider subscribing to my blog – widget’s on the top right.

Rise of the Machines


Right now, we’re living in one of the most momentous times in human history, and it could end up being one of the best (or, possibly, worst) things to unfold: our inevitable transition to what Maurice Conti calls the Augmented Age.

Computers have become part of mainstream life in every advanced economy, and basically all major cities around the world (into which people are packing in ever-greater numbers). The resulting efficiency gains have either been a huge boost to creativity and opportunity, or the death-knell of industries that employ tens of millions of people.

I’d like to share two different perspectives on this – both, conveniently, delivered as excellent TED talks. The first is by Maurice Conti, on how advances in computing have changed the way design could be done.

The most remarkable thing about the computer-derived inventions is how biological they look. It took nature millions of years to evolve a structure that these computers can generate inside of a few days (referring to the drone chassis) – and in future, could generate on demand.

I think this is the best insight into how the leading edge of computing might change the way we design cities, vehicles, infrastructure, and the machines that help run our lives. It’s encouraging to note that human designers are still very much a part of the process, but will be able to do a lot more in a lot less time.

Which is a factor leading into the next TED talk – what happens when you centralize that amount of power (and consequently, the financial gains) in the hands of a relative few? People who are skilled at these technologies are able to create enormous value in a short space of time, relative to someone still doing the same task manually.

So what happens when you no longer have a need for the manual labor?

Another excellent talk that takes an unbiased view of Unconditional (I prefer Universal) Basic Income. It raises some good points, but misses at least one point I need to make a note of:

While it’s true that the top 5 tech companies are enormously valuable and employ relatively few people, the platforms they create have in turn generated opportunities for millions more. There are companies, products, services and entertainment channels that could not have existed were it not for the infrastructure and tools that Facebook and the like provide.

Google basically pulled the web development industry up out of the ground when it became clear to businesses that having a well-built site was a competitive advantage. I’m not sure anyone can count the number of new jobs created in web development, creative design, copywriting, SEO, consulting and education as a result of the platform Google built.

(Yes, I know Google didn’t build the internet. And yes, I know all these websites run on the internet that Google didn’t build, but everyone who’s ever been paid to build one has done so at the request of a customer who believed that being discoverable online would be beneficial to their business, and Google is still the king of discovery on the internet.)

Same goes for the use-cases enabled by Apple hardware, Facebook’s networking, Amazon’s fulfillment infrastructure, and the productivity tools released by Microsoft. Those companies themselves may employ relatively few, but they have empowered millions more.

Moving on.

I think UBI is feasible not so much because of productivity gains due to automation, but because of the ever-declining costs of providing an acceptable standard of living. An excellent, recent example of this is Apis Cor’s house printer.

On the one hand: This technology might end up putting a lot of construction workers out of jobs. While you’ll still need workers for big buildings and the like, simple 1-2 person houses can probably be built quickly, and very cheaply, as a result of this innovation.

But on the flip-side, the cost of houses will plummet. You may not need to work for 20 years to pay off a mortgage for a house that only costs $10k to build. While construction workers might be worried about this, the people who should be a lot more worried are the ones with heavy investments in residential development companies 😉

I like to imagine a future unconstrained by urbanization. Cities are where the opportunities are – the best jobs are in cities, the best entertainment, the best healthcare, and overall, the best opportunities to live a good life. This is because it’s a lot easier, with the current limitations, to pile a lot of services into one place.

I don’t believe civilization needs to be so centralized, though. If you could get the same quality of food, healthcare, entertainment and job opportunity in an area 200km outside a major city, plus it was cheaper to live there – wouldn’t you?

And there may come a time when we have to. Most major cities (and by extension, most of the world’s population) are located relatively close to a coastline. Historically, cities were founded and grew near coastlines because those afforded the best opportunities for global trade.

Well, that’s under threat. Depending on who you believe, climate change is either a myth, or it’s a reality already underway – and one of the most dire consequences will be the rise of the ocean level. Which, if that happens, will start to make the large, coastal cities unlivable.

We will be forced to start again – massive inland migrations, the design of new cities, infrastructure and services to support the population, while simultaneously ensuring people have a shot at an acceptable standard of living. With the lessons we’re learning today, I imagine those cities (and societies) will look very different.

Between the work of engineers like Maurice and researchers like Federico, I’m optimistic that we’ll be well-equipped to meet those challenges in future.

Some thoughts on Mastery


Over the last few weeks I’ve been wrestling with the question of what to do next, career-wise. In doing that, I’ve been re-evaluating most of what I’ve been working on over the last few years, trying to figure out what actually made me happy, what worked to advance my career, and what held me back.

One of the things I consistently identified as being a positive, was being in a situation where I had the opportunity to develop mastery in a particular subject. I think anyone who’s driven by the need to learn would identify with that.

A new and interesting point (to me, anyway) is the idea that mastery itself is relative. I’d always thought of it as an absolute: that there’s a known limit to a given subject, and if you can reach that limit of knowledge, you’re a master in it. Sink 10,000 hours into something, and you’re the best.

That doesn’t really seem to be the case, though. In order to truly develop mastery in anything, you need to keep surrounding yourself with people who are better than you, and learn from them. There’s a quote that, like most quotes, has a fuzzy origin:

If you’re the smartest person in the room, you’re in the wrong room.

It’s really obvious in hindsight. If you’re the smartest developer at your company, that doesn’t mean you’ve mastered software development – just that you’ve hit the limits of your current environment. To keep growing, you need to find smarter developers to learn from, and inevitably, teach other developers what you know.

And even then, the goalposts keep moving. For instance, being a ‘master’ software developer 30 years ago required the command of much fewer tools and languages. To be an even half-decent full-stack developer in 2016 requires you to understand a bit of everything, from servers to UX, and all the different languages those are expressed in.

Meaning that mastery is an unattainable goal, but by far the worthiest to pursue.

The Human CPU


Two of my favorite things in this world: Finding arbitrary connections and correlations, and survival crafting games. This post is about the former.

Earlier this year I took a trip to Melbourne, and got to spend some time looking at a very modern skyline. A few ideas occurred to me there that I only recently found a good way to verbalize.


So let’s start with a very basic introduction to how CPUs work. Everything your computer does is ultimately tied back to a series of bits – ones and zeroes – that move through the very delicate circuitry of your Central Processing Unit.

How do you get from 1s and 0s to cat videos? That’s a very long story, involving addressable memory, registers, clock frequencies and more, but the very basic unit of computing you need to know about is a transistor.


A transistor is a very special, and very tiny, electronic switch that can either block or allow electric current to pass through it. Each transistor handles one bit at a time, and modern CPUs are packed with billions of the things – a current-generation server-grade CPU can have several billion on a single die (chip).

Each individual transistor has no way of knowing what’s going on in the overall system. All it does is receive and transmit electrical impulses as programmed, and with the combined effort of hundreds of millions of transistors, we as the end users experience the magic of computing.

It’s important to note that these transistors are “networked”, in a sense, too – they’re all connected with physical circuitry to allow current to pass through them. An isolated transistor is useless.


Hardware is useless without software though, and there too, computers exist as layers built upon layers. In 2016 you’re lucky enough to use an IDE to write a high-level language, with all the hard work of computing abstracted away from you. Everything you write gets compiled down to an impossibly long series of binary instructions that the hardware can execute.

And again, the software that runs on the hardware has no real idea what it’s doing, either. Most of it involves moving bits to and from memory, under certain conditions. It, too, doesn’t understand things at the cat-video level.

So this is where my question gets a bit more interesting. Computers are really powerful – they consist of:

  • Finely-tuned and optimized hardware,
  • Running programmable software,
  • Maintained by human programmers, who derive a very complex result,
  • Despite the hardware and software running very simple instructions

With that in mind, let’s talk about humans for a little bit.


As of 2016, between urbanization and globalization, there have been two trends on the uptick pretty much since the end of World War II:

  • More people are moving into cities (urban centers are growing) and
  • More people are networked via telephony, and now the Internet

Go to any major metro today, and you’ll see the same thing: A skyline packed with impossibly tall buildings, with millions of people milling about, each doing pretty much their own thing.

Each person alive has a different set of skills, preferences, inherited advantages (or disadvantages), and will attain various levels of success, attributed to various levels of effort, grit, and sometimes luck.

However, as sophisticated as our modern economy and society looks, it tends to operate on very basic principles. Zoom in far enough, and you’ll see the same basic elements of trade – people making, people buying, people selling, people providing services to make all of that more efficient.

It’s also evident that cities follow people – for example, as mines dried up and people left to find work in cities, formerly-busy towns slowly degraded into ghost towns. People (and more specifically, trade) are the lifeblood that dictates whether a town shrinks or grows.


And the governing principle there? I suggest it might be our ideas.

As humans, we have thoughts, and those thoughts (plus experience and data) have given rise to the concept of an idea: a perception of how things might be, as opposed to how they currently are.

Or, taken another way, an idea is a shorthand for a set of positions and beliefs (like the idea that people should own property, defend their families and their way of life).

More than anything else, ideas have shaped human development. If it were not for our capacity to have ideas, we wouldn’t have progressed much further than the stone age, constantly living just to service our needs in the moment.

For example: if it were not for that, then smaller towns might not shrink. Instead of getting the idea of pursuing better opportunities elsewhere, people might turn to subsistence farming in their area instead, reducing their standard of living to match what the surrounding environment offers. You know, like how basically every other animal on the planet operates?

So now this is where it gets a little weird.


If you accept that a modern computer is a collection of hardware, powered by electricity, programmed with flexible software, running basic instructions that roll up to a complex outcome for the end-user,

And you also accept that a modern city is a collection of infrastructure (buildings), powered by the people that live in it, each person acting in limited self-interest, driven by a set of ideas they accumulate from the world around them:

  1. Is it possible that the balance of ideas in the world is not accidental, and
  2. What sort of higher, complex benefit might someone derive from the low-level interaction of ideas?

So that’s the first thing to think about. As the end-users outside the system, we can create instructions, send them to a computer, watch what it does, and improve on the way the software runs. The computer has no reference point for what it’s doing – it just blindly trusts instructions issued to it.

Does humanity work the same way? Are we just blindly trusting instructions, doing our limited best without comprehending the larger picture? Is there an aggregate outcome of our individual efforts and ideas that we’re not aware of?


Ideas themselves have evolved over time – from the stone axe to the Universal Declaration of Human Rights, our ideas (and our capacity for bigger ideas) have evolved every bit as much as our capacity for technological innovation.

What if there were a force – outside our individual comprehension – that was iterating on the quality of ideas, in the same way a software developer iterates on the quality of their software? We’ve had some whopping bad ideas in our history (like racial purity, sun worship, human sacrifice), and more recently, some very good ones (democracy, ownership, free thought).

Ideas used to move at the speed of trade. They were carried in the format of myth and legend, by travelers and merchants for thousands of years until technology came along. In almost no time at all, ideas started moving across radio and telegraph, into widely-circulated print, and now in the last fifty years, on to a global communications network unlike anything that came before.

With that speed of connection, comes the speed of evolution. Ideas can be born, shared, grown, tested, challenged, discredited, and die out a lot faster today than they could a hundred years ago.


In 2016, ideas can move roughly at the speed of thought. People can post half-baked ideas (like this one) to a webpage that can instantly be accessed, digested and iterated on by a potentially infinite audience.

In the space of one day, you can come across new information that completely changes how you see the world, and by the next day, you can become a publisher of your own ideas.

I wonder what will happen when technology breaks down the next barrier, and lets cultures trade ideas without the restriction of language. For one, I know for a fact that humans already can’t deal with this new level of sharing – what used to be socially acceptable not even 20 years ago is taboo today.

Keeping your ideas up to date in 2016 is much harder work than it would have been in 1986, with new information becoming available almost daily, and every perception basically under constant attack. And with ubiquitous access to the Internet, ignorance is less and less of an excuse.


So whatever the next ten years hold, it’ll sure be interesting to watch. There are already compact new ways of expressing ideas (memes) and narratives (emoji), new rules evolving for how they should work, and new expectations that come from a generation of children growing up in an always-online world.

This new generation, and the 2-3 that come after it, will be growing up in a weirdly connected world with totally different rules. They’ll form a remarkably efficient human CPU – a hybrid human/technology engine for executing, iterating and discarding new bits of information faster than anything that came before.

Artificial Intelligence will have its work cut out 😉

Could have been a prophet


Back in 2013 I started learning about the “filter bubble” – a natural result of the behavior- and preference-driven algorithms that power major search engines. Between your search history, the links you click on, and the sites you visit that are tagged with Google Analytics (which is a lot of websites nowadays), search engines like Google can make a reasonable approximation of what results might interest you.

There’s nothing inherently nefarious about this. Google’s interest is in getting you to the right website as quickly as possible, and they’ve done a phenomenal job at it. The better results you get from Google, the more you use them – which means you’re more likely to click the ads that get served alongside search results.

The problem, though, is that it sacrifices diversity of ideas. If you're a habitual Fox News reader, you follow Donald Trump, and routinely watch his speeches on YouTube, then the next time you search for something like "abortion" or "gun rights", the filter bubble will serve up results from sites it thinks you want to see, and you'll get a very right-wing view of the situation.

The search engine will dutifully give you the information it thinks you want to see, but may not give you the information you need, and that’s where the problem comes in. So if you’re in a really bad situation (say, unwanted pregnancy in a conservative and oppressive religious family), and you need level-headed information on whether abortions are safe, legal, and where to get them, Google won’t know to give you that.
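As a toy model – emphatically not Google's actual system, and every name and number here is hypothetical – the feedback loop might look like this: results get boosted by how closely they match an interest profile inferred from your history, so the viewpoints you already click on compound over time.

```python
# Toy sketch of filter-bubble re-ranking (all names and weights hypothetical).

def personalize(results, profile):
    """Re-rank results so topics the user already engages with float up.

    results: list of (url, topic, base_relevance) tuples
    profile: dict mapping topic -> affinity inferred from past behavior
    """
    def score(result):
        url, topic, base = result
        # Each click on a topic nudges its affinity up, so the boost
        # compounds: the more of one viewpoint you read, the more you see.
        return base * (1 + profile.get(topic, 0.0))

    return sorted(results, key=score, reverse=True)

results = [
    ("rightwing-news.example/abortion", "conservative", 0.80),
    ("health-service.example/abortion-facts", "medical", 0.85),
]
profile = {"conservative": 0.5}  # built up from search history and clicks

ranked = personalize(results, profile)
# The conservative result scores 0.80 * 1.5 = 1.20 and now outranks the
# more medically relevant one at 0.85 * 1.0 = 0.85.
```

The point of the sketch is that nothing in it is malicious – it is a plausible relevance optimization – yet the medically useful result quietly drops below the one that flatters existing preferences.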

That’s the inadvertent side-effect of the filter bubble, and it got me thinking – what would happen if it were made deliberate? A corporation with that much power could, theoretically, start deliberately adjusting their algorithms to subtly affect the worldview of the people using their service.

Facebook’s hilarious miscarriage aside, Google is now doing exactly this. Wired Magazine reports that a Google subsidiary is going to deliberately attempt to feed misinformation to potential ISIS recruits.

Much has been written about cyber warfare, and what it might look like – hackers, viruses, trojans, groups of dangerous people taking down power plants and military bases. Much less has been written about the more insidious form of information warfare that’s crept up on us over the last few years, and practically nothing about calling large search engines to account.

Today, Jigsaw (/Google) is trying to identify potential ISIS recruits, and change the results they get to feed them anti-propaganda, to dissuade them from signing up. Maybe it won’t work – I imagine that most recruiting is done peer-to-peer in any case – but maybe it will.

And if it does work, it sets a very worrying precedent. Up until this point, it’s been in Google’s best interests to vacuum up as much of the Internet as possible, and optimize it relentlessly to get you where you’re going. But what if Google decides that, for whatever reason, they’re a national security asset now, and they have a responsibility to tailor search results away from dangerous ideas?

That’s a slippery slope of note, because it opens the door for people to start redefining what those dangerous ideas are. To any reasonable person, a dangerous idea is one that could result in physical harm or a loss of property.

To a militaristic dictatorship, a dangerous idea is any one that can teach the common man to arm and defend themselves. In a police state, a dangerous idea is one that reminds people of their rights under their respective laws. In a communist dictatorship, a dangerous idea is that people are entitled to the fruits of their own labor, and that being constantly stripped of your wealth is not the best way to run a country.

Anything that upsets the balance of power could be considered dangerous, whether or not that power is being wielded fairly or equitably. And with the sheer amount of power we’re giving search engines over our lives, I think it’s worth asking whether or not we’re actually being shown a fair representation of ideas, not just the ones that are deemed “acceptable”.

In the past, the news media have always been the gatekeepers of that, and have rightly been criticized for withholding information that was of vital public interest. The internet has always acted as a bulwark against that, creating a forum where all speech is equal. And now it seems we're slowly sliding back towards a world where there are gatekeepers, and fringe speech is marginalized at the behest of the powerful.

So anyway, the moral of the story here is that I regret not writing the short story I had in mind in 2013. It was a story that dealt more or less exactly with this: what would it look like if companies could start shaping information that we thought was being ranked on technical merit alone? Would people even notice that their ideas were being deliberately adjusted on a network they thought was free and open? Who would line up to pull the strings, to use information as the next theater for cyber warfare?

Had I written that then, it would have been topical now. Next time I’ll have to do better.