The Human CPU


Two of my favorite things in this world: Finding arbitrary connections and correlations, and survival crafting games. This post is about the former.

Earlier this year I took a trip to Melbourne, and got to spend some time looking at a very modern skyline. A few ideas occurred to me there that I only recently found a good way to verbalize.


So let’s start with a very basic introduction to how CPUs work. Everything your computer does is ultimately tied back to a series of bits – ones and zeroes – that move through the very delicate circuitry of your Central Processing Unit.

How do you get from 1s and 0s to cat videos? That’s a very long story, involving addressable memory, registers, clock frequencies and more, but the very basic unit of computing you need to know about is a transistor.


A transistor is a very special, and very tiny, electronic switch that can either block or allow electric current to pass through it. Each transistor handles one bit at a time, and modern CPUs are packed with billions of the things – a current-generation server-grade CPU can have several billion on a single die (chip).

Each individual transistor has no way of knowing what’s going on in the overall system. All it does is receive and transmit electrical impulses as designed, and with the combined effort of billions of transistors, we as the end users experience the magic of computing.

It’s important to note that these transistors are “networked”, in a sense, too – they’re all connected with physical circuitry to allow current to pass through them. An isolated transistor is useless.
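To make the switching idea concrete, here’s a toy sketch in Python. Real transistors are analog semiconductor devices, not Python functions – this only illustrates the point above: a single switch is useless on its own, but wire a few together and you get logic.

```python
# Toy model: each "transistor" acts as a switch that conducts only
# when its gate input is on. This is an illustration, not a circuit
# simulation.

def nand(a: int, b: int) -> int:
    # Two switches in series pull the output low only when BOTH
    # inputs are on - otherwise the output stays high.
    return 0 if (a and b) else 1

def and_gate(a: int, b: int) -> int:
    # Networking gates together gives richer behavior: AND is just
    # a NAND followed by a NAND wired as an inverter.
    n = nand(a, b)
    return nand(n, n)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_gate(a, b))
```

Neither function “knows” it’s computing AND – each one just passes or blocks a signal, which is the whole trick.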


Hardware is useless without software though, and there too, computers exist as layers built upon layers. In 2016 you’re lucky enough to use an IDE to write a high-level language, with all the hard work of computing abstracted away from you. Everything you write gets compiled down to an impossibly long series of binary instructions that the hardware can execute.

And again, the software that runs on the hardware has no real idea what it’s doing, either. Most of it involves moving bits to and from memory, under certain conditions. It, too, doesn’t understand things at the cat-video level.
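That “moving bits under certain conditions” can be sketched as a tiny toy machine. The instruction set below is made up for illustration – but notice that none of the individual instructions knows it’s multiplying, yet together they do:

```python
# A toy "machine": the only operations are setting, adding,
# decrementing, and a conditional jump. The instruction names
# (SET/ADD/DEC/JNZ) are invented for this sketch.

def run(program, registers):
    pc = 0  # program counter: which instruction runs next
    while pc < len(program):
        op, *args = program[pc]
        if op == "SET":        # SET reg, value
            registers[args[0]] = args[1]
        elif op == "ADD":      # ADD dst, src  (dst += src)
            registers[args[0]] += registers[args[1]]
        elif op == "DEC":      # DEC reg       (reg -= 1)
            registers[args[0]] -= 1
        elif op == "JNZ":      # JNZ reg, addr (jump if reg != 0)
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# 6 * 7 as repeated addition: add x to acc, n times.
program = [
    ("SET", "acc", 0),
    ("ADD", "acc", "x"),   # address 1: the loop body
    ("DEC", "n"),
    ("JNZ", "n", 1),       # loop back while n != 0
]
result = run(program, {"x": 6, "n": 7})
print(result["acc"])  # 42
```

Multiplication emerges from a loop of additions the same way cat videos emerge from bit-shuffling: the complexity lives in the arrangement, not in any single step.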

So this is where my question gets a bit more interesting. Computers are really powerful – they consist of:

  • Finely-tuned and optimized hardware,
  • Running programmable software,
  • Maintained by human programmers who derive a very complex result,
  • Despite the hardware and software running very simple instructions.

With that in mind, let’s talk about humans for a little bit.


As of 2016, between urbanization and globalization, two trends have been on the uptick pretty much since the end of World War II:

  • More people are moving into cities (urban centers are growing) and
  • More people are networked via telephony, and now the Internet

Go to any major metro today, and you’ll see the same thing: A skyline packed with impossibly tall buildings, with millions of people milling about, each doing pretty much their own thing.

Each person alive has a different set of skills, preferences, and inherited advantages (or disadvantages), and will attain various levels of success, attributable to various levels of effort, grit, and sometimes luck.

However, as sophisticated as our modern economy and society looks, it tends to operate on very basic principles. Zoom in far enough, and you’ll see the same basic elements of trade – people making, people buying, people selling, people providing services to make all of that more efficient.

It’s also evident that cities follow people – for example, as mines dried up and people left to find work in cities, formerly-busy towns slowly degraded into ghost towns. People – and more specifically trade – are the lifeblood that dictates whether a town shrinks or grows.


And the governing principle there? I suggest it might be our ideas.

As humans, we have thought, and that thought (plus experience and data) has given rise to the concept of an idea: A perception of how things might be, as opposed to how they currently are.

Or, taken another way, an idea is a shorthand for a set of positions and beliefs (like the idea that people should own property, defend their families and their way of life).

More than anything else, ideas have shaped human development. If it were not for our capacity to have ideas, we wouldn’t have progressed much further than the stone age, constantly living just to service our needs in the moment.

For example: if it were not for ideas, then smaller towns might not shrink. Instead of getting the idea of pursuing better opportunities elsewhere, people might simply turn to subsistence farming where they are, reducing their standard of living to match what the surrounding environment offers. You know, like how basically every other animal on the planet operates?

So now this is where it gets a little weird.


If you accept that a modern computer is a collection of hardware, powered by electricity, programmed with flexible software, running basic instructions that roll up to a complex outcome for the end-user,

And you also accept that a modern city is a collection of infrastructure (buildings), powered by the people that live in it, each person acting in limited self-interest, driven by a set of ideas they accumulate from the world around them:

  1. Is it possible that the balance of ideas in the world is not accidental, and
  2. What sort of higher, complex benefit might someone derive from the low-level interaction of ideas?

So that’s the first thing to think about. As the end-users outside the system, we can create instructions, send them to a computer, watch what it does, and improve on the way the software runs. The computer has no reference point for what it’s doing – it just blindly trusts instructions issued to it.

Does humanity work the same way? Are we just blindly trusting instructions, doing our limited best without comprehending the larger picture? Is there an aggregate outcome of our individual efforts and ideas that we’re not aware of?


Ideas themselves have evolved over time – from stone axes to the Universal Declaration of Human Rights, our ideas (and our capacity for bigger ideas) have grown every bit as much as our capacity for technological innovation.

What if there were a force – outside our individual comprehension – that was iterating on the quality of ideas, in the same way a software developer iterates on the quality of their software? We’ve had some whopping bad ideas in our history (like racial purity, sun worship, human sacrifice), and more recently, some very good ones (democracy, ownership, free thought).

Ideas used to move at the speed of trade. They were carried in the form of myth and legend, by travelers and merchants, for thousands of years until technology came along. In almost no time at all, ideas started moving across radio and telegraph, into widely-circulated print, and now, in the last fifty years, onto a global communications network unlike anything that came before.

With that speed of connection comes the speed of evolution. Ideas can be born, shared, grown, tested, challenged, discredited, and die out a lot faster today than they could a hundred years ago.


In 2016, ideas can move roughly at the speed of thought. People can post half-baked ideas (like this one) to a webpage that can instantly be accessed, digested and iterated on by a potentially infinite audience.

In the space of one day, you can come across new information that completely changes how you see the world, and by the next day, you can become a publisher of your own ideas.

I wonder what will happen when technology breaks down the next barrier, and lets cultures trade ideas without the restriction of language. For one, I know for a fact that humans already can’t deal with this new level of sharing – what used to be socially acceptable not even 20 years ago is taboo today.

Keeping your ideas up to date in 2016 is much harder work than it would have been in 1986, with new information becoming available almost daily, and every perception basically under constant attack. And with ubiquitous access to the Internet, ignorance is less and less of an excuse.


So whatever the next ten years hold, it’ll sure be interesting to watch. There are already compact new ways of expressing ideas (memes) and narratives (emoji), new rules evolving for how they should work, and new expectations that come from a generation of children growing up in an always-online world.

This new generation, and the 2-3 that come after it, will be growing up in a weirdly connected world with totally different rules. They’ll form a remarkably efficient human CPU – a hybrid human/technology engine for executing, iterating and discarding new bits of information faster than anything that came before.

Artificial Intelligence will have its work cut out 😉

Published by

Wogan May

By day, I run a software development agency focused on business tools, process optimization, data integration and automation. By night, I build tools and platforms that serve online creators.
