
It’s time to talk about charts

Have you ever felt annoyed that someone tried using a world map chart to visualize country-level data?

Or is that just me?

(It’s probably just me.)

Over the last few weeks I’ve been picking up more books to read (as part of my drive to write more), one of them being Content Inc – recommended to me as a good introduction to content marketing, and how powerful it can be.

The book alternates between content production at the individual level and at the corporate level. Some of the stories focus on single-person startups, and how they tested ideas and built businesses off the back of content production. The rest deals, somewhat haphazardly, with how to maintain that within a larger organization (team structures, responsibilities, and so on).

What struck me about the individual stories, though, was the relative simplicity of the focus. One person wrote about writing – another, about real estate. A third simply wrote about how to get more value out of your camera. All of those, over time, became profitable businesses – the key ingredients being effort, and no small amount of passion for the target subject.

It got me thinking about an idea I had years ago, when first starting to work with Domo. Without going into too much detail, one of the things that intuitively clicked for me during the first few weeks was the bright-line relationship between business management and data visualization.

Borderline-buzzword sentence, I know.

The practice seemed to hit at the intersection of a few of my interest areas – complex systems, data and numbers, and visual communication – and it wasn’t very long before I was already planning out an enormous series of content on how to get the best value out of different data visualization options.

That content never materialized. I had put it on my internal roadmap: develop “added value” in the form of training content that our consultants could use to help plan best-practice dashboards.

In the end, Domo themselves reached a new level of maturity in their operating models, and that filtered through to the training we got. For that reason (and quite a few others), the content was never built.

Reading Content Inc made me dust that idea off again. I know for a fact that I can produce useful, actionable content on this topic, having done it before. I’ve also learned, somewhat accidentally, that this is a passion of mine.

In retrospect it might be obvious, but the revelation really came to me on a recent customer project. We were planning out a series of dashboards, and someone wanted to include a world map chart where there didn’t need to be one. That led to a long (and I want to use the word “vibrant”) discussion on whether or not we should include it.

Afterwards, reflecting on that conversation, I realized how deeply I had internalized the principles I had been learning since 2013 – and how naturally they seemed to fit in with the rest of my thinking.

So between that, and my desire to write and publish content more frequently, I’ve decided to take a stab at maintaining a data visualization blog, with a specific focus on practicality: There are amazing interactive visualizations out there (Jer Thorp in particular will always be in my pantheon of data deities), but most of the visualizations we use in daily life are much more basic.

Software has, I think, tricked too many people into thinking charts are easy. I’ve seen so many presentations, Excel workbooks, and “professional”-level reports that end up being hard to extract any real understanding from.

Simple rule: If your chart is accompanied by a “how to read this chart” helper, you haven’t built a good chart.
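To make that concrete, here’s a minimal sketch of the kind of chart I’d reach for instead of a world map – written in Python with matplotlib, and using entirely made-up numbers, so treat it as an illustration rather than a recipe. It’s just a sorted bar chart, and it needs no helper text at all.

import matplotlib.pyplot as plt

# Hypothetical country-level figures -- the kind of data that gets forced
# onto a world map far too often.
revenue = {
    "Germany": 42, "Japan": 35, "Brazil": 28,
    "Canada": 21, "South Africa": 9,
}

# Sort the values so the reader can rank countries at a glance --
# the one thing a map colour scale can't do well.
countries, values = zip(*sorted(revenue.items(), key=lambda kv: kv[1]))

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(countries, values, color="steelblue")
ax.set_xlabel("Revenue ($m, illustrative)")
ax.set_title("Revenue by country")
fig.tight_layout()
plt.show()

Nothing clever about it – which is exactly the point.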

And I think this might be the thing that I tackle next: a bit of theory, but mostly practical advice on how to construct good charts. And there are a lot of scenarios to consider – more than enough to build a solid resource for “everyman” visualization work.

So I’ll be building out plans and content for this over the next few weeks, and hope to launch a new site before the year is out. If there’s one thing I’ve learned so far, it’s that good data visualization is timeless.

[Image: William Playfair’s 1786 chart, “Exports and Imports of Scotland to and from different parts for one Year from Christmas 1780 to Christmas 1781”]

In particular, we’ve had bar charts since as far back as the 1780s.

Once it goes live, I’ll be posting about the site here. If you want to be alerted when that happens, consider subscribing to my blog – widget’s on the top right.

Rise of the Machines

Right now, we’re living in one of the most momentous times in human history, and it could end up being one of the best (or, possibly, worst) things to unfold: our inevitable transition to what Maurice Conti calls the Augmented Age.

Computers have become part of mainstream life in every advanced economy, and basically all major cities around the world (into which people are packing in ever-greater numbers). The resulting efficiency gains have either been a huge boost to creativity and opportunity, or the death-knell of industries that employ tens of millions of people.

I’d like to share two different perspectives on this – both, conveniently, delivered as excellent TED talks. The first is by Maurice Conti, on how advances in computing have changed the way design could be done.

The most remarkable thing about the computer-derived designs is how biological they look. It took nature millions of years to evolve structures that their computers can now generate inside of a few days (referring to the drone chassis) – and, in future, could generate on demand.

I think this is the best insight into how the leading edge of computing might change the way we design cities, vehicles, infrastructure, and the machines that help run our lives. It’s encouraging to note that human designers are still very much a part of the process, but will be able to do a lot more in a lot less time.

Which is a factor leading into the next TED talk – what happens when you centralize that amount of power (and consequently, the financial gains) in the hands of a relative few? People who are skilled at these technologies are able to create enormous value in a short space of time, relative to someone still doing the same task manually.

So what happens when you no longer have a need for the manual labor?

Another excellent talk that takes an unbiased view of Unconditional (I prefer Universal) Basic Income. It raises some good points, but misses at least one point I need to make a note of:

While it’s true that the top 5 tech companies are enormously valuable and employ relatively few people, the platforms they create have in turn generated opportunities for millions more. There are companies, products, services and entertainment channels that could not have existed were it not for the infrastructure and tools that Facebook and the like provide.

Google basically pulled the web development industry up out of the ground when it became clear to businesses that having a well-built site was a competitive advantage. I’m not sure anyone can count the number of new jobs created in web development, creative design, copywriting, SEO, consulting and education as a result of the platform Google built.

(Yes, I know Google didn’t build the internet. And yes, I know all these websites run on the internet that Google didn’t build, but everyone who’s ever been paid to build one has done so at the request of a customer who believed that being discoverable online would be beneficial to their business, and Google is still the king of discovery on the internet.)

Same goes for the use-cases enabled by Apple hardware, Facebook’s networking, Amazon’s fulfillment infrastructure, and the productivity tools released by Microsoft. Those companies themselves may employ relatively few, but they have empowered millions more.

Moving on.

I think UBI is feasible not so much because of productivity gains due to automation, but because of the ever-declining costs of providing an acceptable standard of living. An excellent, recent example of this is Apis Cor’s house printer.

On the one hand: This technology might end up putting a lot of construction workers out of jobs. While you’ll still need workers for big buildings and the like, simple 1-2 person houses can probably be built quickly, and very cheaply, as a result of this innovation.

But on the flip-side, the cost of houses will plummet. You may not need to work for 20 years to pay off a mortgage on a house that only costs $10k to build. While construction workers might be worried about this, the people who should be a lot more worried are the ones with heavy investments in residential development companies 😉

I like to imagine a future unconstrained by urbanization. Cities are where the opportunities are – the best jobs are in cities, the best entertainment, the best healthcare, and overall, the best opportunities to live a good life. This is because it’s a lot easier, with the current limitations, to pile a lot of services into one place.

I don’t believe civilization needs to be so centralized, though. If you could get the same quality of food, healthcare, entertainment and job opportunity in an area 200km outside a major city, plus it was cheaper to live there – wouldn’t you?

And there may come a time when we have to. Most major cities (and by extension, most of the world’s population) are located relatively close to a coastline. Historically, cities were founded and grew near coastlines because those afforded the best opportunities for global trade.

Well, that’s under threat. Depending on who you believe, climate change is either a myth or a reality already underway – and one of its most dire consequences will be rising ocean levels. If that happens, large coastal cities will start to become unlivable.

We will be forced to start again – massive inland migrations, the design of new cities, infrastructure and services to support the population, while simultaneously ensuring people have a shot at an acceptable standard of living. With the lessons we’re learning today, I imagine those cities (and societies) will look very different.

Between the work of engineers like Maurice and researchers like Federico, I’m optimistic that we’ll be well-equipped to meet those challenges in future.

Some thoughts on Mastery

Over the last few weeks I’ve been wrestling with the question of what to do next, career-wise. In doing that, I’ve been re-evaluating most of what I’ve been working on over the last few years, trying to figure out what actually made me happy, what worked to advance my career, and what held me back.

One of the things I consistently identified as a positive was being in a situation where I had the opportunity to develop mastery in a particular subject. I think anyone who’s driven by the need to learn would identify with that.

A new and interesting point (to me, anyway) is the idea that mastery itself is relative. I’d always thought of it as an absolute: that there’s a known limit to a given subject, and if you can reach that limit of knowledge, you’re a master of it. Sink 10,000 hours into something, and you’re the best.

That doesn’t really seem to be the case, though. In order to truly develop mastery in anything, you need to keep surrounding yourself with people who are better than you, and learn from them. There’s a quote that, like most quotes, has a fuzzy origin:

If you’re the smartest person in the room, you’re in the wrong room.

It’s really obvious in hindsight. If you’re the smartest developer at your company, that doesn’t mean you’ve mastered software development – just that you’ve hit the limits of your learning there. To actually master software development, you need to find smarter developers to learn from and, inevitably, teach other developers what you know.

And even then, the goalposts keep moving. For instance, being a ‘master’ software developer 30 years ago required command of far fewer tools and languages. Being even a half-decent full-stack developer in 2016 requires you to understand a bit of everything, from servers to UX, and all the different languages those are expressed in.

Which means that mastery is an unattainable goal – but by far the worthiest one to pursue.

The Human CPU

Two of my favorite things in this world: Finding arbitrary connections and correlations, and survival crafting games. This post is about the former.

Earlier this year I took a trip to Melbourne, and got to spend some time looking at a very modern skyline. A few ideas occurred to me there that I only recently found a good way to verbalize.


So let’s start with a very basic introduction to how CPUs work. Everything your computer does is ultimately tied back to a series of bits – ones and zeroes – that move through the very delicate circuitry of your Central Processing Unit.

How do you get from 1s and 0s to cat videos? That’s a very long story, involving addressable memory, registers, clock frequencies and more, but the very basic unit of computing you need to know about is a transistor.

[Photo: a pile of transistors]

A transistor is a very special, and very tiny, electronic device that can either block or allow electric current to pass through it. Each transistor handles one bit at a time, and modern CPUs are packed with billions of the things. A current-generation server-grade CPU can have as many as 2.5 billion on a single die (chip).

Each individual transistor has no way of knowing what’s going on in the overall system. All it does is receive and transmit electrical impulses as programmed, and with the combined effort of hundreds of millions of transistors, we as the end users experience the magic of computing.

It’s important to note that these transistors are “networked”, in a sense, too – they’re all connected with physical circuitry to allow current to pass through them. An isolated transistor is useless.
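To make that a little more concrete, here’s a toy sketch in Python – my own idealization, nothing like real silicon. Treat each “transistor pair” as a switch that passes or blocks a signal, wire a handful of them together, and the network can add two bits even though no single part knows what addition is.

def nand(a, b):
    # Idealized pair of transistors in series: the output only drops to 0
    # when both inputs are switched on.
    return 0 if (a and b) else 1

# Every other gate is just NANDs wired to NANDs.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def xor_(a, b):
    # (a OR b) AND NOT (a AND b)
    return and_(nand(not_(a), not_(b)), nand(a, b))

def half_adder(a, b):
    # Returns (sum bit, carry bit).
    return xor_(a, b), and_(a, b)

print(half_adder(1, 1))  # (0, 1) -- i.e. 1 + 1 = binary 10

Scale that pattern up by a few billion and you get the chip in your laptop.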


Hardware is useless without software though, and there too, computers exist as layers built upon layers. In 2016 you’re lucky enough to write in a high-level language inside an IDE, with all the hard work of computing abstracted away from you. Everything you write gets compiled down to an impossibly long series of binary instructions that the hardware can execute.

And again, the software that runs on the hardware has no real idea what it’s doing, either. Most of it involves moving bits to and from memory, under certain conditions. It, too, doesn’t understand things at the cat-video level.
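If you want to see those layers for yourself, Python’s standard-library dis module is a cheap way to peek one level down. It shows bytecode rather than actual machine code, so take it as an analogy: a one-line calculation turns into a handful of dumb load/multiply/add/return steps, none of which knows it’s pricing anything.

import dis

def total_price(price, tax_rate):
    return price + price * tax_rate

# Prints the simple instruction stream the interpreter actually executes:
# load a value, multiply, add, return -- nothing in there "understands" prices.
dis.dis(total_price)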

So this is where my question gets a bit more interesting. Computers are really powerful – they consist of:

  • Finely-tuned and optimized hardware,
  • Running programmable software,
  • Maintained by human programmers that derive a very complex result,
  • Despite the hardware and software running very simple instructions

With that in mind, let’s talk about humans for a little bit.


As of 2016, between urbanization and globalization, two trends have been on the uptick pretty much since the end of World War II:

  • More people are moving into cities (urban centers are growing) and
  • More people are networked via telephony, and now the Internet

Go to any major metro today, and you’ll see the same thing: A skyline packed with impossibly tall buildings, with millions of people milling about, each doing pretty much their own thing.

Each person alive has a different set of skills, preferences, inherited advantages (or disadvantages), and will attain various levels of success, attributed to various levels of effort, grit, and sometimes luck.

However, as sophisticated as our modern economy and society looks, it tends to operate on very basic principles. Zoom in far enough, and you’ll see the same basic elements of trade – people making, people buying, people selling, people providing services to make all of that more efficient.

It’s also evident that cities follow people – for example, as mines dried up and people left to find work in cities, formerly-busy towns slowly degraded into ghost towns. People – and more specifically, trade – are the lifeblood that dictates whether a town shrinks or grows.


And the governing principle there? I suggest it might be our ideas.

As humans, we are capable of thought, and that thought (plus experience and data) has given rise to the concept of an idea: a perception of how things might be, as opposed to how they currently are.

Or, taken another way, an idea is a shorthand for a set of positions and beliefs (like the idea that people should own property, defend their families and their way of life).

More than anything else, ideas have shaped human development. If it were not for our capacity to have ideas, we wouldn’t have progressed much further than the stone age, constantly living just to service our needs in the moment.

For example: if it were not for that, then smaller towns might not shrink. Instead of getting the idea of pursuing better opportunities elsewhere, people might turn to subsistence farming in their area instead, reducing their standard of living to match what the surrounding environment offers. You know, like how basically every other animal on the planet operates?

So now this is where it gets a little weird.


If you accept that a modern computer is a collection of hardware, powered by electricity, programmed with flexible software, running basic instructions that roll up to a complex outcome for the end-user,

And you also accept that a modern city is a collection of infrastructure (buildings), powered by the people that live in it, each person acting in limited self-interest, driven by a set of ideas they accumulate from the world around them:

  1. Is it possible that the balance of ideas in the world is not accidental, and
  2. What sort of higher, complex benefit might someone derive from the low-level interaction of ideas?

So that’s the first thing to think about. As the end-users outside the system, we can create instructions, send them to a computer, watch what it does, and improve on the way the software runs. The computer has no reference point for what it’s doing – it just blindly trusts instructions issued to it.

Does humanity work the same way? Are we just blindly trusting instructions, doing our limited best without comprehending the larger picture? Is there an aggregate outcome of our individual efforts and ideas that we’re not aware of?


Ideas themselves have evolved – from the stone axe to the Universal Declaration of Human Rights, our ideas (and our capacity for bigger ideas) have grown every bit as much as our capacity for technological innovation.

What if there were a force – outside our individual comprehension – that was iterating on the quality of ideas, in the same way a software developer iterates on the quality of their software? We’ve had some whoppingly bad ideas in our history (like racial purity, sun worship, human sacrifice), and more recently, some very good ones (democracy, ownership, free thought).

Ideas used to move at the speed of trade. They were carried in the form of myth and legend, by travelers and merchants, for thousands of years until technology came along. In almost no time at all, ideas started moving across radio and telegraph, into widely-circulated print, and now, in the last fifty years, onto a global communications network unlike anything that came before.

With that speed of connection comes speed of evolution. Ideas can be born, shared, grown, tested, challenged, discredited, and die out a lot faster today than they could a hundred years ago.


In 2016, ideas can move roughly at the speed of thought. People can post half-baked ideas (like this one) to a webpage that can instantly be accessed, digested and iterated on by a potentially infinite audience.

In the space of one day, you can come across new information that completely changes how you see the world, and by the next day, you can become a publisher of your own ideas.

I wonder what will happen when technology breaks down the next barrier, and lets cultures trade ideas without the restriction of language. For one, I know for a fact that humans already can’t deal with this new level of sharing – what used to be socially acceptable not even 20 years ago is taboo today.

Keeping your ideas up to date in 2016 is much harder work than it would have been in 1986, with new information becoming available almost daily, and every perception basically under constant attack. And with ubiquitous access to the Internet, ignorance is less and less of an excuse.


So whatever the next ten years hold, it’ll sure be interesting to watch. There are already compact new ways of expressing ideas (memes) and narratives (emoji), new rules evolving for how they should work, and new expectations that come from a generation of children growing up in an always-online world.

This new generation, and the 2-3 that come after it, will be growing up in a weirdly connected world with totally different rules. They’ll form a remarkably efficient human CPU – a hybrid human/technology engine for executing, iterating and discarding new bits of information faster than anything that came before.

Artificial Intelligence will have its work cut out 😉

Could have been a prophet

Back in 2013 I started learning about the “filter bubble” – a natural result of the behavior- and preference-driven algorithms that power major search engines. Between your search history, the links you click on, and the sites you visit that are tagged with Google Analytics (which is a lot of websites nowadays), search engines like Google can make a reasonable approximation of what results might interest you.
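The mechanics are easy to sketch, even with a toy model. The Python snippet below uses invented data and a made-up scoring rule – it’s nothing like a production ranking system – but it shows the shape of the thing: boost results whose topics overlap with what a hypothetical user already clicks on, and the bubble appears on its own.

# A hypothetical user's recent click history, reduced to topic tags.
click_history = {"fox-news", "trump-rally", "gun-rights"}

# Candidate results with a plain relevance score, before personalization.
results = [
    {"url": "site-a", "topics": {"gun-rights", "fox-news"}, "relevance": 0.60},
    {"url": "site-b", "topics": {"public-health", "research"}, "relevance": 0.75},
    {"url": "site-c", "topics": {"trump-rally"}, "relevance": 0.55},
]

def personalized_score(result):
    # Each topic the user has already engaged with adds a fixed boost,
    # so familiarity starts to outweigh plain relevance.
    overlap = len(result["topics"] & click_history)
    return result["relevance"] + 0.2 * overlap

for r in sorted(results, key=personalized_score, reverse=True):
    print(r["url"], round(personalized_score(r), 2))

# site-a now outranks site-b, even though site-b scored higher on relevance
# alone. Repeat that on every search, and the bubble hardens.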

There’s nothing inherently nefarious about this. Google’s interest is in getting you to the right website as quickly as possible, and they’ve done a phenomenal job at it. The better results you get from Google, the more you use them – which means you’re more likely to click the ads that get served alongside search results.

The problem, though, is that it sacrifices diversity of ideas a bit. If you’re a habitual Fox News reader, you follow Donald Trump, and routinely watch his speeches on YouTube, the next time you search something like “abortion” or “gun rights”, the filter bubble will give you results from sites it thinks you want to see, and you’ll get a very right-wing view of the situation.

The search engine will dutifully give you the information it thinks you want to see, but may not give you the information you need, and that’s where the problem comes in. So if you’re in a really bad situation (say, unwanted pregnancy in a conservative and oppressive religious family), and you need level-headed information on whether abortions are safe, legal, and where to get them, Google won’t know to give you that.

That’s the inadvertent side-effect of the filter bubble, and it got me thinking – what would happen if it were made deliberate? A corporation with that much power could, theoretically, start deliberately adjusting their algorithms to subtly affect the worldview of the people using their service.

Facebook’s hilarious miscarriage aside, Google is now doing exactly this. Wired Magazine reports that a Google subsidiary is going to deliberately attempt to feed anti-propaganda to potential ISIS recruits.

Much has been written about cyber warfare, and what it might look like – hackers, viruses, trojans, groups of dangerous people taking down power plants and military bases. Much less has been written about the more insidious form of information warfare that’s crept up on us over the last few years, and practically nothing about calling large search engines to account.

Today, Jigsaw (/Google) is trying to identify potential ISIS recruits, and change the results they get to feed them anti-propaganda, to dissuade them from signing up. Maybe it won’t work – I imagine that most recruiting is done peer-to-peer in any case – but maybe it will.

And if it does work, it sets a very worrying precedent. Up until this point, it’s been in Google’s best interests to vacuum up as much of the Internet as possible, and optimize it relentlessly to get you where you’re going. But what if Google decides that, for whatever reason, they’re a national security asset now, and they have a responsibility to tailor search results away from dangerous ideas?

That’s a slippery slope of note, because it opens the door for people to start redefining what those dangerous ideas are. To any reasonable person, a dangerous idea is one that could result in physical harm or a loss of property.

To a militaristic dictatorship, a dangerous idea is any one that can teach the common man to arm and defend themselves. In a police state, a dangerous idea is one that reminds people of their rights under their respective laws. In a communist dictatorship, a dangerous idea is that people are entitled to the fruits of their own labor, and that being constantly stripped of your wealth is not the best way to run a country.

Anything that upsets the balance of power could be considered dangerous, whether or not that power is being wielded fairly or equitably. And with the sheer amount of power we’re giving search engines over our lives, I think it’s worth asking whether or not we’re actually being shown a fair representation of ideas, not just the ones that are deemed “acceptable”.

In the past, the news media have always been the gatekeepers of that, and they have rightly been criticized for withholding information that was of vital public interest. The internet has always acted as a bulwark against that, creating a forum where all speech is equal. And now it seems we’re slowly sliding back towards a world where there are gatekeepers, and fringe speech is marginalized at the behest of the powerful.

So anyway, the moral of the story here is that I regret not writing the short story I had in mind in 2013. It dealt more or less exactly with this: what would it look like if companies could start shaping information that we thought was being ranked on technical merit alone? Would people even notice that their ideas were being deliberately adjusted on a network they thought was free and open? Who would line up to pull the strings, to use information as the next theater for cyber warfare?

Had I written that then, it would have been topical now. Next time I’ll have to do better.