Rise of the Machines


Right now, we’re living in one of the most momentous times in human history, and it could end up being one of the best (or, possibly, worst) things to unfold: our inevitable transition to what Maurice Conti calls the Augmented Age.

Computers have become part of mainstream life in every advanced economy and in basically every major city around the world (into which people are packing in ever-greater numbers). The resulting efficiency gains have either been a huge boost to creativity and opportunity, or the death-knell of industries that employ tens of millions of people.

I’d like to share two different perspectives on this – both, conveniently, delivered as excellent TED talks. The first is by Maurice Conti, on how advances in computing have changed the way design could be done.

The most remarkable thing about the computer-derived inventions is how biological they look. It took nature millions of years to evolve a structure like the drone chassis he shows; their computers can generate one in a few days, and in future could do so on demand.

I think this is the best insight into how the leading edge of computing might change the way we design cities, vehicles, infrastructure, and the machines that help run our lives. It’s encouraging to note that human designers are still very much a part of the process, but will be able to do a lot more in a lot less time.

That leads into the next TED talk – what happens when you centralize that amount of power (and, consequently, the financial gains) in the hands of a relative few? People who are skilled with these technologies can create enormous value in a short space of time, relative to someone still doing the same task manually.

So what happens when you no longer have a need for the manual labor?

Another excellent talk that takes an unbiased view of Unconditional (I prefer Universal) Basic Income. It raises some good points, but misses at least one that I want to note:

While it’s true that the top 5 tech companies are enormously valuable and employ relatively few people, the platforms they create have in turn generated opportunities for millions more. There are companies, products, services and entertainment channels that could not have existed were it not for the infrastructure and tools that Facebook and the like provide.

Google basically pulled the web development industry up out of the ground when it became clear to businesses that having a well-built site was a competitive advantage. I’m not sure anyone can count the number of new jobs created in web development, creative design, copywriting, SEO, consulting and education as a result of the platform Google built.

(Yes, I know Google didn’t build the internet. And yes, I know all these websites run on the internet that Google didn’t build, but everyone who’s ever been paid to build one has done so at the request of a customer who believed that being discoverable online would be beneficial to their business, and Google is still the king of discovery on the internet.)

Same goes for the use-cases enabled by Apple hardware, Facebook’s networking, Amazon’s fulfillment infrastructure, and the productivity tools released by Microsoft. Those companies themselves may employ relatively few, but they have empowered millions more.

Moving on.

I think UBI is feasible not so much because of productivity gains due to automation, but because of the ever-declining costs of providing an acceptable standard of living. An excellent, recent example of this is Apis Cor’s house printer.

On the one hand: This technology might end up putting a lot of construction workers out of jobs. While you’ll still need workers for big buildings and the like, simple 1-2 person houses can probably be built quickly, and very cheaply, as a result of this innovation.

But on the flip side, the cost of houses will plummet. You may not need to work for 20 years to pay off a mortgage for a house that only costs $10k to build. While construction workers might be worried about this, the people who should be a lot more worried are those with heavy investments in residential development companies 😉
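To put rough numbers on that comparison – all figures here are invented for illustration, not taken from the post – here’s the total repaid over a conventional 20-year mortgage versus paying a ~$10k build cost outright:

```python
# Illustrative only: hypothetical figures, not real market data.
# Rough comparison of a conventional mortgage vs. a printed house
# paid for outright.

def total_mortgage_cost(principal, annual_rate, years):
    """Total amount repaid over a standard fixed-rate mortgage."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    monthly = principal * r / (1 - (1 + r) ** -n)
    return monthly * n

conventional = total_mortgage_cost(250_000, 0.05, 20)  # assumed figures
printed = 10_000  # roughly the build cost the Apis Cor demo suggested

print(f"20-year mortgage, total repaid: ${conventional:,.0f}")
print(f"Printed house, paid outright:   ${printed:,.0f}")
```

Even at a modest interest rate, the mortgage roughly sixty-folds the printed house’s cost once interest is included – which is the asymmetry the paragraph above is pointing at.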

I like to imagine a future unconstrained by urbanization. Cities are where the opportunities are – the best jobs are in cities, the best entertainment, the best healthcare, and overall, the best opportunities to live a good life. This is because it’s a lot easier, with the current limitations, to pile a lot of services into one place.

I don’t believe civilization needs to be so centralized, though. If you could get the same quality of food, healthcare, entertainment and job opportunity in an area 200km outside a major city, plus it was cheaper to live there – wouldn’t you?

And there may come a time when we have to. Most major cities (and by extension, most of the world’s population) are located relatively close to a coastline. Historically, cities were founded and grew near coastlines because those afforded the best opportunities for global trade.

Well, that’s under threat. Depending on who you believe, climate change is either a myth or a reality already underway – and one of its most dire consequences will be rising sea levels. If that happens, large coastal cities will start to become unlivable.

We will be forced to start again – massive inland migrations, the design of new cities, infrastructure and services to support the population, while simultaneously ensuring people have a shot at an acceptable standard of living. With the lessons we’re learning today, I imagine those cities (and societies) will look very different.

Between the work of engineers like Maurice and researchers like Federico, I’m optimistic that we’ll be well-equipped to meet those challenges in future.

On Record


Boy, have I got a story for you.

Urination-over-IP, I guess.

So yes, that’s me, in the screenshot above, calling the death of Apple’s relative lead in the app ecosystem wars – on 11 November 2016. When that thread dried up, I told myself that as soon as I got my new .blog domain hooked up to The Grid, I’d write a more detailed post explaining the reasons why.

(Then I found out that The Grid is crap, and set up here on WordPress instead)

For context: That conversation came out of a bit of hand-wringing around Apple’s new Touch Bar (and a few other really odd technical decisions on the part of Apple). One of the recurring themes from hardcore, longtime Apple users is that the “Pro” part of MacBook Pro became disingenuous with this release. The Mac is no longer for professionals.

There are two main threads behind my comment, though, so let me start with the company itself.

Post-Jobs Apple

To put it flatly: Apple died with Jobs.

I don’t mean the company itself – it’s still the most valuable in the world by market cap, and has brand equity second to none. As a going concern, it’ll be going for a very, very long time.

I don’t mean the products, though Apple has recently started trimming some of their smaller lines. We’ll have iPhones and MacBooks for years (if not decades) to come.

What I’m talking about is the spirit of Apple – the drive, the mystique, the vision, the quasi-religious, standard-setting, trail-blazing aspect of owning Apple hardware. That’s gone, and the Touch Bar was the final nail in that coffin.

Among the many things that Jobs did, when it came to the Mac he only seemed to have two objectives in mind:

  1. Apple will build the best machine possible – from the hardware to the software
  2. Apple will enable creatives, dreamers and makers

The MacBook itself became a sort of paragon – a gold standard for notebook design. Every generation was thinner and lighter. Apple introduced Retina resolution, the best touchpad on the market (still undefeated in my opinion), unibody design, relentless optimization of the keyboard, and occasional ground-up rewrites to ensure that OSX would remain stable and performant.

Very few of those choices were informed by market forces. When it comes to designing a product to target a market of any sort, most companies will typically do the least they can get away with. There’s a cost/benefit formula to everything: How much money sunk into R&D, versus how many sales are required to make a profit.
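That cost/benefit formula can be sketched as a simple break-even calculation – all of the figures below are made up for illustration:

```python
# Hedged sketch of the R&D cost/benefit formula described above.
# All figures are invented for illustration.

def breakeven_units(rd_cost, unit_price, unit_cost):
    """How many sales are needed before the R&D spend is recouped."""
    margin = unit_price - unit_cost
    if margin <= 0:
        raise ValueError("product loses money on every sale")
    # Round up: you can't sell a fraction of a unit.
    return -(-rd_cost // margin)

# A "do the least you can get away with" product:
print(breakeven_units(rd_cost=5_000_000, unit_price=1_200, unit_cost=900))

# The same product with ten times the R&D sunk into it:
print(breakeven_units(rd_cost=50_000_000, unit_price=1_200, unit_cost=900))
```

Ten times the R&D means ten times the sales needed just to break even – which is exactly why most companies stop at “the least they can get away with”, and why Apple’s willingness to ignore that formula was unusual.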

Problem with that is, markets are generally full of shit. Consumers don’t know what they want until you put it in front of them – one of the many insights that drove Jobs, and by extension, Apple.

It didn’t matter to Apple that they were sinking far more R&D time into parts of the device that most consumers wouldn’t ever touch. It didn’t matter which way the market was going at any one point. If they made a decision (even if unpopular) – consumers be damned. Apple owned the game.

And that worked out very well for the makers – the photographers, video editors, software engineers, designers and artists. They could rely on successive generations of the MacBook Pro line to thoroughly equip them to create better things, no doubt driven by an obsessive CEO who was never satisfied with the output.

When you build the best, and sell the best, inevitably you attract the best. There’s a small, but significant halo effect that Apple has created here: Their hardware has attracted the best developers. Not just the developers that code for money, but also the ones that code because it’s their lives.

And when those developers need to solve a problem, chances are they’ll use the best tools at their disposal to do so. Over time, this meant that the brightest and most capable engineering talent accumulated within the Apple ecosystem. Small wonder, then, that the Android and Windows app stores are currently seen as second-class citizens, or that MacBooks are effectively mandatory at any tech startup.

All of that talent has a material impact, too. Consider the iPhone – the hardware tends to stay ahead in some areas, the operating system is often criticized for being feature-limited, but the app store is second to none.

Which does make a big difference. I’ve owned several different phones, the best of which (hardware-wise) was a Lumia 1520. Brilliant screen, camera, battery and touch surface, but the apps were barely functional, and less than two months into the contract I bought a new phone out of sheer frustration. I know I’m not the only one.

You can do much more (and much better) with an iPhone than you can with any other device, which is why this chart should not come as a big surprise:

Respective FY-2015 totals taken from audited financials

In FY 2015, Apple’s iPhone product line alone generated more revenue than entire competing companies.

And don’t underestimate the ripple effect to major software vendors. Producers of high-end creative software packages (Photo manipulation, video editing, sound editing) aim for the Mac platform because that’s where the high-end creatives are. If that market starts drying up, so too do the updates that regular users benefit from.

No wonder everyone – users and analysts alike – thinks that Apple is unstoppable.

Except that it isn’t, because it’s failing to do two things right now.

First: In the short term, it just failed to equip high-end creative professionals with the best possible hardware. In the wake of the new MacBook ‘Pro’ lineup, long-time Apple users are starting to talk about defecting to other platforms. This will eventually have a degrading impact on the Apple ecosystem as a whole, especially if another vendor is standing by to give those power users what they need.

Which Microsoft neatly did with the Surface Studio this year – a desktop machine aimed squarely at designers and digital artists. If they release a better notebook for developers, I’m willing to bet quite a few luminaries would seriously consider switching to the Microsoft ecosystem.

But there’s another thing that Apple’s failed to do, and I think this is the one that really matters.

Computing is changing

For as long as we’ve kept records, we’ve needed to process them – and for a very long time, manual processing was OK. It’s hard to imagine now, but there was a time that drawing bar graphs was an actual profession.

Computers came along as a grown-up version of basic calculators. The biggest benefit was that you could change how the computer processed information, by providing more information: software.

Since then, all we’ve really done is more of the same. Computers have gotten millions of times faster at processing instructions, and programming languages have been developed to put programming within reach of almost everyone.

Software has been getting more powerful and more sophisticated over time, but has always been bound by a very simple constraint: it required a human to learn how to program a machine. Before you could make a computer do anything, you needed to understand how to solve the problem yourself, then instruct the computer how to solve it.

That’s starting to change with recent advances in Machine Learning (more specifically, Neural Networks). It’s a simple but powerful layer of abstraction: Instead of telling computers what to do, we’re teaching them how to decide what to do for themselves.

Here’s a simple example: https://quickdraw.withgoogle.com/

That’s a simple neural network game. It processes the images you draw, and matches them against similar images that other people have drawn. Over time, it learns to recognize more variations of objects. There may come a time when it has so many variations stored, its accuracy becomes close to human (if not perfect).

However, that machine was not programmed by a human to recognize every possible shape. It was built, instead, to find patterns, and to correlate those patterns with ones it’s seen before. That’s the difference.
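A loose analogy for that difference is a toy “learner” that stores examples instead of hand-coded rules. To be clear about the assumptions: this is simple nearest-neighbour matching, not a neural network, and not how Quick, Draw! actually works internally – it just illustrates “remember patterns and match against them” versus “program every shape by hand”:

```python
# Toy illustration of "learning from examples" vs. hand-coded rules.
# Nearest-neighbour sketch only; not how Quick, Draw! is implemented.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class ExampleLearner:
    def __init__(self):
        self.examples = []  # (features, label) pairs seen so far

    def train(self, features, label):
        # "Training" here is just remembering labelled examples.
        self.examples.append((features, label))

    def predict(self, features):
        # Classify by whichever remembered example is closest.
        _, label = min(self.examples,
                       key=lambda ex: distance(ex[0], features))
        return label

# The features are made up for the sketch: (stroke_count, aspect_ratio).
learner = ExampleLearner()
learner.train((1, 1.0), "circle")
learner.train((4, 1.0), "square")
learner.train((3, 0.9), "triangle")

print(learner.predict((4, 1.1)))  # closest stored example is the square
```

Nobody told the learner what a square is – it was only built to compare new input against patterns it has already seen, and the more examples it accumulates, the better its guesses get. That’s the abstraction shift in miniature.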

Problem-solving itself is starting to change. In future, instead of solving problems by writing machine instructions, the hardest problems will be solved by building machines that can learn, and by training them to solve those problems for us.

Today, solving these problems requires the use of cloud computing. Sure, you can run a small neural network on your laptop, but to give it true power it has to scale out to hundreds of computing nodes in parallel.

And so, today, two vendors are leading the field here: Google and Microsoft.

Google’s Cloud Platform is exposing APIs that developers can start using today to add natural interactivity to their applications, and they’re already doing cool things with neural networks – for instance, zero-shot translation.

Microsoft, through the Azure platform, is building towards making even stronger capabilities available to end-users. They already claim human-parity speech recognition, and have recently partnered with OpenAI.

Personally, I think Microsoft is leading this race. Google’s got the edge on quantity – their entire infrastructure is geared towards processing large amounts of data and finding useful relationships. Microsoft, on the other hand, has way more infrastructure, a stronger research team, and seems better-equipped to tackle the more interesting use cases.

In any case: Apple is nowhere to be found. The company that built itself on the quality of its hardware, equipping high-end creatives and reaping the benefit of their participation in the ecosystem, has precisely zero play in the AI game.

So that’s the rationale behind my position. Vendors other than Apple are building the tools that power-users of the future will require, and so will attract more power-users. They, in turn, will have the same halo effect on the Microsoft (and/or Google) ecosystems.

Ergo, I’m “on record”:

I can call it right now: The day will come when the Windows app ecosystem rivals the OSX ecosystem for quality, and after that, we’ll come to think of Apple vs Microsoft as Myspace vs Facebook.

To wrap it up in tl;dr terms:

  • The future game is about AI, neural networks and machine learning
  • The winner will be the vendor that can solve the most complex problems in the most cost-efficient way
  • Microsoft is currently positioned to build the best hardware/software/services ecosystem to enable developers to do just that

Apple will lumber on as a consumer brand. The core value proposition (Apple hardware is for makers) has now been sacrificed on the altar of market forces. Whoever comes up with the best ecosystem for AI will win on the software front, which will be the only front that matters in the end.