Project: Newskitten


The world is a big, messy, complex, and sometimes-scary place – and it’s often made to seem scarier than it really is, thanks to mainstream news media.

Now don’t get me wrong, I’m not some sort of tinfoil-hatted alt-truther who believes there are secret forces pulling the strings behind what we see and hear in the news. I know it for a fact.

News outlets need to generate revenue to survive (they’re not charities), and over time, the proven revenue generator has been bad news. The day Oscar Pistorius shot Reeva Steenkamp was a fantastic day for online news, and for online ad revenue – that’s the sort of market force that ends up shaping editorial policy.

[Comic strip: “Les journalistes aujourd’hui” – journalists today]

News organizations are optimizing for what works, and unfortunately, what works is not good. They’re forced to give audiences less of what they need, and more of what they want.

What people want, it seems, is sensationalism. They want news that angers and upsets them, that gives them something to talk and argue about, that conforms to their biases and reminds them their opinion of the world is still valid. They want to be entertained, not informed – because real information is boring.

At some point, the content that news providers put out there stopped being informative. It stopped being information that applied to you, or information that you needed in order to make an informed decision. There’s almost nothing that you can take from a news website today and use to improve the world you live in.

For example, here’s the top-line news on News24.com right now:

[Screenshot: News24.com front page, 22 December 2016]

This is what the largest and most popular news website in South Africa thinks I need to know today. None of it is relevant or useful to me.

  • Police pursuing N1 City Mall robber, high alert at shopping centres – I’m not in the police. I don’t control their budgets or deployments, I have no involvement in shopping centre security, and the only way this applies to me is if I go shopping at a mall – at which point the people whose job it is to keep the mall safe will do it to the best of their ability.
  • Helen Zille is a chief racist – ANC Western Cape – An empty statement from a political party. I don’t know Zille, I have nothing to do with either party (no say in their internal structures, no membership), and when it comes time to vote in a few years, I’ll cast my vote based on what the parties have actually done for me. News like this does nothing to sway my opinion.
  • We will show you how the ‘soccer’ game is played – KZN’s ANC – more political noise. I’m not in the ANC, I’m not in KZN, I’m not attending that conference, and I have no interest in its outcome. Even if I hopped a plane right now and showed up at the conference venue door, there’d be nothing I could do. Hell, even if I were in the KZN ANC itself, chances are I’d have no voice, since politics isn’t my career and I’d have no decision-making ability.
  • CONFIRMED: Mashaba sacked as Bafana coach – Zero relevance to me, since I don’t follow sports at all. But even to a sports fan, it’s of zero relevance – hiring decisions are made by team management, not fans. Team managers are doing their jobs, and if there are better ways to do it, then a case can be made for change.
  • The end of Everest as we know it? China plans to build a mega-resort on the mountain – Everest is literally a world away from me. Everything I know about Everest is thanks to movies, books and music – this news is irrelevant to me. If China does build a resort there (which will be interesting seeing as Everest is in Tibet, and the nations have a cold relationship), I’ll have zero say in how it’s designed, resourced, built or maintained. I definitely, as a South African, have less than zero input into any decision made by a Chinese firm.

I could go on – headlines about SA’s nuclear plan, an SPCA investigation into a rodent supplier, Hlaudi Motsoeneng mouthing off: none of this information is useful to me. I doubt it’s useful to most people, since most people are not in a position to do anything with this information.

Except get upset. The mall-robbers story is a great springboard for complaining about how South Africa is chronically unsafe and the police aren’t doing their jobs. The Zille story is the perfect fuel to stoke the continually-burning race war in our national discourse. The Everest story is Christmas to people who think nature should be left alone, and that people (or the Chinese specifically) are evil for building things.

None of these stories leave you with anything constructive – you just come away feeling depressed, angry and hopeless. And then, most of the time, you do your friends the disservice of sharing that on social media, spreading the infection further. Worse still if you believe that your Facebook likes are actually saving lives.

[Image: “How idiots think Facebook works”]

And if you’re like most people, you rationalize it as “being informed”, telling yourself that it’s better to be informed than ignorant. You think that being informed is very important, almost crucial to daily life, and that the news you’re reading is an accurate depiction of the state of things.

Except that it’s not. On top of being driven by what sells, news organizations occasionally employ very biased editors – editors who would happily tarnish the reputation of their organization for the sake of running, say, a politically-motivated smear campaign.

Case in point: the front-page story claiming Mmusi Maimane was getting lessons from FW de Klerk. That was not news. That was a deliberate, blatant fabrication with no fact-checking, presented as news. Even if it had been true, what would you do with information that serves only to confirm your existing biases?

Imagine for a second that it was true – that Maimane was getting lessons from FW. If it was a problem, the DA has its own means of dealing with it. If it wasn’t, the DA would have spun it to Maimane’s credit. Either way, from the outside, there’s nothing meaningful you could have contributed to the process.

If that’s what an editor is willing to put on the front page, in the middle of an election season, imagine what editors are willing to do with the everyday stories you read online. How many of those have been edited to present the worst version of events? How many are actually just political hit-pieces disguised as news, aimed at discrediting a person, a party, or a part of society? How can you even tell the difference anymore?

And how many editors are just flat-out ignorant? Take the story of Andrew Kenny, a respected engineer and columnist who visited the Afrikaner town of Orania. His column showed the community in a positive light, and his editor decided not to run it, since it might offend readers.

http://www.biznews.com/undictated/2015/10/30/heres-andrew-kennys-orania-column-the-citizen-doesnt-want-you-to-read/

News that might offend readers? News is meant to be facts, right? Well-researched, well-documented, delivered responsibly, and if it upsets people, then that’s unavoidable – hell, it’s necessary. Being offended is a natural and healthy part of living in a civilized society – but that’s a whole other topic.

So that’s news, as of 2016. It’s either designed to generate revenue, slanted to an editor’s personal bias, or simply spiked because it doesn’t fit the outlet’s story. The information you really need (civics, opportunities, policies) is all obtained elsewhere, and any time you spend on a typical news website is, on the balance of probability, time spent consuming content that won’t have a net-positive effect on your health or wellbeing.

Which is what I built a Chrome extension to fix.

[Screenshot: the Newskitten replacement page]

It’s called Newskitten, and you can download it here: Newskitten – Chrome Web Store

It’s still in its early stages. The concept is simple – if it detects a news website, it replaces the page with the message above instead.
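For the technically curious, the mechanics are about as simple as the concept. Here’s a minimal sketch of how an extension like this might work – the file names, the gif, and the match patterns below are my own illustrative assumptions, not Newskitten’s actual source:

```ts
// content.ts - a hypothetical content script (names and domains are
// illustrative, not Newskitten's actual source). In manifest.json it would
// be registered against known news domains, e.g.
//   "matches": ["*://*.news24.com/*", "*://*.cnn.com/*"]
// with kitten.gif listed under "web_accessible_resources".

// Resolve the kitten gif bundled with the extension.
const kittenGif = chrome.runtime.getURL("kitten.gif");

// Stop the news from loading any further, then replace the page wholesale.
window.stop();
document.documentElement.innerHTML = `
  <head><title>Newskitten</title></head>
  <body style="display:flex; align-items:center; justify-content:center;
               height:100vh; margin:0; background:#fff;">
    <img src="${kittenGif}" alt="A relaxing kitten, instead of the news">
  </body>`;
```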

I’ve been using it for over a week now, and it’s already made a noticeable difference. I’ve clicked through from depressing-sounding headlines on my Facebook feed, only to be met with a relaxing cat. I’ve habitually opened up news websites in moments of boredom, hoping to find something new, only to get that gif instead.

It already feels like the news is leeching a lot less of my time and energy away from me. So much so that I found the time (and mental clarity) to write this really long blog post, without feeling like the world is about to collapse on me at any moment.

 

Project: write500.net


Check it out: https://write500.net

Whenever I’m not deliberately concentrating on a given task, my mind tends to wander in a very specific way: It tells me stories. All day, every day – characters, worlds, twists, inventing and reinventing themselves.

One of my perpetual New Year’s Resolutions (since roughly 2008) is to write more – almost entirely to pin some of those stories down on paper, and if I turn out to be any good at it, polish one or two up and publish them.

Good theory – much harder in practice.

A writer is someone for whom writing is more difficult than it is for other people.

~Thomas Mann

Writing, I’ve learned, is every bit as much a skill as software development. Sure: anyone can read someone else’s work, see the logic in it, and gain the (false) confidence to create something like it – but the moment the pen hits the paper, that confidence evaporates completely, leaving you facing the sober reality that, actually, you have no experience.

This has been my problem for the last few years, and I suspect it’s not unique. Any writer who’s read more than half a writing craft book should know that writing every day is one of the critical components – that you need the ability to produce sheer quantity before you can start obsessing about quality.

So with the new year coming, I had the idea of setting up a system for 2017 to help me exercise that muscle. I know I can write 500 words pretty easily – this blog post, written on the spur of the moment, is 586 words – and so long as I have some sort of guidance as to what to write, I shouldn’t find it difficult at all.

500 words per day, for 365 days, is over 180’000 words. Sure, they won’t all be coherent, connected words, and I have no hope of getting a novel out of it – but if I can manage it, I’ll have written a novel’s worth of words, and built up routine, momentum, and (hopefully) a bit of confidence in my ability.

On that theory, I grabbed a domain, and started building a system to deliver me a writing prompt every morning at 9am. I figured the workflow would be no different from managing my inbox – I get an email, I respond to it, and I carry on with my day. And if I can do that every day (and let’s be realistic, we spend way too much time on email anyway), then I could start developing a writing habit.

About a minute after that thought, it clicked that other people might also benefit from a system like this, so I’ve spent the last few days producing a polished version I could share. It still needs a ton of behind-the-scenes work, but I’ve got time over the next 10 days, and I intend to hit the ground running on 1 January.

That system: write500.net

For now, it’s basically just a mailing list. I’m working on a batch of thematic writing prompts (not just the random nonsense you find via Google), and if I can finish this off as intended, I’ve got some other feature ideas to throw in. But right now, I shouldn’t get distracted 😉

(Interestingly enough, while Mailchimp – the list provider there – does have an automation system, setting up a chain of 365 emails would push it to its limits, so I’m building a completely custom system using Amazon SES and my own list management. I might do a write-up on this at some point, assuming I can get it all off the ground!)
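To give a rough idea of the shape that custom system might take, here’s a minimal sketch using the aws-sdk SES client – the prompt list, subscriber storage, sender address, region and schedule are all illustrative stand-ins, not the real write500 internals:

```ts
// send-prompt.ts - run once a day at 09:00 by a scheduler (e.g. cron: 0 9 * * *).
import * as AWS from "aws-sdk";

const ses = new AWS.SES({ region: "eu-west-1" }); // region assumed

// Hypothetical stand-ins for the custom list management:
const PROMPTS: string[] = ["Describe a door you never opened."]; // ...364 more
const SUBSCRIBERS: string[] = ["reader@example.com"];

// Day 1 of the challenge is 1 January 2017; 86400000 ms per day.
const day = Math.floor((Date.now() - Date.parse("2017-01-01")) / 86400000) + 1;

async function sendDailyPrompt(): Promise<void> {
  const prompt = PROMPTS[day - 1];
  for (const address of SUBSCRIBERS) {
    // One SES API call per subscriber, with the day's prompt as plain text.
    await ses
      .sendEmail({
        Source: "prompts@write500.net", // sender address assumed
        Destination: { ToAddresses: [address] },
        Message: {
          Subject: { Data: `Your Write500 prompt for day ${day}` },
          Body: { Text: { Data: prompt } },
        },
      })
      .promise();
  }
}

sendDailyPrompt().catch(console.error);
```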

Review: The OA


My god, what a letdown.

The OA follows the story of a blind girl who went missing for seven years, and mysteriously reappeared with her sight restored, weird scars on her body, and an obsession with finding one of her old friends.

(Spoiler alert, duh)

Firstly, I love the atmosphere of this story – or rather, these stories. There’s the A story, which takes place in the real-world present day, where The OA is trying to cope after being rescued, and is desperately trying to get back somewhere. Most of the A story involves her gathering a group of followers and telling them her story – the B story.

The B story is where the magic happens – literally. In the B story, she’s the unfortunate orphan of a Russian billionaire, sent to America under a new name, who ends up living with her aunt in squalid conditions before eventually being rescued by a kind American couple who adopt her.

If that sounds like a superhero origin story, it is. If it reminds you a lot of Jupiter Jones’ backstory in Jupiter Ascending, it should, since it’s cribbed more or less word-for-word, including the Russian heritage.

The B story does get better from there, though, and I really enjoyed the way they explored the afterlife. She travels to New York, following her visions to reunite with her father, but nothing comes of it. She does meet a kind stranger, a scientist researching Near-Death Experiences (NDEs), and agrees to take part in his study.

This whole section of the B story feels like 10 Cloverfield Lane with some mad science thrown in, and one NDE later, The OA starts down a path of uncovering a mystical power based entirely on a very Sadler’s Wells-esque dance routine. Watching it sort of reminded me of this abnormality, at first.

There are two parts of this show I thoroughly enjoyed. In the B story, I loved the merging of science and mysticism: the idea of an afterlife, and parallel universes that could be navigated by harnessing natural energy. I loved that they had “common”, everyday medical professionals doing this research, totally in secret – a world behind the world.

And in the A story, I loved the disciple aspect, with The OA uniting five very disparate people, and through her story and convictions alone, fundamentally changing the nature of that group.

The A story alone is worth an exploration in and of itself. No magic happens here, no weirdness, no proof of anything she’s telling us in the B story, but it’s enough to get her five disciples hooked – and eventually to have them risk their lives in the finale, despite finding compelling evidence that she simply made the whole B story up.

Could be a great allegory about man’s need to believe in something in the great beyond, even in the face of physical evidence to the contrary.

The finale was the best/worst I’ve ever seen in a show. It was the best in that it executed a realistic, terrifying scenario with absolute precision. They created a scenario every bit as tense as the 9/11 attacks, and tapped into the latent nightmares of basically any parent or school child.

And then they crapped all over it, building to an inspiring final act that completely deflated, unraveling the entire A story. They had an opportunity to do something really awesome, to have the A and B stories overlap, bring some of that mysticism into the present-day, and poke at the fabric of reality as they did so.

Which they did not. Instead, they made the disciples out to look like complete idiots, with none of their actions contributing to resolving the crisis they thought they were resolving – and then, out of nowhere, The OA gets shot by a stray bullet.

I mean, what?

In the end, The OA left me feeling disappointed. It became a story less about the afterlife, and more about what a group of people are willing to do based solely on the words of one damaged, but compelling, individual.

Mobile is eating the future


I cannot not write about this excellent presentation from a16z:

Mobile is eating the world by Benedict Evans

Two big things that stand out for me (and there’s lots of high-density info in that deck):

Machine Learning
The ability of computers to understand the world around us got dramatically better once humans were taken out of the equation. Neural networks that train against vast sets of data and write their own rules turned out to be a lot more efficient than human specialists trying to write those rules by hand.

Somewhere in 2016 (or maybe even as early as 2014) we crossed the first Rubicon: Human engineers may no longer be capable of keeping up with the intellectual growth of the machines they used to manage.

Fantastic news for scale and growth – computers can now write better and more efficient software, which in turn gets loaded onto ever-smaller and lower-powered devices. Ambient intelligence is just around the corner.

Slightly worse news for governance and accountability – at what point does the outcome of a program stop being the responsibility of the human engineers, and start becoming the responsibility of the neural network that designed its own decision tree?

The day we need to prosecute a neural network for a crime – that’ll be the second Rubicon. Once software has legal standing, the game changes again. Probably not for the better.

Mobile applied to Automotive
Mobile phones scaled out a hell of a lot faster than PCs ever could (hence the title of the presentation), and one of the areas where mobile has made a significant impact is manufacturing. The halo effect of having so many compact, mass-produced components means that hardware is no longer a true differentiating factor – it’s much more about the software and services that power those devices.

The same could be true for cars. We might be heading into a future where cars (taking “electric” for granted here) are assembled the same way smartphones are today – by pulling together off-the-shelf interoperable components – and the key differentiator will be the services rendered through that car.

Which leads to the interesting thought of “Automotion-as-a-Service”.

I wonder if SaaS-type pricing will ever apply to the automotive industry. Bundled minutes become bundled miles, personal assistant integrations cost extra, and you get cheaper packages if you accept in-car targeted advertising. Somehow I think that might happen.

Assuming there isn’t a Final Optimization at some point, where the neural networks collectively decide we’re too much trouble to deal with 😉

The perks of yesteryear


Alphabet’s new CFO seems intent on deflating the magical bouncy castle that is Google. They’ve always been known for incredible offices, fantastic perks, and moonshot projects.

… employees were informed that their holiday gift this year was a donation to charity, Fortune has learned. Alphabet donated $30 million worth of Chromebooks, phones, and associated tech support to schools on its employees’ behalf.

Ouch.

Being told that nobody’s getting expensive holiday gifts this year? That’s reasonable – it’s not a very common perk.

Being told that not only are you not getting any gifts, but the budget that would have been spent on that is being donated on your behalf somewhere else? That has to sting a little.

I wonder what’s next on Porat’s chopping block – and I wonder if anyone’s worried about 20% time being at risk. If the C-suite wants fiscal responsibility at the cost of talent-attracting perks, Google might end up an ad publishing company that happens to have an IaaS platform.

Which will be a sad Google.

Source: Alphabet Donated Its Employees’ Holiday Gifts to Charity

Apple’s in the AI game!


They’re still not sharing everything, but at least they’ve put themselves on the map.

Apple (AAPL) is opening up a bit on the state of its AI research — Quartz

The self-driving car research isn’t as interesting to me as what they’re doing with image processing and neural networks:

Apple claims it can process images twice as fast as Google’s TensorFlow. This’ll probably show up in consumer tech eventually (yet another camera update), but if they really do have better image processing, they could lead the field in machine vision.

Robot Wars 2020


In terms of AI research (and overall firepower), there are three major commercial factions spinning up, and there’s also an early (and therefore promising) commitment to open source.

One recurring theme right up front: both the Microsoft/OpenAI camp and Google have tools for training AIs to play games – OpenAI’s Gym and Google’s DeepMind Lab. So I guess, if nothing else, game developers will eventually have an easier time designing their CPU players!

Microsoft

Microsoft already has some of the “basic” business applications up – machine learning APIs, cognitive services, and so on. In November, Elon Musk’s OpenAI foundation partnered with Microsoft to use Azure as its primary cloud platform.

There’s a lot of cool stuff happening in the Microsoft/OpenAI camp, and Microsoft is open-sourcing a decent amount of it too. Not that I’d personally do anything with it.

Their avatar in the ring is Cortana – backed by Cognitive Services, no doubt: https://www.microsoft.com/en/mobile/experiences/cortana/

Google

Google’s got a lot of the same stuff out there that Microsoft has, especially in terms of web services. Where Microsoft has Azure, Google has the Google Cloud Platform. https://cloud.google.com/products/machine-learning/

They’ve also recently open-sourced the entire DeepMind Lab: https://deepmind.com/blog/open-sourcing-deepmind-lab/

Of course, the part I’m personally fascinated by? The Blizzard/DeepMind partnership, to train the DeepMind AI on how to play Starcraft 2: http://us.battle.net/forums/en/sc2/topic/20751114921

(Finally, someone might be able to give those Koreans a run for their money)

IBM

Then there’s IBM, who has a very big, very impressive, and incredibly opaque lead over the competition in one narrow area: Deep Learning.

http://www.ibm.com/watson/

That was five years ago – the server rack at 02:33 is now probably a fifth of the size, if not totally relegated to the cloud.

Since then, the tech behind Watson has grown a bit. It’s designed to mine large sets of unstructured data, form connections, and answer questions based on that data – so it’s got some really powerful analytics.

All of that is being sold as a service, though – don’t expect IBM to open-source anything too soon. Which I personally think will make them irrelevant by 2020.

2020

If the next four years are anything like the last four (in that everything seems to be speeding up), we’re almost certainly going to see AI-as-a-Service popping up for lots of different problem domains, personal assistants that can grasp context and finally be useful, and, if we’re lucky, the use of AI for things like smarter energy and resource management.

My money’s still, predominantly, on Microsoft to come out as the leader in this new field. Call me crazy 😉

 

Some thoughts on Mastery


Over the last few weeks I’ve been wrestling with the question of what to do next, career-wise. In doing that, I’ve been re-evaluating most of what I’ve been working on over the last few years, trying to figure out what actually made me happy, what worked to advance my career, and what held me back.

One of the things I consistently identified as a positive was being in a situation where I had the opportunity to develop mastery in a particular subject. I think anyone who’s driven by the need to learn would identify with that.

A new and interesting point (to me, anyway) is the idea that mastery itself is relative. I’d always thought of it as an absolute: that there’s a known limit to a given subject, and if you can reach that limit of knowledge, you’re a master in it. Sink 10’000 hours into something, and you’re the best.

That doesn’t really seem to be the case, though. In order to truly develop mastery in anything, you need to keep surrounding yourself with people who are better than you, and learn from them. There’s a quote that, like most quotes, has a fuzzy origin:

If you’re the smartest person in the room, you’re in the wrong room.

It’s really obvious in hindsight. If you’re the smartest developer at your company, that doesn’t mean you’ve mastered software development – just that you’ve hit the limits of what you can learn there. To actually approach mastery, you need to find smarter developers to learn from, and inevitably, teach other developers what you know.

And even then, the goalposts keep moving. For instance, being a ‘master’ software developer 30 years ago required command of far fewer tools and languages. Being even a half-decent full-stack developer in 2016 requires you to understand a bit of everything, from servers to UX, and all the different languages those are expressed in.

Which means mastery is an unattainable goal – but by far the worthiest one to pursue.

Another year, another domain


Though this time I’m pretty happy to have landed wogan.blog. I mean, it’s not as if I have any shortage of domain names, but it’s great having one so perfectly suited to a blog I almost never update!

My initial plan for this domain was to point it to thegrid.ai and use their platform to run a site, but after a few hours trying to get Molly (that’s her name, apparently) to spit out a design that wasn’t garbage, I gave up and defaulted back to WordPress.

I’ve now got three versions of a personal blog floating around. I’ll have to corral them all at some point, but for now, I’ll just enjoy the new-domain smell.

On Record


Boy, have I got a story for you.

[Screenshot: the chat where I made the call, dated 11 November 2016]
Urination-over-IP, I guess.

So yes, that’s me, in the screenshot above, calling the death of Apple’s relative lead in the app ecosystem wars – on 11 November 2016. When that thread dried up, I told myself that as soon as I got my new .blog domain hooked up to The Grid, I’d write a more detailed post explaining the reasons why.

(Then I found out that The Grid is crap, and set up here on WordPress instead)

For context: That conversation came out of a bit of hand-wringing around Apple’s new Touch Bar (and a few other really odd technical decisions on the part of Apple). One of the recurring themes from hardcore, longtime Apple users is that the “Pro” part of MacBook Pro became disingenuous with this release. The Mac is no longer for professionals.

There are two main threads behind my comment though, so let me start with the company itself.

Post-Jobs Apple

To put it flatly: Apple died with Jobs.

I don’t mean the company itself – it’s still the most valuable company in the world by market cap, and has brand equity second to none. As a going concern, it’ll be going for a very, very long time.

I don’t mean the products, though Apple has recently started trimming some of their smaller lines. We’ll have iPhones and MacBooks for years (if not decades) to come.

What I’m talking about is the spirit of Apple – the drive, the mystique, the vision, the quasi-religious, standard-setting, trail-blazing aspect of owning Apple hardware. That’s gone, and the Touch Bar was the final nail in that coffin.

Among the many things Jobs did, when it came to the Mac he seemed to have just two objectives in mind:

  1. Apple will build the best machine possible – from the hardware to the software
  2. Apple will enable creatives, dreamers and makers

The MacBook itself became a sort of paragon – the gold standard for notebook design. Every generation was thinner and lighter. Apple introduced Retina resolution, the best touchpad on the market (still undefeated, in my opinion), unibody design, relentless keyboard optimization, and occasional ground-up rewrites to ensure that OSX would remain stable and performant.

Very few of those choices were informed by market forces. When it comes to designing a product to target a market of any sort, most companies will typically do the least they can get away with. There’s a cost/benefit formula to everything: how much money gets sunk into R&D, versus how many sales are needed to turn a profit.

Problem with that is, markets are generally full of shit. Consumers don’t know what they want until you put it in front of them – one of the many insights that drove Jobs, and by extension, Apple.

It didn’t matter to Apple that they were sinking far more R&D time into parts of the device that most consumers would never touch. It didn’t matter which way the market was going at any one point. If they made a decision, they stuck to it, however unpopular – consumers be damned. Apple owned the game.

And that worked out very well for the makers – the photographers, video editors, software engineers, designers and artists. They could rely on successive generations of the MacBook Pro line to thoroughly equip them to create better things, no doubt driven by an obsessive CEO who was never satisfied with the output.

When you build the best and sell the best, inevitably you attract the best. There’s a small but significant halo effect that Apple created here: their hardware has attracted the best developers. Not just the developers who code for money, but also the ones who code because it’s their life.

And when those developers need to solve a problem, chances are they’ll use the best tools at their disposal to do so. Over time, this meant that the brightest and most capable engineering talent accumulated within the Apple ecosystem. Small wonder, then, that the Android and Windows app stores are currently seen as second-class citizens, or that MacBooks are effectively mandatory at any tech startup.

All of that talent has a material impact, too. Consider the iPhone – the hardware tends to stay ahead in some areas, the operating system is often criticized for being feature-limited, but the app store is second to none.

Which does make a big difference. I’ve owned several different phones, the best of which (hardware-wise) was a Lumia 1520. Brilliant screen, camera, battery and touch surface, but the apps were barely functional, and less than two months into the contract I bought a new phone out of sheer frustration. I know I’m not the only one.

You can do much more (and much better) with an iPhone than you can with any other device, which is why this chart should not come as a big surprise:

[Chart: FY-2015 revenue – Apple’s iPhone product line vs entire competing companies]
Respective FY-2015 totals taken from audited financials

In FY 2015, the Apple iPhone product line alone generated more revenue than entire competing companies.

And don’t underestimate the ripple effect to major software vendors. Producers of high-end creative software packages (Photo manipulation, video editing, sound editing) aim for the Mac platform because that’s where the high-end creatives are. If that market starts drying up, so too do the updates that regular users benefit from.

No wonder everyone – users and analysts – think that Apple is unstoppable.

Except that it isn’t, because it’s failing to do two things right now.

First: In the short term, it just failed to equip high-end creative professionals with the best possible hardware. In the wake of the new MacBook ‘Pro’ lineup, long-time Apple users are starting to talk about defecting to other platforms. This will eventually have a degrading impact on the Apple ecosystem as a whole, especially if another vendor is standing by to give those power users what they need.

Which Microsoft neatly did with the Surface Studio this year – a desktop machine aimed squarely at designers and digital artists. If they release a better notebook for developers, I’m willing to bet quite a few luminaries would seriously consider switching to the Microsoft ecosystem.

But there’s another thing that Apple’s failed to do, and I think this is the one that really matters.

Computing is changing

For as long as we’ve kept records, we’ve needed to process them – and for a very long time, manual processing was OK. It’s hard to imagine now, but there was a time when drawing bar graphs was an actual profession.

Computers came along as a grown-up version of basic calculators. The biggest benefit was that you could change how the computer processed information, by providing more information: software.

Since then, all we’ve really done is more of the same. Computers have gotten millions of times faster at processing instructions, and computing languages have been developed to make programming within reach of almost everyone.

Software has been getting more powerful and more sophisticated over time, but has always been bound by a very simple constraint: it required a human to learn how to program a machine. Before you could make a computer do anything, you needed to understand how to solve the problem yourself, then instruct the computer how to solve it.

That’s starting to change with recent advances in Machine Learning (more specifically, Neural Networks). It’s a simple but powerful layer of abstraction: Instead of telling computers what to do, we’re teaching them how to decide what to do for themselves.

Here’s a simple example: https://quickdraw.withgoogle.com/

That’s a simple neural network game. It processes the images you draw, and matches them against similar images that other people have drawn. Over time, it learns to recognize more variations of objects. There may come a time when it has so many variations stored that its accuracy becomes close to human (if not perfect).

However, that machine was not programmed by a human to recognize every possible shape. It was built, instead, to find patterns, and to correlate those patterns with ones it’s seen before. That’s the difference.
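To make that difference concrete, here’s a toy version of the “store patterns, correlate new input against them” idea – a nearest-neighbour matcher over made-up feature vectors. (The real Quick, Draw! uses a trained neural network; this is just the principle in miniature, and every name and number in it is illustrative.)

```ts
// A drawing reduced to a numeric feature vector, plus the label it was given.
type Example = { label: string; features: number[] };

const seen: Example[] = []; // grows as more people "draw"

// Learning, in this toy model, is just storing another labelled pattern.
function learn(label: string, features: number[]): void {
  seen.push({ label, features });
}

// Euclidean distance between two feature vectors.
function distance(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

// Guess by correlating the new drawing with the closest one seen before.
function guess(features: number[]): string | undefined {
  let best: Example | undefined;
  for (const example of seen) {
    if (!best || distance(features, example.features) < distance(features, best.features)) {
      best = example;
    }
  }
  return best?.label;
}

// The more variations stored, the better the guesses get.
learn("cat", [0.9, 0.1, 0.4]);
learn("house", [0.2, 0.8, 0.7]);
console.log(guess([0.85, 0.15, 0.5])); // -> "cat"
```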

Problem-solving itself is starting to change. In the future, the heaviest problems will be solved not by writing machine instructions, but by building machines that can learn, and training them to solve those problems for us.

Today, solving these problems requires the use of cloud computing. Sure, you can run a small neural network on your laptop, but to give it true power it has to scale out to hundreds of computing nodes in parallel.

And so, today, there are two vendors leading the field here: Google and Microsoft.

Google’s Cloud Platform is exposing APIs that developers can start using today to add natural interactivity to their applications, and they’re already doing cool things with neural networks – for instance, zero-shot translation.

Microsoft, through the Azure platform, is building towards making even stronger capabilities available to end-users. They already claim human-parity speech recognition, and have recently partnered with OpenAI.

Personally, I think Microsoft is leading this race. Google’s got the edge on quantity – their entire infrastructure is geared towards processing large amounts of data and finding useful relationships. Microsoft, on the other hand, has way more infrastructure, a stronger research team, and seems better-equipped to tackle the more interesting use cases.

In any case: Apple is nowhere to be found. The company that built itself on the quality of its hardware, equipping high-end creatives and reaping the benefit of their participation in the ecosystem, has precisely zero play in the AI game.

So that’s the rationale behind my position. Vendors other than Apple are building the tools that power-users of the future will require, and so will attract more power-users. They, in turn, will have the same halo effect on the Microsoft (and/or Google) ecosystems.

Ergo, I’m “on record”:

I can call it right now: The day will come when the Windows app ecosystem rivals the OSX ecosystem for quality, and after that, we’ll come to think of Apple vs Microsoft as Myspace vs Facebook.

To wrap it up in tl;dr terms:

  • The future game is about AI, neural networks and machine learning
  • The winner will be the vendor that can solve for the most complex problems in the most cost-efficient way
  • Microsoft is currently positioned to build the best hardware/software/services ecosystem to enable developers to do just that

Apple will lumber on as a consumer brand. The core value proposition (Apple hardware is for makers) has now been sacrificed on the altar of market forces. Whoever comes up with the best ecosystem for AI will win on the software front, which will be the only front that matters in the end.