Project: Newskitten

This post is more than a year old. The information, claims or views in this post may be out of date.

The world is a big, messy, complex, and sometimes-scary place – and it’s often made to seem scarier than it really is, thanks to mainstream news media.

Now don’t get me wrong, I’m not some sort of tinfoil-hatted alt-truther who believes there are secret forces pulling the strings behind what we see and hear in the news. I know it for a fact.

News outlets need to generate revenue to survive (they’re not charities), and over time, the proven revenue generator has been bad news. The day Oscar Pistorius shot Reeva Steenkamp was a fantastic day for online news, and for online ad revenue – that’s the sort of market force that ends up shaping editorial policy.


News organizations are optimizing for what works, and unfortunately, what works is not good. They’re forced to give audiences less of what they need, and more of what they want.

What people want, it seems, is sensationalism. They want news that angers and upsets them, that gives them something to talk and argue about, that conforms to their biases and reminds them their opinion of the world is still valid. They want to be entertained, not informed – because real information is boring.

At some point, the content that news providers put out there stopped being informative. It stopped being information that applied to you, or information that you needed in order to make an informed decision. There’s almost nothing that you can take from a news website today and use to improve the world you live in.

For example, here’s the top-line news right now:


This is what the largest and most popular news website in South Africa thinks I need to know today. None of it is relevant or useful to me.

  • Police pursuing N1 City Mall robber, high alert at shopping centres – I’m not in the police. I don’t control their budgets or deployments, I have no involvement in shopping center security, and the only way this applies to me is if I go shopping at a mall – at which point the people whose job it is to keep the mall safe will do it to the best of their ability.
  • Helen Zille is a chief racist – ANC Western Cape – An empty statement from a political party. I don’t know Zille, I have nothing to do with either party (no say in their internal structures, no membership), and when it comes time to vote in a few years, I’ll cast my vote based on what the parties have actually done for me. News like this does nothing to sway my opinion.
  • We will show you how the ‘soccer’ game is played – KZN’s ANC – More political noise. I’m not in the ANC, I’m not in KZN, and I’m not attending that conference – nor do I have any interest in its outcome. Even if I hopped a plane right now and showed up at the conference venue door, there’d be nothing I could do. Hell, even if I were in the KZN ANC itself, chances are I’d have no voice, since my career is not politics and I’d have no decision-making ability.
  • CONFIRMED: Mashaba sacked as Bafana coach – Zero relevance to me, since I don’t follow sports at all. But even to a sports fan, it has zero relevance – hiring decisions are made by team management, not fans. Team managers are doing their jobs, and if there are better ways to do it, then a case can be made for change.
  • The end of Everest as we know it? China plans to build a mega-resort on the mountain – Everest is literally a world away from me. Everything I know about Everest is thanks to movies, books and music – this news is irrelevant to me. If China does build a resort there (which will be interesting seeing as Everest is in Tibet, and the nations have a cold relationship), I’ll have zero say in how it’s designed, resourced, built or maintained. I definitely, as a South African, have less than zero input into any decision made by a Chinese firm.

I could go on – headlines about SA’s nuclear plan, an SPCA investigation into a rodent supplier, Hlaudi Motsoeneng mouthing off: none of this information is useful to me. I doubt it’s useful to most people, since most people are not in a position to do anything with this information.

Except get upset. The mall-robbers story is a great springboard for complaining about how South Africa is chronically unsafe and the police aren’t doing their jobs. The Zille story is the perfect fuel to stoke the continually-burning race war in our national discourse. The Everest story is Christmas to people who think nature should be left alone, and people (or the Chinese specifically) are evil for building things.

None of these stories leave you with anything constructive – you just come away feeling depressed, angry and hopeless. And then, most of the time, you do your friends the disservice of sharing that on social media, spreading the infection further. Worse still if you believe that your Facebook likes are actually saving lives.


And if you’re like most people, you rationalize it by calling it “being informed”, telling yourself that it’s better to be informed than ignorant. You think being informed is very important, almost crucial to daily life, and that the news you’re reading is an accurate depiction of the state of things.

Except that it’s not. On top of being driven by what sells, news organizations occasionally employ very biased editors – editors who would happily tarnish the reputation of their organization for the sake of running, say, a politically-motivated smear campaign.

That was not news. That was a deliberate, blatant fabrication with no fact-checking, presented as news. Even if it had been true, what would you do with information that serves only to confirm your existing biases?

Imagine for a second that it was true, that Maimane was getting lessons from FW. If it was a problem, the DA has its own means of dealing with it. If it’s not a problem, the DA will have spun it to Maimane’s credit. In either case, on the outside, there’s nothing meaningful you could have contributed to the process.

If that’s what an editor is willing to put on the front page, in the middle of an election season, imagine what the editors are willing to do with the everyday stories you read online. How many of those have been edited to provide the worst version of events? How many are actually just political hit-pieces disguised as news, aimed at discrediting a person, a party, or a part of society? How can you even tell the difference anymore?

And how many editors are just flat-out ignorant? Like the story of Andrew Kenny, a respected engineer and columnist who visited the Afrikaner town of Orania. He wrote a story showing the community in a positive light, and his editor decided not to run it, since it might offend readers.

News that might offend readers? News is meant to be facts, right? Well-researched, well-documented, delivered responsibly, and if it upsets people, then that’s unavoidable – hell, it’s necessary. Being offended is a natural and healthy part of living in a civilized society – but that’s a whole other topic.

So that’s news, as of 2016. It’s either designed to generate revenue, slanted to an editor’s personal bias, or simply ignored because it doesn’t fit the outlet’s story. The information you really need (civics, opportunities, policies) is all found elsewhere, and any time you spend on a typical news website is, on the balance of probability, time spent consuming content that won’t have a net-positive effect on your health or wellbeing.

Which is what I built a Chrome extension to fix.


It’s called Newskitten, and you can download it here: Newskitten – Chrome Web Store

It’s still in its early stages. The concept is simple – if it sees a news website, it replaces the page with a relaxing cat gif instead.
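The core mechanic is simple enough to sketch. Here’s a hypothetical, stripped-down version of what a content script like this could do – the domain list and gif are purely illustrative, and this isn’t Newskitten’s actual source:

```typescript
// Illustrative sketch of a news-blocking content script.
// The block list below is an example, not Newskitten's real list.
const NEWS_SITES = ["news24.com", "iol.co.za", "timeslive.co.za"];

function isNewsSite(hostname: string, blockList: string[] = NEWS_SITES): boolean {
  // Match the bare domain or any subdomain (e.g. www.news24.com).
  return blockList.some((site) => hostname === site || hostname.endsWith("." + site));
}

// In the extension, a content script would run this against the current page:
// if (isNewsSite(window.location.hostname)) {
//   document.body.innerHTML = '<img src="cat.gif" alt="No news today.">';
// }
```

The real extension presumably carries a much longer domain list and serves the replacement from its own bundled assets, but the check-and-swap shape is the whole idea.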

I’ve been using it for over a week now, and it’s already made a noticeable difference. I’ve clicked through from depressing-sounding headlines on my Facebook feed, only to be met with a relaxing cat. I’ve habitually opened up news websites in moments of boredom, hoping to find something new, and getting that gif instead.

It already feels like the news is leeching a lot less of my time and energy away from me. So much so, that I found the time (and mental clarity) to write this really long blog post, without feeling like the world is about to collapse on me at any moment.





Whenever I’m not deliberately concentrating on a given task, my mind tends to wander in a very specific way: It tells me stories. All day, every day – characters, worlds, twists, inventing and reinventing themselves.

One of my perpetual New Year’s Resolutions (since roughly 2008) is to write more – almost entirely to pin some of those stories down on paper, and if I turn out to be any good at it, polish one or two up and publish them.

Good theory – much harder in practice.

A writer is someone for whom writing is more difficult than it is for other people.

~Thomas Mann

Writing, I’ve learned, is every bit as much a skill as software development. Sure: anyone can read someone else’s work, see the logic in it, and gain the (false) confidence to create something like it – but the moment the pen hits the paper, that confidence evaporates completely, leaving you facing the sober reality that, actually, you have no experience.

This has been my problem for the last few years, and I suspect it’s not unique. Any writer who’s read more than half a writing craft book should know that writing every day is one of the critical components – that you need the ability to produce sheer quantity, before you can start obsessing about the quality.

So with the new year coming, I had the idea of setting up a system for 2017 to help me exercise that muscle. I know I can write 500 words pretty easily – this blog post, written on the spur of the moment, is 586 words – and so long as I have some sort of guidance as to what to write, I shouldn’t find it difficult at all.

500 words per day, for 365 days, is over 180’000 words. Sure, they’re not all congruent words, and I have no hope of getting a novel out of it – but if I can manage it, I’d have written a novel’s worth of words, and I’d have built up routine, momentum, and (hopefully) a bit of confidence in my ability.

On that theory, I grabbed a domain, and started building a system to deliver me a writing prompt every morning at 9am. I figured the workflow would be no different from managing my inbox – I get an email, I respond to it, and I carry on with my day. And if I can do that every day (and let’s be realistic, we spend way too much time on email anyway), then I could start developing a writing habit.

About a minute after that thought, it clicked that other people might also benefit from a system like this, so I’ve spent the last few days producing a polished version I could share. It still needs a ton of behind-the-scenes work, but I’ve got time over the next 10 days, and I intend to hit the ground running on 1 January.

That system:

For now, it’s basically just a mailing list. I’m working on a batch of thematic writing prompts (not just the random nonsense you find via Google), and if I can finish this off as intended, I’ve got some other feature ideas to throw in. But right now, I shouldn’t get distracted 😉

(Interestingly enough, while Mailchimp (the list provider there) does have an automation system, setting up a chain of 365 emails would push it to its limit, so I’m building a completely custom system, using Amazon SES and my own list management. I might do a write-up on this at some point, assuming I can get it all off the ground!)
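The scheduling side of that custom system is easy enough to sketch. Something like this – all names illustrative, this is the shape of the idea rather than the actual implementation – would let a daily 9am job work out which of the 365 prompts each subscriber is due:

```typescript
// Sketch of the drip-scheduling logic: each subscriber gets prompt N,
// where N is the number of whole days since they joined the list.
interface Subscriber {
  email: string;
  startDate: Date; // the day they signed up
}

function promptIndexFor(sub: Subscriber, today: Date): number | null {
  const msPerDay = 24 * 60 * 60 * 1000;
  const days = Math.floor((today.getTime() - sub.startDate.getTime()) / msPerDay);
  // Out of range: they haven't started yet, or their year of prompts is done.
  if (days < 0 || days >= 365) return null;
  return days;
}

// A daily cron job would then loop over subscribers, look up
// prompts[promptIndexFor(sub, new Date())], and hand each
// (email, prompt) pair to Amazon SES for delivery.
```

The nice property of deriving the index from the signup date, rather than storing a counter, is that the job is stateless – if a day’s send fails, rerunning it produces the same result.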

Review: The OA


My god, what a letdown.

The OA follows the story of a blind girl who went missing for seven years, then mysteriously reappeared with her sight restored, weird scars on her body, and an obsession with trying to find one of her old friends.

(Spoiler alert, duh)

Firstly, I love the atmosphere of this story – or rather, stories. There’s the A story, which takes place in the real-world present-day, where The OA is trying to cope after being rescued, and is desperately trying to get back somewhere. Most of the A story involves her gathering a group of followers and telling them her story – the B story.

The B story is where the magic happens – literally. In the B story, she’s the unfortunate orphan of a Russian billionaire, sent to America under a new name, and who ends up living with her aunt in squalid conditions, before eventually being rescued by a kind American couple who adopt her.

If that sounds like a superhero origin story, it is. If it reminds you a lot of Jupiter Jones’ backstory in Jupiter Ascending, it should, since it’s cribbed more or less word-for-word, including the Russian heritage.

The B story does get better from there, though, and I really enjoyed the way they explored the afterlife. In the B story, she travels to New York, following her visions to reunite with her father, but nothing comes of it. She does meet a kind stranger, a scientist researching Near-Death Experiences (NDEs), and agrees to take part in his study.

This whole section of the B story feels like 10 Cloverfield Lane, with some mad science thrown in, and one NDE later, The OA starts down a path of uncovering a mystical power based entirely on a very Sadler’s Wells-esque dance routine. Watching it sort of reminded me of this abnormality, at first.

There are two parts of this show I thoroughly enjoyed. In the B story, I loved the merging of science and mysticism, the idea of an afterlife, parallel universes that could be navigated by harnessing natural energy. I loved that they had “common”, every-day medical professionals doing this research, totally in secret – a world behind the world.

And in the A story, I loved the disciple aspect, with The OA uniting five very disparate people, and through her story and convictions alone, fundamentally changing the nature of that group.

The A story alone is worth an exploration in and of itself. No magic happens here, no weirdness, no proof of anything she’s telling us in the B story, but it’s enough to get her five disciples hooked – and eventually to risk their lives in the finale, despite them finding compelling evidence that she simply made the whole B story up.

Could be a great allegory about man’s need to believe in something in the great beyond, even in the face of physical evidence to the contrary.

The finale was the best and worst I’ve ever seen in a show. It was the best in that it executed a realistic, terrifying scenario with absolute precision. They re-created a scenario every bit as tense as the 9/11 attacks, and tapped into the latent nightmares of basically any parent or school child.

And then they crapped all over it, building to an inspiring final act that completely deflated, unraveling the entire A story. They had an opportunity to do something really awesome, to have the A and B stories overlap, bring some of that mysticism into the present-day, and poke at the fabric of reality as they did so.

Which they did not, and instead made the disciples out to look like complete idiots, with none of their actions contributing to resolving the issue they thought they might have resolved, and out of nowhere, The OA gets shot by a stray bullet.

I mean, what?

In the end, The OA left me feeling disappointed. It became a story less about the afterlife, and more about what a group of people are willing to do based solely on the words of one damaged, but compelling, individual.

Mobile is eating the future


I cannot not write about this excellent presentation from a16z:

Mobile is eating the world by Benedict Evans

Two big things that stand out for me (and there’s lots of high-density info in that deck):

Machine Learning
The ability of computers to understand the world around us improved dramatically once humans were taken out of the equation. Neural networks that train against vast sets of data and write their own rules turned out to be a lot more efficient than having human specialists trying to write those rules by hand.

Somewhere in 2016 (or maybe even as early as 2014) we crossed the first Rubicon: Human engineers may no longer be capable of keeping up with the intellectual growth of the machines they used to manage.

Fantastic news for scale and growth – computers can now write better and more efficient software, which in turn gets loaded on to ever-smaller and lower-powered devices. Ambient intelligence is just around the corner.

Slightly worse news for governance and accountability – at what point does the outcome of a program stop being the responsibility of the human engineers, and start becoming the responsibility of the neural network that designed its own decision tree?

The day we need to prosecute a neural network for a crime – that’ll be the second Rubicon. Once software has legal standing, the game changes again. Probably not for the better.

Mobile applied to Automotive
Mobile phones scaled out a hell of a lot faster than PCs ever could (hence the title of the presentation), and one of the areas where that scale has made a significant impact is manufacturing. The halo effect of having so many compact, mass-produced components means that hardware is no longer a true differentiating factor – it’s much more about the software and services that power those devices.

The same could be true for cars. We might be heading into a future where cars (taking “electric” for granted here) are assembled the same way smartphones are today (by pulling together off-the-shelf interoperable components), and the key differentiator will be the services rendered through that car.

Which leads to the interesting thought of “Automotion-as-a-Service”.

I wonder if SaaS-type pricing will ever apply to the automotive industry. Bundled minutes become bundled miles, personal assistant integrations cost extra, and you get cheaper packages if you accept in-car targeted advertising. Somehow I think that might happen.

Assuming there isn’t a Final Optimization at some point, where the neural networks collectively decide we’re too much trouble to deal with 😉

The perks of yesteryear


Alphabet’s new CFO seems intent on deflating the magical bouncy castle that is Google – a company that’s always been known for incredible offices, fantastic perks, and moonshot projects.

… employees were informed that their holiday gift this year was a donation to charity, Fortune has learned. Alphabet donated $30 million worth of Chromebooks, phones, and associated tech support to schools on its employees’ behalf.


Being told that nobody’s getting expensive holiday gifts this year? That’s reasonable – it’s not a very common perk.

Being told that not only are you not getting any gifts, but the budget that would have been spent on that is being donated on your behalf somewhere else? That has to sting a little.

I wonder what’s next on Porat’s chopping block – and I wonder if anyone’s worried about the 20% rule being at risk. If the C-suite wants fiscal responsibility at the cost of talent-attracting perks, Google might end up an ad publishing company that happens to have an IaaS platform.

Which will be a sad Google.

Source: Alphabet Donated Its Employees’ Holiday Gifts to Charity

Apple’s in the AI game!


They’re still not sharing everything, but at least they’ve put themselves on the map.

Apple (AAPL) is opening up a bit on the state of its AI research — Quartz

The self-driving car research isn’t as interesting to me as what they’re doing with image processing and neural networks:

Apple claims it can process images twice as fast with its framework as Google can with TensorFlow. This’ll probably show up in consumer tech eventually (yet another camera update), but if they really do have better image processing, they could lead the field in machine vision.

Robot Wars 2020


In terms of AI research (and overall firepower), there are three major commercial factions spinning up, and there’s also an early (ergo promising) commitment to open-source.

One recurring theme right up front: both the Microsoft/OpenAI camp and Google have tools for training AIs to play games – OpenAI’s Gym, and Google’s DeepMind Lab. So I guess, if nothing else, game developers will eventually have an easier time when it comes to designing the CPU players!


Microsoft already has some of the “basic” business applications up – machine learning APIs, cognitive services, and so on. In November, Elon Musk’s OpenAI foundation partnered with Microsoft to use Azure as its primary cloud platform.

Some of the cool things in the Microsoft/OpenAI camp:

Microsoft is open-sourcing a decent amount of this stuff too. Not that I’d personally do anything with it.

Their avatar in the ring is Cortana – backed by Cognitive Services, no doubt:


Google’s got a lot of the same stuff out there that Microsoft has, especially in terms of web services. Where Microsoft has Azure, Google has the Google Cloud Platform.

They’ve also recently open-sourced the entire DeepMind Lab:

Of course, the part I’m personally fascinated by? The Blizzard/DeepMind partnership, to train the DeepMind AI on how to play Starcraft 2:

(Finally, someone might be able to give those Koreans a run for their money)


Then there’s IBM, who has a very big, very impressive, and incredibly opaque lead over the competition in one narrow area: Deep Learning.

That’s five years ago. That server rack at 02:33 is now probably a fifth of the size, if not totally relegated to the cloud.

Since then, the tech behind Watson has grown a bit. It’s designed to mine large sets of unstructured data, form connections, and answer questions based on that data – so it’s got some really powerful analytics.

All of that is being sold as a service, though – don’t expect IBM to open-source anything any time soon. Which I personally think will make them irrelevant by 2020.


If the next four years are anything like the last four years (in that everything seems to be speeding up), we’re almost-definitely going to see AI-as-a-Service popping up for lots of different problem domains, personal assistants that can grasp context and finally be useful, and if we’re lucky, the use of AI for things like smarter energy and resource management.

My money’s still, predominantly, on Microsoft to come out as the leader in this new field. Call me crazy 😉


Some thoughts on Mastery


Over the last few weeks I’ve been wrestling with the question of what to do next, career-wise. In doing that, I’ve been re-evaluating most of what I’ve been working on over the last few years, trying to figure out what actually made me happy, what worked to advance my career, and what held me back.

One of the things I consistently identified as a positive was being in a situation where I had the opportunity to develop mastery in a particular subject. I think anyone who’s driven by the need to learn would identify with that.

A new and interesting point (to me, anyway) is the idea that mastery itself is relative. I’d always thought of it as an absolute: that there’s a known limit to a given subject, and if you can reach that limit of knowledge, you’re a master in it. Sink 10’000 hours into something, and you’re the best.

That doesn’t really seem to be the case, though. In order to truly develop mastery in anything, you need to keep surrounding yourself with people who are better than you, and learn from them. There’s a quote that, like most quotes, has a fuzzy origin:

If you’re the smartest person in the room, you’re in the wrong room.

It’s really obvious in hindsight. Being the smartest developer at your company doesn’t mean you’ve mastered software development – just that you’ve hit the limits of your learning there. To keep getting better, you need to find smarter developers to learn from, and inevitably, teach other developers what you know.

And even then, the goalposts keep moving. For instance, being a ‘master’ software developer 30 years ago required command of far fewer tools and languages. Being even a half-decent full-stack developer in 2016 requires you to understand a bit of everything, from servers to UX, and all the different languages those are expressed in.

Meaning that mastery is an unattainable goal – but by far the worthiest one to pursue.