The importance of publishing regularly

You know the difference between theory and practice, right? How you can know something in theory for years, but never actually apply it? I’m spectacularly guilty of that.

One of the many side-effects of modernized content publishing (under which I’m lumping radio, TV, video, podcasts and comics too) is the rate at which consumers now expect new content.

If you’re just getting started today and you intend to build any kind of career, constant output really matters.

I’ve known this for years (and have ignored it for those same years), but if you’re planning on being a professional content creator in today’s hyper-connected, hyper-accelerated world, you need to create and share with some frequency.

By “professional” I mean: “earning a living doing this”. Not “professional” as in “the best possible output from the most skilled individuals”, or even “certified or qualified” – those barriers have long since been obliterated.

If your intention is to build a career on this, then quality is almost secondary to constant output. Especially if you have an inner critic that’s harping on at you about how bad your work is.

There’s an excellent example of this on YouTube. One of my favorite creators is a guy by the name of Bill Wurtz, who has a channel filled with short clips in his highly distinctive style. A few months ago, he published a new video – The History Of The Entire World:

Excellent video, extremely information-dense, and enormously painstaking to create. That 19 minutes of footage took him over 11 months to produce (according to an interview he did later), and at many points during the project, he felt like dropping it completely – which I sympathize with!

It’s a solid creative achievement, and I’d always want to see more work like this make its way to the world.

But if you’re trying to build a career as a content creator, a schedule like that is a death sentence. As of today, the video has over 25 million views, which means it’s accumulated a flat average of around 470k views/day. And that’s already slowed down to a crawl:


Public statistics from the YouTube video

Creators on YouTube can earn advertising revenue (with the possible exception of the recent adpocalypse that’s still evolving), and beyond that, popular creators often spin up secondary revenue streams – things like merchandise, club memberships, and Patreon pages.

But all of that starts with views (or reads, listens, downloads, whatever your metric is) – more is better, and consistency is better.

There’s another YouTuber that I’ve recently sort-of started watching – Felix Kjellberg, better known as PewDiePie. He started in April 2010, essentially just recording videos of himself playing games. In the 7+ years since then, SocialBlade reports that he’s uploaded over 3200 videos.

There are only 2555 days in 7 years, meaning that he’s uploaded an average of more than one video per day, every day, for over 7 years.
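That back-of-the-envelope arithmetic checks out (a quick sketch, using the SocialBlade figures quoted above):

```python
videos = 3200   # uploads reported by SocialBlade
days = 7 * 365  # 2555 days in seven years, ignoring leap days

# Just over 1.25 uploads per day, sustained for seven years
print(f"{videos / days:.2f} videos per day")  # 1.25 videos per day
```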

The man is a Swedish meme machine. I don’t know how he does it, but he does.

That consistency built him an audience, ended up landing him contracts and other opportunities, then turned into a solid (multi-million dollar) career. It took a knock when the Wall Street Journal went after him, but that’s a whole other story on its own. Right now, he’s sitting on 55 million subscribers – possibly 56 million before today is out. Not bad for a kid that used to sell hot dogs!

Fun fact: PewDiePie has 55 million subscribers and can drive upwards of a million views to a new video the day it comes out. The WSJ is the largest newspaper in the United States, with a circulation of only 2.4 million print copies, and 900k digital subscribers.

(No wonder they went after him!)

The videos themselves, if I’m honest, are not great – something that he acknowledges in a very funny and self-deprecating way. They’re mostly just him in front of a webcam, talking into a microphone and browsing the web. The comedic timing is excellent though, and the use of effects is hilariously cheesy.

The most recent uploads are usually around 10 minutes in length (partially to take advantage of YouTube’s advertising distribution algorithm), and probably take a few hours at most to record, edit, and upload. But he does it every single day, and has done it consistently for so long, that this is how the daily movements across his channel look today:


Stats from

His back catalog of 3200+ videos cumulatively gains new views in the millions, every single day. And all of those views are potential ad revenue, new subscribers, or merchandise customers.

This principle applies everywhere in the modern content business. The more content you build up over time, and the more frequently you’re able to do it (even if it’s not your absolute best work on every single publish), the better off you’re going to be in terms of audience size, and the corresponding revenue opportunities.

Constant publishing creates an audience, and constant consumption creates opportunity.

There’ll always be space for the big, heavy, long-running creative projects in the world. Those are the ones that will go on to have enormous cultural impact – a great recent example being American Gods.

The book was originally published in 2001 (that’s sixteen years ago, in case you weren’t already feeling old). It’s getting a television series as of this year, and two ambitious claims have emerged: That this will be a series on the level of Game of Thrones, and that the content in the novel itself could keep the series going for several years without needing much modification.


American Gods took Gaiman several years to write. I think I read somewhere that it was (at least) 5 years. I can’t source that right now, but I could certainly believe that, given the scope of the work.

I’m convinced that American Gods will go on to have a huge cultural impact, win multiple awards (the book already won awards back in 2001), and inspire thousands of new creators.

But not all creative work has to be on the same scale, and there’s just as much place for the people who produce entertainment on a much more regular basis. Especially now that there are so many formats, the cost of entry is lower than it’s ever been, and you can probably find an audience for even the narrowest of niches.

It’s just down to overcoming your inner critic, and making a start!

You should probably invest in crypto

Obvious upfront disclaimer: Don’t take financial advice from the internet, least of all me. And don’t sue me if you make a bad bet and lose – like I’ll point out below, this is still an extremely volatile space.

I have “mixed” opinions on cryptocurrencies. On the one hand, I’m enormously frustrated with the quasi-religious evangelism that tells people crypto is the future of literally all money, and the pooh-poohing of simple technical concerns like “how do you scale to hundreds of millions of concurrent real-time transactions?”

Crypto as we know it today won’t replace money as we know it today. Some future optimized version of crypto might, but it will be unrecognizable as compared to the algorithms of today.

On other aspects, I’m a lot more positive – the technology behind it (blockchain) is the real revolution, and a completely new way of thinking about storing, processing and ensuring the integrity of data. New applications built on blockchain technology have enormous potential, and it’s all open-source.

The reason I put “mixed” in quotes is that these positions seem inconsistent to fans of crypto (“how can you not believe it’s the future of money if you think it’s so cool?”), which is an argument entirely on its own, and more than I want to get into right now.

So what are cryptocurrencies? As simply as I can explain it: digital gold.

A bit more complex: a shared ledger, secured by a mathematical algorithm and maintained across tens of thousands of computers, which reach consensus on whether or not each transaction on it is authentic.

In other words – it’s effectively impossible to forge, and very hard to steal (short of someone getting hold of your private keys, or actually forcing you at gunpoint to transfer coins). It is possible to lose, however (as the upcoming BIP 148 hiccup might end up demonstrating), and of course it’s always possible that governments might decide to regulate or outlaw it altogether.
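The hash-linked ledger idea can be sketched in a few lines (a toy illustration only – real blockchains add proof-of-work mining and network-wide consensus on top of this structure):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Each block records transactions plus the hash of the previous block,
# so altering any historical entry invalidates every later hash.
chain = []
prev = "0" * 64  # genesis marker
for txns in [["alice pays bob 5"], ["bob pays carol 2"]]:
    block = {"transactions": txns, "prev_hash": prev}
    prev = block_hash(block)
    chain.append(block)

def is_valid(chain):
    """Every block's prev_hash must match the hash of the block before it."""
    for earlier, later in zip(chain, chain[1:]):
        if later["prev_hash"] != block_hash(earlier):
            return False
    return True

print(is_valid(chain))  # True
chain[0]["transactions"][0] = "alice pays bob 500"
print(is_valid(chain))  # False: the forgery breaks the hash link
```

This is why “forging” an entry means redoing the work for every block after it, across the majority of the network – which is what makes the ledger trustworthy without a central authority.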

It’s also hard to devalue by manipulating volume – most currencies have a set upfront limit on how many coins will ever be issued, and free markets are entirely responsible for determining the value. I’m sure this is one of the more attractive aspects for the folks who deeply distrust any government, bank or other institution.

Which is where the volatility comes into it. It’s very unlikely, but it is possible at any point that one of these currencies (and there are many) could simply stop operating.

Their existence depends entirely on the network of miners (networked computers) that agree to process transactions for that currency, and at some point, regular external market forces could make some of the smaller currencies untenable.

There’s also the regulation aspect hanging over this. Apart from the obvious criminal uses of a nearly-untraceable cryptocurrency, there’s the wildly unregulated speculation going on right now.

As of today, Bitcoin alone has a USD-denominated market cap of over $40 billion. What this means in practical terms: a sizeable pool of investors (in the hundreds of thousands by now, at least) has bought Bitcoin in exchange for real money, likely in the hope that it’ll appreciate in value over time.

That’s a really big roulette wheel, and so far (for the most part) it’s paid off quite handsomely. It has a lot of the hallmarks of historical gold rushes: Someone’s found something of value, and people are tripping over each other to get in on the action.

That value, though, is likely to only ever exist in the realm of financial speculation. Bitcoin is a desirable, limited-edition asset, which in and of itself is enough to drive speculators and gamblers to it. At $40bn, you have to assume there’s a lot of interest in keeping the entire system propped up, almost entirely for its own sake.

Which is not necessarily a bad thing, and it’s why I’m optimistic about crypto. $40bn is technically a tiny market cap, compared to assets like gold ($7 trillion) – but it has proven itself over the last few years to be stable and growing.

And there’s still room to grow. Sure, getting into Bitcoin today is hard: if you’re planning on buying hardware and setting up as a miner, you’d better hope you can get free electricity – otherwise you’re losing money the moment you turn that machine on.

The market can always expand, though, and this is the main reason for my optimism. Bitcoin is the largest, oldest and most well-known, but it’s by far not the only cryptocurrency that can be traded for fiat. Here’s the top 10 by market cap, as per


Each symbol there represents an entirely different blockchain – a different asset. And while a single currency has a set limit on the number of coins it can produce, there’s no limit to the number of currencies that can be set up.

Unlike gold, wine, art or land, cryptocurrencies can be forked and spun up infinitely. As an asset class, it can expand to soak up every bit of market demand. Every person in the world, theoretically, could have holdings in at least one cryptocurrency.

So why the title of this post, and why do I think it’s worth investing in? Simple: It’s still early days, and it’s still got lots of room to grow.

With the amount of interest and investment going into the underlying systems, and the value of owning some cryptocurrency (both inherent and perceived), they’re all likely to continue to grow in value over the long term.

Today, a single Bitcoin costs around $2’500. There’s no reason to believe it won’t hit $10’000 someday. The same is true of most of the currencies in that screenshot above – whatever the cost is today, there’s a good chance it’ll be a great deal higher in a few years.

My quick anecdote on this: In January I moved R800 into a local crypto exchange, then found out they wanted to do KYC processes over unencrypted email, got annoyed, asked for a refund, and was denied. So in frustration I just cut my losses and forgot about it.

Until a few days ago, when I remembered the whole saga and logged in to see if anything had changed. And one thing had: In around 5 months, my R800 investment in Litecoin had turned into R4’720.
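For what it’s worth, the implied growth rate on those numbers is easy to work out (a quick sketch using the figures above):

```python
# Figures from the anecdote: R800 grew to roughly R4'720 in about 5 months.
start, end, months = 800.0, 4720.0, 5

total_return = (end - start) / start         # simple return over the period
monthly = (end / start) ** (1 / months) - 1  # implied compound monthly rate

print(f"total return: {total_return:.0%}")   # 490%
print(f"compound monthly rate: {monthly:.1%}")  # roughly 42.6%
```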

That’s a return you’re not going to find anywhere else – which is what makes it simultaneously alluring, and incredibly risky. There could just as easily have been a crash that wiped out my R800 forever.

Which is why I started small, and would advise anyone else interested in this to do the same. I’ve got accounts on three different exchanges, and holdings in two different cryptocurrencies. Every now and then when I get a bit of spare cash left over that I can afford to save, I move some of it into (what I now consider) my cryptocurrency portfolio.

All that spare cash, accumulating over time, has added up to very decent growth. As of today, the portfolio is up around 35%, and I’ve been doing this for less than a year. That’s an absolutely insane growth rate compared to other investment options.

And it could all disappear tomorrow, leaving me with absolutely no recourse. Which is why, as tempting as it is sometimes, I’m not gonna bet the farm on this just yet.

What it is, though, is the single riskiest position across all my investments, and the one that’s most likely to yield big returns in a short space of time. I’m expecting (and somewhat hoping) that this growth trend will continue for the next few years, too.

But I’m also expecting that at some point, all the valuations might just evaporate overnight, and I would have done no better than slowly setting that cash on fire. Tempered expectations are pretty much a requirement if you’re gonna ride this roller-coaster – at least, in my experience!

If you’re a South African and you want to get started, is your best bet. Otherwise, and are good options.

Nothing in this post should be construed as financial advice. Please consult a professional before making incredibly risky investment decisions. 

Casting off for new shores

One thing I’ve learned this week – even if you know about the sunk cost fallacy, it can still somehow creep up on you unexpectedly.

Writing has been a lifelong dream for me. For about as long as I can remember, I’ve always been fascinated with stories, and would constantly dissect and re-imagine them in new ways.

But it never felt like the right time to actually write. The reasons are varied, complex, mostly depressing, some aggravating, and all in the past – the sum of it is that I’ve never really been able to settle on something that I actually wanted to do, and as a result of that paralysis, have done largely nothing.

Last year, May 2016, I redid this blog – archived all the old content, changed the domain name (supremely happy to have landed, let me tell you), and wrote this introductory post.

Fast forward a year, and while I’ve done some measure of that privately, I haven’t done nearly enough of it publicly. What I have done, instead, is retreat back to the same comfort zones I’ve always had.


Towards the end of 2016 I started getting agitated that I wasn’t making progress with my writing, and decided to tackle it as a pure productivity problem. People are writing and publishing books every day – there has to be a system that will work, right?

That’s where the idea for Write500 came from – my own desire to set specific goals that I could hit, every day, and make progress as a result.

But then my comfort zones kicked in.

When I talk about the “sunk cost fallacy”, the first assumption you might make is that I’m referring to the time I spent on this specific project – which is a factor. Over the last few weeks I’ve been working on Write500 more or less because it exists, and because I know there are people that are interested in how it evolves. Not so much because I think it can actually solve the problems I need it to solve.

So that’s one level of fallacy right there – working on something because I’ve been working on it before, and the cost of killing it is somehow (inexplicably and irrationally) unacceptably high.

But there’s another level to this, and it’s only in this last week that it’s really been driven home for me: My career so far is, itself, a sunk cost.

My approach to solving problems is almost always rooted in software, which shouldn’t be a surprise – I learned to program at a very young age, and because of my knack for it, I was able to get a job, which I was then able to turn into something resembling a career.

As a result, when I’m deciding the best way to add value to the world (which I believe is something we should all try to do, in our own way), my main inclination is almost always software. I keep coming back to it, thanks to how far it’s gotten me in life so far.

Over the last year though, that inclination’s been challenged somewhat. Last year around this time, I was in the process of handing over my biggest freelance client to another agency, thanks almost entirely to burnout.

The idea of working late nights developing software for paying customers had lost its appeal entirely, despite how absolutely brilliant it was to be able to monetize my nights and weekends, and build up some savings as a result.

I’ve had a similar inclination at work, too – whenever a new problem presented itself that could be solved with some custom development, that would always be the first suggestion I’d make.

That hadn’t really been a problem up until the end of 2015, while I still had the latitude to develop things I thought needed to be developed. A change in my job role then forced me to change the way I solve problems – including occasionally not solving them at all.

I imagine it’s at this point that a lot of developers would quit out of frustration – feeling like they’re adding no value, or taking umbrage at not being able to use or grow their skills.

I didn’t quit, though, and I ended up learning something new: That it was possible for me to be productive (and add value) without writing a single line of code.

I’m sure that seems obvious to a lot of people, but it’s only recently become clear to me how big of a mental shift this actually is. And it brings me back to the thing about writing.

I’ve wanted to be a writer for as long as I can remember – the idea of spending time in my own world, creating characters and stories within it to share with other people – is incredibly appealing.

But with my very narrow view of problem-solving, I’d always look at my lack of writing as something that could be solved with software. And so instead of actually writing, I’d set out to shave as many yaks as possible.

It’s the old “if all you have is a hammer” adage – the problem of me not writing started to look like a nail. A problem that could be solved if I just found the perfect combination of tools, frameworks, and the right approach.

Which as it turns out, is horribly wrong – at least for me.

Most of my work on Write500 was underpinned by that. The first, most basic thing it was meant to do was deliver daily writing prompts (a tool I always wanted to build anyway). But beyond that, I wanted Write500 to solve two other problems: Be a daily go-to tool to produce new content, and be a revenue-generating SaaS product.

Except that neither of those things (and it’s obvious now) actually moves me any closer to being a writer. It’s actually the exact opposite: I’m creating new tasks for myself that specifically prevent me from writing, but justifying it by telling myself that once I build this, I’ll be equipped to write.

Which is bullshit, and I think I always knew it was bullshit, but I let myself believe that anyway.

Another big dimension to all of this is that I’m doing all of this work in my spare time. What little of it I have, anyway. Time to work on these sorts of things is a scarce resource for me, and I haven’t been making very good use of it by focusing almost entirely on things that move me in the exact opposite direction of my goals.

And so last night, while processing all of this (and failing to fall asleep) I came to the eventual realization that I have to kill Write500. Specifically, the extensions to it – the daily prompts thing is still quite useful, and low-maintenance on my part.

Once I actually go through the process of producing and publishing something, I’m sure I’ll uncover lots of problems from that experience – and I might find a gap that could be filled with software.

For now, though, I’m rolling everything back and parking this project. A part of me still hates to do it, but the reality is that I have limited time available to me, and I’m not actually making the progress I want to make.

Instead, I now know I should be focusing on the things that are outside my comfort zone: Namely, writing things I think are shitty, and sharing them with people that might have nothing good to say about it. Which will be a start 🙂

Mid-year Resolutions

It really does feel like January was just the other day – hard to believe we’re already halfway through 2017. It must be some sort of bad joke.

The passage of time hit me properly today when a conversation about cryptocurrencies came up. I remembered that just a few weeks ago, I had dumped a bunch of money into a local exchange and bought some, then abandoned that exchange after an issue over support.

(A few weeks ago == January. It’s been five months, but we’re not mentioning that.)

I logged in to find out my R800 investment had somehow turned into R4700, thanks to the exploding value of Litecoin. I could have had an even bigger return had I logged in just a few weeks prior, but I was happy enough with nearly 500% growth over 5 months.

That experience brought me back to January, for which I still had the unresolved support case sitting in my inbox, along with a pile of notes and plans and ideas. I had sketched out a few things I wanted to achieve in 2017, and it feels like I’ve just now come up for air and half the year is gone.

Among the many things I’ve wanted to do is to actually get a book published. It’s why I started Write500 in the first place, and it’s why I’ve tried to build a daily writing habit. I’ve done okay in some weeks, badly in others, but still not enough to actually get a first draft done.

It reminds me (rather annoyingly) of a conversation I had back in 2007. Back then, at the ripe old age of 18, I had already decided I was going to write and publish something, and declared I’d have it done by November of that year. So naturally, someone went and marked that up on a calendar.

November 2007 came and went, no novel.

And now it’s ten years later.

It’s my fault, really, for not putting writing and writing-related activities higher on my list of priorities. I figure if you’ve passively thought about doing something for more than ten years, it’s probably something you actually have to do, right?

So the first thing I’m going to try doing differently is writing here a lot more, rather than averaging a month between posts. I’ve always tried to have a topic, theme, structure, and something useful to say before saying it – and that’s led to me more or less saying nothing, since the bar to actually blogging is set so high.

The other thing I’m going to do is actually try pulling away from tech a lot more. I decided ages ago that I needed to spend a lot more time on creative work, and yet somehow the majority of the content posted so far this year is tech-related.

I guess old habits are really hard to break!

So for this 2017 Mid Year’s Resolution, I’m going to try writing more frequently, and less about technology. That seems like a good enough place to start.

Write500 and the Abyss of Reluctance

The story of Write500 so far can be best summed up in this commit chart from Bitbucket:


The project started in December 2016, with the intention to create a tool to help writers write every day. At the time, I was pretty optimistic about my ability to retain focus:

…if I can finish this off as intended, I’ve got some other feature ideas to throw in. But right now, I shouldn’t get distracted 😉

I wrapped up and launched the first version before December drew to a close, and I was able to open registrations on 1 January 2017 – ringing in the New Year with a new project.

I then engaged the marketing engines, trying out a few different formats, and it wasn’t long before I reached the 500 subscriber mark. That, too, was hugely motivating – and it was less than a week later that I created the write500-app repository, starting work on the “real” version of Write500.

This was going to be the web app that I had originally envisioned, plus a ton of new ideas that I had picked up over the first few weeks. It was going to be an all-in-one of sorts: Writing and statistics, a built-in social feature, built-in community, and enough features to support two pricing tiers.

At the time I wrote my 28 January update, I had not actually done any creative writing since the 13th (about the time I started planning out the full app). I figured it was okay to let that lapse, since I was focused on building something that would eventually help me get back on track.

That momentum carried all the way through into the first two weeks of February, which is when the inertia set in. It’s pretty clear from the chart above: my code commits to the project simply fell flat. Looking back on it now, there were two main reasons:

Feature Fatigue

Too much, too early. I had actually managed to build up (at least, in my head) a beast of a system, but for every new feature I added, it felt like more features were missing. Pretty soon, the lists of things I planned on implementing had eclipsed the intention of the project itself (seriously, I was reinventing activity feeds towards the end).

I could actually start to feel the drift: more and more hours were being spent productively, but it felt like every new commit was pulling me further away from launching a usable product. Worse, it was getting harder for me to justify why the code I was working on would actually help writers write more.

In the space of a few weeks, Write500 turned from an exciting project into something I dreaded – towards the end, I just couldn’t bring myself to open the IDE and carry on working. I was firmly in the Abyss.

No Dogfooding

I had stopped writing every day – the irony of which does not escape me. At some point it became more important to me to work on Write500, than it did to actually write every day – the very problem Write500 purports to solve.

As the necessity to write every day started to wane, so did my motivation for solving the problem. It was a spiral I managed to trap myself in pretty effectively – as the scope of the product expanded, and my capacity for daily writing diminished, I thought I could solve the problem by designing extra features.

I had lost sight of trying to solve the problem for my hypothetical users in the simplest way, and was instead trying to solve a problem that existed entirely in my mind.

With the marketing campaigns long-expired (and subscribers only trickling into the free list), and my own capacity for writing eaten up by software development, that vicious cycle finally ground me down to complete ineffectiveness around mid-February.

It’s usually around this point in my projects where I just give up – I decommission, archive, and shelve, chalk it up to my inability to stay on-target, and move on. And I came close to doing that several times.

How is Write500 different? In the end, I think it had everything to do with this chart:


After almost 5 months on the daily list, less than 25% of users had unsubscribed. More than 600 people were still getting value out of those daily prompts.

That chart gave me a different perspective on the problem entirely. Where I had been trying to solve problems by introducing ever-more-complex features, most users to date had simply carried on with the free list.

Maybe I was over-engineering it? That thought only occurred to me around mid-April. Maybe it would be possible (even, desirable) to throw away everything except the core experience (getting a new prompt every day), and basically start over.

The Great Purge

And so on 11 May 2017, I started doing just that.


I gutted the entire project – all the controllers, views, models, migrations, resources, assets, most of the configuration, most of composer.json. And then I started over.

By the end of the first day, I had re-implemented the basics – authentication, prompts, the basic writing interface, the streak display. All perfectly-functional components of the behemoth project, brought over more-or-less intact.

The remarkable thing here? In the old project, those exact same components felt like smaller by-products of a larger vision. In the rebuild, with a fresh perspective, they actually felt like core components again. I found myself able to chart a much clearer path between the code I was writing, and the value I expected my users to be able to get out of this.

A week later, I had the subscription mechanism and PayPal integration restored, and documented better than before. I added a new Statistics mechanism, which now tracks and records word count and speed pretty much in real time. I added the export options, which were initially high on my list of priorities but had fallen by the wayside.

This purge-and-refactor process brought back all the motivation I had lost before. Write500 transformed again – from something seemingly without end, to a project I could conceivably finish.

All the commits from the 21st onwards were mostly cleanup and polish – fixing typos, rearranging screen elements, testing in BrowserStack (unbelievably useful) for the major mobile devices, adding a streamlined onboarding path for migrating users from the free list service, and so on.

This past Sunday (the 28th) I rounded it off by adding the Terms and Privacy pages ( was enormously helpful for the former), and finally pushed the v0.3.0 tag. I did this while sitting in a hotel room in London, having just landed a few hours before.

On Monday (thankfully, a bank holiday in the UK), I was able to refine and run the new dispatch system, and gave it a full day to test. And today, I completed the migration of all users from the free list service to the new version of Write500.

Less really is more

There’s an excellent quote – the origins of which I have long since forgotten – which I routinely forget to apply to my own work (and I’m paraphrasing a bit):

Every project eventually exceeds the developer’s capability to maintain it.

Write500 outgrew my ability to maintain it even before it had made it out of my dev environment – which is not smart, and is the reason I failed to launch it in the first place.

With limited time and resources, the smartest approach is almost certainly the leanest one. The version of Write500 I have deployed right now (0.3.5) is a far cry from the vision I have for it, but it has one compelling thing in its favor: It exists.


Carrying on

It exists, but it’s also only getting started. The real test is whether or not there is actually a market for this. I’m happy with the way the free list has performed – there’s clearly some demand out there for tools that make consistent writing easier to achieve.

Is there enough demand, though, to turn this into a paid product? I guess only time (and marketing!) will tell.

At the very least, I’m glad to have been able to make this amount of progress. Write500 is the first project that actually came back from the Abyss of Reluctance, and made it into production.

Which, right now, is enough for me!

Getting started with Mastodon!

Mastodon Setup

Howdy, stranger! This document is the other half of this video, in which I set up a single-server instance of Mastodon. This was assembled on 9 April 2017, and there’s a good chance that some of the specifics here will change over time. I’ll keep an updated version up on

(What is Mastodon? I’ll do another post on that sometime!)

If you’d like, you can download a plain HTML, styled HTML or PDF version of this post instead – it might make copying some of the code easier.

UPDATE 17 April 2017: Mastodon has reached v1.2, and now requires Ruby 2.4.1. The post has been updated with new commands required as of today, and an upgrade guide is below.

0. Pre-Prerequisites

At a bare minimum, you’re going to need:

  • A domain name, with the ability to add an A record yourself
  • A free Mailgun account, with the account verified and your sandbox enabled to send to you
  • A 1GB RAM machine with decent network access. This document uses a DigitalOcean VM.

This setup procedure skips a few things that you may want to do on a “productionized” or “community” instance of Mastodon, such as configuring S3 file storage, or using a non-sandbox email send account. You may also want a beefier machine than just 1GB RAM.

For reference, the OS version in use is Ubuntu 16.04.2 LTS and all the commands are being run from the root user unless explicitly specified.

1. Getting started!

The first couple steps:

  • Create the VM
  • Point your domain to it immediately, by setting the A record to the public IP
  • Log into the VM
  • Set your root password
  • Create a new Mastodon user: adduser mastodon
  • Update the apt cache: apt-get update

2. Install Prerequisites

Now we’ll grab all the prerequisite software packages in one go:

# apt-get install imagemagick ffmpeg libpq-dev libxml2-dev libxslt1-dev nodejs file git curl redis-server redis-tools postgresql postgresql-contrib autoconf bison build-essential libssl-dev libyaml-dev libreadline6-dev zlib1g-dev libncurses5-dev libffi-dev libgdbm3 libgdbm-dev git-core letsencrypt nginx

That’ll take a little while to run. When it’s done, you’ll need Node (version 4) and yarn:

# curl -sL https://deb.nodesource.com/setup_4.x | bash -
# apt-get install nodejs
# npm install -g yarn

You’ll also want to be sure that redis is running, so do:

# service redis-server start

3. Configure Database

With Postgres installed, you need to create a new user. Drop into the postgres user and create a mastodon account (with permission to create databases):

# su - postgres
$ psql
> CREATE USER mastodon CREATEDB;
> \q
$ exit

Later on we’ll configure mastodon to use that.

4. Generate SSL certificate

Before configuring nginx, we can generate the files we’ll need to support SSL. First, kill nginx:

# service nginx stop

Now proceed through the LetsEncrypt process:

  • Run letsencrypt certonly
  • Enter your email address
  • Read and acknowledge the terms
  • Enter the domain name you chose

If the domain name has propagated (which is why it’s important to do this early), LetsEncrypt will find your server and issue the certificate in one go. If this step fails, you may need to wait a while longer for your domain to propagate so that LetsEncrypt can see it.

5. Configure nginx

With the SSL cert done, time to configure nginx!

# cd /etc/nginx/sites-available
# nano mastodon

Simply substitute your own domain name wherever example.com appears in this snippet (the two server_name lines and the two certificate paths), then paste the entire thing into the file and save it.

map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

server {
  listen 80;
  listen [::]:80;
  server_name example.com;
  return 301 https://$host$request_uri;
}

server {
  listen 443 ssl;
  server_name example.com;

  ssl_protocols TLSv1.2;
  ssl_ecdh_curve prime256v1;
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;

  ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

  keepalive_timeout    70;
  sendfile             on;
  client_max_body_size 0;
  gzip off;

  root /home/mastodon/live/public;

  add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

  location / {
    try_files $uri @proxy;
  }

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;

    proxy_pass_header Server;

    proxy_pass http://localhost:3000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  location /api/v1/streaming {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;

    proxy_pass http://localhost:4000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  error_page 500 501 502 503 504 /500.html;
}

Once you’ve saved and closed the file, enable it by creating a symlink:

# ln -s /etc/nginx/sites-available/mastodon /etc/nginx/sites-enabled/mastodon

Then test that the file is OK by running nginx -t. If it reports any errors, you’ll want to fix them before moving on. If the file comes back OK, fire it up!

# service nginx start

Open a browser tab and navigate to your domain. You should get a 502 Bad Gateway error, secured with your LetsEncrypt cert. If not, go back and make sure you’ve followed every preceding step correctly.

6. Configure Systemd

Mastodon consists of 3 services (web, sidekiq and streaming), and we need to create config files for each. You can use the code straight from this page, as-is.

# cd /etc/systemd/system/

The first file is called mastodon-web.service and consists of the following (the unit layout follows the official production guide; adjust paths if yours differ):

[Unit]
Description=mastodon-web
After=network.target

[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="RAILS_ENV=production"
Environment="PORT=3000"
ExecStart=/home/mastodon/.rbenv/shims/bundle exec puma -C config/puma.rb
TimeoutSec=15
Restart=always

[Install]
WantedBy=multi-user.target

The next file is called mastodon-sidekiq.service and consists of the following:

[Unit]
Description=mastodon-sidekiq
After=network.target

[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="RAILS_ENV=production"
Environment="DB_POOL=5"
ExecStart=/home/mastodon/.rbenv/shims/bundle exec sidekiq -c 5 -q default -q mailers -q pull -q push
TimeoutSec=15
Restart=always

[Install]
WantedBy=multi-user.target

The final file is called mastodon-streaming.service and consists of the following:

[Unit]
Description=mastodon-streaming
After=network.target

[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="NODE_ENV=production"
Environment="PORT=4000"
ExecStart=/usr/bin/npm run start
TimeoutSec=15
Restart=always

[Install]
WantedBy=multi-user.target

Once all those are saved, we’ve done all we can with the root user for now.

7. Switch to the Mastodon user

If you haven’t yet logged into the server as mastodon, do so now in a second SSH window. We’re going to set up ruby and pull down the actual Mastodon code here.

8. Install rbenv, rbenv-build and Ruby

As the mastodon user, clone the rbenv repo into your home folder:

$ git clone https://github.com/rbenv/rbenv.git ~/.rbenv

When that’s done, link the bin folder to your PATH:

$ echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bash_profile

Then add the init script to your profile:

$ echo 'eval "$(rbenv init -)"' >> ~/.bash_profile

That line is valid for the OS we’re on (Ubuntu 16.04 LTS) but it may differ slightly for you. You can run ~/.rbenv/bin/rbenv init to check what line you need to use.

Once you’ve saved that, log out of the mastodon user, then log back in to complete the rest of this section.

Install the ruby-build plugin like so:

$ git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build

Then install Ruby v2.4.1 proper:

$ rbenv install 2.4.1

This could take up to 15 minutes to run!

When it’s done, change to your home folder and clone the Mastodon source:

$ cd ~
$ git clone https://github.com/tootsuite/mastodon.git live
$ cd live

Next up, dependencies! Always more dependencies – we’ll install bundler, then use that to install everything else:

$ gem install bundler
$ bundle install --deployment --without development test
$ yarn install

If all of those succeeded, we’re ready to configure!

9. Configure Mastodon

Before diving into the configuration file, generate 3 secret strings by running this command 3 times:

$ bundle exec rake secret

Copy those out to a text file – you’ll paste them back in later. Create the config file by copying the template, then editing it with nano:

$ cp .env.production.sample .env.production
$ nano .env.production

Inside this file we’re going to make several quick changes.


To enable federation, you need to set your domain name here:

LOCAL_DOMAIN=example.com

Then, for these 3, paste in each key you generated earlier:

PAPERCLIP_SECRET=
SECRET_KEY_BASE=
OTP_SECRET=

Finally, configure your SMTP details (the server and port below assume Mailgun's sandbox):

SMTP_SERVER=smtp.mailgun.org
SMTP_PORT=587
SMTP_LOGIN= (whatever your Mailgun SMTP login is)
SMTP_PASSWORD= (whatever your Mailgun SMTP password is)

Save and close the file.

10. Run installer

If you’ve done everything correctly, this command will install the database:

$ RAILS_ENV=production bundle exec rails db:setup

If that passes successfully (it’ll echo out every command it runs), you can then precompile the site assets, which may take a few minutes:

$ RAILS_ENV=production bundle exec rails assets:precompile

At this point, we’re almost ready to go!

11. Configure cronjob

This is technically optional, but highly recommended to keep your instance in good order. As the mastodon user, start by determining where your bundle command lives:

$ which bundle

That path will be substituted for $bundle. Now, edit your own crontab:

$ crontab -e

Select nano (2) if you’re prompted. As of version 1.2 (17 April 2017) you only need one daily task in your crontab:

5 0 * * * RAILS_ENV=production $bundle exec rake mastodon:daily

Save and close the crontab.

12. Log out and return to root

We’re done with the mastodon account. Log out and return to your root shell.

13. Start Mastodon

The moment of truth! Enable the Mastodon services (so that they start on boot):

# systemctl enable /etc/systemd/system/mastodon-*.service

Then fire up Mastodon itself:

# systemctl start mastodon-web.service mastodon-sidekiq.service mastodon-streaming.service

Open up a browser tab on your domain. Mastodon can take up to 30 seconds to warm up, so if you see an error page, don’t fret. Only fret if it’s there for longer than a minute – that requires troubleshooting, which is outside the scope of this document.

You should eventually get a signup page. Congratulations! Register an account for yourself, receive the confirmation email, and activate it. This should enable you (the first user) as an administrator.

14. Securing Mastodon

This is by no means a comprehensive guide to server security, but there are two quick things you can change while the root shell is open. Start by editing the passwd file:

# nano /etc/passwd

Find the mastodon entry (it’ll be near the bottom) and replace /bin/bash with /usr/sbin/nologin. Save and quit. This will prevent anyone from logging in as the mastodon user.

Next, configure ufw. First check if it’s disabled:

# ufw status

It should be off, since this is a brand new VM. Configure it to allow SSH (port 22) and HTTPS (port 443), then turn it on:

# ufw allow 22
# ufw allow 443
# ufw enable
? y

That will prevent any connection attempts on other ports.

15. Enjoy!

If you enjoyed this guide, I’d appreciate a follow! You can find me by searching in your Mastodon web UI. Give me a shout if you were able to get an instance set up with these instructions, or if you ran into any problems.

16. Upgrade to v1.2 (17 April 2017)

If you’ve installed Mastodon according to these instructions, you’ll need to do a few things to upgrade to the latest version.

Start by logging into your instance as the root user, then re-enabling your mastodon user shell (in step 14, change the mastodon user’s shell back to /bin/bash). We’ll use it in a bit to perform the upgrades themselves.

When that’s done, stop the Mastodon services like so:

# systemctl stop mastodon-*

That will shut down all the Mastodon services. In a new window, log into your mastodon user and install Ruby 2.4.1, the new preferred version:

$ cd live
$ rbenv install 2.4.1
$ gem install bundler --no-ri --no-rdoc

This will install the latest Ruby, and the version-appropriate bundler. Now pull down the latest source code:

$ git pull

There are a couple of one-time commands to run – in order, they are to install new dependencies, run database migrations, do a one-time avatar migration, and recompile the frontend.

$ bundle install
$ yarn install
$ RAILS_ENV=production bundle exec rails db:migrate
$ RAILS_ENV=production bundle exec rake mastodon:maintenance:add_static_avatars
$ RAILS_ENV=production bundle exec rails assets:precompile

When this is all done, make sure your crontab has been updated to use the new mastodon:daily command. Refer to step 11 above for details.

Finally, the teardown – log out of the mastodon user, and switch back to your root connection. Set the mastodon user’s shell back to /usr/sbin/nologin (step 14) and restart the Mastodon services:

# systemctl start mastodon-web.service
# systemctl start mastodon-sidekiq.service
# systemctl start mastodon-streaming.service

Give it a few seconds to warm up, and check that they’re running with:

# systemctl status mastodon-web.service

If you get a green dot with “running”, you’re good to go!


A lot of this guide was sourced from the official Production guide on the Mastodon GitHub page. I reordered it into a logical sequence after running through it a few times.

This post was updated for v1.2 (and v1.1.2) upgrade notes on 17 April 2017.

Integrating Laravel and Flarum

One of the features I wanted to add to Write500 was a community forum. I’ve worked with Flarum before, and while it’s still ostensibly beta software, it has some really nifty features and works quite well across desktop and mobile. I decided early on that I’d plan to integrate a standalone instance of Flarum into Write500.

Flarum has a JSON API, which as it turns out is really easy to use (despite being insufficiently documented at the moment).

I wanted to set up an integration such that Write500 handled the authentication (you could only log in from the site, not the forum itself), and that changes to a user’s Write500 profile would be synced across.


This integration broke down into several parts. I’ll go into detail on each one here – hopefully it’s useful!


  1. Laravel Setup
  2. Flarum Setup
  3. Integration: Creating a new user
  4. Integration: Authenticate the user
  5. Integration: Flarum Modification
  6. Integration: Avatars
  7. Conclusion and Additional Notes

Laravel Setup

At the outset, I implemented a “nickname” field on my User model, with the following constraints (in the validator() method in RegisterController).

'nickname' => 'required|max:32|unique:users'

Those are the same constraints Flarum uses. So now, when I integrate the two (using the nickname as the Flarum username), I can be at least 99% sure it’ll work every time – barring the inevitable weird-unicode-character issue, or some other edge case.

I also ended up adding the following to my User table – in migration terms, something like:

$table->integer('forum_id')->nullable();
$table->string('forum_password')->nullable();

Those fields will be populated when the integration runs, and are critical for everything else.

Finally, I needed a few configurable fields. I added these to my config/app.php file:

'forum_url' => env('FORUM_URL', ''),
'forum_admin_username' => env('FORUM_USER', 'default_admin'),
'forum_admin_password' => env('FORUM_PASS', 'default_pass'),

Note that in the samples below, I don’t reference any of these fields directly.

Flarum Setup

There are a few tweaks made to Flarum to enable all of this. In summary:

  • Installed the latest version of Flarum
  • Set up HTTPS using LetsEncrypt
  • Added the Flarum Links extension
  • Set up the default admin user as a generic account (not named after myself)
  • Disabled new-user registration

About the Links extension

When a user navigates to the community, I want them to be able to navigate back to any other part of the site, without feeling like they’ve really left Write500. To do this, I used the Links extension to create a navbar with the same links as on the main site.

Screenshot: header on the main site

Screenshot: header on Flarum

I could have used Flarum’s Custom HTML Header feature for that, and created my own menu bar to replace the forum’s one, but that would have lost me the search bar and notification indicator.

About the default admin user

I use the username and password for the admin user to drive the integration. I could technically create an API key on Flarum’s side and just use that instead (it would save me a CURL call), but that also means going directly into the Flarum database to do stuff, which I wasn’t keen on.

One day, when you can generate API keys in the Flarum interface itself, that would obviously be the way to go.

Note about the code samples

The code samples below (all the gist embeds) are not taken directly from my Write500 source – they’ve been adapted for standalone illustration.

Integration: Creating a new User

My first integration requirement:

Whenever a new user is created in Laravel, that same user must be created and activated in Flarum.

The following has to happen:

  1. Authenticate to the Flarum API with my admin account
  2. Create a new user
  3. Activate that user
  4. Link that user to our Laravel user

I ended up putting all of that in a Job, and dispatching it when a new User object was created (in my RegisterController):

dispatch(new \App\Jobs\SyncUserToForum($user));

And now for the actual code!

Step 1: Authenticate

Nothing complicated here – just POSTing to the API to get a token back for this session. This would be the step I could skip if I had a preconfigured API token.

This same method is used whenever I need to log into the Flarum API, so for brevity I’m excluding this code from the rest of the samples.
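For illustration, a bare-bones version of that call might look like this – plain cURL, no framework helpers. The /api/token endpoint and its identification/password fields are Flarum's; the URLs and credentials are placeholders:

```php
<?php
// Build the credentials payload for Flarum's POST /api/token endpoint.
function buildTokenPayload(string $username, string $password): string {
    return json_encode([
        'identification' => $username,
        'password'       => $password,
    ]);
}

// Authenticate against Flarum and return the session token (or null on failure).
function flarumLogin(string $forumUrl, string $username, string $password): ?string {
    $ch = curl_init(rtrim($forumUrl, '/') . '/api/token');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => buildTokenPayload($username, $password),
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $response = json_decode(curl_exec($ch));
    curl_close($ch);

    return $response->token ?? null;
}
```

buildTokenPayload() is split out purely so the payload shape is easy to see (and to test).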

Step 2: Create a new user

I should, strictly, be checking the API first to see if that nickname or email exists – to prevent any attempts to create duplicate users. However, both Laravel and Flarum itself should fail a new registration if the nickname or email address fields are not unique – so I should be okay there.
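A sketch of the create call, assuming the admin token from Step 1 ($adminToken below) and Flarum's JSON:API envelope for POST /api/users – treat the exact envelope as illustrative:

```php
<?php
// Wrap the new user's details in a JSON:API envelope for POST /api/users.
function buildUserPayload(string $username, string $email, string $password): string {
    return json_encode([
        'data' => [
            'attributes' => [
                'username' => $username,
                'email'    => $email,
                'password' => $password,
            ],
        ],
    ]);
}

// Create the user on Flarum, authenticated as the admin account.
function createForumUser(string $forumUrl, string $adminToken, string $username, string $email, string $password): ?object {
    $ch = curl_init(rtrim($forumUrl, '/') . '/api/users');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => buildUserPayload($username, $email, $password),
        CURLOPT_HTTPHEADER     => [
            'Content-Type: application/json',
            'Authorization: Token ' . $adminToken,
        ],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $response = json_decode(curl_exec($ch));
    curl_close($ch);

    // On success, $response->data->id is the new forum user's ID.
    return $response;
}
```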

Step 3: Activate the user

This could possibly be done in the registration call itself (sending isActivated=true as an attribute upfront), but I’m breaking this out for another reason.

I know that, at some point, I’m going to set up a registration workflow on Write500 that requires users to activate their account before they can log in, and I’m only going to trigger the Flarum activation at that point.

What this means, practically, is that a user’s nickname will be reserved across the system, but will only be @-mentionable once their Write500 account is activated.
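In sketch form, the activation is just a PATCH against the user resource. The isActivated attribute name is the one mentioned above; the envelope shape is my assumption:

```php
<?php
// Build a JSON:API envelope flipping isActivated on an existing user.
function buildActivatePayload(int $forumId): string {
    return json_encode([
        'data' => [
            'id'         => (string) $forumId,
            'attributes' => ['isActivated' => true],
        ],
    ]);
}

// PATCH /api/users/{id} as the admin account to activate the user.
function activateForumUser(string $forumUrl, string $adminToken, int $forumId): void {
    $ch = curl_init(rtrim($forumUrl, '/') . '/api/users/' . $forumId);
    curl_setopt_array($ch, [
        CURLOPT_CUSTOMREQUEST  => 'PATCH',
        CURLOPT_POSTFIELDS     => buildActivatePayload($forumId),
        CURLOPT_HTTPHEADER     => [
            'Content-Type: application/json',
            'Authorization: Token ' . $adminToken,
        ],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    curl_exec($ch);
    curl_close($ch);
}
```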

Step 4: Link to our Laravel user

When I created this new user, I generated a random password like so, which was used to create the user on Flarum:

$generated_password = substr(md5(microtime()), 0, 12);

The user never has to know this password, since the actual authentication is handled by the site. So we just need to update a few things on our user model:

// $new_user is the result of the POST /api/users call
$user->forum_id = $new_user->data->id;
$user->forum_password = $generated_password;

The forum user ID is not strictly required, but it makes it easier to check whether the user has a forum account yet or not – we don’t want to present the sign-in link to the user before they have a forum account to sign in to:

if($user->forum_id != null) { /* Integrated successfully */ }


So we now have a job that registers a new user account on Flarum once we have a valid Laravel user – none of it requiring any modifications to Flarum’s internal database.

Integration: Authenticate the user

Next, we need users to be able to authenticate to the forum.

Create a one-click link on Laravel’s side that authenticates and redirects the user to the Flarum instance.

There are a couple of ways to handle this, and I’ll admit upfront that the solution I’ve come up with (as of right now) may not be the best, security-wise.

The following has to happen:

  1. Create a new session token
  2. Set that as a cookie, then redirect to the forum
  3. Update the forum to only allow authenticated users access

Flarum makes this quite easy – the Token you get from the /api/token endpoint can be used as a flarum_remember cookie.

Step 1: Create session token

Once a user has been provisioned in Flarum (they have a numeric forum_id value on their model), we can use their nickname and password to generate a new session token, in the same way we authenticated the admin user.

Step 2: Set token value as cookie

I butted heads with this step for a while, before coming to a solution. On the Flarum side, I had to create a new auth.php file in the forum web root (where index.php and api.php are), with the following:
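A minimal sketch of such a file – the two-week cookie lifetime is an arbitrary choice, and the CLI guard just keeps the file safe to include from scripts:

```php
<?php
// auth.php: take a token parameter, set it as the flarum_remember cookie,
// then bounce the visitor to the forum homepage.
function extractToken(array $query): ?string {
    return (isset($query['token']) && $query['token'] !== '') ? $query['token'] : null;
}

if (PHP_SAPI !== 'cli') {
    $token = extractToken($_GET);
    if ($token !== null) {
        // Roughly two weeks, in line with a "remember me" cookie lifetime.
        setcookie('flarum_remember', $token, time() + 60 * 60 * 24 * 14, '/');
    }
    header('Location: /');
    exit;
}
```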

All that file does is take whatever token value you give it, set it as a cookie, and then bounce you to the forum homepage.

In terms of security, this setup is making me a bit nervous. However, it’s worth noting:

  • The token is sent over HTTPS, and HTTPS URLs and parameters are encrypted
  • The only way to hijack someone else’s account using this, would be knowing their session ID – at which point you can manually add that as a browser cookie anyway
  • The only other exploit that’s possible here is someone trying a script injection attack, but again, that’s nothing you cannot set in your browser manually
  • If a user tries hitting this route manually with a bad token value, they’ll be bounced to the forum, invalidated (see below) and then sent back to the login page.

With that script set up, you can now redirect the user to it from Laravel (as in Step 1), and they’ll be bounced to the forum, fully authenticated.

return redirect()->to('https://my.flarum.url/auth.php?token=' . $session->token);

Step 3: Only permit authenticated users to view the forum

This was another ugly script hack. Flarum itself lets you set permissions so that guests cannot read content – they’ll just see a “no discussions” message when opening the forum.

However, since I’m also planning on removing the Login button on the forum, the only way an unauthenticated user will be able to log into the forum, is by going via my login page.

So I need to add a redirect before Flarum itself fires, to check whether the user is authenticated. I ended up with this:
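In sketch form it's a small guard prepended to Flarum's front controller. The table and column names, credentials, and login URL below are placeholders – Flarum stores tokens in an access_tokens table, but verify the schema against your own install:

```php
<?php
// Check whether the flarum_remember cookie matches a stored access token.
function isValidToken(PDO $db, ?string $token): bool {
    if ($token === null || $token === '') {
        return false;
    }
    $stmt = $db->prepare('SELECT COUNT(*) FROM access_tokens WHERE id = ?');
    $stmt->execute([$token]);
    return (int) $stmt->fetchColumn() > 0;
}

if (PHP_SAPI !== 'cli') {
    // Connection details hard-coded here; they could be read from Flarum's config.php.
    $db = new PDO('mysql:host=localhost;dbname=flarum', 'flarum', 'secret');
    if (!isValidToken($db, $_COOKIE['flarum_remember'] ?? null)) {
        // Unauthenticated: bounce to the main site's login page.
        header('Location: https://write500.example.com/login');
        exit;
    }
}
```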

When a user authenticates with Flarum (which all runs via the api route eventually), a new access token is generated and stored in that table. If the value matches, they’re logged in – if not, they’re bounced to my login page. After logging into Write500, hitting that Community login route will authenticate and take them through.

That could be improved by reading the database connection details from config.php instead, but those values are not likely to change for this setup. I’d also have preferred it if Flarum used some sort of file storage for sessions, in the same way basic Laravel does – but we can’t always get what we want 😉

Integration: Flarum Modification

So we now have the basics down – new users are registered, users can log in via my site, and users cannot reach the forum without being authenticated.

Modify Flarum to integrate the navigation with the main site, and prevent users from logging out via Flarum.

On the Flarum side, I want to change what some of the buttons in the navigation do – for instance, they shouldn’t be able to log out via Flarum (since that still leaves them logged into Write500), and I don’t want them to be able to manage their forum settings directly in Flarum.

After some trial and error, and almost giving up and hacking on the core scripts, I eventually landed on this neat hack.

In the Flarum administration section, under Appearance, you can add your own custom HTML. I used that to deploy a script block that looks like this:

That script is loaded in the page header, and executes immediately. At the end of the page, Flarum loads its app script and creates an app object. Before that happens, there’s nothing to modify, and I can’t force my JavaScript to run at the end of the page. So instead, my script polls until the app variable becomes available, and then makes its changes.

With that script in place, the forum is now about as integrated as it can get – there’s just one more thing to cover – Avatars.

Integration: Avatars

Users can upload and crop avatars to Write500.

Automatically synchronize the user’s avatar from Laravel, to Flarum.

I wanted those avatars to sync across to the forum, whenever they were created or updated on Write500’s side. I also have a “Reset to Default” which deletes the current avatar, and that deletion should be synced across as well.

Delete Avatars

This one was pretty easy. On the route that resets my avatar, I dispatch a job that basically does this:
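Sketched out, with avatarUrl() as a hypothetical helper and the token coming from the same authentication call as before:

```php
<?php
// Build the avatar resource URL for a given forum user.
function avatarUrl(string $forumUrl, int $forumId): string {
    return rtrim($forumUrl, '/') . '/api/users/' . $forumId . '/avatar';
}

// Issue an authenticated DELETE against the user's avatar resource.
function deleteAvatar(string $forumUrl, string $token, int $forumId): void {
    $ch = curl_init(avatarUrl($forumUrl, $forumId));
    curl_setopt_array($ch, [
        CURLOPT_CUSTOMREQUEST  => 'DELETE',
        CURLOPT_HTTPHEADER     => ['Authorization: Token ' . $token],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    curl_exec($ch);
    curl_close($ch);
}
```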

It authenticates to the Flarum API, and issues a delete request against the avatar resource. Simple.

Upload avatars

This one’s a bit more complicated. The API endpoint is expecting a form-data upload, which takes a bit of fiddling to get right on PHP’s end.

After a user uploads, crops and saves an avatar, it’s stored in the public/ directory with a GUID, which is saved against the User model. All I really need to do is read that file from disk and upload it to the Flarum API. Which I achieve like so:
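A sketch of that upload, using PHP's CURLFile for the multipart body – the file path, token, and User-Agent values are all placeholders:

```php
<?php
// Build the avatar endpoint URL for a given forum user.
function avatarUploadUrl(string $forumUrl, int $forumId): string {
    return rtrim($forumUrl, '/') . '/api/users/' . $forumId . '/avatar';
}

// POST the stored avatar file to Flarum as multipart form-data.
function uploadAvatar(string $forumUrl, string $token, int $forumId, string $filePath): void {
    $ch = curl_init(avatarUploadUrl($forumUrl, $forumId));
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => [
            // CURLFile handles the multipart encoding for us.
            'avatar' => new CURLFile($filePath, 'image/png', 'avatar.png'),
        ],
        CURLOPT_HTTPHEADER     => [
            'Authorization: Token ' . $token,
            // Flarum can behave differently for non-browser-like clients.
            'User-Agent: Mozilla/5.0 (Write500 avatar sync)',
        ],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    curl_exec($ch);
    curl_close($ch);
}
```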

That bit of code will blindly upload a new avatar. Flarum doesn’t require you to delete an existing avatar before uploading a new one, so I can trigger this all I want. Right now, it’s just dispatching as a job after a new avatar is saved in Write500.

I’m not sure whether or not the User-Agent is strictly required for this, but I haven’t had time to go back and test it. I do know that the Flarum API does behave differently depending on whether it thinks it’s being hit from a browser.


So that’s an exhaustive look at how I’ve enabled each of these integrations.

I’m sure that some elements of this could be improved, and if I make any substantial updates to this, I’ll likely write a follow-up post here.

Additional Considerations

This is all still a work in progress, and I’m sure I’ll find more bugs as I go along.

Right now, for instance, I think it might make more sense to cache the user’s token on the User model, and only issue a new one when the current one expires. As it stands, uploading an avatar (for instance) generates a new token that might invalidate the old one – meaning you’d have to log into the community from scratch to get access again.

There’s also notification integration. There’s an API endpoint that returns information about new notifications (@-mentions, etc), which I would want to display in the Write500 navbar.

I’d also want to take over the profile view eventually. I’m aiming to build a user profile page in Write500 to show some high-level stats, and I’d rather integrate forum activity in there, as opposed to having Flarum display any user profile page at all. Finally, I need to consider what will happen when users cancel their Write500 subscriptions.

Those are all questions for another day though!