Tag Archives: amazon

Charting the Amazon Sci-Fi jungle

Probably one of the more inspiring books I’ve read lately is Chris Fox’s Write to Market – practical, grounded advice for building a writing career in today’s landscape. The principles it contains are solid, the first being: find an under-served market you can focus your efforts on.

It makes complete sense from a supply/demand point of view – if you know ahead of time what readers are interested in buying, and it happens to align with what you enjoy writing, you can build a much clearer picture of what you’ll need to do to succeed. Modern content production has changed, after all.

The book got me thinking about how I might apply it to my own ambitions, and it became pretty clear that I’d have to take a very detailed look at the Sci-Fi book market on Kindle. Amazon accounts for a solid portion of global eBook sales, and should serve as a fantastic indicator for what’s trending.

So last night, I worked on exactly that – first, acquiring a snapshot view of the Top 100 books in each of the 21 sub-genres of Science Fiction, and how they relate to the global sales rank. I’ve got some information to share there, as well as some related insights on the composition of the market.

The Theory

The theory is relatively simple: Amazon lists over five million eBooks on Kindle (depending on what numbers you trust, I guess), and they’re all globally ranked on what Amazon calls their Best Seller rank (I call it ABS for short).

One book can exist in multiple categories – it can have a rank in the niche it serves (for instance, Science Fiction about Genetic Engineering), as well as a global ABS rank. The relationship between these tells you how active a niche is.

For instance, if the top 5 books in a niche also appear on the top 10 ABS list, there’s a large amount of demand there. If books #80–100 in that same niche have ABS ranks in the high thousands, that indicates under-served demand: people are buying books in that niche, but for whatever reason are not spending money on some of the lower-ranked titles currently available.

This is the fertile ground – you know you have people heavily interested in a particular niche, and they are likely ready to buy anything new and interesting that might land in that category.

If the top 5 books in a niche are in the high-thousands, that means there’s very little demand for that niche. But if all Top 100 books in a niche appear in the top 500 ABS rank, that’s most likely an impenetrable market – and a wildly popular niche.
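The heuristic above can be sketched in code. The thresholds here are my own illustrative guesses, not anything Amazon publishes:

```python
def classify_niche(top5_abs, tail_abs):
    """Rough demand classification for a sub-genre.

    top5_abs: ABS ranks of the niche's top 5 books
    tail_abs: ABS ranks of the niche's lower-ranked books (roughly #80-100)
    Thresholds are illustrative only.
    """
    if max(top5_abs) > 5000:
        # Even the niche's best sellers rank poorly globally.
        return "low demand"
    if max(tail_abs) <= 500:
        # The entire Top 100 sells strongly: wildly popular, hard to enter.
        return "impenetrable"
    if max(top5_abs) <= 100 and min(tail_abs) >= 5000:
        # Strong top end, weak tail: readers are buying, shelf is thin.
        return "under-served"
    return "healthy"
```

Feed it the niche rank vs ABS rank pairs from a snapshot and it flags where the fertile ground might be.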

For the time being, anyway – the ground here shifts constantly as reader tastes evolve. Just like fashion, trends come and go. So despite all the charts I’m showing you in this post, they could be out of date as little as six months from now.

The Niches in Sci-Fi

I’m interested primarily in Sci-Fi, and so focused my analysis there. Things might look different in the other markets, but since I’m not likely to get into Suspense or Young Adult any time soon, I figured I’d give those a miss 😉

Amazon lists 21 niches (or sub-genres) under Sci-Fi:

  • Adventure
  • Alien Invasion
  • Alternative History
  • Anthologies & Short Stories
  • Classics
  • Colonization
  • Cyberpunk
  • Dystopian
  • First Contact
  • Galactic Empire
  • Genetic Engineering
  • Hard Science Fiction
  • LGBT
  • Metaphysical & Visionary
  • Military
  • Post-Apocalyptic
  • Space Exploration
  • Space Opera
  • Steampunk
  • Time Travel
  • TV, Movie, Video Game Adaptations

For each one, I set about gathering specific data:

  • The list of top 100 books in that niche, based on the niche’s own performance
  • For each book, what the global ABS rank is, and who the merchant is
  • A timestamp, since I’m retrieving a fresh snapshot once a day
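Stored as a flat file, a day’s snapshot looks something like this (the fetching and parsing of the store pages is out of scope here; this is just the recording side, with a file layout of my own choosing):

```python
import csv
import datetime


def record_snapshot(rows, path):
    """Append one day's Top-100 snapshot to a CSV file.

    `rows` is a list of dicts with keys: niche, niche_rank, abs_rank, merchant.
    Each row is stamped with today's date so daily snapshots can be compared.
    """
    stamp = datetime.date.today().isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for row in rows:
            writer.writerow([stamp, row["niche"], row["niche_rank"],
                             row["abs_rank"], row["merchant"]])
```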


Throw them all together in a chart, and you end up with something like this:

PBIDesktop_2017-07-03_20-59-51.png

Enlightening, right? Let’s rather go by genre, starting with the most hotly-contested one right now – Adventure:

PBIDesktop_2017-07-03_21-14-05.png

This is the dashboard of a very healthy sub-genre.

The top 20 books all have ABS ranks below 1000, with the top 5 being below 100 – these books are selling very well, and there is clear demand for this sub-genre right now. The market is also very well served at the moment – none of the ABS ranks are above 10,000 – so it’s unlikely that a first-time author, or someone without major existing traction, will be able to break in here right now.

Now let’s look at a less-contested genre – Hard Science Fiction.

PBIDesktop_2017-07-03_21-19-52.png

This is more like it! The Top 20 books are all under the 2000 ABS rank, and the book sitting at #40 is double that. The category bottoms out at over 12K, so if you’re looking for a place to start, this could be a good sub-genre to do it in.

Finally, the most uncontested sub-genre at the moment – LGBT.

PBIDesktop_2017-07-03_21-22-34.png

There are no official numbers for this, but the #1 book being at ABS rank 1973 would suggest that it’s selling around 100 copies a day. By comparison, the #1 book in Adventure should be doing around 6000 copies/day. This is according to TCK Publishing’s calculator.

100 copies/day on the top end is not much in terms of demand, so while you could almost definitely rank in this sub-genre, it probably won’t be worth the time investment right now.
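As a rough illustration (this is not TCK Publishing’s actual formula), you can fit a power-law curve through the two data points quoted above – ABS rank 1973 at ~100 copies/day, and the #1 Adventure book at roughly rank 4 doing ~6000 copies/day – and use it to ballpark other ranks:

```python
import math

# Two anchor points taken from the text; the rank-4 figure is my assumption
# for the #1 Adventure book (it sits at #4 on the overall Best Seller list).
r1, s1 = 1973, 100
r2, s2 = 4, 6000

# Fit sales/day = a * rank^(-b) through both points.
b = math.log(s2 / s1) / math.log(r1 / r2)
a = s1 * r1 ** b


def est_sales_per_day(abs_rank):
    """Very rough sales/day estimate from an ABS rank."""
    return a * abs_rank ** -b
```

By construction the curve passes through both anchors; anything in between is a guess, but it’s good enough to compare niches at a glance.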

All the charts above are looking at the total market though, regardless of whether or not titles were independently published. Let’s get into that next.

Independent publishing on Amazon

I published first versions of these on the Dragon Writers group, but now that I have updated information and time to properly process it, here’s a snapshot of how the Sci-Fi genres break down as of today.

PBIDesktop_2017-07-03_21-37-53.png

The vast majority of Kindle titles in the Sci-Fi genre are independently published – “Amazon Digital Services LLC” is the business name used there.

A word of warning on this: That same business name is used by Amazon itself on occasion – so far I’ve seen it used for special store listings of old, republished books. Unfortunately that’s just the nature of a project like this – the data is not going to be 100% accurate.

Other than the LLC, there are a few big names in this space, but they account for very few of the titles published.

But then there’s the quality-vs-quantity argument. Are independently-published novels doing better (or worse) than those published by traditional houses?

This one’s a tricky question to answer, so it’ll help to look at it in parts.

Let’s go with all titles under Sci-Fi with an ABS rank of 1000 or better. At rank 1000, you’re selling around 185 books/day – it’s an arbitrary cutoff, sure, but we need to start somewhere.

For each of the sub-genres that have books in that ABS range, what proportions were published independently vs traditionally?

PBIDesktop_2017-07-03_21-59-05.png

It’s no surprise that Traditional dominates the Classics sub-genre – that’s literally the category in which traditional houses re-package existing, previously published books.

But look at the rest – entire sub-genres are being dominated by independently-published titles! This is the encouraging part – on the largest eBook retail platform in the world, it’s possible for independently-published authors to dominate entire sub-genres.

What does the top end look like? Let’s take the top 100 books across all Sci-Fi sub-genres, sorted by ABS rank. The #1 Sci-Fi book is ranked 4th on the Best Seller list, and the #100 book comes in at rank 1429.

PBIDesktop_2017-07-03_23-30-26.png

That’s the most encouraging chart I’ve produced yet. Across the top 100 titles at the moment, 88 are independent titles – but more than that, there’s no clear bias attributable to the publishing method.

Or in other words: It doesn’t matter if you’re independently or traditionally published – both methods have a chance of reaching the top, and ultimately reaching customers.

Conclusions

None of the data above looks at sales or revenue – a lot is being inferred by the limited ranking information that Amazon makes available. For the most comprehensive report that actually looks at sales, AuthorEarnings is the best place to go.

The intention of this post wasn’t to dive into the industry as a whole, but rather to illustrate two things:

  1. There is opportunity here, possibly more so than via traditional publishing channels. The markets are wide open to new entrants, and while the specific opportunities shift over time, they are always there.
  2. In the eBook space, it doesn’t matter whether you were published by a big name or under your own name – both books get equal treatment on the platform, and customers end up making the choices.

Publishing is definitely changing, and I’m excited to see where it goes next.

DigitalOcean vs AWS Lightsail

I’ve been a big fan of DigitalOcean pretty much since they launched. Their pricing was cheap and simple, and their service was a joy to use. What made it different (as compared to other VPS hosts of the time) was the sheer simplicity of setup. The first time I used it, I was up and running with an SSD-backed VPS in under a minute – and blown away, of course.

In 2016, Amazon launched Lightsail – presumably in an attempt to tap into the market for developers who need quick and cheap VPSes. It got me thinking whether or not it’d be worth it to actually run some of my VMs there. At the lower tiers, at least, it looked like Lightsail had a cheaper offering.

A word on features

Each VPS host offers the same thing, fundamentally: CPU, RAM, SSD storage and bandwidth. They do diverge on the added-value features – for instance, DigitalOcean includes free DNS and monitoring, whereas Amazon expects you to pay for Route53 and CloudWatch respectively.

In this case though, I’m looking purely at the cost of the servers themselves.

The Basics

For this pricing comparison, I’m first looking at the per-hour cost – since that’s what you get billed on.

2017-02-11 02_33_07-VPS.xlsx - Excel.png

  • 0.5GB RAM: Out of the gate, Lightsail and DigitalOcean have the same pricing and features for their smallest instance.
  • 1GB RAM: One tier up, Lightsail actually works out fractionally cheaper for the features offered by DigitalOcean. But that lead doesn’t last.
  • 2GB RAM: In this category, Lightsail is slightly cheaper, but DigitalOcean offers double the CPU capacity at a similar price point.
  • 4GB RAM: Features are on par here, but Lightsail works out just under 10% cheaper than DigitalOcean.
  • 8GB RAM: And at this point, Lightsail is slightly cheaper (<10%), but again, DigitalOcean offers double the CPU.

This is how it works out on a Monthly basis:

2017-02-11 02_33_22-VPS.xlsx - Excel.png

Interestingly, despite having higher per-hour costs than some Lightsail options, DigitalOcean’s advertised monthly cost is lower. They must be using a shorter definition of “monthly” in calculating that, so to make it fair, I’m basing this on a 744-hour (solid 31-day) month. That’s the upper bound for what you’d need to budget for.
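The arithmetic is trivial but worth making explicit. Using DigitalOcean’s (historical) $5/month, $0.007/hour tier as the example, a full 744-hour month at the hourly rate overshoots the advertised monthly price:

```python
# Worst-case monthly bill at hourly billing, vs the advertised monthly price.
# Example figures: DigitalOcean's old $5/month droplet billed at $0.007/hour.
HOURS_IN_31_DAYS = 31 * 24  # 744 hours

hourly_rate = 0.007
advertised_monthly = 5.00

worst_case_monthly = hourly_rate * HOURS_IN_31_DAYS  # ~$5.21
```

Hence budgeting off 744 hours: it’s the honest ceiling, even if the provider caps your bill below it.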

My conclusion: at these instance sizes, DigitalOcean is better value for money across the board, with the possible exception of the 4GB RAM instance – there, Lightsail saves you roughly 10% over DigitalOcean.

Let’s talk about Transfer

In the next section, I’m going to compare these prices to raw AWS EC2 prices. One of the stated benefits of Lightsail is that once you outgrow your initial servers, you can migrate and extend by leveraging the AWS cloud. Which sounds nice in theory.

DigitalOcean (and Lightsail, by the looks of it) bundles a bandwidth allocation in at each price point. On DigitalOcean, that Transfer number counts incoming and outgoing traffic on the public network interface (transfer on the private network is free).

AWS EC2 has a different approach. Most transfer into EC2 is free, and transfer out (uploading from your VPS to somewhere else) is charged differently depending on the destination. If it’s to another internal AWS service you usually get a much cheaper rate, as compared to transfer to the Internet.

While DigitalOcean and Lightsail both make huge bandwidth allocations available, the assumption (on their end) is that most users won’t actually use all of that bandwidth. If users did actually manage to max it out every month, the pricing would be very different.

Comparing to EC2

So let’s look at what it would cost to get the same features and bandwidth allocation directly from Amazon EC2. In this comparison, I’m basing everything off the N. Virginia region (their largest, oldest and cheapest), and I’m assuming On-Demand pricing for Linux VMs. I’ll compare it against Lightsail, which is only marginally more expensive than DigitalOcean to begin with.

2017-02-11 02_30_52-VPS.xlsx - Excel.png

Say what? Must be a calculation error, right?

EC2 charges for each component separately, and in excruciating detail. You’ll rent a Compute instance for RAM and CPU, then attach an Elastic Block Store volume to serve as the storage, and you’ll pay separately for the bandwidth. Complicated? You bet!

So in that table, the cost of each component breaks down like so:

2017-02-11 02_31_43-VPS.xlsx - Excel.png

Here’s where that bundled transfer stuff comes into play. If you look at just the Instance and Storage costs, it’s about on-par with Lightsail. The moment you want to serve traffic to the Internet, though, you’re paying $0.09/GB – and budgeting to be able to do terabytes worth of transfer every month is really expensive.
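To see how quickly egress dominates, here’s the cost model in miniature. The $0.09/GB rate is the one quoted above; the instance and storage figures are placeholders for whatever tier you’re comparing:

```python
def ec2_monthly_cost(instance_usd, ebs_usd, egress_gb, egress_rate=0.09):
    """EC2-style monthly bill: instance + EBS storage + Internet egress.

    egress_rate is the $/GB price for transfer out to the Internet
    ($0.09/GB at the time of writing, for the N. Virginia region).
    """
    return instance_usd + ebs_usd + egress_gb * egress_rate


# Actually using a 2 TB bundled-transfer allowance, priced as raw EC2 egress:
transfer_only = 2 * 1024 * 0.09  # ~$184/month before you've rented anything
```

That single line is the whole story: the bandwidth alone can cost more than ten times the VPS it’s attached to.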

(Incidentally, pushing everything to AWS CloudFront won’t save you, since they start at $0.085/GB for transit to the Internet).

In truth, the bundled transfer included by DigitalOcean and Lightsail is what makes the difference.

Conclusions

If you were already on DigitalOcean, you’re probably congratulating yourself right now for making the smarter choice. And you’d be right.

If you’re on Lightsail, there’s no real reason to move. But if you’re running a couple of smallish EC2 VMs, and are sweating the bandwidth costs every month, it might be worth switching and taking advantage of the free bundled bandwidth.

Use Amazon S3 with Laravel 5

Laravel’s Filesystem component makes it very easy to work with cloud storage drivers, and the documentation does an excellent job of covering how the Storage facade works – so I won’t repeat that here.

Instead, here’s the specifics on getting Laravel configured to use S3 as a cloud disk. These instructions are valid as of 4 January 2017.

The AWS Setup

On the AWS side, you need a few things:

  • An S3 bucket
  • An IAM user
  • An IAM policy attached to that user to let it use the bucket
  • The AWS Key and Secret belonging to the IAM user

Step 1: The S3 Bucket

Assuming you don’t already have one, of course.

This is the easiest part – log into AWS, navigate to S3, and create a bucket with any given name. For this example, I’m using write500-backups (mainly because I just migrated the automated backups for write500.net to S3):

2017-01-04 00_33_39-S3 Management Console.png

1. Easy button to find

Then:

2017-01-04 01_41_14-S3 Management Console.png

2. Select your region – with care

US Standard is otherwise known as North Virginia, and us-east-1. You can choose any region, but then you’ll need to use the corresponding region ID in the config file. Amazon keeps a list of region names here.

If you’re using this as a cloud disk for your app, it would make sense to locate the bucket as close as physically possible to your main servers – there are transfer time and latency benefits. In this case, I’m selecting Ireland because I like potatoes.

Step 2: The IAM User

Navigate to IAM and create a new user. AWS has made some updates to this process recently, so it now has much more of a wizard look and feel.

2017-01-04 00_37_17-IAM Management Console.png

1. Add a new user from the Users tab

2017-01-04 00_37_33-IAM Management Console.png

2. Make sure the Programmatic Access is ticked, so the system generates a Key and Secret

Step 3: The IAM Policy

The wizard will now show the Permissions page. AWS offers a few template policies we’ll completely ignore, since they grant far too much access. We need our user to only be able to access the specific bucket we created.

Instead, we’ll opt to attach existing policies:

2017-01-04 01_45_53-IAM Management Console.png

3. This one

And then create a new policy:

2017-01-04 01_46_01-IAM Management Console.png

4. And then this one

This will pop out a new tab. On that screen, select “Create Your Own Policy”.

  • Policy name: Something unique
  • Policy description: Something descriptive
  • Policy document: Click here for the sample

Paste that gist into the Policy Document section, taking care that there are no blank spaces preceding the {. Replace “bucket-name” with your actual bucket name, then save:
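In case the gist ever goes stale: the policy document looks something like this – a minimal sketch scoping access to a single bucket, with an action list that’s illustrative rather than exhaustive (trim it to what your app actually needs):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::bucket-name"]
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": ["arn:aws:s3:::bucket-name/*"]
        }
    ]
}
```

Note that listing applies to the bucket ARN itself, while object operations apply to `bucket-name/*` – both statements are needed.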

2017-01-04 01_50_40-IAM Management Console.png

If only insurance were this easy

Go back to the IAM wizard screen and click Refresh – you should see your brand new policy appear at the top of the list.

2017-01-04 00_40_36-IAM Management Console.png

Perfect!

Tick the box, then click Next: Review, and then Create user. It’ll give you the Access key ID and Secret like so:

2017-01-04 00_40_50-IAM Management Console.png

3. When you complete the wizard, you’ll get these.

The Access Key ID (key) and Secret access key (secret) will be plugged into the config file.

Step 4: Configure Laravel

You’ll want to edit the filesystem details at config/filesystems.php

Near the bottom you should see the s3 block. It gets filled in like so:

fixed.png

Remember to set the correct region for your bucket
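For reference, the s3 block follows this shape in Laravel 5 (the env() variable names here are my own convention – use whatever matches your .env):

```php
// config/filesystems.php
's3' => [
    'driver' => 's3',
    'key'    => env('AWS_KEY'),           // Access key ID from the IAM wizard
    'secret' => env('AWS_SECRET'),        // Secret access key
    'region' => env('AWS_REGION', 'eu-west-1'),  // must match your bucket's region
    'bucket' => env('AWS_BUCKET'),
],
```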

And done! The filesystem is now configured. If you’re making your app portable, it would be smart to use env() calls with defaults instead, but I’ll leave you to figure that one out 🙂

Step 5: Test

The simplest way to test this is to drop into a tinker session and try working with the s3 disk.
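Something like this, using the Storage facade (the file name and contents are arbitrary):

```php
// php artisan tinker
Storage::disk('s3')->put('hello.txt', 'Hello from Laravel!');
Storage::disk('s3')->exists('hello.txt'); // true if the upload worked
Storage::disk('s3')->get('hello.txt');    // round-trips the contents back
```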

2017-01-04 01_57_16-forge@write500_ ~_write500.net.png

And you should see the corresponding file in the S3 interface itself:

2017-01-04 01_57_43-S3 Management Console.png

Step 6: parrot.gif

Now that you have cloud storage configured, you should use it!

First (and this will take minimal effort), you should set up the excellent spatie/laravel-backup package. It can do file and db backups, health monitoring, alerts, and can be easily scheduled. Total win.

You can also have Laravel do all its storage there. Just change the default disk:

2017-01-04 02_01_34-write500 - Cloud9.png

config/filesystems.php
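Concretely, that change is just the default key near the top of the same file – wrapping it in env() keeps it portable, per the earlier note (the variable name is my own choice):

```php
// config/filesystems.php
'default' => env('FILESYSTEM_DRIVER', 's3'),
```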

This has the benefit of ensuring that even if your server crashes/dies horribly, nothing gets lost. You can also have multiple server instances all talking to the same s3 disk.

In my case, I’m using S3 as the storage for regular backups from write500. I’ll also use the same connection and attempt to publish my internal statistics dumps as CSV directly to S3 – meaning I can pull the data in via Domo’s Amazon S3 connector. I can then undo the SFTP setup I created previously, further securing my server.