Tag Archives: aws

DigitalOcean vs AWS Lightsail

I’ve been a big fan of DigitalOcean pretty much since they launched. Their pricing was cheap and simple, and their service was a joy to use. What made it different (as compared to other VPS hosts of the time) was the sheer simplicity of setup. The first time I used it, I was up and running with an SSD-backed VPS in under a minute – and blown away, of course.

In 2016, Amazon launched Lightsail – presumably in an attempt to tap into the market for developers who need quick and cheap VPSes. It got me wondering whether it’d be worth running some of my VMs there. At the lower tiers, at least, it looked like Lightsail had the cheaper offering.

A word on features

Each VPS host offers the same thing, fundamentally: CPU, RAM, SSD storage and bandwidth. They do diverge on the added-value features – for instance, DigitalOcean includes free DNS and monitoring, whereas Amazon expects you to pay for Route53 and CloudWatch respectively.

In this case though, I’m looking purely at the cost of the servers themselves.

The Basics

For this pricing comparison, I’m first looking at the per-hour cost – since that’s what you get billed on.

[Table: per-hour pricing comparison, DigitalOcean vs Lightsail]

  • 0.5GB RAM: Out of the gate, Lightsail and DigitalOcean have the same pricing and features for their smallest instance.
  • 1GB RAM: One tier up, Lightsail works out fractionally cheaper for the same features DigitalOcean offers. But that lead doesn’t last.
  • 2GB RAM: In this category, Lightsail is slightly cheaper, but DigitalOcean offers double the CPU capacity at a similar price point.
  • 4GB RAM: Features are at parity again here, and Lightsail works out just under 10% cheaper than DigitalOcean.
  • 8GB RAM: And at this point, Lightsail is slightly cheaper (<10%), but again, DigitalOcean offers double the CPU.

This is how it works out on a monthly basis:

[Table: monthly pricing comparison, DigitalOcean vs Lightsail]

Interestingly, despite having higher per-hour costs than some Lightsail options, DigitalOcean’s advertised monthly cost is lower. They must be using a strange definition of “monthly” in calculating that – presumably fewer than a full month’s hours – so to make it fair, I’m basing this on a 744-hour (31 days × 24 hours) month. That’s the upper bound for what you’d need to budget for.

My conclusion: at these instance sizes, DigitalOcean is better value for money across the board – with the possible exception of the 4GB RAM instance, where Lightsail would save you roughly 10% over DigitalOcean.

Let’s talk about Transfer

In the next section, I’m going to compare these prices to raw AWS EC2 prices. One of the stated benefits of Lightsail is that once you outgrow your initial servers, you can migrate and extend by leveraging the AWS cloud. Which sounds nice in theory.

DigitalOcean (and Lightsail, by the looks of it) bundle a bandwidth allocation in at each price point. On DigitalOcean, that transfer number counts both incoming and outgoing traffic on the public network interface (meaning that transfer on the private network is free).

AWS EC2 has a different approach. Most transfer into EC2 is free, and transfer out (uploading from your VPS to somewhere else) is charged differently depending on the destination. If it’s to another internal AWS service you usually get a much cheaper rate, as compared to transfer to the Internet.

While DigitalOcean and Lightsail both make huge bandwidth allocations available, the assumption (on their end) is that most users won’t come close to using all of it. If every user actually maxed it out every month, the pricing would look very different.

Comparing to EC2

So let’s look at what it would cost to get the same features and bandwidth allocation directly from Amazon EC2. In this comparison, I’m basing everything off the N. Virginia region (their largest, oldest and cheapest), and I’m assuming On-Demand pricing for Linux VMs. I’ll compare it against Lightsail, which is only marginally more expensive than DigitalOcean to begin with.

[Table: Lightsail vs equivalent EC2 pricing]

Say what? Must be a calculation error, right?

EC2 charges for each component separately, and in excruciating detail. You’ll rent a Compute instance for RAM and CPU, then attach an Elastic Block Store volume to serve as the storage, and you’ll pay separately for the bandwidth. Complicated? You bet!

So in that table, the cost of each component breaks down like so:

[Table: EC2 per-component cost breakdown]

Here’s where that bundled transfer stuff comes into play. If you look at just the Instance and Storage costs, it’s about on par with Lightsail. The moment you want to serve traffic to the Internet, though, you’re paying $0.09/GB – and budgeting for terabytes of transfer every month gets really expensive. At that rate, 5TB of outbound traffic alone works out to around $460 a month.

(Incidentally, pushing everything through AWS CloudFront won’t save you either, since its rates start at $0.085/GB for transfer out to the Internet.)

In truth, the bundled transfer included by DigitalOcean and Lightsail is what makes the difference.

Conclusions

If you were already on DigitalOcean, you’re probably congratulating yourself right now for making the smarter choice. And you’d be right.

If you’re on Lightsail, there’s no real reason to move. But if you’re running a couple of smallish EC2 VMs, and are sweating the bandwidth costs every month, it might be worth switching and taking advantage of the free bundled bandwidth.

Use Amazon S3 with Laravel 5

Laravel’s Filesystem component makes it very easy to work with cloud storage drivers, and the documentation does an excellent job of covering how the Storage facade works – so I won’t repeat that here.

Instead, here’s the specifics on getting Laravel configured to use S3 as a cloud disk. These instructions are valid as of 4 January 2017.

The AWS Setup

On the AWS side, you need a few things:

  • An S3 bucket
  • An IAM user
  • An IAM policy attached to that user to let it use the bucket
  • The AWS Key and Secret belonging to the IAM user

Step 1: The S3 Bucket

Assuming you don’t already have one, of course.

This is the easiest part – log into AWS, navigate to S3, and create a bucket. Bucket names are globally unique, so pick something distinctive. For this example, I’m using write500-backups (mainly because I just migrated the automated backups for write500.net to S3):

[Screenshot: S3 Management Console]

1. Easy button to find

Then:

[Screenshot: S3 Management Console]

2. Select your region – with care

US Standard is otherwise known as North Virginia, and us-east-1. You can choose any region, but then you’ll need to use the corresponding region ID in the config file. Amazon keeps a list of region names here.

If you’re using this as a cloud disk for your app, it would make sense to locate the bucket as close as physically possible to your main servers – there are transfer time and latency benefits. In this case, I’m selecting Ireland because I like potatoes.

Step 2: The IAM User

Navigate to IAM and create a new user. AWS has updated this process recently, so it now has much more of a wizard look and feel.

[Screenshot: IAM Management Console]

1. Add a new user from the Users tab

[Screenshot: IAM Management Console]

2. Make sure the Programmatic Access is ticked, so the system generates a Key and Secret

Step 3: The IAM Policy

The wizard will now show the Permissions page. AWS offers a few template policies, which we’ll completely ignore since they grant far too much access – our user should only be able to access the specific bucket we created.

Instead, we’ll opt to attach existing policies:

[Screenshot: IAM Management Console]

3. This one

And then create a new policy:

[Screenshot: IAM Management Console]

4. And then this one

This will pop out a new tab. On that screen, select “Create Your Own Policy”.

  • Policy name: Something unique
  • Policy description: Something descriptive
  • Policy document: Click here for the sample

Paste that gist into the Policy Document section, taking care that there are no blank spaces preceding the {. Replace “bucket-name” with your actual bucket name, then save:
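If you’d rather not chase the link, a minimal policy of that shape typically looks like the following – “bucket-name” is a placeholder, and you may want to trim the actions down to what your app actually needs:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::bucket-name"]
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": ["arn:aws:s3:::bucket-name/*"]
        }
    ]
}
```

Note the two statements: listing applies to the bucket itself, while object operations apply to the `/*` paths inside it.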

[Screenshot: IAM Management Console]

If only insurance were this easy

Go back to the IAM wizard screen and click Refresh – you should see your brand new policy appear at the top of the list.

[Screenshot: IAM Management Console]

Perfect!

Tick the box, then click Next: Review, and then Create user. It’ll give you the Access key ID and Secret like so:

[Screenshot: IAM Management Console]

3. When you complete the wizard, you’ll get these.

The Access Key ID (key) and Secret access key (secret) will be plugged into the config file.

Step 4: Configure Laravel

You’ll want to edit the filesystem details in config/filesystems.php.

Near the bottom you should see the s3 block. It gets filled in like so:

[Screenshot: the filled-in s3 disk config]

Remember to set the correct region for your bucket
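For reference, the block looks roughly like this in Laravel 5 – every value below is a placeholder for the details you collected in the earlier steps:

```php
// config/filesystems.php – the 's3' entry under 'disks'
's3' => [
    'driver' => 's3',
    'key'    => 'your-access-key-id',     // Access key ID from the IAM wizard
    'secret' => 'your-secret-access-key', // Secret access key from the IAM wizard
    'region' => 'eu-west-1',              // must match your bucket's region (Ireland here)
    'bucket' => 'write500-backups',       // your bucket's name
],
```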

And done! The filesystem is now configured. If you’re making your app portable, it would be smart to use env() calls with defaults instead, but I’ll leave you to figure that one out 🙂

Step 5: Test

The simplest way to test this is to drop into a tinker session and try working with the s3 disk.
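For example, something like this (the file name and contents are arbitrary):

```php
// Run `php artisan tinker` on the server, then:
Storage::disk('s3')->put('test.txt', 'Hello from Laravel!');
Storage::disk('s3')->exists('test.txt'); // should return true
Storage::disk('s3')->get('test.txt');    // should return the file's contents
```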

[Screenshot: tinker session on the server]

And you should see the corresponding file in the S3 interface itself:

[Screenshot: S3 Management Console]

Step 6: parrot.gif

Now that you have cloud storage configured, you should use it!

First (and this will take minimal effort), you should set up the excellent spatie/laravel-backup package. It can do file and database backups, health monitoring and alerts, and can be easily scheduled. Total win.
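For what it’s worth, pointing that package at your new disk is typically just a matter of listing the disk in its published config – the file and key below are from the version current at the time of writing, so double-check against the version you install:

```php
// config/backup.php (published by spatie/laravel-backup)
'destination' => [
    'disks' => [
        's3',
    ],
],
```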

You can also have Laravel do all its storage there. Just change the default disk:

[Screenshot: config/filesystems.php in Cloud9]

config/filesystems.php
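That change, sketched out – the env() wrapper is optional, and FILESYSTEM_DRIVER is just a conventional variable name, not a requirement:

```php
// config/filesystems.php – near the top of the file
'default' => env('FILESYSTEM_DRIVER', 's3'),
```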

This has the benefit of ensuring that even if your server crashes/dies horribly, nothing gets lost. You can also have multiple server instances all talking to the same s3 disk.

In my case, I’m using S3 as the storage for regular backups from write500. I’ll also use the same connection and attempt to publish my internal statistics dumps as CSV directly to S3 – meaning I can pull the data in via Domo’s Amazon S3 connector. I can then undo the SFTP setup I created previously, further securing my server.