From Sea to Sky
I migrated my website to a new hosting service for the first time in a while.
For the last six years or so, I’ve used DigitalOcean as my web host. They’re pretty great; a single-core VPS is like $6-$7 per month (used to be $5, but that’s inflation) and dead simple to use.
I’ve never really figured out how to deploy software. When I was using Middleman, I never had to: my deploy sequence was to compile the site on my personal machine, then rsync the build folder up to the server. It was all precompiled static assets that nginx just streamed directly to the client upon request, nothing more. So when I switched to using a real live application as my site engine, I didn’t really know how to send my program to the server and have it run there.
For a while I would deploy the source code of my application to the server, then install the relevant language tools and use those to compile and run it directly on that machine. This rapidly soured on me: it turns out language environments drift over time, and if I didn’t maintain exactly the same version set on my server as on my laptop, things would start behaving weirdly and I’d have no idea why.
I never really bothered learning how to do better. I knew technologies for it existed; when I switched my website over to Elixir, its documentation and toolchain have a whole section dedicated specifically to building a directly-runnable program and shipping it to other machines to run. I toyed with this, but gave up. I liked being able to say mix serve, which required the development tools, and figuring out how to manage secrets and environments without mix was just frustrating.
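For reference, what that section of the docs describes is Elixir’s release tooling, which looks roughly like this (a command fragment, not something I run anymore; my_app stands in for the real application name):

```shell
# Build a self-contained, runnable bundle that needs no dev tools on the target:
MIX_ENV=prod mix release

# Copy _build/prod/rel/my_app to the server, then on that machine:
_build/prod/rel/my_app/bin/my_app start
```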
Supposedly this dilemma is what things like Docker are supposed to solve, but that requires learning how to set up a Docker image that has enough of a Linux installation to run your programs, and for anything more complicated than one program running on one port with one bridge to the real filesystem, it’s …not fun.
People I follow on Twitter started talking about a company called Fly.io, and out of idle boredom I started reading their docs and saw they had a page dedicated to sending them an Elixir/Phoenix app to host. I of course read it, and was delighted to find out all I had to do was run a quick command to spin up their template, and they would handle creating a Docker image on my machine and sending it to their fleet for me. This was exactly what I wanted! So I followed the tutorial (a few times, to get past my errors), and now that’s what hosts my site. Updates are just a fly deploy, nothing more. It’s incredibly freeïng.
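The whole loop, roughly — fly launch and fly deploy are real flyctl commands, though the comments reflect my understanding rather than Fly’s exact internals:

```shell
fly launch    # once: detects the Phoenix app and generates a Dockerfile and fly.toml
fly deploy    # every update after: rebuilds the Docker image and ships it to the fleet
```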
Perfection is an ever-moving target. Surely you didn’t think you were going to make it to the end of one of my articles without any pitfalls, drawbacks, or disappointments!
Like many things in computers, websites can be divided into two categories: code and data. The code of my website is an Elixir project that has to be recompiled every time I make updates to the compute logic or, because of the way the Phoenix framework is structured, the HTML, CSS, or (mumbling)-Script assets that form the main structure of each page you visit.
But you know what doesn’t affect the way my Elixir program serves up content upon request? The actual content it serves. My articles and associated media aren’t part of the build process; the compiled program loads them from disk every time a browser asks for them. I shouldn’t need to restart, or worse, rebuild my entire web application just because I edited one of my blog articles!
Unfortunately, the way Fly works is that it bundles my entire project directory into a single Docker image, and ships that off as a fairly self-contained bundle to run. My content gets bundled into this image, so updating the content rebuilds the whole image.
There is, of course, a solution, and Fly already has it ready to go. I could provision a dedicated storage volume, put my content in that instead of inside my application image, and tell my application to load from the Fly volume when it’s deployed. Elixir, like most languages adapted for internet deployments, can distinguish between when it’s being run on a developer’s laptop vs a real server, and change its configuration accordingly. This would be fairly straightforward.
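Sketched in Elixir’s runtime configuration — a hypothetical fragment, with :my_site and the CONTENT_DIR variable as illustrative names rather than my actual setup:

```elixir
# config/runtime.exs is evaluated when the release boots, not at compile time,
# so one build can read different settings on a laptop vs. on Fly.
import Config

if config_env() == :prod do
  # On Fly, point at the mounted volume; elsewhere, fall back to a local path.
  config :my_site, content_dir: System.get_env("CONTENT_DIR", "/data/content")
end
```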
… Except the data on Fly volumes can only be modified by Fly applications. I can’t ship new article contents from my laptop to a volume, and have my website pick it up next time it looks at the disk. I’d have to somehow add logic to my application to allow foreign connections to submit new content that it would then write to disk, and I definitely don’t want to try doing that.
I think the “easiest” thing to do would be to write a webhook processor in my application, tell GitHub to request that hook URL every time I push to my content repository (which is separate from the application, precisely because they don’t actually need to be bundled!), and have my application pull my content repository to the volume. This would require ensuring that I ship git in my application image, but it’d probably work?
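The disk side of that hook is simple enough to sketch. Below, a local bare repository stands in for the GitHub content repo, and data/content stands in for a path on the mounted volume; the actual hook endpoint, signature verification, and error handling are all left out:

```shell
# Stand-in setup: a local bare repo plays the role of the GitHub content repo.
git init --quiet --bare content.git
git clone --quiet content.git scratch
(
  cd scratch
  git -c user.email=me@example.com -c user.name=me \
    commit --allow-empty --quiet -m "new post"
  git push --quiet origin HEAD
)

# The hook's actual job: clone on the first run, fast-forward pull afterwards.
CONTENT_DIR="data/content"   # stands in for a directory on the mounted volume
if [ -d "$CONTENT_DIR/.git" ]; then
  git -C "$CONTENT_DIR" pull --ff-only --quiet
else
  git clone --quiet content.git "$CONTENT_DIR"
fi
```

The first request clones; every later one is just a fast-forward pull, which is exactly the granularity a typo fix deserves.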
Or I could just rebuild my entire program every time I fix a typo, in a move sure to infuriate every programmer who ever had to work with whole entire computer systems smaller than the Docker bundle I ship off with each rebuild.
Imperfections and less-than-ideal solutions are all around us. We happen to have a truly obscene amount of compute and network resources available to us in the post-industrial world, so I just do not feel driven to pursue smaller and harder improvements that aren’t in my particular area of hyperfocus.
Sometimes it’s nice to just stop thinking about the machine and publish some stuff.
I do not pay for nor am I paid by Fly. I don’t work for them or speak for them. I simply had a nice time using their free service to accomplish a goal. Etc etc.