Thoughts on Whatnot
Blog Restructure and IPFS
May 05, 2020

It’s been a long time coming, but I’ve finally reworked this blog. The design is not final, but there are three main goals that I wanted to achieve:

  1. Ditch Hugo in favor of a simpler site generation flow.
  2. Drastically simplify the presentation a la the various little tech blogs that are basically just plain text.
  3. Put everything on top of IPFS.

The first was actually the most complicated. In order to ditch Hugo, I wrote my own site generator, bog. It’s got a lot of missing features and things that need fixing and/or tweaking, but it has accomplished my goal of being extremely simple. No required file structure. No multiple levels of template files. No archetypes and taxonomies and metadata sections that are longer than the actual post. Just a couple of templates, some input files, and some output files. That’s pretty much it. Very, very, very simple.

The presentation, as I mentioned previously, is definitely not final. The index in particular looks to me a little bit too much like one of those awkward placeholder ‘buy this domain’ sites. But I’ve basically just completely stripped out everything that wasn’t entirely necessary. It has a title, the article, and a license information footer. And that’s about it. Which is how I’d like to keep it.

Finally, and most interestingly, there’s the new hosting system. Originally, this site ran on a little HTTP server that was basically just a very light wrapper around Go’s net/http package. That was replaced a while back with a Caddy instance, but ever since I discovered IPFS I’d thought that it might be interesting, since this is pretty much entirely a static site, to host the site on top of it. So, after some fiddling, I came up with a solution that I think works, and, interestingly, it actually simplifies the workflow for publishing new posts.

In the old workflow, in order to publish the site, I had a git hook set up that, when I pushed to a certain branch, would basically copy the directory with all of the files to be published to somewhere that they could be found by the Caddy server. I experimented with Caddy’s git directive, but it never quite seemed to work the way that I wanted it to, so I opted for the custom hook instead.

The main problem with setting up an IPFS-based system was that I needed some way to automatically update my DNS records for IPNS and DNSLink. The other problem was making sure that both IPFS-based clients and regular HTTP clients would get the same site. In the end, after a bit of experimentation, I decided against trying to keep a separate server running for each. Instead, this site is now being served entirely through IPFS. When an HTTP client accesses it, the reverse proxy that I’m using routes the traffic to the gateway of a local IPFS node, which also serves as the primary source for the site on that network. In other words, HTTP clients that don’t know how to handle IPFS are simply served an automatically converted version of the site. To publish, therefore, I only need to update the IPFS network’s information about the site, and the reverse proxy and gateway combo will take care of the rest.
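If the reverse proxy happens to be Caddy, that routing can be sketched in a couple of lines of Caddyfile (the gateway port 8080 is go-ipfs’s default, but both it and the server name here are illustrative):

```
deedlefake.com {
	# Hand every plain-HTTP request to the local IPFS node's gateway.
	# The gateway resolves the site's DNSLink record from the Host
	# header and serves the matching content out of IPFS.
	reverse_proxy localhost:8080
}
```

The nice property of this arrangement is that the proxy config never changes: publishing only updates what the gateway resolves, not how traffic reaches it.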

In order to accomplish that, I’ve created a small script that does the following: First, it finds the old CID of the site by using the local IPFS node on whatever machine I’m working on to resolve /ipns/deedlefake.com to an IPFS address. To do this, I use curl to access the local API’s resolve endpoint and pipe the result through jq:

old="$(curl -s -X POST "http://localhost:5001/api/v0/resolve?arg=/ipns/deedlefake.com" | jq -r '.Path // error(.Message)' | cut -d '/' -f 3)"
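For reference, a successful response from the resolve endpoint is JSON along these lines (the CID is illustrative), which is why the jq filter pulls out .Path and cut then takes the third /-separated field to get the bare CID:

```
{"Path": "/ipfs/QmExampleCID"}
```

On failure, the API returns a Message field instead, which the error(.Message) fallback surfaces so the script fails loudly rather than silently producing an empty CID.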

Then, it calculates the new CID of the site by recursively adding the root of the static site to the local node:

new="$(ipfs add -Qr --pin=false pub)"

It does a quick check to make sure that the CID has actually changed, since if it hasn’t then there’s no point in continuing, and then it updates the pin in the IPFS cluster that the machine is on. Since the machines that I write on are all part of the cluster, this also automatically updates the central server that provides the main hosting for the site.
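That change check is tiny; here’s a sketch of it as a helper function (cid_changed is my name for it here, not something from the original script):

```shell
# Succeeds only when the two CIDs differ, so the publish script can
# bail out early instead of doing a pointless pin update and DNS change.
cid_changed() {
	[ "$1" != "$2" ]
}

# In the publish script, this guards everything that follows:
#   cid_changed "$old" "$new" || exit 0
```

Since identical content always hashes to the same CID, this also makes the whole script idempotent: running it twice in a row does nothing the second time.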

ipfs-cluster-ctl pin update "$old" "$new"

Finally, it uses a CLI client for my DNS provider to update the DNSLink TXT record.
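The record in question lives on the _dnslink subdomain and just points at the new CID, so the update amounts to rewriting one TXT value along these lines (TTL and CID are illustrative):

```
_dnslink.deedlefake.com.  300  IN  TXT  "dnslink=/ipfs/QmNewExampleCID"
```

Gateways and IPFS clients both resolve the site through that record, which is what lets the same publish step serve both kinds of visitors.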

As complicated as that sounds, it’s actually significantly more convenient to work with than the old publishing system was, provided that IPFS Cluster doesn’t bug out. Again. And by putting everything in an IPFS cluster, any machine I add in order to write on automatically becomes a server for the site, too, at least for IPFS clients.