I’ve been playing around with a few different technologies for a while, and wanted to bring them together under a single umbrella, my “toy” domain Geotrocities. I’ll add details of more projects as I get further along, but first I wanted to get a nice foundation for producing content into place. That led me down a number of rabbit holes, as one might expect from the great hydra that is every tech problem ever. Ultimately, I decided to go with Hugo. It’s not my first time using it; I’ve got two other very lightly maintained websites running there. But I’ve never gotten very good at Hugo, and I felt like I really needed to prove that this weekend while listening to Twitch StreamAid. So I got started: I went to Hugo, found a theme I found appealing, installed it, and then the troubles began. I swear that, in an effort to stay minimal, the documentation contains infinite circular dependencies.

I spent hours trying to figure out how to have a separate folder linked from the header and listed. I never quite succeeded, although I’m not sure if the issue is an idiosyncrasy of the Terminal theme, me being daft, or just not really a thing that’s done in Hugo. I got to the point where I was able to list the folder, but the primary folder had doubled lists. Except it actually didn’t: the duplicates went away when I rebuilt the site, so my testing was probably bad anyway. The change for that was in the config.toml file, where I switched contentTypeName from posts to projects.
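For reference, that setting lives in the theme’s params block of config.toml — at least in the version of Terminal I’m using; the exact location may vary:

```toml
[params]
  # Terminal theme setting: which content directory drives the main list.
  # Was "posts"; switched to "projects".
  contentTypeName = "projects"
```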

I then landed on using Taxonomies to build the list, which largely works. Just need to get that link into place.
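A minimal sketch of what that looks like, assuming a custom “projects” taxonomy declared in config.toml (the names are my own choices, nothing theme-mandated):

```toml
# config.toml: declare taxonomies as singular = "plural" pairs
[taxonomies]
  tag = "tags"
  project = "projects"
```

Hugo then generates a list page under /projects/ for any content whose front matter includes something like `projects = ["geotrocities"]`, and that list URL is what gets linked from the header.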


A while ago I stumbled upon Victoria Drake’s brilliant writeup of automating Hugo deployments with Make. I thought it was a super-cool idea, so decided to model my effort off that. And then didn’t, until today. I decided that I wanted to play with the AWS tools a bit, so needed to make a few adjustments to the approach, but so far I’ve left the emojis in the Makefile.
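One note before the Makefile: it references several variables that are expected to be defined at the top (or passed on the command line). The values below are illustrative placeholders, not my real ones:

```make
# Assumed variable definitions; values are placeholders
DESTDIR             := public
DIR                 := $(shell pwd)
HUGO_VERSION        := 0.80.0
EMAIL               := you@example.com
NAME                := Your Name
S3_DESTINATION_PATH := your-bucket-name
DISTRIBUTION_ID     := YOUR_DISTRIBUTION_ID
```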


OPTIMIZE = find $(DESTDIR) -not -path "*/static/*" \( -name '*.png' -o -name '*.jpg' -o -name '*.jpeg' \) -print0 | \
xargs -0 -P8 -n2 mogrify -strip -thumbnail '1000>'

.PHONY: all
all: clean get push build test deploy invalidate

# removing get_repository
#.PHONY: get_repository
#get_repository:
#	@echo "🛎 Getting Pages repository"
#	git clone https://github.com/victoriadrake/victoriadrake.github.io.git $(DESTDIR)

.PHONY: clean
clean:
	@echo "Cleaning old build"
	cd $(DESTDIR) && rm -rf *

.PHONY: get
get:
	@echo "Checking for hugo"
	@if ! [ -x "$$(command -v hugo)" ]; then \
		echo "Getting Hugo"; \
		wget -q -P tmp/ https://github.com/gohugoio/hugo/releases/download/v$(HUGO_VERSION)/hugo_extended_$(HUGO_VERSION)_Linux-64bit.tar.gz; \
		tar xf tmp/hugo_extended_$(HUGO_VERSION)_Linux-64bit.tar.gz -C tmp/; \
		sudo mv -f tmp/hugo /usr/bin/; \
		rm -rf tmp/; \
		hugo version; \
	else \
		echo "Hugo exists"; \
	fi

.PHONY: build
build:
	@echo "Generating site"
	hugo --gc --minify -d $(DESTDIR)
	@echo "Optimizing images"
	$(OPTIMIZE)

.PHONY: test
test:
	@echo "Testing HTML"
	docker run -v $(DIR)/$(DESTDIR)/:/mnt 18fgsa/html-proofer mnt --disable-external

.PHONY: push
push:
	@echo "Preparing commit"
	git config user.email "$(EMAIL)" \
	&& git config user.name "$(NAME)" \
	&& git status \
	&& git add . \
	&& git commit -m "blog update" \
	&& git push origin master
	@echo "Commit complete!"

.PHONY: deploy
deploy:
	@echo "Preparing push to s3 live site"
	cd $(DESTDIR) \
	&& aws s3 cp . s3://$(S3_DESTINATION_PATH) --recursive
	@echo "Site is deployed!"

.PHONY: invalidate
invalidate:
	@echo "Invalidating pushed files"
	aws cloudfront create-invalidation --distribution-id $(DISTRIBUTION_ID) --paths "/*"

I was able to steal fairly thoroughly from Victoria’s post, but got to make some cool modifications too.

Because I’m using S3 instead of GitHub Pages, and I wanted the entire site to reside in a git repo, I needed a way to reconcile capturing the site with getting everything into version control. To implement this I split the deploy command from the push: push gets the source up to git, and deploy pushes the built site to S3. I decided to try AWS CodeCommit instead of GitHub for no reason other than playing with CodeCommit. I also realized I should move the push step to before build so that I’m not storing the static build in version control; I’ve gone ahead with that change, so it’s reflected in the code above.

  1. The test and build functions are pretty nice. I needed to change GITHUB_WORKSPACE to something that worked in my environment, so I just grabbed the CWD.
  2. Split push from deploy.
  3. Removed the get_repository step. This is partly due to not wanting to blow away code/template changes when doing the push.
  4. Thinking I should yank the get hugo part, but maybe I’ll desire that portability at some point (container build?)
  5. Added a bit to invalidate my CloudFront cache. The implementation has a couple flaws for a larger site.
  • it’s a bit of a cudgel: it invalidates everything, which could impact performance on a higher-traffic site
  • a better approach would be to invalidate just the new content, using something like git show --name-only --oneline HEAD | grep ^public | xargs dirname | sort | uniq
  • and just pipe that into the invalidation paths
  • of course … that won’t work now that I’ve yanked public out of the git push…
  • but my way only submits a single path ("/*") per run, so it should be good for around 1,000 runs a month before generating a fee
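Here’s a sketch of what that targeted invalidation pipeline could look like, assuming the built site were still committed under public/ (it isn’t anymore in my setup, so this is purely illustrative). The printf simulates the git show output so the pipeline is runnable anywhere:

```shell
#!/bin/sh
# Turn changed files under public/ into CloudFront invalidation paths.
# In a real repo the input would come from:
#   git show --name-only --oneline HEAD
# Here we simulate that output with printf.
printf 'public/posts/a/index.html\npublic/posts/b/index.html\nREADME.md\n' \
  | grep '^public/' \
  | xargs -n1 dirname \
  | sort -u \
  | sed 's|^public||; s|$|/*|'
# prints:
#   /posts/a/*
#   /posts/b/*
```

Each resulting path could then be handed to `aws cloudfront create-invalidation --distribution-id $(DISTRIBUTION_ID) --paths …` instead of the blanket "/*".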


I’m not happy with the automated git. My buddy Adam has beaten it into me to be quite explicit about my commits, so making this a cudgel with git add . will probably be frowned upon. So then I think: “Rick, perhaps all the commits should be manual,” and I reply, “Sure, me. But then why move the push into make at all? Feels like an unnatural context switch to me. Maybe you should just leave the version control out of the make.” And then I get a sad trombone and sulk away.