I haven’t written much for this blog. But one thing that gets to me every time I write a post is the way I publish it. When I first created this site, it wasn’t for the blogging itself, it was more for the experience of hosting my own website. I was going to put the first blog post on GitHub Pages, but I ended up creating my own website infrastructure.
I got the domain `fredrb.com` from GoDaddy over two years ago. My initial intent was to create many services and host them on sub-domains of `fredrb.com`. I wanted to have things like `api.fredrb.com`, `blog.fredrb.com` and `projects.fredrb.com`. The goal was to host my own experiments and play around with load balancing, service-to-service communication and even scalability. Due to a lack of time and effort, I ended up setting up a server with only the contents of `blog.fredrb.com`.
In this post I’m going to go over what I currently use to host this blog and what other options I considered. Keep in mind that the infrastructure decisions were based more on learning and experimenting than on cost or simplicity.
Deciding on infrastructure
I was sure I didn’t want to host on GitHub Pages; I wanted to have my own infrastructure. That was the starting point for everything. My goal was to get an IaaS (Infrastructure as a Service) plan and set up my server from the ground up. The PaaS (Platform as a Service) or SaaS (Software as a Service) options from GCP and AWS would do the trick for static hosting, but I still wanted to build my own thing. The options I had to choose from were GCP Compute Engine, AWS EC2 or a VM on Digital Ocean. I chose Digital Ocean’s VM plan due to ease of use. Instead of paying per usage, I got a flat $5/month machine to experiment with whatever I needed.
Configuring the domain on Digital Ocean was also easy [1]: the only thing I had to do was specify Digital Ocean’s nameservers in GoDaddy’s DNS configuration and then point the domain to the machine I had just acquired.
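Conceptually (in zone-file notation), the setup boils down to delegation records at the registrar plus A records at Digital Ocean. This is just an illustrative sketch: the IP below is a placeholder, and in practice both parts are set through each provider’s web UI.

```
; At GoDaddy: delegate the zone to Digital Ocean's nameservers
fredrb.com.        IN  NS  ns1.digitalocean.com.
fredrb.com.        IN  NS  ns2.digitalocean.com.
fredrb.com.        IN  NS  ns3.digitalocean.com.

; At Digital Ocean: point the names at the machine (placeholder IP)
fredrb.com.        IN  A   203.0.113.10
blog.fredrb.com.   IN  A   203.0.113.10
```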
Deciding on static hosting technology
The blog itself is built using Jekyll, a static site generator written in Ruby. Posts are written in Markdown and the website is generated by running a single command (`jekyll build`). When I got access to the server and had the static website files there, I had to choose how to serve them. The first idea was to write a simple NodeJS HTTP server that would expose the `_site` folder (where all the static files are) and be done with it.
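For reference, that idea is only about a dozen lines of Node. This is a naive sketch with no content types or path sanitization, which is part of why I didn’t stick with it:

```js
// Naive static file server over _site — a sketch, not production code.
const http = require('http');
const fs = require('fs');
const path = require('path');

http.createServer((req, res) => {
  // Map "/" to index.html; everything else goes straight to the file system.
  const target = req.url === '/' ? '/index.html' : req.url;
  fs.readFile(path.join(__dirname, '_site', target), (err, data) => {
    if (err) {
      res.writeHead(404);
      return res.end('Not found');
    }
    res.writeHead(200);
    res.end(data);
  });
}).listen(8080);
```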
Since I had a couple more ideas, like having different applications per sub-domain and experimenting with load balancing, I went with an NGINX installation. Looking back, I could have installed NGINX using its Docker image, but at the time I installed it system-wide and configured it like an on-prem server. Website content lives under `/srv/www/fredrb.com` and there is a configuration file in `/etc/nginx/sites-enabled` with the port, sub-domains and path to the static content.
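That file is roughly this shape. The paths and server name come from above; the rest is an illustrative sketch, not my exact config:

```nginx
# /etc/nginx/sites-enabled/blog.fredrb.com — illustrative sketch
server {
    listen 80;
    server_name blog.fredrb.com;

    root /srv/www/fredrb.com;
    index index.html;

    location / {
        # Serve the pre-built Jekyll output; 404 on anything else.
        try_files $uri $uri/ =404;
    }
}
```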
Deciding on automation
Now for the most exciting part. The first post I published, I did manually: I built the Jekyll project locally and copied the folder from my machine to the remote machine via `scp`. That works, and it isn’t much effort, but I liked the idea of automating the process. The goal was the following:
On every push to the master branch, my website content is updated automatically.
The blog is hosted on GitHub, and I knew I needed a CI to do this. I thought about creating my own Jenkins instance on my server under a `ci.fredrb.com` sub-domain, but I opted for Circle CI, which already provides seamless GitHub integration [2]. With Circle CI I had to configure the following steps to run after every push to master (a sketch of the resulting config follows the list):
- Install dependencies with Bundler
- Build the site with `jekyll build`
- Copy the `_site` contents to `/srv/www/fredrb.com` on my remote server
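In `.circleci/config.yml` terms, that pipeline looks roughly like this. It’s a sketch, not my exact file: the image tag, the key fingerprint, the deploy user and the placeholder server address are all illustrative.

```yaml
# Illustrative sketch of the three steps above (CircleCI 2.0 syntax).
version: 2
jobs:
  deploy:
    docker:
      - image: circleci/ruby:2.6
    steps:
      - checkout
      # Installs the private deploy key added in the project settings,
      # identified by its fingerprint (placeholder below).
      - add_ssh_keys:
          fingerprints:
            - "aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99"
      - run: bundle install
      - run: bundle exec jekyll build
      # Push the generated site to the server. Note the server address
      # (a placeholder here) sits directly in the config.
      - run: scp -o StrictHostKeyChecking=no -r _site/* deploy@203.0.113.10:/srv/www/fredrb.com
workflows:
  version: 2
  deploy:
    jobs:
      - deploy:
          filters:
            branches:
              only: master
```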
The last step was a bit tricky because I needed to give Circle CI access to my server. I accomplished this by generating an SSH key pair on my local machine, then adding the private key to Circle CI and the public key to `/etc/authorized_keys` on my server [3].
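Generating and distributing the key looks something like this (user, host and paths are illustrative; where the public key ends up depends on your sshd configuration):

```sh
# Generate a dedicated key pair for CI; no passphrase, since CI can't type one.
ssh-keygen -t rsa -b 4096 -N "" -C "circleci-deploy" -f deploy_key

# Install the public half for the deploy user on the server.
ssh deploy@example.com 'cat >> ~/.ssh/authorized_keys' < deploy_key.pub

# The private half (deploy_key) is pasted into Circle CI's SSH key settings
# and referenced from the config via add_ssh_keys.
```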
In the end, the automation worked fine. However, I still question some of the decisions, especially regarding security:
- Should I hide my server’s IP? It’s right there in the Circle CI config, and I feel like it should live in an environment variable or something.
- Should I instead trigger a script on my server that pulls the latest from GitHub and builds it there, rather than copying files from CI to my server? (See the sketch below.)
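For reference, that pull-and-build alternative could be as small as this script, triggered by a webhook or a cron job on the server (repo path, branch and destination are illustrative):

```sh
#!/bin/sh
# deploy.sh — sketch of building on the server instead of pushing files to it.
set -e
cd /srv/git/blog
git pull origin master
bundle install
bundle exec jekyll build --destination /srv/www/fredrb.com
```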
Like I said in the beginning: this blog, the server and the use of Circle CI are all pure experimentation. If it were just about the blog posts themselves, I would probably have opted for GitHub Pages.
Footnotes
[1] - For some reason I failed to configure my own domain on GCP; maybe that’s on me. Either way, creating a bucket with static files and then “proving” to GCP that I own the domain was incredibly counter-intuitive, and mapping sub-domains to applications later on was not something I could do easily. Hence my satisfaction with Digital Ocean, where I simply point sub-domain requests to specific machines.
[2] - Running my own CI is something I really wanted to do, but I felt that if I tried to set everything up by myself I would never finish. So a SaaS CI is fine.
[3] - I don’t think this is a major vulnerability, but I felt like Circle CI could have a feature for generating its own key pair and exposing only the public key for me to place on my server (like Bitbucket Pipelines does).
If you hated this post and can’t keep it to yourself, consider sending me an e-mail at fred.rbittencourt@gmail.com or complaining to me on Twitter/X at @derfrb. I’m also occasionally responsive to positive comments.