

Deploying a secure, highly available blog site with Jekyll, Cloudflare, Linode, and Ansible.

After being down for a few weeks, my terrible blog is back up and better than ever! I've got things nicely automated, and I thought I'd share how I did it. I have created a few scripts and Ansible playbooks which I will host on my GitHub for people to use.

System architecture

For some context, my site used to be a WordPress instance on a single Ubuntu server hosted on Linode for $30 a month. It was very lame and I hated how big WordPress was. I needed something simple that I could dump my notes into and move on.
I stumbled onto a video by Techno Tim about Jekyll, a static site generator that builds sites from Markdown. Maintaining a site with it is much easier than with WordPress. I take all of my notes in Markdown anyway, so it was perfect: no changes to my workflow, and I feel a lot more productive.
I also felt like $30 for a single server that gets no traffic is kind of boring and lame. So I decided to deploy 2 Alpine VMs and a load balancer for the same price. Am I ever going to need load balancing or multiple servers? Probably not, but it's been fun.
I'll stop waffling and show you the setup:

At the moment, Cloudflare is proxying traffic to a Linode NodeBalancer (Linode's load balancing offering... how clever), which is then distributing traffic between my 2 Alpine machines.
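Sketched out as a request path, it looks like this:

```
User --HTTPS--> Cloudflare (SSL + proxy) --> Linode NodeBalancer --> Alpine VM 1 (Docker + nginx)
                                                                \--> Alpine VM 2 (Docker + nginx)
```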

Alright, first things first: configuration. I don't want to be clicking a bunch of buttons in a web UI to spin up and configure new servers and everything else I was going to need. I wanted something on the command line that can do it twice as fast. For this, I'll be using the Linode and Cloudflare APIs and Linode StackScripts.

Linode StackScripts

I have a pretty simple StackScript; it just creates my user and installs my SSH keys on the server.

Alpine setup StackScript
# Create a user and their SSH folder
adduser -D ah34
mkdir -p /home/ah34/.ssh
# Add public keys from your GitHub user; don't forget to change the account.
wget https://github.com/my_githubusername.keys -O /home/ah34/.ssh/authorized_keys
chown -R ah34:ah34 /home/ah34/.ssh

# Configure OpenSSH (it is preinstalled on the image)
sed -i -e 's/PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i -e 's/PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i -e 's/#UseDNS no/UseDNS no/' /etc/ssh/sshd_config
/etc/init.d/sshd restart

# Install and configure Docker
apk add docker
# Add the user you created to the docker group
addgroup ah34 docker
rc-update add docker boot
echo "cgroup /sys/fs/cgroup cgroup defaults 0 0" | tee -a /etc/fstab
service docker start

The Linode and Cloudflare APIs

I may open source the scripts I used to configure everything at some stage, but I want to improve them privately first. For now I'll just walk through the configuration I have done so you can reproduce it yourself.

Cloudflare API

In Cloudflare I have SSL enabled and a few CNAME records proxied for the root of my domain. This lets me be lazy and not have to worry about HTTPS. I created this script to help me update the IP addresses in DNS as I built and deleted load balancers on Linode:

Cloudflare DNS update script
import argparse
from lib.cf_zones import zones
from lib.common import common

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('-n', '--name', required=True, help='Name of the record')
    parser.add_argument('-i', '--address', required=True, help='IP address for the record')
    parser.add_argument('-zi', '--zone_identifier', required=True, type=str)
    parser.add_argument('-ri', '--record_identifier', required=True, type=str)
    parser.add_argument('-rt', '--record_type', required=True, type=str)
    parser.add_argument('-t', '--ttl', required=True, type=int)
    parser.add_argument('-p', '--proxied', action='store_true', help='Enable Cloudflare proxy for the record')
    args = parser.parse_args()
    # The actual update call lives in lib (not shown here)

All of the magic is hidden in lib, but I want to improve these before I show them off.
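That said, the gist of the lib is a single PUT against Cloudflare's v4 REST API. Here is a rough sketch of what a record update looks like (the helper names here are my own, not the lib's):

```python
import requests

API_BASE = 'https://api.cloudflare.com/client/v4'

def build_record_payload(name, address, record_type, ttl, proxied):
    """Build the JSON body Cloudflare expects for a DNS record."""
    return {
        'type': record_type,   # e.g. 'A' or 'CNAME'
        'name': name,          # record name, e.g. 'blog.example.com'
        'content': address,    # the new IP address
        'ttl': ttl,
        'proxied': proxied,    # route the record through Cloudflare's proxy
    }

def update_record(zone_id, record_id, payload, token):
    # PUT /zones/{zone_id}/dns_records/{record_id} overwrites the record
    resp = requests.put(
        f'{API_BASE}/zones/{zone_id}/dns_records/{record_id}',
        headers={'Authorization': f'Bearer {token}'},
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()
```

Authentication is just an API token with DNS edit permissions on the zone.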

Linode API

I went a bit nuts on the Linode API; I have scripts to automate the deployment and deletion of firewalls, NodeBalancers, and Linode node configurations.

The main one I want to look at here is the firewall configuration script:

Linode firewall config script
import requests
from lib.linode_firewall import firewall
from lib.common import common

if __name__ == "__main__":
    # Fetch Cloudflare's currently published IP ranges
    CFiprangesv4 = requests.get('https://www.cloudflare.com/ips-v4')
    if CFiprangesv4.status_code != 200:
        raise SystemExit('Could not get the list of IPv4 addresses')
    CFiprangesv6 = requests.get('https://www.cloudflare.com/ips-v6')
    if CFiprangesv6.status_code != 200:
        raise SystemExit('Could not get the list of IPv6 addresses')
    inbound_firewall_list = []
    outbound_firewall_list = []
    cloudflare_ranges = dict(
        ipv4addresses=CFiprangesv4.text.split('\n'),
        ipv6addresses=CFiprangesv6.text.split('\n'),
        description='Cloudflare IP ranges for web ports',
    )
    # The call into lib that turns these ranges into firewall rules is omitted here

Yet again, another magic lib, but I promise that it works well and I'll release it later. At the top, we grab the lists of IPv4 and IPv6 ranges that Cloudflare is currently using. I have this running in a cron job to make sure that only Cloudflare is able to talk to the servers.
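For reference, pushing those ranges into a Linode Cloud Firewall is one PUT against the Linode v4 API. A minimal sketch (the rule label and helper names are mine, not the lib's):

```python
import requests

def build_inbound_rules(ipv4_ranges, ipv6_ranges):
    """Allow web traffic only from Cloudflare's published ranges."""
    return [{
        'label': 'accept-cloudflare-web',
        'action': 'ACCEPT',
        'protocol': 'TCP',
        'ports': '80,443',
        'addresses': {
            # Drop any empty strings left over from splitting the response text
            'ipv4': [r for r in ipv4_ranges if r],
            'ipv6': [r for r in ipv6_ranges if r],
        },
    }]

def update_firewall_rules(firewall_id, inbound, token):
    # PUT /v4/networking/firewalls/{id}/rules replaces the whole rule set
    resp = requests.put(
        f'https://api.linode.com/v4/networking/firewalls/{firewall_id}/rules',
        headers={'Authorization': f'Bearer {token}'},
        json={
            'inbound': inbound,
            'inbound_policy': 'DROP',    # default-deny everything else
            'outbound': [],
            'outbound_policy': 'ACCEPT',
        },
    )
    resp.raise_for_status()
    return resp.json()
```

Because the PUT replaces the rule set wholesale, re-running it from cron keeps the firewall in sync as Cloudflare's ranges change.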

Other than that, the configurations are just standard. If you can set up a Linode and a NodeBalancer, you can keep up with my environment.


So now that I have an environment to run my site, I need a way to deploy it. For this I thought I would use Docker, as it would be easiest to spin up and deploy across machines. I created an nginx-alpine Dockerfile with my ./_site directory configured as the nginx site root:

FROM nginx:1.21.6-alpine
# Set up the built files
RUN mkdir /app
COPY ./_site /app
# Set up the NGINX files
COPY ./docker/nginx/default.conf /etc/nginx/conf.d/default.conf
RUN mkdir /etc/nginx/logs
EXPOSE 80/tcp
# NGINX will start and run on its own with this configuration
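The Dockerfile expects a working default.conf. A minimal one that serves the static build out of /app could look something like this (a sketch; the paths assume the Dockerfile above):

```
server {
    listen 80;
    server_name _;

    # Serve the static Jekyll build copied into /app
    root /app;
    index index.html;

    access_log /etc/nginx/logs/access.log;
    error_log /etc/nginx/logs/error.log;

    location / {
        try_files $uri $uri/ =404;
    }
}
```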

This works pretty well once I provided a working nginx configuration. In the end, my directory structure keeps the Jekyll build output in ./_site and the nginx config under ./docker/nginx/.

GitHub Actions

To make the build and deploy process easy and secure, I wanted to push my container to a private registry in the cloud. I decided on GHCR so I could get automated builds on GitHub's servers, and that is where GitHub Actions comes in. I created the following file at .github/workflows/build_container.yml. The workflow builds the container, then sends a notification to my Discord server via a webhook. Make sure you set the WEBHOOK_URL secret in your repository.

name: Create and publish a Docker image
on: [push]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Log in to the Container registry
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      - name: Set up Ruby
        uses: ruby/setup-ruby@v1  # ref assumed; original pin was garbled
        with:
          ruby-version: '3.0'
          bundler-cache: true
      - name: Build the site
        env:
          JEKYLL_ENV: production
        run: bundle install && bundle exec jekyll build
      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
      - name: Success discord web hook
        if: ${{ success() }}
        uses: Ilshidur/action-discord@master  # ref assumed; original pin was garbled
        env:
          DISCORD_WEBHOOK: ${{ secrets.WEBHOOK_URL }}
        with:
          # Add build success message for discord server here:
          args: 'Build successful for branch ${{ steps.meta.outputs.tags }}'
      - name: Failure discord web hook
        if: ${{ failure() }}
        uses: Ilshidur/action-discord@master
        env:
          DISCORD_WEBHOOK: ${{ secrets.WEBHOOK_URL }}
        with:
          # Add build failure message for discord server here:
          args: 'Build failure for branch ${{ steps.meta.outputs.tags }}'


Finally, it was time for deployment automation. I have a rather large inventory YAML file with lots of roles, and these servers sit under the "alpinecloud" group. I mapped the following group vars:

ghcrpass: # Github container repo personal access token
ghcruser: # Github container repo username

First, I created a role with ansible-galaxy.

Create a role template
ansible-galaxy init "{rolename}"
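For anyone who hasn't used it, ansible-galaxy init scaffolds the standard role layout, roughly:

```
blog/
├── defaults/main.yml
├── files/
├── handlers/main.yml
├── meta/main.yml
├── tasks/main.yml
├── templates/
├── tests/
└── vars/main.yml
```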

Then, in the tasks folder, I create the task files. The main.yml task imports the other tasks I'll need, which keeps all of my jobs clean and simple to re-use. It skips the pip install step if pip is already installed.

- name: check if pip is installed
  stat:
    path: /usr/bin/pip
  register: pipinstalled
- name: install pip and docker
  import_tasks: install_python_modules.yml
  when: not pipinstalled.stat.exists
- name: deploy blog
  import_tasks: deploy_blog.yml

The first task we import is install_python_modules.yml. This ensures that pip and the docker package are installed on the machine.

- name: ensure pip is installed
  apk:
    name: py3-pip
    update_cache: yes
  become: true
  become_method: su
- name: Copy the requirements file over
  copy:
    src: requirements.txt
    dest: /tmp/requirements.txt
- name: Install python modules
  pip:
    requirements: /tmp/requirements.txt
- name: Clean up the requirements file
  file:
    state: absent
    path: /tmp/requirements.txt

That task copies over a local file found at files/requirements.txt.
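Since Ansible's docker_login and docker_container modules depend on the Docker SDK for Python, the requirements file needs to contain at least:

```
# files/requirements.txt — needed by Ansible's docker_* modules
docker
```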


Now, onto actually deploying the container. I created the tasks/deploy_blog.yml task to handle that job. It uses the Docker modules to log into the container registry and pull the latest version of my container.

- name: Login to ghcr
  docker_login:
    registry: ghcr.io
    username: "{{ ghcruser }}"
    password: "{{ ghcrpass }}"
    state: present
- name: Pulling and starting latest build from ghcr
  docker_container:
    name: jekyell-site
    hostname: jekyell-site
    image: ghcr.io/path to image
    pull: true
    restart_policy: unless-stopped
    memory: 1.5g
    state: started
- name: Log out of ghcr
  docker_login:
    registry: ghcr.io
    state: absent

After all of this, we can finally create our playbook. In the root of your ansible directory, create ansible/playbooks/deploy_blog.yml:

- hosts: { the hosts you need }
  roles:
    - '../roles/blog'

Run the playbook and watch the magic happen!

Ansible playbook command
ansible-playbook ansible/playbooks/deploy_blog.yml -i "ansible/inventories/{your environment}/" --vault-password-file "{Location of the password file}"
This post is licensed under CC BY 4.0 by the author.