
How we got here

Okay so, I haven’t been a real devops guy since around 2009… however I still love playing with the tools and utilities and constantly learning new stuff in that realm as a hobby. When I finally get something, truly understand it, and roll it out to my satisfaction, it’s just SO very rewarding to me!

So, I have a few sites, nothing major. Just this one, a static CV-style professional one at jeckert.net, and my self-hosted link-tree equivalent at me.sargonas.com. Now for YEARS, I’m talking like 12+, these have all lived on a web server on AWS. I have had an EC2 instance and a database for EONS. Every year or two, however, I do some major work to update/change things just to keep fresh with the tech, refresh instances, change deployment tools, etc., to remind myself what I set up and how, and to inevitably fix something that slowly broke over time.

Now, what I am about to run through is of little use to most folks who stumble across this; I think gobs of people have written very eloquent how-to guides and documentation one can follow far better than mine. This isn’t meant to be a “hey look at me and learn from this!” This is more of a reference guide for me 2 years from now when I try to remember how I set something up or why, when I inevitably break something, lol.

So this time around I decided I no longer need WordPress, I no longer need all this EC2 horsepower and database capacity (or the overhead of maintaining it all), and I can just move things to static hosting. Since the links and CV sites are each already just one static page, it was time to bring sargonas.com along for the ride as well.

First thing I did was export sargonas.com via a WP plugin into Jekyll post format and set that aside. Then I whipped up a NEW jeckert.net site using Jekyll as well; once I found a theme I liked, I just hand-entered all the info I wanted to capture. Then the fun began:

AWS prep

First up, I set up some new S3 buckets, one for each site. For each bucket I enabled static website hosting in the bucket settings and set index.html as the default index document. Next I created a new IAM user that I would need later for the GitHub automations, and created 3 IAM policies that each granted access to edit the files in a given S3 bucket, then attached all 3 policies to the user in question. Each of the policies was written in JSON as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Resource": [
                "arn:aws:s3:::domain.com",
                "arn:aws:s3:::domain.com/*"
            ],
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ]
        }
    ]
}
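
For what it’s worth, the same setup can also be done from the AWS CLI instead of clicking through the console. Here’s a rough sketch of what that looks like; the bucket name, policy name, user name, and account ID below are all placeholders, not my actual values:

# Enable static website hosting on the bucket, with index.html as the default document
aws s3api put-bucket-website \
  --bucket domain.com \
  --website-configuration '{"IndexDocument": {"Suffix": "index.html"}}'

# Create the per-bucket policy from the JSON above and attach it to the deploy user
aws iam create-policy \
  --policy-name domain-com-deploy \
  --policy-document file://domain-com-policy.json
aws iam attach-user-policy \
  --user-name github-deploy \
  --policy-arn arn:aws:iam::123456789012:policy/domain-com-deploy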

GitHub prep

Next up I set up 2 new repositories, one for each of the two main sites (me.sargonas.com already had a repository I was using for it, as it is a fork of LittleLink). In all 3 repositories, I set some Actions secrets in their settings, where I locked in the AWS access key ID and secret access key from above, as well as the S3 bucket name for each one.
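
(Side note for future me: these secrets can also be set from the command line with the GitHub CLI instead of the web UI. The values below are obviously placeholders, and the bucket secret name needs to match whatever the workflow in that particular repo expects.)

# Set the Actions secrets for the current repository
gh secret set AWS_ACCESS_KEY_ID --body "AKIA-PLACEHOLDER"
gh secret set AWS_SECRET_ACCESS_KEY --body "placeholder-secret-key"
gh secret set AWS_S3_BUCKET_NAME --body "domain.com"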

Build the sites

I won’t go too deep into this one; suffice to say I found some Jekyll templates I liked and set them up as needed. It took a LONG time with this site because I had a lot of export/import cruft to clean up, but the attention to detail was worth it long term.
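
Nothing exotic on the tooling side, either; the local loop for checking a theme or cleaning up imported posts is just the standard Jekyll one:

# Install the theme's gems, then build and serve locally with auto-regeneration
bundle install
bundle exec jekyll serve    # site ends up at http://127.0.0.1:4000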

Build the GH Action to deploy

For this I had to set up two different types of actions in the .github/workflows/ directories. For my LittleLink fork I just did a vanilla S3 copy of the files, using a very basic template from GitHub’s own Actions library:


name: Upload Site

on:
  push:
    branches:
    - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v1

    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: us-west-2

    - name: Deploy static site to S3 bucket
      run: aws s3 sync . s3://${{ secrets.AWS_BUCKET_NAME }} --delete

This one is super straightforward. It just grabs the full slate of files and puts them in S3 anytime main is updated. Nothing fancy there. For the other two, I had to get a bit more complex with my job to account for the need to build the Jekyll site:


name: S3 Build and Deploy

# Controls when the action will run. 
on:
  # Triggers the workflow when the "Lint Code Base" workflow completes
  workflow_run:
    workflows: [ "Lint Code Base" ]
    types: 
      - completed

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
  
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'us-west-2'

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  on-success:
    runs-on: ubuntu-latest
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    steps:
    - uses: actions/checkout@v3
    - name: Set up Ruby
      uses: ruby/setup-ruby@v1
      with:
        bundler-cache: true
        ruby-version: 2.6
    - name: "Build Site"
      run: bundle exec jekyll build
      env:
        JEKYLL_ENV: production
    - name: "Deploy to AWS S3"
      run: aws s3 sync ./_site/ s3://${{ secrets.AWS_S3_BUCKET_NAME }} --delete --cache-control max-age=604800 --exclude "[I made this different per file]"

  on-failure:
    runs-on: ubuntu-latest
    if: ${{ github.event.workflow_run.conclusion == 'failure' }}
    steps:
    - name: "Linter failed, skipping deploy"
      run: exit 1

Now this one I got pretty froggy with! For starters, it checks out the repo and builds the Jekyll static HTML first, then only uploads the parts of it I care about to the S3 bucket. However, I also added some extended logic.

This trigger makes it wait for my super-linter to complete before this run kicks off:


workflow_run:
    workflows: [ "Lint Code Base" ]
    types: 
      - completed

Then the following logic makes sure it only uploads the files if super-linter completed successfully, and fails the run out if it didn’t.

if: ${{ github.event.workflow_run.conclusion == 'success' }}

and


on-failure:
    runs-on: ubuntu-latest
    if: ${{ github.event.workflow_run.conclusion == 'failure' }}
    steps:
    - name: "Linter failed, skipping deploy"
      run: exit 1

With those files in place, now all I had to do was one last step:

Set up Cloudflare

I run Cloudflare in front of all my domains, and have for ages, because random script kiddies on the internet love to poke at web servers. I just hopped into my DNS settings, redirected my domains away from the static IP that I had bound to my EC2 instance (RIP 52.32.157.115, I have had that AWS static IP since 2012… I’ll miss you!), and CNAME’d them over to the S3 buckets’ website endpoint hostnames. (I also toggled off end-to-end SSL/TLS and changed it to the “Flexible” mode that only encrypts user-to-Cloudflare traffic, since the S3 website endpoints won’t respond correctly to Cloudflare via HTTPS and will error out.)
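
For my own future reference, the DNS side ends up looking roughly like this. The hostname and bucket here are just examples, and the S3 website endpoint format varies slightly by region:

# Proxied CNAME record in Cloudflare pointing at the bucket's website endpoint
#   sargonas.com  CNAME  sargonas.com.s3-website-us-west-2.amazonaws.com

# Quick sanity check that DNS resolves and the site answers through Cloudflare
dig +short sargonas.com
curl -I http://sargonas.com/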

Finishing touches

After that, and once all my tests were happy, I said goodbye to my EC2 instance, my RDS database, all my monitoring scripts and tools, my static IP… and that was that! (Also, yay, no more $30-a-month AWS bills, and hello to a few dollars a month!)

So there you go, future me: enjoy this quick reminder of what you set up, how, and why, when you inevitably break something and have to fix it because you were poking around trying something new! ;)
