Update theme to Lynx

Daniel Mason 2025-06-07 21:52:58 +12:00
parent df8776949c
commit 5b18cfd0c5
Signed by: idanoo
GPG key ID: 387387CDBC02F132
2716 changed files with 227 additions and 57948 deletions


@@ -0,0 +1,172 @@
---
title: "Employment History"
---
<h1 style="font-size: 1.6rem;" class="animate__animated animate__pulse animate__faster">
Certifications
</h1>
<div style="display: flex;">
<div style="flex-direction: row; text-align: center; margin-right: 20px">
Feb 2023<br/>
<a href="https://www.credly.com/users/daniel-mason.2a91e126/badges" target="_blank">
<img alt="PagerDuty Certified" title="PagerDuty Certified" src="/pagerduty.png" height="100" width="100"><br/>
PagerDuty Certified<br/>Incident Responder
</a>
</div>
<div style="flex-direction: row; text-align: center; margin-right: 20px">
May 2022<br/>
<a href="https://www.credly.com/users/daniel-mason.2a91e126/badges" target="_blank">
<img alt="HashiCorp Logo" title="HashiCorp Logo" src="/hashicorp.jpg"><br/>
HashiCorp Certified<br/>Terraform Associate
</a>
</div>
<div style="flex-direction: row; text-align: center; margin-right: 20px">
Jun 2020<br/>
<a href="https://www.credly.com/users/daniel-mason.2a91e126/badges" target="_blank">
<img alt="AWS Logo" title="AWS Logo" src="/aws.jpg"><br/>
AWS Certified<br/>Cloud Practitioner
</a>
</div>
</div>
<br/>
<h1 style="font-size: 1.6rem;" class="animate__animated animate__pulse animate__faster">OpenSource Contributions</h1>
2025/04/06 <a href="https://www.librenms.org/" target="_blank">LibreNMS</a>
(<a href="https://github.com/librenms/librenms/commit/beb356073d30fac74e928dac96ffa4a9d63036c6" target="_blank">beb3560</a>)
- Add support for Grandstream GWN7822P Switch<br/>
2023/05/28 <a href="https://opnsense.org/" target="_blank">OPNsense</a>
(<a href="https://github.com/opnsense/core/commit/25e2c0a301e1437a98c8958637ca405a5fab21fa" target="_blank">25e2c0a</a>)
- Resolve static routes being created on far gateways<br/>
2023/01/29 <a href="https://pixelfed.org/" target="_blank">PixelFed</a>
(<a href="https://github.com/pixelfed/pixelfed/pull/4122/commits/55dc9b0aaa4623491387d06fcb83fc9e937e5e16" target="_blank">55dc9b0</a>)
- Resolve issue with backticks in raw SQL for PostgreSQL migrations<br/>
2023/01/09 <a href="https://pixelfed.org/" target="_blank">PixelFed</a>
(<a href="https://github.com/pixelfed/pixelfed/pull/4067/commits/52d65538cdfbebc73456db15390af3cfb8ee2d46" target="_blank">52d6553</a>)
- Update User artisan commands to match UserDelete<br/>
2022/02/15 <a href="https://registry.terraform.io/providers/hashicorp/aws" target="_blank">terraform-provider-aws</a>
(<a href="https://github.com/hashicorp/terraform-provider-aws/pull/23120/commits/5c45fc13a28d111abf44afe16246073dbe93c10c" target="_blank">5c45fc1</a>)
- Allow the 'days' parameter in S3 lifecycle transitions to be >= 0<br/>
2022/01/06 <a href="https://pixelfed.org/" target="_blank">PixelFed</a>
(<a href="https://github.com/pixelfed/pixelfed/pull/3122/commits/9cc18eb82acb46d2d8da22c66d12e55c192dbdb8" target="_blank">9cc18eb</a>)
- Fix firstOrCreate failing for hashtags with case differences in the name column
<br/><br/>
<h1 style="font-size: 1.6rem;" class="animate__animated animate__pulse animate__faster">Employment</h1>
<div style="display: flex;">
<div style="flex-direction: row; margin-right: 20px">
<img alt="ezyVet Logo" title="ezyVet Logo" src="/ezyvet_logo.png">
</div>
<div style="flex-direction: row">
<h3 style="margin-top: 5px">ezyVet</h3>
<h4 style="margin-top:-16px; color: #afbac4">Senior Site Reliability Engineer</h4>
<p style="margin-top:-20px;">December 2022 - Current</p>
</div>
</div>
Working with a wide range of technologies and teams to deliver resilient, secure and performant services.
- Database migrations, maintenance and patching.
- Airflow to move and transform data between services.
- EKS Kubernetes Clusters.
- ElasticSearch migrations, maintenance and patching.
- Golang APIs deployed with Helm into EKS.
- Migrating legacy AWS resources into Terraform.
- Packer/HCL building custom Linux/Windows AMIs.
- Rebuilding existing platforms to scale with a focus on performance and reporting.
<br/>
<div style="display: flex;">
<div style="flex-direction: row; margin-right: 20px">
<img alt="ezyVet Logo" title="ezyVet Logo" src="/ezyvet_logo.png">
</div>
<div style="flex-direction: row">
<h3 style="margin-top: 5px">ezyVet</h3>
<h4 style="margin-top:-16px; color: #afbac4">Site Reliability Engineer</h4>
<p style="margin-top:-20px;">August 2019 - December 2022</p>
</div>
</div>
My role at ezyVet was primarily focused on performance, uptime and cost across the entire platform, working mainly with PHP, MySQL, ElasticSearch and Redis.
- Maintaining services and patching to the latest versions for security and performance improvements.
- Writing complete automation pipelines on GitLab CI.
- Building a Golang application utilizing multiple best practice AWS services including<br/>&nbsp;&nbsp;API Gateway websockets/SQS/SNS/DynamoDB/Lambda/Terraform.
- Incident response and improving process.
- Creating a docker engineering environment to improve development workflows and onboarding.
- Focus on reducing RDS load and improving SQL query times across the board.
- Completely rewriting the ElasticSearch service from the ground up to improve search times.
- Writing custom service monitoring tools in Go that interface with Slack/PagerDuty.
- MySQL performance optimizations and maintaining integrity across all environments.
- Introducing Grafana and rearchitecting monitoring for observability.
<br/>
<div style="display: flex;">
<div style="flex-direction: row; margin-right: 20px; margin-left: 5px">
<img alt="Catalyst IT Logo" width="90" title="Catalyst IT Logo" src="/catalyst_logo.png">
</div>
<div style="flex-direction: row">
<h3 style="margin-top: 5px">Catalyst IT</h3>
<h4 style="margin-top:-16px; color: #afbac4">Devops Engineer</h4>
<p style="margin-top:-20px;">October 2018 - August 2019</p>
</div>
</div>
At Catalyst, I worked directly with several large corporate clients doing routine patching, performance improvements and general BAU. While primarily configuring CI/CD pipelines for fully automated testing and deployments, I also gained experience with Helm chart deployments into Kubernetes clusters hosted on OpenStack, and had the opportunity to work further with Puppet, Ansible and GitLab CI.
<br/>
<div style="display: flex;">
<div style="flex-direction: row; margin-right: 20px">
<img alt="ezyVet Logo" title="ezyVet Logo" src="/ezyvet_logo.png">
</div>
<div style="flex-direction: row">
<h3 style="margin-top: 5px">ezyVet</h3>
<h4 style="margin-top:-16px; color: #afbac4">Cloud Environment and Application Engineer</h4>
<p style="margin-top:-20px;">March 2017 - October 2018</p>
</div>
</div>
At ezyVet, I was primarily in charge of reviewing and merging code into our production environment, as well as being our in-house DevOps/infrastructure person. I occasionally did development work as well. In this position I gained broad experience across multiple areas, as outlined below.
The position included the following responsibilities:
- Setting up and improving our CI/CD processes.
- Maintaining and constantly improving our environment. (AWS/Terraform)
- Setting up custom alerts / metrics with Cloudwatch & Grafana.
- Responding proactively to the above alerts to fix potential issues.
- Heavily involved in rolling out new technologies to our stack.
- Revision, escalation and resolution of tickets and other critical issues.
- Mentored Junior Developers through their induction into ezyVet.
DevOps:
- AWS: Consistently improving our infrastructure to be efficient and reliable.
- Linux: Daily experience with Debian, Ubuntu and Arch.
- CI / CD: Setting up and maintaining GitLab runners to run our unit tests and automatic deploy pipelines inside Docker containers.
- Packer/Chef/Vagrant: Building custom AMIs for AWS and Vagrant boxes for our development machines.
- Terraform: Assisted migration of our AWS environments to code.
- Virtualization: Setting up Vagrant / VirtualBox environments that mock our production environment.
Programming:
- PHP: Building small features and handling day-to-day bug fixes.
- MySQL.
- Shell / Bash: Created custom scripts to achieve specific goals.
- Python / Node.js: Lambda functions deployed with Serverless.
- HTML / CSS / JS / jQuery.
Management:
- Code Review: In charge of reviewing all code before it gets merged into our production branches.
- JIRA / Confluence / Freshdesk: Primarily used JIRA for ticket-tracking and Confluence for in-house documentation.
- Mentoring: Took on several junior developers over the two years to train them and help them become familiar with our codebase and workflow.
<br/>
<div style="display: flex;">
<div style="flex-direction: row; margin-right: 20px">
<img alt="ezyVet Logo" title="ezyVet Logo" src="/ezyvet_logo.png">
</div>
<div style="flex-direction: row">
<h3 style="margin-top: 5px">ezyVet</h3>
<h4 style="margin-top:-16px; color: #afbac4">Full Stack Developer</h4>
<p style="margin-top:-20px;">March 2016 - August 2017</p>
</div>
</div>
Focused on building new features and improving our cloud environment and application.<br/>Primarily working with PHP.


@@ -0,0 +1,48 @@
---
title: "Configuring an APRS receiver with an RTL-SDR and Direwolf"
tags: ["aprs", "amateur radio", "direwolf"]
date: 2021-12-16
# featuredimagepreview: "/aprs.jpg"
---
What is APRS? Automatic Packet Reporting System! Essentially a VHF radio sending out GPS location pings in the 2m ham band, which can be seen at [aprs.fi](https://aprs.fi/). If you take a look at that link you will often see radiosondes and their reported data, weather stations, as well as a bunch of amateur operators transmitting data all over the world. Fascinating stuff.
![Yaesu FT3D with APRS tuned](/aprs.png)
I recently got my hands on an RTL-SDR v3, which claims much better accuracy and performance than previous models. I figured I'd give it a go receiving APRS and see what I could get. I did a bit of research and tested a few different packages, but ended up settling on [Direwolf](https://github.com/wb2osz/direwolf). For those interested, I'm running this on an Ubuntu 20.04 LXC container with USB passthrough for the SDR itself.
Here is a run through of installing/configuring an APRS feeder with an SDR and Direwolf.
```shell
# Install required packages
apt-get install rtl-sdr direwolf
```
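Before configuring anything, it's worth confirming the dongle is actually detected; `rtl_test` (included in the rtl-sdr package) will list the device and its supported gain values:
```shell
# Confirm the SDR enumerates and list the supported gain values (Ctrl+C to stop)
rtl_test
```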
My `sdr.conf` file for reference, with callsign and location blanked out.
```shell
# One audio channel, no sound card - samples are piped in on stdin by run.sh
ACHANNELS 1
ADEVICE null null
CHANNEL 0
MYCALL <CALLSIGN>-10
# APRS-IS iGate server for the Australia/NZ region, plus login and passcode
IGSERVER aunz.aprs2.net
IGLOGIN <CALLSIGN> 23018
# Standard 1200 baud AFSK used for VHF APRS
MODEM 1200
AGWPORT 8000
KISSPORT 8001
# Report this receiver's position to APRS-IS every 10 minutes
PBEACON sendto=IG delay=0:30 every=10:00 symbol="/-" lat=-12.3456 long=12.3456 alt=in_meter comment="RTLSDR"
```
This is my `run.sh` script I currently launch in `screen` until I can get around to setting up a systemd service (a sketch of one is below).
It simply takes the output from the APRS frequency and pipes it into direwolf, launched with our `sdr.conf` config above.
```shell
#!/bin/bash
# -f 144.575M tunes to 144.575 MHz
# -g 20.7 is the gain value; run rtl_test to view available options
rtl_fm -f 144.575M -s 22050 -g 20.7 | direwolf -c /root/sdr.conf -r 22050 -D 1 -B 1200 -
```
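Until that systemd service materialises, a minimal unit could look something like this sketch (it assumes `run.sh` lives at `/root/run.sh`, and the unit name is made up; adjust to taste):
```shell
# Sketch: write the unit file, then enable it
cat > /etc/systemd/system/direwolf-aprs.service <<'EOF'
[Unit]
Description=Direwolf APRS iGate (rtl_fm pipe)
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/root/run.sh
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now direwolf-aprs.service
```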
If you set up the lat/long coords properly, you should be able to view your home station on [aprs.fi](https://aprs.fi/)!


@@ -0,0 +1,24 @@
---
title: "Expanding OpnSense root disk (21.7+)"
tags: ["opnsense", "disk"]
date: 2024-01-13
# featuredimagepreview: "/opnsense_expand.jpg"
---
Expanding the root partition on an OPNsense VM:
- Expand the underlying VM disk
- Reboot
- Run the commands below to resize online
- Reboot (optional?)
```shell
# View the layout
gpart show
# Resolve GPT issue before extending
gpart recover da0
# Make it use all the new space
gpart resize -i 3 da0
# Grow the FS to fill the new partition
growfs /dev/gpt/rootfs
```
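A quick sanity check afterwards should show the partition and root filesystem using the new space:
```shell
# Confirm the partition table and filesystem picked up the new size
gpart show
df -h /
```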


@@ -0,0 +1,38 @@
---
title: "Nginx stable install on debian/ubuntu from official repos"
tags: ["nginx", "debian", "ubuntu"]
date: 2025-02-22
---
A lot of the time, the nginx package in the Ubuntu/Debian repositories is out of date, and sometimes we just want the latest version :shrug:.
First up we want to install all required software and pull the latest signing key.
**Run the following commands as the root user**
```shell
apt install -y curl gnupg2 ca-certificates lsb-release debian-archive-keyring
curl -s https://nginx.org/keys/nginx_signing.key | gpg --dearmor > /usr/share/keyrings/nginx-archive-keyring.gpg
```
The next step will vary depending on whether you are using Debian or Ubuntu.
**Debian:**
```shell
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/debian `lsb_release -cs` nginx" \
| tee /etc/apt/sources.list.d/nginx.list
```
**Ubuntu:**
```shell
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" \
| tee /etc/apt/sources.list.d/nginx.list
```
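Optionally, you can also pin the nginx.org repo so apt always prefers it over the distro's own nginx package (this mirrors the upstream install guide):
```shell
echo -e "Package: *\nPin: origin nginx.org\nPin: release o=nginx\nPin-Priority: 900\n" \
| tee /etc/apt/preferences.d/99nginx
```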
Once we have the packages && keyring set up, it's time to install:
```shell
apt update && apt install -y nginx
```
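A quick check confirms you got the nginx.org build rather than the distro one:
```shell
nginx -v
apt policy nginx # the installed version should come from nginx.org
```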
Keep in mind, the latest version will have the vhosts configured under `/etc/nginx/conf.d` instead of the legacy `sites-enabled` folder.


@@ -0,0 +1,56 @@
---
title: "Mastodon.nz and the recent twitter influx"
tags: ["mastodon", "twitter", "self hosting"]
date: 2022-05-09
# featuredimagepreview: "/server1.jpg"
---
Where to begin.. It's been an interesting fortnight!
I started hosting [mastodon.nz](https://mastodon.nz) around January 2020 on my home 'server' for myself and a few friends. It started off as a small LXC container inside Proxmox with 2 cores and 2GB RAM allocated. It stayed this way until about 6 months ago, when I kept having scheduled power outages and couldn't keep running it from my house while maintaining a reasonable uptime.
{{< image src="/kitchen_server.jpg" caption="\"Server\" - April 2021" src_s="/kitchen_server.jpg" src_l="/kitchen_server_l.jpg" >}}
Mid 2021 a good friend of mine offered a great deal on some actual hosting in a datacenter. Now we had 8 vCPU + 8GB RAM to play with, so it was time to migrate everything. After a few hours of mastering `rsync` and `pg_dump`, it was done; we were set for a while. At least that's what I thought..
April 2022 brought a nice little surprise: I started getting notifications of new users joining [mastodon.nz](https://mastodon.nz). Confused, I started to look at what had changed.. It didn't take long to realise this was only the beginning. I had to allocate more resources to the container to account for the higher throughput.
You can see in the image below it started to approach the allocated 3GB - I bumped it up to 4GB, then shortly after up to 6GB. That's as much as I could give it without moving to bigger hardware. (The drop at the end was a service restart after a version upgrade).
{{< image src="/masto_memory.jpg" caption="Memory usage in mastodon.nz container" >}}
I thought that would be enough for now. Nope.
I got an email shortly after from Sendgrid advising that we were approaching their 100 email/day free tier limit. Crap. I managed to upgrade the plan to account for the burst of registrations before any emails started to bounce.
{{< image src="/sendgrid.jpg" caption="Email throughput peaking around 400/day" >}}
It was about this time I figured we needed to look into the future of mastodon and whether we planned on maintaining it long term with a much higher capacity. I jumped on TradeMe and started working out what we could afford and what we needed to sustain decent growth over the next 12+ months. Boom! Found the perfect thing in our price range: a secondhand HP DL360p Gen8 with 32 cores and 80GB RAM. The only issue was disks. I've never been one to trust secondhand disks, so I ended up going over budget and bought 2 x 500GB EVO870s and 2 x 2TB EVO870s to run in a ZFS mirror for storage.
{{< image src="/server1.jpg" caption="New server" >}}
{{< image src="/server_prov.jpg" caption="Provisioning everything (Excuse the mess)"src_s="/server_prov.jpg" src_l="/server_prov_l.jpg" >}}
{{< image src="/server_disk.jpg" caption="Disk setup" >}}
After about 6 hours of configuring and installing everything required, I was ready to make a trip to a local datacenter to rack it (thanks to the previously mentioned friend!) and start the painful migration.
Unfortunately I don't have any photos of the datacenter/server install. But it was awesome - you can take my word for it!
Here is where we are currently sitting:
{{< image src="/server_resource.jpg" caption="Current status" >}}
I'm pretty happy overall with how this has turned out. I have just finished configuring nightly off-site backups to Backblaze B2 (Remember: RAID is not a backup!).
Costs have gone up a little bit. It's still manageable for the time being and I plan on supporting this long term. If you want to help out, feel free to shout me a coffee @ [OpenCollective](https://opencollective.com/nz-federated-services).
<br/><br/>
We are currently hosting a few other ActivityPub (Fediverse) related sites you might be interested in!
[Pixelfed.nz](https://pixelfed.nz) - **Photos** - Photo Sharing. For Everyone.
[Mtrx.nz](https://mtrx.nz) - **Chat** - An open network for secure, decentralized communication.
[OpenEvents.nz](https://openevents.nz) - **Events** - Tool to help manage your events, your profiles and your groups.
[Peertube.nz](https://peertube.nz) - **Videos** - ActivityPub-federated video streaming platform using P2P directly in your web browser.


@@ -0,0 +1,39 @@
---
title: "GeoIP blocking countries using Nginx"
tags: ["nginx", "geoip", "spam", "geo block"]
date: 2024-04-06
---
Quick and easy way to block entire countries using simple nginx rules.
Note this is for Ubuntu/Nginx but may work on other systems.
Install the required packages & add the GeoIP config to nginx.
The GeoIP DB will be under `/usr/share/GeoIP/GeoIPv6.dat` (or `GeoIP.dat` for v4 only):
```shell
sudo apt install -y libnginx-mod-http-geoip geoip-database
echo 'geoip_country /usr/share/GeoIP/GeoIPv6.dat;' > /etc/nginx/conf.d/geoip.conf
```
Add this block under the main "http" block in nginx.conf:
```shell
# /etc/nginx/nginx.conf
map $geoip_country_code $allowed_country {
default yes;
BD no; # Country code to block - can list multiple
}
```
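If you want to sanity-check what country code an address resolves to, the `geoiplookup` tool (from the `geoip-bin` package) reads the same databases:
```shell
sudo apt install -y geoip-bin
geoiplookup -f /usr/share/GeoIP/GeoIP.dat 8.8.8.8
```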
Then we need to add a simple check in our site vhost inside the `server {` block, but before the `location /` block:
```shell
# /etc/nginx/sites-enabled/site.conf or /etc/nginx/conf.d/site.conf
if ($allowed_country = no) {
return 403;
}
```
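Before reloading, it's worth validating the config first:
```shell
sudo nginx -t
```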
Quick reload and boom! Done!
```shell
sudo systemctl reload nginx
```
Based on an older gist found on [GitHub here](https://gist.github.com/dunderrrrrr/8d3fced1f73de2d70ede38f39c88d215)


@@ -0,0 +1,24 @@
---
title: "Recovering from a root 'rm' and why you need backups"
tags: ["recovery", "backup", "proxmox"]
date: 2021-02-06
# featuredimagepreview: "/proxmox.jpg"
---
Last week I was finally getting around to fixing my backup tooling and validating the recovery procedures. I thought it would be a good idea to have rolling nightly backups on a secondary ZFS pool with weekly syncs to an external disk. I ended up using this exact command to purge the rolling backups:
> `$ find ${BACKUP_FOLDER}/* -mtime +${DAYS_TO_KEEP} -exec rm {} \;`
Before we go any further, I want to point out that the script defined the variable `${BACKUP_DIR}`, not `${BACKUP_FOLDER}`.
Can you see where this is going? Yeah.. with `${BACKUP_FOLDER}` unset, the shell expanded it to nothing and the find ran over `/*`, the entire filesystem. I didn't pick that up before running my new majestic backup script. Half an hour later I came back to pages of SSH logs resembling `/bin/rm: command not found`. Oops. I instantly knew what had happened: I'd completely written off my Proxmox host, and I was lucky it removed the `/bin/rm` binary before removing any actual data.
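For what it's worth, a couple of defensive lines in bash would have stopped this cold. A quick sketch (the path and retention value are placeholders for illustration):
```shell
#!/bin/bash
set -u # treat any unset variable as a hard error instead of expanding to ""

BACKUP_DIR="/mnt/backups/nightly" # hypothetical path
DAYS_TO_KEEP=7

# ${VAR:?} aborts with a clear error if VAR is unset or empty,
# so this line can never silently become `find /*`
find "${BACKUP_DIR:?}"/* -mtime +"${DAYS_TO_KEEP:?}" -exec rm {} \;
```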
Further investigation showed that I'd wiped most of `/bin` and the system was barely usable, but I still had rsync.. yay! I quickly shut down all of my containers/VMs and rsync'd `/etc` to my primary ZFS cluster before rebooting to a fresh Proxmox install. After reformatting my root drive and reinstalling from scratch, I copied `/etc/pve/lxc` and `/etc/pve/qemu-server` back into place, ran `zpool import {data,storage}` and gave it one more reboot.
I have never been so relieved to see all of my containers booting back up to their normal state. There were one or two other small things I had to fix post-install, such as reinstalling `nfs-kernel-server` and several other packages. Overall I was pretty lucky I could copy my container configs before reinstalling, and total downtime was under an hour.
### Backups are important
**Test your backups.**
Don't get caught out and expect everything to be there when you really need it.


@@ -0,0 +1,32 @@
---
title: "iCloud synced SSH configs on macOS"
tags: ["syncing", "config", "ssh"]
date: 2023-08-17
# featuredimagepreview: "/cathacker1.jpg"
---
Maintaining SSH configs can be a pain when you're hopping between multiple devices, even more so when syncing apps (Synology/NextCloud/etc.) are blocked on some networks/devices.
Make sure you have iCloud Drive enabled; the setup below will symlink your SSH config + key to a folder in iCloud Drive and sync them across devices.
iCloud Drive is located at `~/Library/Mobile Documents/com~apple~CloudDocs`. Put the `setup.sh` script below plus your config/id_rsa files in there, run it, and it will simply symlink them into `~/.ssh`. Warning: the setup script will overwrite existing config/id_rsa files.
Install
```bash
# Note the quotes: the path contains a space
cd ~/Library/"Mobile Documents"/com~apple~CloudDocs
mkdir macOS && cd macOS
# Create the setup.sh file below, copy in your .ssh/config and .ssh/id_rsa, then run it
chmod +x setup.sh
./setup.sh
```
setup.sh file
```bash
#!/bin/bash
# Symlink the synced config + key from this iCloud Drive folder into ~/.ssh
FOLDER=$(pwd)
ln -sf "$FOLDER/config" ~/.ssh/config
ln -sf "$FOLDER/id_rsa" ~/.ssh/id_rsa
```
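One gotcha: ssh refuses private keys that are readable by anyone else, and a symlink inherits the target file's permissions, so make sure the synced key stays locked down:
```bash
chmod 600 ~/Library/"Mobile Documents"/com~apple~CloudDocs/macOS/id_rsa
```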
Heavily inspired by [this post](https://leihao0.github.io/Sync-macOS-ssh-config-with-iCloud/)