mirror of
https://github.com/idanoo/m2.nz
synced 2025-07-02 21:32:15 +00:00
Update theme to Lynx
This commit is contained in:
parent df8776949c
commit 5b18cfd0c5
2716 changed files with 227 additions and 57948 deletions

48  m2.nz-20250607/content/posts/configuring_aprs.md  Normal file
@@ -0,0 +1,48 @@
---
title: "Configuring an APRS receiver with an RTL-SDR and Direwolf"
tags: ["aprs", "amateur radio", "direwolf"]
date: 2021-12-16
# featuredimagepreview: "/aprs.jpg"
---

What is APRS? Automatic Packet Reporting System! Essentially a VHF radio sending out GPS location pings in the 2M ham band, which can be seen at [aprs.fi](https://aprs.fi/). If you take a look at that link you will often see radiosondes and their reported data, weather stations, as well as a bunch of amateur operators transmitting data all over the world. Fascinating stuff.



I recently got my hands on an RTL-SDR v3, which claims much better accuracy and performance than previous models. Figured I'd give it a go receiving APRS and see what I could get. I did a bit of research and tested a few different packages, but ended up settling on [Direwolf](https://github.com/wb2osz/direwolf). For those interested, I'm running this on an Ubuntu 20.04 LXC container with USB passthrough for the SDR itself.

Here is a run-through of installing and configuring an APRS feeder with an SDR and Direwolf.

```shell
# Install required packages
apt-get install rtl-sdr direwolf
```
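
If the dongle isn't showing up, `rtl_test` (installed as part of the rtl-sdr package) is a quick sanity check before going any further, assuming the SDR is already passed through to the container:

```shell
# Confirm the RTL-SDR is detected - Ctrl+C after a few seconds
# A "Found 1 device(s)" line means the driver can see the dongle
rtl_test
```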

My `sdr.conf` file for reference with callsign and location blanked out.

```shell
ACHANNELS 1
ADEVICE null null
CHANNEL 0

MYCALL <CALLSIGN>-10
IGSERVER aunz.aprs2.net
IGLOGIN <CALLSIGN> 23018

MODEM 1200
AGWPORT 8000
KISSPORT 8001

PBEACON sendto=IG delay=0:30 every=10:00 symbol="/-" lat=-12.3456 long=12.3456 alt=in_meter comment="RTLSDR"
```

This is my `run.sh` script, which I currently launch in `screen` until I get around to setting up a systemd service.
It simply takes the output from the APRS frequency and pipes it into Direwolf, launched with the `sdr.conf` config above.

```shell
#!/bin/bash

# -f 144.575M tunes to 144.575 MHz
# -g 20.7 is the gain value; run rtl_test to view available options

rtl_fm -f 144.575M -s 22050 -g 20.7 | direwolf -c /root/sdr.conf -r 22050 -D 1 -B 1200 -
```
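
For anyone wanting to skip the `screen` step, something along these lines should work as a systemd service. A rough sketch only, with a made-up unit name and assuming the script lives at `/root/run.sh`; adjust paths and user to suit:

```shell
# Hypothetical unit file - not part of the original setup
cat > /etc/systemd/system/aprs-rx.service <<'EOF'
[Unit]
Description=APRS receiver (rtl_fm piped into direwolf)
After=network-online.target

[Service]
ExecStart=/root/run.sh
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now aprs-rx.service
```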

If you set up the lat/long coordinates properly, you should be able to view your home station on [aprs.fi](https://aprs.fi/)!

24  m2.nz-20250607/content/posts/expand_opnsense_root_disk.md  Normal file
@@ -0,0 +1,24 @@
---
title: "Expanding OpnSense root disk (21.7+)"
tags: ["opnsense", "disk"]
date: 2024-01-13
# featuredimagepreview: "/opnsense_expand.jpg"
---

Expanding the root partition on an OpnSense VM:

- Expand the underlying VM disk
- Reboot
- Run the commands below to resize online
- Reboot (optional?)

```shell
# View the layout
gpart show
# Resolve the GPT issue before extending
gpart recover da0
# Make it use all the new space
gpart resize -i 3 da0
# Grow the FS to fill the new partition
growfs /dev/gpt/rootfs
```
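
To double-check the extra space actually landed, something like the below should do it. Note that `da0` is just what my VM presents; on VirtIO storage the disk will likely show up as `vtbd0` instead, so go off whatever `gpart show` reports:

```shell
# Confirm the partition table and filesystem picked up the new size
gpart show da0
df -h /
```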

38  m2.nz-20250607/content/posts/installing_latest_nginx.md  Normal file
@@ -0,0 +1,38 @@
---
title: "Nginx stable install on debian/ubuntu from official repos"
tags: ["nginx", "debian", "ubuntu"]
date: 2025-02-22
---

A lot of the time, the nginx package in the Ubuntu/Debian repositories is out of date, and sometimes we just want the latest version :shrug:.

First up, we want to install all the required software and pull the latest signing key.

**Run the following commands as the root user**
```shell
apt install -y curl gnupg2 ca-certificates lsb-release debian-archive-keyring
curl -s https://nginx.org/keys/nginx_signing.key | gpg --dearmor > /usr/share/keyrings/nginx-archive-keyring.gpg
```
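
Optionally, it's worth sanity-checking the key you just downloaded. The command below prints the fingerprint of the imported key, which you can compare against the one published on nginx.org (check their docs for the expected value rather than taking my word for it):

```shell
gpg --dry-run --quiet --no-keyring --import --import-options import-show \
    /usr/share/keyrings/nginx-archive-keyring.gpg
```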

The next step will vary depending on whether you are using Debian or Ubuntu.

**Debian:**
```shell
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/debian `lsb_release -cs` nginx" \
| tee /etc/apt/sources.list.d/nginx.list
```

**Ubuntu:**
```shell
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" \
| tee /etc/apt/sources.list.d/nginx.list
```
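
Optionally, pinning the nginx.org repo above the distro packages stops apt from "upgrading" you back to the distro build later on, roughly along these lines (the `99nginx` filename is just a convention):

```shell
printf "Package: *\nPin: origin nginx.org\nPin: release o=nginx\nPin-Priority: 900\n" \
| tee /etc/apt/preferences.d/99nginx
```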

Once we have the repository and keyring set up, it's time to install:
```shell
apt update && apt install -y nginx
```
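
A quick check that the package really came from nginx.org rather than the distro repo:

```shell
nginx -v
apt policy nginx
```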

Keep in mind, the latest version will have the vhosts configured under `/etc/nginx/conf.d` instead of the legacy `sites-enabled` folder.

56  m2.nz-20250607/content/posts/mastodon_influx.md  Normal file
@@ -0,0 +1,56 @@
---
title: "Mastodon.nz and the recent twitter influx"
tags: ["mastodon", "twitter", "self hosting"]
date: 2022-05-09
# featuredimagepreview: "/server1.jpg"
---

Where to begin.. It's been an interesting fortnight!

I started hosting [mastodon.nz](https://mastodon.nz) around January 2020 on my home 'server' for myself and a few friends. It started off as a small LXC container inside Proxmox with 2 cores and 2GB RAM allocated. It stayed this way until about 6 months ago, when repeated scheduled power outages meant I couldn't keep running it from my house while maintaining a reasonable uptime.

{{< image src="/kitchen_server.jpg" caption="\"Server\" - April 2021" src_s="/kitchen_server.jpg" src_l="/kitchen_server_l.jpg" >}}

Mid 2021, a good friend of mine offered a great deal on some actual hosting in a datacenter. Now we had 8vCPU + 8GB RAM to play with, time to migrate everything. After a few hours of mastering `rsync` and `pg_dump`, it was done and we were set for a while. At least that's what I thought..

April 2022 brought a nice little surprise - I started getting notifications of new users joining [mastodon.nz](https://mastodon.nz). Confused, I started to look at what had changed.. It didn't take long to realise this was only the beginning. I had to allocate more resources to the container to account for the higher throughput.

You can see in the image below that memory usage started to approach the allocated 3GB - I bumped it up to 4GB, then shortly after up to 6GB. That's as much as I could give it without moving to bigger hardware. (The drop at the end was a service restart after a version upgrade.)

{{< image src="/masto_memory.jpg" caption="Memory usage in mastodon.nz container" >}}

I thought that would be enough for now. Nope.

I got an email shortly after from Sendgrid advising that we were approaching their 100 emails/day free tier limit. Crap. I managed to upgrade the plan to account for the burst of registrations before any emails started to bounce.

{{< image src="/sendgrid.jpg" caption="Email throughput peaking around 400/day" >}}

It was about this time I figured we needed to look into the future of mastodon and whether we plan on maintaining it long term with a much higher capacity. I jumped on TradeMe and started working out what we could afford and what we needed to maintain decent growth over the next 12+ months. Boom! Found the perfect thing in our price range: a secondhand HP DL360p Gen8 with 32 cores and 80GB RAM. The only issue was disks. I've never been one to trust secondhand disks, so I ended up going over budget and bought 2 x 500GB EVO870s and 2 x 2TB EVO870s to run in a ZFS mirror for storage.

{{< image src="/server1.jpg" caption="New server" >}}

{{< image src="/server_prov.jpg" caption="Provisioning everything (Excuse the mess)" src_s="/server_prov.jpg" src_l="/server_prov_l.jpg" >}}

{{< image src="/server_disk.jpg" caption="Disk setup" >}}

After about 6 hours of configuring and installing everything required, I was ready to make a trip to a local datacenter to rack it (thanks to the previously mentioned friend!) and start the painful migration.

Unfortunately I don't have any photos of the datacenter/server install, but it was awesome - you can take my word for it!

Here is where we are currently sitting:

{{< image src="/server_resource.jpg" caption="Current status" >}}

I'm pretty happy overall with how this has turned out. I have just finished configuring nightly off-site backups to Backblaze B2 (Remember: RAID is not a backup!).

Costs have gone up a little bit. It's still manageable for the time being and I plan on supporting this long term. If you want to help out, feel free to shout me a coffee @ [OpenCollective](https://opencollective.com/nz-federated-services).
<br/><br/>
We are currently hosting a few other ActivityPub (Fediverse) related sites you might be interested in!

[Pixelfed.nz](https://pixelfed.nz) - **Photos** - Photo Sharing. For Everyone.

[Mtrx.nz](https://mtrx.nz) - **Chat** - An open network for secure, decentralized communication.

[OpenEvents.nz](https://openevents.nz) - **Events** - Tool to help manage your events, your profiles and your groups.

[Peertube.nz](https://peertube.nz) - **Videos** - ActivityPub-federated video streaming platform using P2P directly in your web browser.

39  m2.nz-20250607/content/posts/nginx_geo_block.md  Normal file
@@ -0,0 +1,39 @@
---
title: "GeoIP blocking countries using Nginx"
tags: ["nginx", "geoip", "spam", "geo block"]
date: 2024-04-06
---

Quick and easy way to block entire countries using simple nginx rules.
Note this is for Ubuntu/Nginx, but it may work on other systems.

Install the required packages and add the GeoIP database to the nginx config.
The GeoIP DB will be under /usr/share/GeoIP/GeoIPv6.dat (or GeoIP.dat for v4 only):
```shell
sudo apt install -y libnginx-mod-http-geoip geoip-database
echo 'geoip_country /usr/share/GeoIP/GeoIPv6.dat;' | sudo tee /etc/nginx/conf.d/geoip.conf
```

Add this block under the main "http" block in nginx.conf:
```shell
# /etc/nginx/nginx.conf
map $geoip_country_code $allowed_country {
    default yes;
    BD no; # Country code to block - can list multiple
}
```
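
For reference, blocking more than one country is just a matter of listing extra codes in the same map (the codes below are arbitrary examples):

```shell
map $geoip_country_code $allowed_country {
    default yes;
    BD no;
    CN no;
    RU no;
}
```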

Then we need to add a simple check in our site vhost inside the `server {` block, but before the `location /` block:
```shell
# /etc/nginx/sites-enabled/site.conf or /etc/nginx/conf.d/site.conf
if ($allowed_country = no) {
    return 403;
}
```
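
It's worth letting nginx validate the config first, in case of a typo in the map or the vhost:

```shell
sudo nginx -t
```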

Quick reload and boom! Done!
```shell
sudo systemctl reload nginx
```

Based on an older gist found on [GitHub here](https://gist.github.com/dunderrrrrr/8d3fced1f73de2d70ede38f39c88d215).

24  m2.nz-20250607/content/posts/proxmox_rm_recovery.md  Normal file
@@ -0,0 +1,24 @@
---
title: "Recovering from a root 'rm' and why you need backups"
tags: ["recovery", "backup", "proxmox"]
date: 2021-02-06
# featuredimagepreview: "/proxmox.jpg"
---

Last week I was finally getting around to fixing my backup tooling and validating the recovery procedures. I thought it would be a good idea to have rolling nightly backups on a secondary ZFS pool with weekly syncs to an external disk. I ended up using this exact command to purge the rolling backups:

> `$ find ${BACKUP_FOLDER}/* -mtime +${DAYS_TO_KEEP} -exec rm {} \;`

Before we go any further, I want to point out that the variable I had actually defined was `${BACKUP_DIR}`, not `${BACKUP_FOLDER}` - so the unset variable expanded to nothing and the find ran against `/*`.
Can you see where this is going? Yeah.. I didn't pick that up before running my new majestic backup script. Half an hour later I came back to pages of SSH logs resembling `/bin/rm: command not found`. Oops. I instantly knew what had happened: I'd completely written off my Proxmox host, and I was lucky it removed the `/bin/rm` binary before removing any actual data.
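
For what it's worth, a guard like the sketch below would have made the script die loudly instead of silently expanding an unset variable - a hypothetical rewrite, not my original script:

```shell
#!/bin/bash
set -u  # abort if any unset variable is referenced

BACKUP_DIR="/backup/rolling"   # example path
DAYS_TO_KEEP=7

# ${VAR:?} makes the expansion itself fail if the variable is empty or unset,
# so a typo in the name can never turn into "find /*"
find "${BACKUP_DIR:?}"/* -mtime +"${DAYS_TO_KEEP}" -exec rm {} \;
```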

Further investigation showed that I'd wiped most of `/bin` and the system was barely usable, but I still had rsync.. yay! I quickly shut down all of my containers/VMs and rsync'd `/etc` to my primary ZFS cluster before rebooting to a fresh Proxmox install. After reformatting my root drive and reinstalling from scratch, I copied `/etc/pve/lxc` and
`/etc/pve/qemu-server` back into place, ran `zpool import {data,storage}` and gave it one more reboot.

I have never been so relieved as when I checked and saw all of my containers booting up back to their normal state. There were one or two other small things I had to fix up post-install, such as reinstalling `nfs-kernel-server` and several other packages. Overall I was pretty lucky I could copy my container configs before reinstalling, and total downtime was under an hour.

### Backups are important

**Test your backups.**

Don't get caught out and expect everything to be there when you really need it.

32  m2.nz-20250607/content/posts/ssh_icloud_sync.md  Normal file
@@ -0,0 +1,32 @@
---
title: "iCloud synced SSH configs on macOS"
tags: ["syncing", "config", "ssh"]
date: 2023-08-17
# featuredimagepreview: "/cathacker1.jpg"
---

Maintaining SSH configs can be a pain when you're hopping between multiple devices, even more so when syncing apps (Synology/NextCloud/etc) are blocked on some networks/devices.

Make sure you have iCloud Drive enabled; this will symlink your SSH config + key to a folder in iCloud Drive and sync them across devices.

iCloud Drive is located at `~/Library/Mobile Documents/com~apple~CloudDocs`. By putting the setup script below plus your config/id_rsa files in there, it will simply symlink them into `~/.ssh`. Warning - the setup script will overwrite existing config/id_rsa files.

Install
```bash
cd ~/Library/Mobile\ Documents/com~apple~CloudDocs
mkdir macOS && cd macOS
# Create the setup.sh file below and copy in your .ssh/config and .ssh/id_rsa keys
chmod +x setup.sh
./setup.sh
```

setup.sh file
```bash
#!/bin/bash

# Symlink the config and key from this synced folder into ~/.ssh
FOLDER=$(pwd)
ln -sf "$FOLDER/config" ~/.ssh/config
ln -sf "$FOLDER/id_rsa" ~/.ssh/id_rsa
```
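
One thing to keep an eye on: ssh refuses a private key it considers too open, and the permissions it checks are those of the real file in iCloud Drive, not the symlink - so it may be worth tightening them on the synced copy:

```bash
cd ~/Library/Mobile\ Documents/com~apple~CloudDocs/macOS
chmod 600 id_rsa
```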

Heavily inspired by [This post](https://leihao0.github.io/Sync-macOS-ssh-config-with-iCloud/)