Tag Archives: homelab

Uninterruptible Power Supply (UPS)

Do you know what is a REALLY good idea to have when your neighborhood has occasional brownouts?

Take a guess.

Do you know what helps protect your 3D printer and octoprint server from someone accidentally flipping the wrong light switch?

Go ahead, guess.

Do you know what helps you connect to the rest of your network so you can hurriedly shut things down when you have a neighborhood power outage?

Guess yet?

Did you guess a UPS? Good for you! You got it!

Yes, after too many close calls and losing 8 hours on a 12hr print, I’ve finally put the Snapmaker on a UPS, as well as the router, web server, and cable modem.

Now, I know that in all likelihood, if there’s a neighborhood outage the cable line will lose power too, so I’m really not betting on having 30min of internet access while waiting for the power to come back on. (though that would be pretty cool)

To Do:
Set up some sort of monitoring on the big UPS downstairs that will tell the two Pis to shut down 15 minutes after the power goes out, so I don’t have to scramble in the middle of the night.
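A rough sketch of what that could look like, assuming NUT (Network UPS Tools) ends up serving the UPS status and each Pi polls it with upsc — the UPS and host names here are just placeholders:

#!/bin/bash
# Hypothetical watchdog: poll a NUT server once a minute and shut this Pi down
# after the UPS has been on battery for a sustained stretch.
UPS="bigups@nut-host.local"    # placeholder name for the big UPS downstairs
GRACE_MINUTES=15

on_battery=0
while true; do
    status=$(upsc "$UPS" ups.status 2>/dev/null)
    if [[ "$status" == OB* ]]; then            # OB = on battery
        on_battery=$((on_battery + 1))
        if (( on_battery >= GRACE_MINUTES )); then
            logger "UPS on battery for ${GRACE_MINUTES} minutes, shutting down"
            sudo shutdown -h now
        fi
    else
        on_battery=0                           # power is back, reset the counter
    fi
    sleep 60
done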

Switching it up

The other day, the Raspberry Pi Foundation finally made their 64-bit version available and the new standard for their installs.

So, you know what that means…

For as much as I enjoy Ubuntu, I’d really like to run the OS built for the Pi on the Pi, so a migration was in order!

Now, this was barely a week after release… (danger!)

Logged into the PiServer, started the backup process for all of the docker-compose files and directories, in case something went sideways.
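(The “backup process” is nothing fancier than something like this, with ~/docker standing in for wherever the stacks actually live:)

# Snapshot the compose files and their data directories before touching anything.
tar czf ~/backups/docker-stacks-$(date +%F).tar.gz -C ~ docker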

Powered it all down and unplugged the USB-SATA adapter for the drive.

Found a MicroSD card, imaged it up with the new RaspPix64 image, booted it up, and went to town getting it set up. Updated SSH keys, system name, locale, time/date, etc. All the good stuff. Enjoyed having a desktop (yes, went with full instead of lite) for the ease of drag-and-dropping a lot of files from the old OS install.

And then I went to install Docker.

At the time, Raspbian arm64 was not listed as supported by Docker. (As of this writing, it still isn’t.)

But, I’d made it this far, so I gave it a go anyway!

And it worked! Turns out they’d been prepping it in the background and just didn’t update their site documentation.
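If you want to try the same gamble, one common route is Docker’s convenience script; this is a sketch of that path rather than a record of exactly what I ran:

# Docker's convenience script detects the distro/arch and sets up the engine.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER   # log out/in afterwards to pick up the group

Either way, docker version came back looking healthy: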

Client:
 Version:           20.10.5+dfsg1
 API version:       1.41
 Go version:        go1.15.15
 Git commit:        55c4c88
 Built:             Sat Dec 4 10:53:03 2021
 OS/Arch:           linux/arm64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.5+dfsg1
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.15.15
  Git commit:       363e9a8
  Built:            Sat Dec 4 10:53:03 2021
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.4.12~ds1
  GitCommit:        1.4.12~ds1-1~deb11u1
 runc:
  Version:          1.0.0~rc93+ds1
  GitCommit:        1.0.0~rc93+ds1-5+b2
 docker-init:
  Version:          0.19.0
  GitCommit:

We’re up and running!

Feeling confident at this point, did a test run of all of the containers and made sure they were able to do their thing, and it was flawless. No need to tweak anything aside from a few file permissions due to the copy.
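(The test run is about as simple as it sounds; with each stack in its own directory it’s roughly this, with ~/docker again as a stand-in for my layout:)

# Bring every compose stack back up and eyeball the results.
for stack in ~/docker/*/; do
    (cd "$stack" && docker-compose up -d)
done
docker ps    # everything should come up running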

Used the built-in SD Card Copier to copy the full install over to the SSD.

Powered it back down, ejected the MicroSD card, and powered it back up. Booted just as expected off of the SSD. Started all of the containers.
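(A quick sanity check that the root filesystem really is the SSD and not a lingering card; the device names will vary:)

# Root should show up as the USB/SATA disk (e.g. /dev/sda2), not /dev/mmcblk0p2.
findmnt -no SOURCE /
lsblk -o NAME,SIZE,MOUNTPOINT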

Now it’s back up and running as if it had been running that way all along. Which is as it should be with Docker containers, really.

Adventures in Ansible

Recently had some fun with the homelab and Ansible. While getting nrpe to work with Nagios, I found myself on one box, testing and updating. After I got the nrpe.cfg set up just right, I started the daunting task of pushing the new file out to the rest of the Linux hosts.

Even over NFS, it’s daunting.

Enter Ansible. I already had SSH keys set up and sudo access across the board. Five minutes later I had an Ansible playbook that pulled the updated file to the NFS mount, then turned around and copied it into place on every host, with a service restart after.
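I won’t reproduce the playbook verbatim, but the working parts boil down to something like these ad-hoc equivalents (the inventory group, paths, and service name are placeholders for mine):

# Push the freshly tuned nrpe.cfg from the NFS mount out to every Linux host,
# then bounce the daemon so it picks up the change.
ansible linux -b -m copy \
  -a "src=/mnt/nfs/configs/nrpe.cfg dest=/etc/nagios/nrpe.cfg owner=root group=root mode=0644"
ansible linux -b -m service -a "name=nagios-nrpe-server state=restarted"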

Pretty darn slick, if I do say so myself.

I didn’t post about it, but I also have a sysprep script for Raspbian and Ubuntu fresh installs that sets up my account and copies the public key into place, along with a host of app installs and service updates.

The process of setting up a new Pi is now cake. The only prerequisites are enabling SSH and installing the avahi-daemon so it can be found by its default system name.
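For reference, those two prerequisites on a fresh image amount to roughly this (run on the Pi itself, or enable SSH by dropping an empty file named ssh on the boot partition before first boot):

# Enable the SSH server and install avahi so the Pi answers at raspberrypi.local.
sudo systemctl enable --now ssh
sudo apt update && sudo apt install -y avahi-daemon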

Adventures in Nagios (Version 4.4.6)

So, for a while now I’ve been wanting to try out Nagios on my homelab. But until recently, I never had a real reason to dig any further than “yep, got it to install and see the localhost”, especially while I was riding high on ESXi on a real server. Everything I could want to monitor was either my laptop or already monitored through ESXi.

Recently, though, I downgraded (is it though?) to a few Pi-4Bs, dug out my Pi-2B, and repurposed some laptops I had kicking around. I now have a variety of hardware to monitor.

The Pi-2B (Raspberry Pi 2 Model B Rev 1.1) became my monitoring server. I put Raspbian GNU/Linux 10 (buster) armv7l on it, as it’s what was recommended, and went to town. Decided to build from source rather than rely on what is in the repo (was it even in the repo?). The install was pretty easy, and localhost was found and all green.
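The build itself is the usual configure-and-make routine; from memory it went roughly like this (the extra install-* targets for the init script, web config, and so on vary between versions, so the official quickstart is the authority):

# Build and install Nagios Core 4.4.6 from the release tarball.
tar xzf nagios-4.4.6.tar.gz && cd nagios-4.4.6
./configure
make all
sudo make install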

I’ll admit, digging through the config files is NOT a fun time. Nagios lets you either split everything up so that everything has its own config, OR just dump it all into one single massive config.

The config schematic below is one I borrowed from work, from before they went to a neat managed thing using RackTables, Puppet, and a few other more advanced IT toys. (Might look into RackTables myself just for the pain of it.)

├── cgi.cfg
├── htpasswd.users
├── nagios.cfg
├── objects
│   ├── commands.cfg
│   ├── contacts.cfg
│   ├── localhost.cfg
│   ├── monitoring
│   │   ├── groups
│   │   │   ├── hw_printers.cfg
│   │   │   ├── os_linux.cfg
│   │   │   ├── os_storage.cfg
│   │   │   ├── os_windows.cfg
│   │   │   ├── srv_web.cfg
│   │   │   └── srv_workstation.cfg
│   │   ├── hosts
│   │   │   ├── linux.cfg
│   │   │   ├── printers.cfg
│   │   │   ├── storage.cfg
│   │   │   └── windows.cfg
│   │   └── services
│   │       ├── linux.cfg
│   │       ├── printer.cfg
│   │       ├── service_grups.cfg
│   │       ├── storage.cfg
│   │       ├── web.cfg
│   │       ├── windows.cfg
│   │       └── workstation.cfg
│   ├── printer.cfg
│   ├── switch.cfg
│   ├── templates.cfg
│   ├── timeperiods.cfg
│   └── windows.cfg
├── resource.cfg
└── workspace.code-workspace

This layout made it MUCH easier for me to figure out what I wanted to monitor and how without making it too easy or too cumbersome. There’s just enough complexity that every now and then I need to backtrack to make sure I’m tweaking things right. I gave my account access so I could use VSCode to help juggle the file names, config names, group names, all the names!
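The glue that makes the split layout work is a cfg_dir entry in nagios.cfg pointing at the monitoring directory, so Nagios recurses into every .cfg underneath it; a verify run then catches my backtracking mistakes before a restart does. Something like this, with paths assuming the default source-install location:

# Tell Nagios to recurse into the split object configs, then sanity-check the lot.
echo 'cfg_dir=/usr/local/nagios/etc/objects/monitoring' | sudo tee -a /usr/local/nagios/etc/nagios.cfg
sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg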

That was a nice exercise, to be sure! I’ve already started thinking of ways to streamline the configs just a little bit, but I haven’t fully decided on that.

Now, time to up the challenge a little bit with NRPE. The nrpe 3.2.1 daemon is available in all of the Raspbian repos, but not in the Ubuntu repos, which are at 4.0.0, and that turned out to be a bit of a problem. The newer version ignores packets from version 3.x, so you have to make sure your check_nrpe command uses the -2 flag so it only sends version 2.x packets.
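The symptom is easy to reproduce from the monitoring server with something like this (the plugin path is the Raspbian package default; the host name is a placeholder):

# Against an Ubuntu box running the 4.0 daemon: the first call gets ignored,
# the second forces version 2 packets and gets the daemon's version string back.
/usr/lib/nagios/plugins/check_nrpe -H ubuntu-box.local
/usr/lib/nagios/plugins/check_nrpe -2 -H ubuntu-box.local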

At some point I’ll try upgrading the server nrpe version to 4.x and see if it’ll talk to the 3.x clients, but today is not that day.