The little things

While our Snapmaker 2.0 has been wonderful to use, it tends to under-extrude just a little bit. I decided to bite the bullet and try to perform an Extruder Calibration.

I followed the steps in this post on the Snapmaker forums, and it was pretty easy. About 15 minutes of marking, extruding, and doing the math, and I found myself extruding at the proper rate. 100mm is now 100mm, and not the 86mm it had been.
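For anyone wanting the actual math, the new E-steps value is just the old value scaled by requested over actual extrusion. Here’s a rough sketch of what the procedure boils down to, in Marlin-style g-code from memory rather than the forum post verbatim, using the stock 212.21 steps/mm and my measured 86 mm:

    M109 S200      ; nozzle up to temperature first
    M83            ; relative extruder moves
    G1 E100 F100   ; ask for 100 mm of filament, slowly
    ; measure how much actually went in (86 mm in my case), then:
    ; new_steps = old_steps * requested / actual = 212.21 * 100 / 86 ≈ 246.76
    M92 E246.76    ; set the new extruder steps-per-mm
    M500           ; save to EEPROM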

Now, as with every 3D-print change, there’s balancing that needs to happen. I had tweaked my profile settings enough that my prints were turning out pretty good. Now they’re over-extruded. This is where the balancing comes into play. Now to figure out what to un-tweak.

The first step is to make sure the other hardware settings are correct, so I’ll be working on Linear Advance first. With this tutorial video and this Marlin documentation, I hope to make some progress soon.
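If the Snapmaker firmware has Linear Advance compiled in (an assumption on my part), the whole exercise comes down to tuning a single K factor. The value below is a placeholder, not a recommendation:

    M900 K0.06   ; set the Linear Advance K factor (placeholder value)
    M503         ; report current settings to confirm it took
    M500         ; save to EEPROM once a K value prints cleanly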

Conversely, if I’d left things alone, it’d still be printing just fine for the most part…

Edit:

And on that note, after much testing, oddly enough, I was better off before. It looks like having an accurate E-step value causes over-extrusion on this machine. I’ve since gone back to the default 212.21 steps/mm that it ships with.

Creative Upgrades

It’s finally here! For Christmas as a gift to ourselves, we decided to upgrade from our XYZ Davinci Jr. 2.0 Mix to a Snapmaker 2.0 A350.

The original printer we have is good, but it’s got a very small build area for some of the things I wanted to design and print. The color-mix option is really cool, but its biggest faults are the NFC-chipped filaments that are required for use, and the software used to slice the prints.

The NFC chips are a neat idea: they pre-set the heating information and keep loose track of how much filament is supposed to be left on the spool. However, if you try to load filament that isn’t chipped, it won’t print. You can mount a pair of chips to the printer after removing them from the spools, but then you have to deal with the persistent warnings that your spools “are most likely empty”.

We’ll still keep it around, but it’s time to upgrade.

Enter the Snapmaker 2.0 A350, a nice large 3-in-1. The build area on this thing is HUGE compared to what we had before. Assembly wasn’t too bad; working at a leisurely pace, I had it up and running in about an hour.

The initial calibration took a few tries after the firmware upgrade, mostly because I’m used to dealing with a nozzle that’s a little larger. The nozzle on this is VERY fine.

Currently my first print is chirping away on the print bed as the actuators move everything around. Yes, it’s very very chirpy.

Looking forward to all the projects this new system will be able to make happen!

Conference Pi

Idea:

Use a Pi hooked up to the living-room TV with a webcam to make family and group video calls easier to manage without having to frequently rearrange computer locations.

Parts:

  • Raspberry Pi 4B 8G
    • Ubuntu arm64 w/Xubuntu Desktop
  • HDMI to miniHDMI cable
  • USB-C power adaptor
  • Webcam
    • USB Webcam
    • Pi Cam
  • Connectivity Options:
    • External keyboard & mouse (less optimal)
    • VNC (doable)
    • Barrier (preferred)

First up is the microphone-enabled USB webcam.
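Before parking it at the TV, a quick sanity check from a terminal on the Pi confirms the camera and mics show up, assuming the usual v4l-utils and alsa-utils packages:

    sudo apt install v4l-utils alsa-utils   # if not already present
    v4l2-ctl --list-devices                 # webcam should show up with its /dev/video* nodes
    arecord -l                              # the webcam's mic should show up as a capture device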

Update:

So close! The webcam was seen, the mics picked up sound from across the room just fine, and everything was beautiful on the initial tests. That is, until I joined a Google Meet. Video in both directions was good, but the Pi began to choke on the audio processing: no sound in or out.

TFW: 3D printing

That feeling when you look at your print and realize that the basement is too cold for printing from here on out, until spring.

Will have to make a deal with the family to see about moving it upstairs to resume printing of ornaments.

Adventures in Ansible

Recently I had some fun with the homelab and Ansible. While getting nrpe to work with Nagios, I found myself on one box, testing and updating. After I got the nrpe.cfg set up just right, I started the daunting task of pushing the new file out to the rest of the Linux hosts.

Even over NFS, it’s daunting.

Enter Ansible. I already had ssh keys set up and sudo access across the board. Five minutes later I had an Ansible playbook that pulled the updated file to the NFS mount, then turned around and copied it into place on each host, with a service restart after.
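Something along these lines. This is a from-memory sketch rather than the exact playbook, with the host group, paths, and service name as stand-ins:

    ---
    - hosts: linux
      become: yes
      tasks:
        - name: Copy the updated nrpe.cfg into place
          copy:
            src: /mnt/nfs/configs/nrpe.cfg    # staged copy on the shared mount
            dest: /etc/nagios/nrpe.cfg
            owner: root
            group: root
            mode: '0644'
          notify: restart nrpe

      handlers:
        - name: restart nrpe
          service:
            name: nagios-nrpe-server
            state: restarted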

Pretty darn slick, if I do say so myself.

I didn’t post about it, but I also have a sysprep script for fresh Raspbian and Ubuntu installs that sets up my account and copies my public key into place, along with a host of app installs and service updates.

The process of setting up a new Pi is now cake. The only prerequisites are enabling ssh and installing the avahi-daemon so the Pi can be found by its default system name.
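For reference, those two prerequisites on a fresh image boil down to the following. The /Volumes path assumes the SD card is mounted on the MacBook; adjust for wherever the boot partition lands:

    # enable ssh on first boot by dropping an empty file on the boot partition
    touch /Volumes/boot/ssh
    # once the Pi is up, install avahi so it answers at its default name (e.g. raspberrypi.local)
    sudo apt update && sudo apt install -y avahi-daemon
    ping raspberrypi.local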

Adventures in Nagios (Version 4.4.6)

So, for a while now I’ve been wanting to try out Nagios on my home lab. But until recently, I never had a real reason to dig any further than “yep, got it to install and see the localhost”, especially while I was riding high on ESXi on a real server. Everything I could want to monitor was either my laptop or already monitored through ESXi.

Recently though I downgraded (is it though?) to a few Pi-4Bs, dug out my Pi-2B, and repurposed some laptops I had kicking around. I now have a variety of hardware to monitor.

The Pi-2B (Raspberry Pi 2 Model B Rev 1.1) became my monitoring server. I put Raspbian GNU/Linux 10 (buster) armv7l on it as it’s what was recommended, and went to town. Decided to build from source rather than rely on what is in the repo (was it even in the repo?). Install was pretty easy and localhost was found and all green.
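For the curious, the source build was roughly the standard quickstart sequence. This is from memory, so double-check the configure flags and Apache paths against the Nagios docs for your distro:

    tar xzf nagios-4.4.6.tar.gz && cd nagios-4.4.6
    ./configure --with-httpd-conf=/etc/apache2/sites-enabled
    make all
    sudo make install-groups-users
    sudo usermod -a -G nagios www-data
    sudo make install
    sudo make install-daemoninit
    sudo make install-commandmode
    sudo make install-config
    sudo make install-webconf
    sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin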

I’ll admit, digging through the config files is NOT a fun time. Nagios lets you either split everything up so that everything has its own config, OR just dump it all into one single massive config.

The config schematic I borrowed is from work, before they went to a neat managed setup using RackTables, Puppet, and a few other more advanced IT toys. (I might look into RackTables myself just for the pain of it.)

├── cgi.cfg
├── htpasswd.users
├── nagios.cfg
├── objects
│   ├── commands.cfg
│   ├── contacts.cfg
│   ├── localhost.cfg
│   ├── monitoring
│   │   ├── groups
│   │   │   ├── hw_printers.cfg
│   │   │   ├── os_linux.cfg
│   │   │   ├── os_storage.cfg
│   │   │   ├── os_windows.cfg
│   │   │   ├── srv_web.cfg
│   │   │   └── srv_workstation.cfg
│   │   ├── hosts
│   │   │   ├── linux.cfg
│   │   │   ├── printers.cfg
│   │   │   ├── storage.cfg
│   │   │   └── windows.cfg
│   │   └── services
│   │       ├── linux.cfg
│   │       ├── printer.cfg
│   │       ├── service_grups.cfg
│   │       ├── storage.cfg
│   │       ├── web.cfg
│   │       ├── windows.cfg
│   │       └── workstation.cfg
│   ├── printer.cfg
│   ├── switch.cfg
│   ├── templates.cfg
│   ├── timeperiods.cfg
│   └── windows.cfg
├── resource.cfg
└── workspace.code-workspace

This layout made it MUCH easier for me to figure out what I wanted to monitor and how, without making things either too simplistic or too cumbersome. There’s just enough complexity that every now and then I need to backtrack to make sure I’m tweaking the right thing. I gave my account access to the config directory so I could use VSCode to help juggle the file names, config names, group names, all the names!
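The nice part is that nagios.cfg only needs a few directives to pick up the whole tree, since cfg_dir pulls in every .cfg file under a directory, subdirectories included. Something like this, assuming the default source-install path:

    # /usr/local/nagios/etc/nagios.cfg
    cfg_file=/usr/local/nagios/etc/objects/commands.cfg
    cfg_file=/usr/local/nagios/etc/objects/contacts.cfg
    cfg_file=/usr/local/nagios/etc/objects/templates.cfg
    cfg_file=/usr/local/nagios/etc/objects/timeperiods.cfg
    cfg_dir=/usr/local/nagios/etc/objects/monitoring

A quick nagios -v /usr/local/nagios/etc/nagios.cfg after each round of tweaking keeps the backtracking honest.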

That was a nice exercise, to be sure! I’ve already started thinking of ways to streamline the configs just a little bit, but I haven’t fully decided on that.

Now, time to up the challenge a little bit with NRPE. The daemon for nrpe 3.2.1 is available in the Raspbian repos, but the Ubuntu repos carry 4.0.0, which turned out to be a bit of a problem: the newer version ignores packets from version 3.x. You have to make sure your check_nrpe command uses the -2 flag so it only sends version 2.x packets.
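In practice that just means the flag rides along in the command definition (and in any manual test). The plugin path here is the Raspbian package default, so adjust to taste:

    # quick manual test from the monitoring Pi against an Ubuntu host
    /usr/lib/nagios/plugins/check_nrpe -2 -H some-ubuntu-host -c check_load

    # and the matching command definition on the Nagios side
    define command {
        command_name    check_nrpe
        command_line    /usr/lib/nagios/plugins/check_nrpe -2 -H $HOSTADDRESS$ -c $ARG1$
    }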

At some point I’ll try upgrading the server nrpe version to 4.x and see if it’ll talk to the 3.x clients, but today is not that day.

Migrations and Upgrades

Well, it’s only an upgrade if you look at the power the web server now has.

I have finally bitten the bullet and taken steps to retire the ESXi server that had been hosting my web server and my Windows VM. I will miss the compute power that I had at the ready, but anything that needs that much power could probably be done on the new MacBook.

“But what did you replace it with?”

That’s a very good question. The answer would be two Raspberry Pi 4Bs. I’m still working out whether I should bother replacing the workstation, but the web server is doing quite nicely on the new hardware. In fact, with the SD card as storage and with a comparable number of cores and amount of RAM, the CLI reaction time is MUCH faster. It also doesn’t have the connection lag that the 2B has.

While this migration will mean less of a power draw on the house, it does put some limitations on what I can do without purchasing more hardware, but that’s okay. I don’t need a whole lot. Right now my biggest concern is how long the SD card will last before it needs to be replaced.

I am enjoying the progress that is being made in the Pi form-factor systems. I’ve got some ideas as to what I can do with the other system, which spins off into what I can do with a couple of the other systems I have kicking around the house.

Backups

I really have to hand it to the WordPress backup plugin I’ve been using. I was able to copy over the base WordPress setup, create bare mysql entries, and restore everything into place quite nicely. Quite nicely indeed! Nothing was lost, and the most recent WordPress has checks built in to make sure things run as they should.
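The “bare mysql entries” are nothing fancy: just an empty database and a user for wp-config.php to point at before the plugin restores into it. Names and password below are placeholders:

    CREATE DATABASE wordpress DEFAULT CHARACTER SET utf8mb4;
    CREATE USER 'wp_user'@'localhost' IDENTIFIED BY 'change-me';
    GRANT ALL PRIVILEGES ON wordpress.* TO 'wp_user'@'localhost';
    FLUSH PRIVILEGES;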

BTT for MacOS

Today I’d like to sing the gospel of BTT (BetterTouchTool). I have seen the light, and it is BTT.

Upon hearing news that my favorite browser’s maker cut about 250 jobs, I was left in a bit of a panic. My favorite feature was a hidden one that only worked on macOS: a two-finger twist that would switch tabs. To say I LOVE this feature is an understatement. I scoured other browsers to see if they had it available, and nope, nobody else did.

Enter BTT.

I downloaded it figuring I’d get full use out of the 45-day trial (nope), and began tinkering with what I could do with it. Not only did it have BST (BetterSnapTool) embedded in it (which I’ve been using for a while), but with a little research I found the command that Chrome used to change tabs and made it system-wide. Not only did it work in Chrome, Safari, and Firefox, but it also worked in tab-enabled Finder windows and anything else with tabs, like iTerm. I was ecstatic and bought a lifetime license.

But wait, there’s more…

One of the issues I’ve had with mice and macOS is that I tend to make use of multiple desktops. It’s easy enough to switch between them with the trackpad, but it was painful with a mouse. BTT to the rescue! I found that it had options for “Generic Mice” and was able to program the mouse-wheel tilt (yes, not only do they scroll, but some tilt!) to switch to the space to the left or right accordingly. So now I can use my Dell BT mouse with my MacBook, which is a nice option to have. It’s also nice knowing that if I ever replace the mouse, it won’t take much to enable the same features without jumping through too many hoops.

Thank you, BTT, for helping me have more power over my whole computer.

New toys – USB Camera

I finally upgraded my video only streaming camera from a tiny pico-cam to a KLP-USB500W02M-SFV(2.8-12mm).

The pico cam worked well enough, but it’s auto-focus only and about six years old. The auto-focus couldn’t decide whether to lock onto the paper or the pen; as my hand and pen moved across the paper, the focus would shift.

The KLP is manual focus, with a wide-to-telephoto adjustment as well as a privacy shutter. Additionally, it mounts from either the top or the bottom, so I don’t have to fiddle with flipping or rotating the image while streaming. I can use the image as-is with little to no tweaking.

I’m greatly looking forward to trying it out!

more automount madness

So, I’m dense, but I finally found out how to properly mount NFS shares on macOS. Turns out it’s much simpler than what I’d been doing, and hopefully it will keep automount from pegging my CPU like crazy.

Permanently mount an NFS share:

  1. Connect to the NFS share as explained in the previous procedure.
  2. Open System Preferences > Users & Groups.
  3. Select your user in the left panel and click Login Items in the right-hand panel.
  4. Click the plus sign and navigate to the connected NFS share.
  5. Click Add.
  6. Close System Preferences.
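For step 1, “connect” is just Finder’s Go > Connect to Server with an nfs:// URL, or the terminal equivalent below. The resvport option is the usual gotcha when the server insists on privileged ports; the server name and paths here are examples:

    # Finder: Cmd-K, then nfs://nas.local/volume1/share
    # or from a terminal:
    mkdir -p ~/nfs/share
    sudo mount -t nfs -o resvport,rw nas.local:/volume1/share ~/nfs/share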