Categories
Mac

How to set an APFS quota to a Time Machine Volume

My main computer is a 2020 M1 Mac mini, which is so small, quiet and cool that I can keep it in a drawer of my desk; since I live in a small apartment, that desk sits in my bedroom.

As part of my backup regime, I had an old 2.5” USB drive connected to it for Time Machine. The problem is that my Mac would often wake up in the middle of the night to perform a backup. That is nice, but it also means that the drive would start grinding, sometimes waking me up or preventing me from falling asleep in the first place.

I was fed up with it, and I switched to a 1 TB SSD, which is totally silent, just like my Mac.

Having an entire drive dedicated to Time Machine only, though, was a bit excessive, and I wanted to be able to keep some files on it as well. I know I could just put them in the same APFS partition Time Machine is using, but given how finicky it can be at times, I preferred leaving it alone and resorting to a nice feature of APFS: having different volumes on the same disk. Volumes are not like partitions, in that they are not fixed-size: they all share the available space on the drive. This way you could have 10 volumes on a single 1 TB drive, and each of them would see 1 TB of free space in the beginning.

That is all great, and you can easily add volumes to an APFS drive through Disk Utility: just select the drive from the sidebar and click the + button in the toolbar. You also get a handy feature which I wanted to enable for my Time Machine volume: the ability to set a quota, which is basically a hard limit on the size of that volume. I wanted to make sure Time Machine could use at most 500 GB of the 1 TB, while allowing my data volume to use all the available space if necessary.

Time Machine’s preference pane showing the 500 GB volume I created on a 1 TB APFS-formatted disk

The problem is that when I went to the Time Machine preference pane and selected my newly created volume as its target, it would reformat it and remove its quota, thus giving it the ability to expand to the whole drive. What’s worse, it even erased all the contents of my other volume, which I’m afraid is a bug.

The solution, as always, was to resort to the Terminal. I found my answer in this helpful Stack Exchange thread: you have to add the volume using the diskutil command, applying the T role, which stands for Time Machine and spares the volume from being reformatted when you later select it as the backup destination.

diskutil ap addvolume disk7 APFSX 'TimeMachine USB'  -passprompt -passphraseHint 1Password -quota 500g -role 'T'

You will, of course, need to replace disk7 with your disk identifier (which you can find in Disk Utility or through the diskutil list command). As you can see I’m also encrypting my volume and setting a passphrase hint, but if you don’t need that and are fine with your backups sitting in the clear on your drive, you can omit the -passprompt and -passphraseHint options.
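
If you are unsure which identifier to use, or want to double-check that the quota stuck, these are plain diskutil invocations (nothing here is specific to my setup):

# List external physical disks and their identifiers (disk7 in my case)
diskutil list external physical

# After creating the volume, list the APFS container; the new volume's quota should show up here
diskutil apfs list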

Disk Utility showing a 500 GB Time Machine volume on a 1 TB APFS-formatted SSD
Success! Disk Utility is now showing a 500 GB Time Machine volume on a 1 TB APFS-formatted SSD
Categories
Linux

Fix portainer-agent restart loop

For a while I’ve had two Docker hosts on my home network, one with Portainer installed and the other with portainer-agent, so that I can manage both from a single web interface.

Portainer Logo

The one hosting Portainer has always had watchtower installed to automate the upgrade of all containers; the second one did not. Once I realized it, I set up watchtower on it as well, and that’s when the issues began.

The issue

portainer-agent would enter a restart-loop, filling the logs with entries such as:

2021/10/31 07:53:21 [INFO] [main] [message: Agent running on Docker platform]
2021/10/31 07:53:21 [ERROR] [main,docker] [message: Unable to retrieve local agent IP address] [error: Error: No such container: 6d3437033dce]

The weird thing is that it kept mentioning container 6d3437033dce, while portainer-agent was actually running as 9bf8b8a94d03.

I suspected it was due to something watchtower was doing when recreating containers after pulling the latest version.

A few searches later, I ended up finding this GitHub issue for portainer-agent, where jackman815 found that the issue was related to the way watchtower assigns the same hostname to the new containers.

I found the problem is about the watchtower, it clones the container configs included hostname when upgrading the container.

When upgrade/re-create the container, by default the docker daemon will assign a new hostname to the container and update internal dns, but the watchtower won’t update the container hostname after the container upgrade so the agent will try to look up the old hostname and it would be getting nxdomain result by docker internal dns.

The solution

As suggested in that issue, the solution is assigning a static hostname to the portainer-agent container.

To do so, stop and delete the old container, then re-create it with the --hostname option.

docker stop portainer_agent
docker rm portainer_agent
docker run -d \
  -p 9001:9001 \
  --name portainer_agent \
  --hostname portainer_agent \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent:latest

Of course you’ll have to adjust the container name/hostname to match your setup.
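
To verify that the new container really has a fixed hostname (rather than a generated one), you can inspect it; this is plain docker inspect, nothing Portainer-specific:

# Should print "portainer_agent" instead of a random container ID
docker inspect --format '{{ .Config.Hostname }}' portainer_agent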

Categories
Mac

Fix garbled output of the tree command on macOS

Recently I needed to produce a text file containing the directory structure of a folder and all its contents.
The tree command is just what I needed. A quick brew install tree installed it on my Mac.

One would think that it would just be a matter of running

tree /path/to/folder > /path/to/tree.txt

That’s true for small directories (or perhaps those with no special characters in their file names); for others I got this lovely crap:

Screwed-up output of the tree command on macOS

Turns out the fix is pretty easy (although you lose non-ASCII characters in file names):

LC_ALL=C tree /path/to/folder > /path/to/tree.txt
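
If losing non-ASCII file names bothers you, tree’s --charset option may be a middle ground; I haven’t tested it as thoroughly as the LC_ALL trick, so treat this as a sketch:

# Keep UTF-8 file names, but draw the tree branches with plain ASCII characters
tree --charset=ascii /path/to/folder > /path/to/tree.txt
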
Categories
Miscellanea

Podcast chapters and Ableton Live

TL;DR Extract marker timestamps from Ableton Live .als project files into .cue files to generate podcast chapters with als2cue.

I’ve been recording and publishing podcasts for almost 10 years, and for most of the last decade I’ve been using Ableton Live to record and edit them. I was already familiar with Live from my “deejay period”, when I approached EDM music production, so it felt natural to keep using the tool I already knew.

The rise of podcast chapters

In the last few years, adding chapters to podcast files has become mainstream. It is very handy for listeners to jump around within an episode to reach the section they’re interested in. Heck, I even added automatic chapter listings to the CMS I developed for my podcast network (see an example in this episode, look at the “Capitoli” section).

But while it is great for listeners, the same cannot be said for podcasters. It requires significant effort to place and name chapters by hand, and I needed to automate the process a bit to make it doable without investing too much time in it.

Ableton Live Locators

Ableton Live includes a great feature, Locators, to help mark sections of an arrangement. I add them while recording to roughly mark when we change topics, then during editing I fine-tune their location.

Screenshot of Ableton Live project with many locators

See all the vertical lines? They’re Ableton Live Markers.

The thing is, when you export an .aiff file from Live, you don’t get the position of each marker in the file’s metadata like you do in Logic Pro, and my mp3 encoding app of choice, Forecast, cannot automatically insert chapter markers in the .mp3 file.

Extracting Locators from .als project files

Digging around, I found out that Ableton Live’s .als project files are basically just gzip-compressed XML files.
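
You can peek inside one from the shell; a quick sketch (the file name is just an example, and the exact XML element names may vary between Live versions, so adjust the grep accordingly):

# An .als project is gzip-compressed XML: decompress a copy to look inside
gzcat "MyEpisode.als" > "MyEpisode.xml"

# Rough look at the locator entries
grep -i -A 5 "locator" "MyEpisode.xml" | head -40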

I then wrote a Python script that takes the .als file as input, and spits out a .cue file containing all the timestamps. Why a .cue file, you ask? That’s because Podcast Chapters, an amazing Mac app by Fredrik Björeman, supports .cue files to create chapters, which can then be named easily through its GUI.

It has worked great for me for more than a year, but it is not really user friendly.

My brother needed to edit an episode of another podcast, and it was just too cumbersome to have him run the Python script, so I decided to build a really simple web interface around my script: als2cue_web was born.

als2cue_web, who dis?

The web app is available for everyone to use at als2cue.lucatnt.com: it just asks you to upload an .als file and returns a .cue file with each locator’s timestamp.

Screenshot of als2cue_web
als2cue_web’s sophisticated, hand-crafted interface in all its glory

It is open source, and thus can be easily self-hosted. It is available from my GitHub repo (pull requests welcome!) and as a ready-to-use Docker image, lucatnt/als2cue_web; you just need to expose port 80 of the container to a local port.
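
For example, something along these lines should get it running locally (the host port is arbitrary):

# Run the image and map the container's port 80 to local port 8080
docker run -d --name als2cue_web -p 8080:80 lucatnt/als2cue_web
# Then browse to http://localhost:8080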

Categories
Linux

GRUB: unknown filesystem on ZFS-based Proxmox

Error: unknown filesystem.
grub rescue>

This was the nice error that greeted me when I tried to (re)boot my Proxmox server.
What unknown filesystem? The server was still supposed to boot from the same ZFS disk it always booted from.

TL;DR it was due to the large_dnode feature getting enabled on a dataset I had manually created. GRUB does not support that feature. Create a new dataset with that feature explicitly disabled, move the data to it, delete the original and rename the new one. Done.

Some background

Since I’m space-constrained in my apartment, my Proxmox server is actually my everything server. It is my virtualization server, my router (it runs pfSense in a VM) and my NAS (it runs Samba in an LXC container).

For NAS duties, I decided a while back to create a dedicated dataset rpool/nas, which I then mounted in the Samba container. Without much thinking, I just ran a simple:

zfs create rpool/nas

Due to ZFS defaults, it had the dnodesize property set to auto. It was never an issue, until the other day.

What happened

Some file must have triggered a dnode size larger than the legacy 512 bytes in the dataset, which meant that GRUB could no longer read the drive.

I came to that conclusion by installing Proxmox in a VM on my Mac (with an ext4 boot drive, to avoid having to rename the pool), and attaching the server’s SSD through a SATA-USB adapter.

I changed the mountpoint of my server’s root dataset from / to /recovery, mounted /dev, /sys and /proc in there, and then chrooted into /recovery:

zfs set mountpoint=/recovery rpool/ROOT/pve-1
mount --rbind /dev /recovery/dev
mount --rbind /sys /recovery/sys
mount -t proc /proc /recovery/proc
chroot /recovery

I ran grub-probe -vvvv / to get some insight into why GRUB was failing, and one line was interesting:

grub-core/fs/zfs/zfs.c:2112: zap: name = org.zfsonlinux:large_dnode, value = 1, cd = 0

I read a bit online and I found out about the dnodesize thing. I usually like to link back to all the places that were useful in finding a solution to any problem I blog about, but this time it was just too difficult to keep track of everything, except for this discussion on the German Proxmox forums.

The solution

I ran zfs get -r dnodesize rpool to get a sense of the dnodesize values for all the datasets in the pool. They were all set to legacy, except for my rpool/nas dataset.
So I made a new dataset for my NAS data with dnodesize explicitly set to legacy, rsync’d everything from the old dataset into the new one, destroyed the old dataset and renamed the new one.

zfs create -o dnodesize=legacy rpool/nas2
rsync -av --progress /rpool/nas/ /rpool/nas2/
zfs destroy rpool/nas
zfs rename rpool/nas2 rpool/nas

Last step: moving back the mountpoint for `rpool/ROOT/pve-1`:

zfs set mountpoint=/ rpool/ROOT/pve-1

And then I moved the SSD back into my server.

That’s it. It just took an afternoon of cursing.

Categories
Linux

Moving Proxmox ZFS boot drive to a new disk

When I assembled my little Proxmox home server, mainly used for pfSense, Home Assistant, Nextcloud and a few other apps, I underestimated the amount of storage I needed. I went with a cheap 120 GB SSD, but it was pretty much always full. I then found a deal on a 960 GB Kingston A400 SSD, so I got it.

My Kettop Mi3865L6 runs Proxmox on ZFS

The thing is I didn’t really want to go through a complete reinstall of Proxmox, restore of my VMs/CTs and reinstallation of Zabbix Agent on the host itself. Thankfully, my whole drive is ZFS formatted, so I have access to the great zfs send and zfs receive commands to move stuff around.

The steps

  1. Connect the SSD with a USB-SATA adapter to a Virtual Machine running on my Mac, and install Proxmox on it. This takes care of the GRUB bootloader.
  2. On the main Proxmox install, shut down all CTs and VMs, and take a snapshot
  3. Connect the drive to Proxmox and import the pool with a different name
  4. ZFS send/receive the snapshot (recursively) to the new pool
  5. Export the pool
  6. Shut down Proxmox and swap the drives
  7. Power on Proxmox, fix the pool name, reboot
  8. Fix the bootloader and initial ramdisk
  9. Profit (?)

1. Proxmox install on the new SSD

So the first step is to install Proxmox on the new SSD, and the easiest thing I could think of was to use a simple USB3-to-SATA adapter to connect it to my Mac, and then pass it to a VM in Parallels with the Proxmox ISO mounted. I then proceeded with a regular install, choosing ZFS as the SSD’s filesystem. I could have done it all in a VM on the Proxmox server, but I didn’t bother.

2. Snapshot the old drive

Then I moved to the Proxmox server, shut down every VM and every CT, and took a recursive snapshot of the main pool (rpool is the default pool name on Proxmox):

sudo zfs snapshot -r rpool@[NAME-OF-THE-SNAPSHOT]

3. Connect the new SSD to Proxmox

After shutting down the VM I used to install Proxmox on the new SSD, I moved the USB3-SATA adapter to the Proxmox server.

First I needed to import the pool with a new name (rpoolUSB), since of course rpool was already taken.

sudo zpool import -d /dev
sudo zpool import [ID-OF-THE-POOL-ON-THE-NEW-SSD] rpoolUSB

4. Clone the old SSD onto the NEW one

Having just taken the snapshot on the old drive, it was just a matter of a ZFS send/receive, with -F to overwrite the destination pool. This operation left the bootloader intact, which is great.

sudo zfs send -R rpool@[NAME-OF-THE-SNAPSHOT] | sudo zfs recv -F rpoolUSB

5. Export the new pool

sudo zpool export rpoolUSB

6. Shut down Proxmox and swap the drives

Connect a display to your Proxmox server if you don’t have one, or connect through KVM if your server has IPMI capabilities.

7. Fix the pool name

Remember how we renamed the pool to rpoolUSB in step 3? Proxmox doesn’t like that. Or rather, it doesn’t know about that. So the boot process will fail, leaving you at a Busybox shell. Just import the pool giving it the usual rpool name and exit.

zpool import -d /dev
zpool import rpoolUSB rpool
exit

8. Fix the bootloader and initial ramdisk

The boot process now works fine, but it complains about some missing things. What’s needed is a fix of the initial ramdisk and possibly of the GRUB bootloader; I did both just to be on the safe side.

sudo update-grub2
sudo update-initramfs -u -k all

9. Profit

Let me know if you actually profited from this. I think you owe me 1% of your profits 😁

Categories
Linux

IPv6 route keeps getting re-added on Linux

One of my Raspberry Pis is connected to a VLAN through a cheap Netgear switch, and on that VLAN there is a /24 IPv4 network and a /64 IPv6 network.
On my regular (untagged) LAN, different IPv4 and IPv6 networks are used.

I noticed that somehow the Pi kept getting a route for the LAN /64 through its ethernet interface, without going through the router. That is not supposed to happen. As soon as I deleted the route (sudo ip route del IPv6_NET/64), it was re-added. This prevented the two subnets from talking to each other through the router, because the Pi did not send the traffic to the router, but tried to send it as if the other host were on the same network segment.
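
You can spot the offending route (and confirm it is on-link rather than via the router) with a plain ip command; eth0 here stands for whatever interface your Pi uses:

# List IPv6 routes on the interface; an unexpected on-link /64 shows up without a "via" gateway
ip -6 route show dev eth0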

Then, suddenly, I remembered that I had had to restart that stupid switch because I had to move its power brick to a different outlet: somehow, while booting, it ignored all VLAN configuration, so for a few seconds the Pi was connected to the main VLAN and got the router advertisement, which for some reason dhcpcd kept hanging on to, dutifully re-adding the route every time I deleted it.

The solution was easy: sudo systemctl restart dhcpcd

Categories
FreeBSD Linux Mac

BASH: Capturing both the output and the exit code of a process

Of course I found an answer to this in a collection of Stack Overflow questions, but to make things easier for anybody who might stumble into this post (and mainly for Future Me), here’s the answer.

Seriously, don’t do this.

Basically I have a command that sometimes produces an error, but usually just re-running it produces the correct output. When it encounters an error, it dutifully sets its exit code to something other than 0 (success). But how do I capture both the command’s output and its exit code? This way:

output=$(myCommandWhichSometimesDoesntWork)
exit_code=$? # This HAS to be executed right after the command above
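
Put together, using both captured values might look like this (the command name is obviously just a placeholder):

output=$(myCommandWhichSometimesDoesntWork)
exit_code=$?   # must be read immediately, before any other command overwrites $?

if [[ $exit_code -ne 0 ]]; then
    echo "Command failed with exit code $exit_code" >&2
else
    echo "$output"
fi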

So I made a little wrapper script that repeatedly calls that command until it succeeds, with a set maximum number of retries to avoid infinite loops.

#!/bin/bash
retries=0
max_retries=10 # Change this appropriately
 
# The "2> /dev/null" silences stderr redirecting it to /dev/null
# The command must be first executed outside of the while loop:
# bash does not have a do...while construct
output=$(myCommandWhichSometimesDoesntWork 2> /dev/null)
while [[ $? -gt 0 ]]
do
        ((retries++))
        if [[ $retries -gt $max_retries ]]
        then
                exit 1
        fi
 
        output=$(myCommandWhichSometimesDoesntWork 2> /dev/null)
done
 
echo "$output"
Categories
FreeBSD Linux

Improve FreeNAS NFS performance when used with Proxmox

TL;DR: zfs set sync=disabled your/proxmox/dataset

Lately I’ve been playing around with Proxmox installed on an Intel NUC (the cleverly named NUC6CAYH, to be precise), and I must say it is really, really, cool.

I usually store my containers and VMs on the local 180 GB SSD that used to be in my old MacBook Pro, since it’s reasonably fast and it works well, but I wanted to experiment with NFS-backed storage off my FreeNAS box (4x4TB WD Reds in RAIDZ1, 16 GB of RAM, an i5-3330 processor).

Frankly, I was pretty unsatisfied with the performance I was getting. Everything felt pretty slow, especially compared to the internal SSD and, surprisingly, to storing the same data on a little WD MyCloud (yes, the one with the handy built-in backdoor).

My very unscientific test was creating a fresh container based on Ubuntu 16.04 and upgrading the stock packages that came with it. As of today, that meant installing around 95 MB worth of packages, and a fair bit of I/O to get everything installed.

The task was completed in around 1’30″ with the container on the internal SSD, 2’10″ on the WD MyCloud, and an embarrassing 7’15″ on the FreeNAS box.

After a bit of googling, I came to an easy solution: set the sync property of the ZFS dataset used by Proxmox to disabled (it is set to standard by default).

The complete command is zfs set sync=disabled your/proxmox/dataset (run that on FreeNAS as root or using sudo).

To be honest, I don’t really know the data-integrity implications of this flag: both machines and the switch between them are protected from power failures by two UPSs, so that shouldn’t be much of an issue.
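
If you ever want to go back to the default behaviour, the same property can simply be set back:

zfs set sync=standard your/proxmox/dataset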

Anyway, just changing that little flag significantly reduced the time required to complete my “benchmark”, bringing it down to around 1’40″, very close to Proxmox’s internal SSD. Again, at the moment I don’t really need to run VMs/CTs off the FreeNAS storage, but it is good to know that it is possible to achieve much faster performance with this little tweak.

Categories
FreeBSD

How to use a USB drive with FreeNAS

First of all, a disclaimer: I use this setup just as a secondary backup, I do not recommend you rely on USB drives as storage media for your FreeNAS server.

Overview

I have a few spare 2 TB drives that used to be in my home server, and I wanted to connect them to my FreeNAS box every few weeks to store some backups on them, then disconnect them and put them in a drawer.

Preparing the USB hard drive

If your drives are not brand new, chances are you already have some data on them, so you’ll need to wipe them first. To do so, connect your drives to FreeNAS, log in to the web interface, go to the Storage tab, click on “View Disks”, select the correct one from the list, then click on the “Wipe” button at the bottom of the screen.

Everything will turn red now. That’s because you’re doing something potentially dangerous. Make sure you really select the correct drive, otherwise you might destroy all of your data.

A Quick wipe is usually enough. Select that and proceed.

Once the wipe is completed, you can turn the disk into a Volume you can use on FreeNAS, just like your main one(s). Head over to the Storage tab, click on the “Volume Manager” button. You will see your newly wiped disk there, give it a volume name, tick the “Encryption” checkbox if you’d like, then click on “Add Volume”. I’ll call mine “ColdStorage” for the purpose of this tutorial.

Do NOT select any “Volume to extend”. That would mean you’d be adding a single (USB!) hard drive striped to your main pool, and that drive would become the single point of failure for all the data in that pool.

After the disk has been initialized, you can start adding datasets, clone other ones, setup network sharing, whatever.

Disconnecting the USB drive

Once you’re done with it, you will want to disconnect your USB drive to store it somewhere. To do so, once again go to the Storage Tab, select your pool from the list (ColdStorage in this example), then click on the “Detach Volume” button. You will only see this button if you select the pool, while if you select the dataset with the same name you will get a different set of buttons.

I’m not sure that I have to point that out, but let’s do it anyway: do NOT select “Mark the disks as new (destroy data)”. Once the pool has been detached, you can disconnect the USB drive.

Reconnecting the USB drive

For the thousandth time, go back to your trusty Storage tab, then click on the “Import Volume” button. A box will appear asking whether you have an encrypted volume or not; make your choice and click OK. FreeNAS will pause for a few seconds to reflect on the good it’s done in its life, then it will present you a dropdown from which you’ll be able to select your USB volume. Click OK and it will once again be available in FreeNAS.
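
For completeness, the underlying ZFS operations behind the detach/import buttons are just pool exports and imports; a sketch, assuming the pool name used above (keep in mind that the FreeNAS web UI tracks volumes in its own database, so going through the GUI is generally the safer route):

# Detach (export) the pool before unplugging the drive
zpool export ColdStorage

# Re-import it after reconnecting the drive
zpool import ColdStorage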