New Server and Automating the Install

Last summer (2021) I came into some new hardware to replace my current home server. With 5x the cores (and 10x the threads), and nearly 6x the performance, I wanted to rethink how I leverage the server. I had gotten used to either running things on the core system or spinning up an LXC container. I've also wanted to start playing with a small Kubernetes cluster without buying new hardware, and while running something simple like k3s in a container is an option, it's not fully indicative of the full experience. Since I haven't had much luck getting any Kubernetes setup running in LXC, I figured it was time for a change.


So far I've been playing with Fedora Workstation as the base, leveraging Podman for containerization and QEMU as the virtualization platform. It's been great: incredibly performant and exactly what I'm looking for.

First up is being able to reinstall the system within an hour and have it configured and ready to go. I decided to take a page out of work's playbook: automate the installer with a kickstart file, then finish off with Salt or Ansible for anything that can't be handled in the %post step.

Kickstart is a configuration file created by Red Hat around the time of Red Hat Linux 6.2 (possibly older, but that's the first reference I can find) to help automate the installer. Back then, adding the kickstart file would create what was called a Kick-Me disk. The setup has gone through a few revisions since then, and most distributions support it, but the concept is the same: include a kickstart file either on disk, on another piece of removable media, or on a webserver. Most notably, kickstart configurations are used as part of PXE boot environments.
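For context, pointing the Fedora installer at a kickstart file is just a kernel boot option. A couple of illustrative examples (the URL and volume label here are placeholders):

# Fetch the kickstart from a webserver (placeholder URL)
inst.ks=https://example.com/fedora/server-ks.cfg

# Or read it from a labeled disk or USB stick
inst.ks=hd:LABEL=KSMEDIA:/ks.cfg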

In the kickstart file, you can define everything from disk partitioning and network setup to what packages you want installed by default on first boot. There are also steps that can happen either pre or post install. You can find all the option references in this guide.


The kickstart file I'm going to leverage is a modified version of the one generated by the anaconda installer from my initial setup of the server. To test the configuration, I've been creating VMs in QEMU and modifying the ISO I want to use as I go. There have been two major revisions so far. The first was a basic config that also tried to install snapper and configure it; the second worked on setting up the partitions exactly, without any post steps.
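The test loop is roughly the following (a sketch rather than my exact command; the VM name, ISO path, and sizes are placeholders):

virt-install --name ks-test \
  --memory 4096 --vcpus 4 \
  --disk size=75 \
  --location /var/lib/libvirt/images/Fedora-Server-34-dvd.iso \
  --initrd-inject ks.cfg \
  --extra-args "inst.ks=file:/ks.cfg"

The --initrd-inject flag copies the local ks.cfg into the installer's initrd, and inst.ks=file:/ks.cfg tells anaconda to use it.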

Above is the test kickstart file I was using. Let's break it down.


url --mirrorlist="https://mirrors.fedoraproject.org/metalink?repo=fedora-34&arch=x86_64"

# Use graphical install
graphical

# Keyboard layouts
keyboard --xlayouts='us'
# System language
lang en_US.UTF-8
# System timezone
timezone America/New_York --utc

# Run the Setup Agent on first boot
firstboot --enable

Instead of pulling from the ISO, I chose to pull from the default mirror for Fedora. I left the install in graphical mode so I could watch the process in case it got stuck, which happened a lot in the beginning. The next bit is just the normal language and timezone settings. Finally, there's firstboot. This lets you go through the initial setup after boot (think of KDE environments, where you configure your initial account after the install is complete). I'll be able to remove this in my final product, since the user will be created differently.

# Generated using Blivet version 3.3.3
ignoredisk --only-use=vda
# Partition clearing information
clearpart --all --initlabel
# Disk partitioning information

part btrfs.1 --fstype="btrfs" --ondisk=vda --size=15360
part btrfs.2 --fstype="btrfs" --ondisk=vda --size=59389
part /boot --fstype="ext4" --ondisk=vda --size=2048 --label=boot
part biosboot --fstype="biosboot" --ondisk=vda --size=2
btrfs /vol_root --label=root_vol btrfs.1
btrfs /home_pool --label=fs_pool btrfs.2
btrfs / --subvol --name=@root LABEL=root_vol
btrfs /home --subvol --name=@home LABEL=fs_pool

Now we're getting to the disk setup. We only want the installer to worry about vda, the disk I created to install the VM onto. If I'm reusing a disk, I'll want to wipe it using the clearpart command, which clears the disk and reinitializes it for use.

Now we're into the juice of the partitioning. I created the VM disk to be 75GB to test giving a small root partition and a separate home partition on different btrfs volumes, so I can test snapper rollbacks. Being a VM, I have to set up the biosboot partition. There's also currently an issue with putting /boot on a btrfs volume and having GRUB interact with it, especially when snapshots are involved, which is why /boot sits on its own ext4 partition. I decided to expose the core btrfs volumes to make the snapper setup easier later.
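Once the test VM boots, the resulting layout is easy to sanity-check with standard tools (nothing kickstart-specific here):

lsblk -f                          # vda should show biosboot, the ext4 /boot, and the two btrfs partitions
sudo btrfs subvolume list /       # should list the @root subvolume on root_vol
sudo btrfs subvolume list /home   # should list the @home subvolume on fs_pool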

#Root password
rootpw --lock

user --name=nagel --password=XXXXXXXXXXXXX --groups=wheel

Simplest section: login to the root user is locked, and my user is defined. There's an additional sshkey setting to add an SSH key to the account that's created, which I'm probably going to add.
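For reference, that would look something like this (the key below is a shortened placeholder, not a real one):

sshkey --username=nagel "ssh-ed25519 AAAAC3Nza...example"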

%packages
@^server-product-environment
@headless-management
@container-management
@server-hardware-support
@virtualization
ansible
openssh-server
vim
git
python3-dnf-plugin-snapper
snapper
%end

Next is the packages section. This lets me define both which Fedora install I want (mostly due to using the url step instead of cdrom) as well as the packages I want included from the start. Everything that starts with an @ symbol is a group of packages to install: headless-management adds packages to support Cockpit, container-management adds everything needed for Podman, server-hardware-support adds things like lm_sensors and other utilities to monitor the health of the server, and virtualization installs QEMU and supporting packages. The rest are individual packages I want as part of my setup. This list will change as I finalize the core install.
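If you're curious what a given @ group actually pulls in, dnf can show you on any running Fedora system:

# List all package groups, including hidden ones
dnf group list --hidden

# Show the packages a group would install (accepts the group ID or its full name)
dnf group info container-management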

There are also options for pre and post install scripts. %pre is useful if you have a specific partitioning scheme you want set up that isn't easily done with the default partitioning commands. %post is there for anything you want in place after the installation is finished. I tried to use it for the snapper config, but even running outside of the chroot, I could not get it to work.
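For the sake of illustration, a minimal %post block looks something like this (just the general shape, not my failed snapper attempt):

%post --log=/root/ks-post.log
# By default %post runs chrooted into the freshly installed system,
# so plain systemctl commands target the new install.
systemctl enable sshd.service
%end

Adding --nochroot to the %post line runs the script in the installer environment instead, which is what I was experimenting with for the snapper setup.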


I'll keep tweaking the kickstart file as I go, and start building the Ansible or Salt configuration to finalize the setup before moving on to the VM setups and other items.
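As a rough sketch of what that configuration run will need to cover for snapper (plain CLI steps shown here, assuming both volumes get their own config; the eventual playbook would wrap these):

# Create snapper configs for the root and home subvolumes
sudo snapper -c root create-config /
sudo snapper -c home create-config /home

# Enable the timeline and cleanup timers that ship with snapper
sudo systemctl enable --now snapper-timeline.timer snapper-cleanup.timer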