<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Jon Nagel's Personal Notes]]></title><description><![CDATA[Musings of a tech enthusiast passionate about his work, D&D and occasional book and video game...]]></description><link>https://blog.jonnagel.us/</link><image><url>https://blog.jonnagel.us/favicon.png</url><title>Jon Nagel&apos;s Personal Notes</title><link>https://blog.jonnagel.us/</link></image><generator>Ghost 5.87</generator><lastBuildDate>Fri, 10 Apr 2026 10:03:36 GMT</lastBuildDate><atom:link href="https://blog.jonnagel.us/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[When company standards fight with the tools.]]></title><description><![CDATA[<p>Who wins?</p><hr><p>This is a more recent story for once, but it&apos;s something that should be shared now.<br><br>What happens when you standardize a company&apos;s way of setting up their Jenkins Pipelines, Terraform code, and Ansible playbooks? Great things, and reproducibility, no? </p><p>Absolutely.</p><p>What happens when</p>]]></description><link>https://blog.jonnagel.us/when-company-standards-fight-with-the-tools/</link><guid isPermaLink="false">6886e05c2d2fc400014960fa</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Mon, 28 Jul 2025 22:15:02 GMT</pubDate><content:encoded><![CDATA[<p>Who wins?</p><hr><p>This is a more recent story for once, but it&apos;s something that should be shared now.<br><br>What happens when you standardize a company&apos;s way of setting up their Jenkins Pipelines, Terraform code, and Ansible playbooks? Great things, and reproducibility, no? </p><p>Absolutely.</p><p>What happens when your standardization is done in a vacuum? Possibly good things still, but more chances for issues down the line. 
</p><p>What happens if you develop those standards without accounting for the standards of the tools the company is using? Mistakes.</p><hr><p>The company I currently work for has a pretty decent team of engineers who help shape code and tool standards. There&apos;s a YAML file that every team needs to use if they want to leverage the Jenkins Library and a few certification processes. It&apos;s kinda designed to act as an SBOM for deployments. It&apos;s a rather elegant solution (albeit annoying when you&apos;re not used to using it) to a problem that things like well-designed Helm charts or Ignition files for CoreOS are meant to solve.</p><pre><code class="language-YAML">packages:
  docker:
    images: {}
  rpms: {}
  vms:
    image: some-image-reference-string-like-ec2
...</code></pre><p>Above is a rough example of what it looks like. Straightforward, to the point. Naming convention could be better. The idea was &#x201C;how can deployments be certified, while standardizing how teams interact with the data?&#x201D;, and it simply acts as a reference for all the deployment tools. Terraform cares about the <code>vm</code> images. Jenkins can use a predefined container for tasks. Ansible can also use the containers if it&apos;s a Docker host, as well as pulling a list of what rpm packages are needed to set up the host.</p><p>The issue probably still isn&apos;t clear. It&apos;s just a YAML file, Terraform can reference it directly for vars, so can Jenkins and Ansible, right?  Yes, but the question becomes this &#x2026; how do you reference it for all the tools without duplicating it?</p><p>Terraform and Jenkins don&apos;t care, just give relative paths and it&apos;s golden. Ansible, on the other hand &#x2026; has a dozen ways you could reference the file, all ranging from good to bad. What&apos;s the best way to standardize it across the company in the simplest way? Symlinks.</p><hr><p>If this were live, I&apos;d ask for a show of hands &#x2026; </p><p>&#x201C;How many of you are familiar with all the ways to generate facts about a system using Ansible?&#x201D;<br><br>&#x201C;Now, how many are familiar with the way to generate facts for specific things like packages and services?&#x201D;<br><br>How many hands would I lose on the second? Majority, probably. Yes, there are in fact three modules that will generate facts for a host with data that&apos;s not normally gathered as part of <code>gather_facts</code> or the <code>setup</code> module. 
They are <a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/mount_facts_module.html?ref=blog.jonnagel.us" rel="noreferrer">mount_facts</a>, <a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/package_facts_module.html?ref=blog.jonnagel.us" rel="noreferrer">package_facts</a>, and <a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/service_facts_module.html?ref=blog.jonnagel.us" rel="noreferrer">service_facts</a>. There are also two Windows-specific fact modules, but I&apos;m less familiar with running Ansible against Windows targets. They each add to the host facts under the following keys, respectively: <code>mounts</code>, <code>packages</code>, and <code>services</code>.</p><p>So, if you&apos;ve symlinked a YAML file whose top-level key is also <code>packages</code>, and a role or a task in your playbook also calls <code>ansible.builtin.package_facts</code>, which values would you see if you were to do a debug on the var <code>packages</code>?</p><p>The module package_facts would win because it&apos;s updating host facts. This has to do with the hierarchy of <a href="https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html?ref=blog.jonnagel.us#understanding-variable-precedence" rel="noreferrer">variable precedence</a>. Anything symlinked into <code>group_vars</code> will be replaced by anything added as a host fact, or loaded via <code>include_vars</code>.</p><p>So, what&apos;s the best way to prevent the collision? </p><p>You have a few options. 
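</p><p>But first, to make the collision concrete, here&apos;s a minimal sketch (the hosts pattern is hypothetical; it assumes the manifest has been symlinked into <code>group_vars</code> as described) that debugs the var before and after gathering package facts:</p><pre><code class="language-YAML">- hosts: all
  tasks:
    # Before gathering: the symlinked group_vars manifest is what you see
    - ansible.builtin.debug:
        var: packages

    # Gather the installed-package facts
    - ansible.builtin.package_facts:

    # After gathering: the host fact shadows the group_vars value,
    # so this prints the installed-package dictionary instead
    - ansible.builtin.debug:
        var: packages</code></pre><p>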
You could do <code>include_vars</code> in your playbook, but the safest option is to add something like the following to your <code>group_vars</code> (preferably under <code>all</code>, since this is a mostly static file):</p><pre><code>manifest: &quot;{{ lookup(&apos;file&apos;, &apos;path/to/package_manifest.yml&apos;) | from_yaml }}&quot;</code></pre><p>Note that <code>lookup(&apos;file&apos;, ...)</code> returns the raw file contents as a string, so piping it through <code>from_yaml</code> is what gives you the parsed data structure. Now, you could use a relative path to the file, but my preference is to use an absolute path, or <code>&quot;{{ playbook_dir }}&quot;</code> as the relative starting point. For example, if your project layout is the following: </p><pre><code>ansible/
&#x251C;&#x2500;&#x2500; inventory/
&#x2502;   &#x2514;&#x2500;&#x2500; &lt;envs&gt;/
&#x2502;       &#x2514;&#x2500;&#x2500; group_vars/
&#x251C;&#x2500;&#x2500; roles/
&#x2502;   &#x2514;&#x2500;&#x2500; myrole/
&#x2502;       &#x251C;&#x2500;&#x2500; tasks/
&#x2502;       &#x2514;&#x2500;&#x2500; templates/
&#x251C;&#x2500;&#x2500; files/
&#x251C;&#x2500;&#x2500; playbooks/
&#x2502;   &#x2514;&#x2500;&#x2500; deploy.yml
&#x2514;&#x2500;&#x2500; package_manifest.yml
</code></pre><p>Your lookup path would be <code>playbook_dir + &apos;/../package_manifest.yml&apos;</code>.</p>]]></content:encoded></item><item><title><![CDATA[Tools I'm using to run D&D sessions.]]></title><description><![CDATA[<p>Originally, I meant to include this as part of <a href="https://blog.jonnagel.us/getting-into-ttrpgs-and-becoming-a-dm/" rel="noreferrer">how I got into D&amp;D</a> and decided this probably should get its own spotlight. </p><p>There&apos;s nothing special I think in how I&apos;m running things, or doing my prep, but I think it&apos;s a</p>]]></description><link>https://blog.jonnagel.us/tools-im-using-to-run-d-d-sessions/</link><guid isPermaLink="false">671f0bc859281d00019d50f1</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Sun, 20 Apr 2025 20:17:25 GMT</pubDate><content:encoded><![CDATA[<p>Originally, I meant to include this as part of <a href="https://blog.jonnagel.us/getting-into-ttrpgs-and-becoming-a-dm/" rel="noreferrer">how I got into D&amp;D</a> and decided this probably should get its own spotlight. </p><p>There&apos;s nothing special I think in how I&apos;m running things, or doing my prep, but I think it&apos;s a good example of how someone new getting started could do things. I&apos;m going to make a couple assumptions, like having a basic screen, but mostly break it up into shared tools, on-line, and off-line play. </p><h2 id="shared-tools">Shared Tools</h2><p>These are kind of the ones that I use regardless if I&apos;m running the session in person or on-line.</p><h4 id="dd-beyond">D&amp;D Beyond</h4><p>I unfortunately was introduced to playing using dndbeyond, and have <em>mostly</em> bought into using it on a regular basis for managing digital books and the campaigns. Mostly for the books and to share what I&apos;ve purchased with the players. I have lots of gripes with &apos;<em>Wizards of the Coast&apos;</em> with how they&apos;ve integrated the new 2024 edition into it. 
Other than not giving me an option for restricting which content I get to use (there&apos;s no saying no to the 2024 SRD), I like how it simplifies managing a character if you&apos;re new. As the DM, the campaign screen is invaluable for quick access to everyone&apos;s AC and current HP.</p><h4 id="obsidian">Obsidian</h4><p>Obsidian has a lot of great features, with a pretty great and robust plugin system. I can definitely say I&apos;m not leveraging it nearly as much as I could.<br>Since it&apos;s really just allowing me to write in Markdown format, with some extra features added, I don&apos;t have to do much other than write what I want and organize it however makes sense for me.</p><p>I prefer to group single encounters, setting it up so that the scene is one document, creatures/combat is another page, and a third holds miscellaneous stuff that <em>might</em> be useful. I also have a separate folder for any world-building information I&apos;m working on, mostly broken up into generic plots for everyone, character-specific stuff, as well as tracking background stuff that has either already happened or is about to.</p><p>There&apos;s a couple plugins that I could not get by without, mostly the dice roller and fantasy statblocks. </p><p><strong>Dice Roller</strong> is great mostly for the formulaic sidebar, which lets me keep pre-written rolls right next to my notes, and for random tables that come with a roller built in, so one click gives me a result straight off the table.</p><p><strong>Fantasy Statblocks</strong> is by far a required item, whether you are reskinning an existing creature or using one that already exists as-is. By default, it only has access to creatures from the SRD, but has the ability to import JSON objects into its bestiary from a variety of different sources. I went and added all the creatures I have access to from the books I own using a <a href="https://5e.tools/?ref=blog.jonnagel.us" rel="noreferrer">site</a> that is a little gray. 
Each statblock I add to my notes is editable by modifying fields.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jonnagel.us/content/images/2024/11/image.png" class="kg-image" alt loading="lazy" width="835" height="579" srcset="https://blog.jonnagel.us/content/images/size/w600/2024/11/image.png 600w, https://blog.jonnagel.us/content/images/2024/11/image.png 835w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">It&apos;s a simple layout, but it gives you everything you need at a quick glance</span></figcaption></figure><p>Above is an example of a standard Green Hag. Let&apos;s tweak it slightly &#x2026; 100 HP, lower AC just because, and it definitely needs to be named Greenie McWitchface of the famous Waterdeep McWitchfaces.</p><pre><code>```statblock
creature: Green Hag
name: Greenie McWitchface
dice: false
hp: 100
ac: 12
```</code></pre><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jonnagel.us/content/images/2024/11/image-1.png" class="kg-image" alt loading="lazy" width="324" height="168"><figcaption><span style="white-space: pre-wrap;">The above changes</span></figcaption></figure><p>Dice set to false is really only needed because I have it set to randomly roll based on the hit dice. So, if I want it to be hard-coded to 100, I have to tell it not to roll.</p>]]></content:encoded></item><item><title><![CDATA[Getting into TTRPGs and Becoming a DM]]></title><description><![CDATA[<p>While setting up camp for the night on the side of the road, a heavy fog starts to roll in. The surrounding forest has gone silent.<br><em>Roll a perception check</em></p><p>You hear the rustle of a bush a ways away but chalk it up to the wind. As you finish</p>]]></description><link>https://blog.jonnagel.us/getting-into-ttrpgs-and-becoming-a-dm/</link><guid isPermaLink="false">6567c17a21c500000146054d</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Mon, 28 Oct 2024 03:56:45 GMT</pubDate><content:encoded><![CDATA[<p>While setting up camp for the night on the side of the road, a heavy fog starts to roll in. The surrounding forest has gone silent.<br><em>Roll a perception check</em></p><p>You hear the rustle of a bush a ways away but chalk it up to the wind. As you finish lighting the fire for the evening&apos;s meal, 3 dire wolves leap out of the bush, with one landing a bite attack on the druid, pinning them to the ground. 
Roll for initiative...</p><figure class="kg-card kg-image-card"><img src="https://blog.jonnagel.us/content/images/2024/10/dusk-direwolves.webp" class="kg-image" alt loading="lazy" width="1792" height="1024" srcset="https://blog.jonnagel.us/content/images/size/w600/2024/10/dusk-direwolves.webp 600w, https://blog.jonnagel.us/content/images/size/w1000/2024/10/dusk-direwolves.webp 1000w, https://blog.jonnagel.us/content/images/size/w1600/2024/10/dusk-direwolves.webp 1600w, https://blog.jonnagel.us/content/images/2024/10/dusk-direwolves.webp 1792w" sizes="(min-width: 720px) 720px"></figure><hr><p>It&apos;s a small on-the-fly setup for an encounter, but you get the idea. But you kinda figured something like that was coming based on the title. Predictability in my blog posts is kinda expected at this point.</p><p>That&apos;s not the start of the story. For that we are going to have to take a trip back to April/May 2023. One of my friends mentioned they had friends moving back to the area and was looking to get a D&amp;D game started and wanted to know if I was interested. Me, never having played and only some minor interest or exposure, said sure. A few weeks later I was creating my first D&amp;D character. I created Shammagar, a Kobold barbarian who&apos;s way too strong and angry for his own good. We played most of the summer before schedules got too crazy, leading to protracted breaks between sessions.</p><p>I was enthralled. 80% of my TV time was bingeing Critical Role S3. Come fall, I found a campaign that was starting up at a local game store / bar.<br><br>That&apos;s it.</p><p>Pretty boring, right? Expected something more? </p><p>That&apos;s just how I got into it. Becoming a DM is kind of a sillier story filled with dumb mistakes.</p><hr><p>We find ourselves in the middle of October, and I&apos;m just thinking about what I want to do for my annual charity stream with Extra Life. 
I had a couple friends participating who were D&amp;D-curious and decided I wanted to try and run a one-shot for them. So I did the sane thing, helped them create a couple of characters, picked a pre-written adventure and ran it during the event. What a sane thing to do, right?</p><p><strong>WRONG!</strong></p><p>I may or may not have drunk the Kool-Aid a bit too strongly. I decided I was going to try and write a one-shot for them completely out of the blue, with custom maps and everything. </p><p>The maps honestly turned out fantastic; I spent dozens of hours decorating them. The story was simple, had a clear villain and a clear resolution with a few paths of variety to make things either easier or harder. From there, that&apos;s when things started falling apart. In my head, the whole thing should have taken 3&#x2013;4 hours. By hour 5 they were just getting to the last encounter.</p><p>I wrote too much, bit off more than I could chew. Honestly, I kinda felt defeated for a long while. I didn&apos;t try to pick up a virtual DM screen again for a long time.</p><p>Fast-forward to March 2024. Spring break is coming up, and our table&apos;s DMs are going to be gone for about 2 weekends. I offer to run so that anyone at the table who wants to still play while they&apos;re gone can.<br>I had been paying more attention to how much would get done in the 2-ish hour window we&apos;d been playing in, and had gotten my hands on <a href="https://rollandplaypress.com/products/one-shot-wonders?ref=blog.jonnagel.us" rel="noreferrer">One-Shot Wonders from Roll &amp; Play Press</a>. I had them pick an adventure based on a d100 roll, and started translating the prep into my laptop to make life easier. I erred on the easier side for the encounter design, and they finished the whole thing in about 90 minutes. The next session I picked at random based on their level and left it on the defaults, and they finished in a little over 2 hours. 
Everything went back to normal for a bit until finals, and they ended the campaign in a glorious battle.</p><p>Meanwhile ... on a Discord server I&apos;m a part of, a bunch of people had interest in starting up a game, and I was sort of volunteered to run the campaign. I didn&apos;t mind really; busy lives meant it would be a once-a-month type thing. I decided I was going to run <a href="https://marketplace.dndbeyond.com/adventures/keys-from-the-golden-vault?ref=blog.jonnagel.us" rel="noreferrer">Keys from the Golden Vault</a> as a base for them. That campaign has been going well, but slowly. </p><hr><p>I was starting to feel a little more comfortable telling a story and fitting it into a certain timeframe. I didn&apos;t want to run one-shots all summer and I had splurged a few months back on something ... daunting.</p><figure class="kg-card kg-image-card"><img src="https://blog.jonnagel.us/content/images/2024/10/410268.jpg" class="kg-image" alt loading="lazy" width="900" height="1165" srcset="https://blog.jonnagel.us/content/images/size/w600/2024/10/410268.jpg 600w, https://blog.jonnagel.us/content/images/2024/10/410268.jpg 900w" sizes="(min-width: 720px) 720px"></figure><p>It&apos;s a fantastic, modified system put together by Free League Publishing, based on The One Ring 2e. Key word being modified. A lot of the base is the same, probably 80% of it. Changes to certain checks, different rules for travel, palaver, and Shadow Points make enough tweaks that it&apos;s not as simple as &quot;create some new characters, and we&apos;ll drop them into the world and everything will run smoothly&quot;.</p><p>Except with it being a low-magic setting, eldritch blasts and spells in general would really just make most encounters go too smoothly. 
So I made a few adjustments to bridge the rules to basic 5e rules.</p><ul><li>Limited the class options to Bard, Rogue, Ranger, with the options of Paladin and Cleric coming from the Callings in the book.</li><li>Limited the subclasses and the spells available, re-flavoring them to be more skills than conjuring healing out of thin air, for example.</li><li>Kept some of the traveling rules, but cut the Fellowship and Council phases, treating them more like average downtime and NPC encounters<ul><li>For keeping the difficulty of getting stuff out of Council-type meetings, I decided to keep the intentions and still have them make basic persuasion or intimidation rolls based on expected DCs for what kind of support they could expect.</li></ul></li><li>Kept most of the checks the same as regular 5e, re-merging the 3 skills split from Nature, but keeping Medicine as Intelligence (one of the LotR changes).</li><li>Cut out the idea of Shadow Points.</li></ul><p>Honestly, it was hard and stressful, but SO rewarding. About a month or so into it, I started seeing a shift in my storytelling abilities; I started loosening up a bit more about the rules and going with the flow. I eventually ended up including the Shadow Points back into the story after one of the players decided to go full chaotic. Dropping a ceiling on your party members and refusing to pull them out of the rubble deserves some consequences.</p><hr><p>Which brings me to the present and the last few months. I&apos;m back to running one-shots for them in person. The Fall/Spring campaign has been on hold, and I&apos;ve been too busy with life stuff to focus on the conversions again. Did create a murder mystery event that&apos;s run all of October for them, but that&apos;s the closest to a campaign they&apos;ve gotten to do in the last couple of months.</p><p>I did start another campaign though. Two of my friends who did the stream one-shot wanted to play again, then an old coworker expressed interest. 
Two of my friend&apos;s friends wanted to play, and then one of their relatives. Quickly, within 6 weeks (3 sessions), I had a table of 6 playing virtually. That campaign is a whirlwind, with the understanding from them that I&apos;m using them to tweak and adjust more of my storytelling skills. I decided I wanted to do more work to tie backstories into the campaign and make actual plot points around them. So I find myself every other week building a narrative that&apos;s going to span <em>months</em>. </p><p>I&apos;m starting to feel out of my depth, and probably biting off way too much, but I&apos;m realizing so far the only pressure to perform is on myself. As things happen in the campaign I might write about them. I don&apos;t want to spoil much, but I will say that in 25 days in-game the campaign will no longer be really following Candlekeep Mysteries.</p>]]></content:encoded></item><item><title><![CDATA[Rebranding? Sorta…]]></title><description><![CDATA[<p>Real talk?<br><br>I&apos;ve kinda been annoyed at myself for falling off posting content here. I&apos;ve had a sort of uncomfortable experience with some extended burnout since March 2023. Due to a few situations with work and personal life, I kinda lost the passion to only focus</p>]]></description><link>https://blog.jonnagel.us/rebranding-sorta/</link><guid isPermaLink="false">668f573059281d00019d4f62</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Thu, 11 Jul 2024 04:15:51 GMT</pubDate><content:encoded><![CDATA[<p>Real talk?<br><br>I&apos;ve kinda been annoyed at myself for falling off posting content here. I&apos;ve had a sort of uncomfortable experience with some extended burnout since March 2023. Due to a few situations with work and personal life, I kinda lost the passion to only focus on tech things in my spare time. 
Every time I went to start a new post, I&apos;d end up with maybe 300 words of rambling, none of it useful&#x2026;</p><p>I&apos;ve got half-finished posts for ditching Docker Compose, migrating to Podman, Podman networking differences, and solutions for a k8s-style deployment of Podman all lined up and in various states of completion. Unfortunately &#x2026; I&apos;ve got no interest in completing them right now.</p><hr><p>I think my plan for a bit is to change gears for a while. Instead of tech projects, or intricacies of systemd, I&apos;m going to try to write about a few other passions I have. There&apos;s probably going to be A LOT of D&amp;D talk, occasionally books/TV/Movies/Video Games, and on rarer occasion a look into some self-care hobbies. Don&apos;t worry, the hobbies are mostly PG, with the rare occasion of PG-13 for when the shirts come off &#x1F60F;.</p><hr><p>Not going to make a habit of this, but shout-out to my friend and old co-worker Neel. He pointed me at a <a href="https://www.theverge.com/2024/4/22/24137296/ghost-newsletter-activitypub-fediverse-support?ref=blog.jonnagel.us" rel="noreferrer">Verge article about Ghost</a> (the platform used for this blog) starting up the process for ActivityPub support. I&apos;m a large fan of the Fediverse movement and design and will definitely be supporting it once it&apos;s fully baked.</p><p>Anyways, where was I? Right &#x2026; Neel. I mentioned that I hadn&apos;t really touched this in a year, and he was kinda like &#x201C;you should do it again&#x201D;. He also gave the nudge I needed to think about rebranding all this and not quite starting from scratch, but shifting what this would be about. Without him, this would undoubtedly have stayed dead for a long while.</p>]]></content:encoded></item><item><title><![CDATA[Am I alive? 
Maybe....]]></title><description><![CDATA[<p>It looks like me popping up once a year around this time is becoming the norm somehow.</p><hr><p>Life&apos;s had its ups and downs, inside and out of work. Eventually I might share some of it.</p><p>Consider this a placeholder for some rambling to come soon.</p>]]></description><link>https://blog.jonnagel.us/am-i-alive-maybe/</link><guid isPermaLink="false">65ea3af2abb6890001e6f9c1</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Thu, 07 Mar 2024 22:10:32 GMT</pubDate><content:encoded><![CDATA[<p>It looks like me popping up once a year around this time is becoming the norm somehow.</p><hr><p>Life&apos;s had its ups and downs, inside and out of work. Eventually I might share some of it.</p><p>Consider this a placeholder for some rambling to come soon.</p>]]></content:encoded></item><item><title><![CDATA[My feelings on the Internet Archive vs. the Big Four Publishers]]></title><description><![CDATA[<p>It&apos;s probably a mildly spicy take, but I 100% believe Internet Archive&apos;s National Emergency Library was a mistake, and they deserved the lawsuit.</p><p><strong>BUT, </strong>I hope they win their appeal, or some good at a federal level comes out of this. Let me explain.</p><hr><p>The National</p>]]></description><link>https://blog.jonnagel.us/my-feelings-on-the-internet-archive-vs-the-big-four-publishers/</link><guid isPermaLink="false">641f9c5b3ac5c2000165187c</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Sun, 26 Mar 2023 03:25:38 GMT</pubDate><media:content url="https://blog.jonnagel.us/content/images/2023/03/32331-tablet-g774cf3aa0-640-1-.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.jonnagel.us/content/images/2023/03/32331-tablet-g774cf3aa0-640-1-.png" alt="My feelings on the Internet Archive vs. 
the Big Four Publishers"><p>It&apos;s probably a mildly spicy take, but I 100% believe Internet Archive&apos;s National Emergency Library was a mistake, and they deserved the lawsuit.</p><p><strong>BUT, </strong>I hope they win their appeal, or some good at a federal level comes out of this. Let me explain.</p><hr><p>The National Emergency Library was a program Internet Archive launched to try to support the nation through the start of the pandemic, by relaxing the borrowing requirements. Essentially, it removed the waitlist and the limit on the number of copies that could be borrowed at a time. They had built up their library of materials mostly by buying materials from closing libraries, but also from users uploading their own scans. This ended up going against established controlled digital lending models where you can only loan out the same number of physical copies that are in your possession.</p><p>This is vastly different from how most libraries function. Usually they receive copies from the publishers that have a limited shelf life before they have to be repurchased. This period is typically about 26 checkouts per copy.</p><p>Internet Archive essentially went around the process set up by the publishers, and used the pandemic to try to usurp the status quo.</p><hr><p>So, if it&apos;s fairly clear-cut, why do I hope something will change?</p><p>Simple: the entire system is broken. Hachette, HarperCollins, and the then-Penguin Group were all involved in the Apple ebook price fixing case. Penguin Random House and John Wiley weren&apos;t, but they don&apos;t mind the system. It&apos;s designed in a way for them to make profits first.</p><p>I currently live in a county with a decent-sized library system. Let&apos;s take a look at one book I&apos;ve been waiting for.</p><figure class="kg-card kg-image-card"><img src="https://blog.jonnagel.us/content/images/2023/03/image.png" class="kg-image" alt="My feelings on the Internet Archive vs. 
the Big Four Publishers" loading="lazy" width="1312" height="470" srcset="https://blog.jonnagel.us/content/images/size/w600/2023/03/image.png 600w, https://blog.jonnagel.us/content/images/size/w1000/2023/03/image.png 1000w, https://blog.jonnagel.us/content/images/2023/03/image.png 1312w" sizes="(min-width: 720px) 720px"></figure><p>The book came out back in late September, and has consistently had between 50 and 70 people on the hold line since. So, it&apos;s been 6 months since launch, and each copy has a hard-set limit of 2 weeks per checkout (a new-release limitation). We can assume that each copy is due to expire about now. A digital item that expires. Can we just take a moment to realize how absurd that is? The publishers&apos; reasoning is that physical books need to be replaced after so much wear and tear, so why shouldn&apos;t the copy made of 1s and 0s!</p><hr><p>That&apos;s what honestly needs to change. As someone who tries to read 2&#x2013;3 books a month, and also is very unlikely to reread a book (if the story is memorable enough, I can remember it for quite a while), I&apos;m less likely to buy a book outright, and more likely to use my local libraries for either digital or physical copies, unless it&apos;s something I was interested in, and I find it on sale. I&apos;m fine with the concept of controlled digital lending and the normal restrictions of how long a book can be borrowed. What I have issue with is the library copy expiring after an arbitrary number of uses.</p><p>A slightly better, but still greedy option, would be to set the expiry time for each copy to 6 months, at the same rate they are being bought for now. This would allow a library to evaluate how many copies to renew, or how many more to add if a book is still popular. Libraries should still be allowed to add additional copies at any time to try to keep up with demand. 
In the case of Fairy Tale, my library should have been able to use this to get a few extra copies to shorten the time on the waitlist. Come September, if the waitlist is still about 20-30 deep, and they increased their copy count to 20, they might be able to scale back down to 10&#x2013;12 copies. A year later they&apos;d be able to evaluate again and maybe drop to 2&#x2013;4 copies depending on how popular the book still is (being a King novel, there&apos;s a good chance).</p><hr><p>That&apos;s why I want Internet Archive to win the appeal, not because I believe they were in the right, but because it might lead to a change for the better for everyone.</p>]]></content:encoded></item><item><title><![CDATA[Upgrading the Blog – Software design struggles]]></title><description><![CDATA[<p>It&apos;s been a bit, and it&apos;s been busy. This one will definitely feel a little more rambly.</p><p>A few months back when I started writing up content, the blog platform software that I use to run the blog, Ghost, started not-so-subtly letting me know that</p>]]></description><link>https://blog.jonnagel.us/upgrading-the-blog-software-design-struggles/</link><guid isPermaLink="false">640cf7153ac5c2000165151f</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Sun, 19 Mar 2023 03:59:22 GMT</pubDate><content:encoded><![CDATA[<p>It&apos;s been a bit, and it&apos;s been busy. This one will definitely feel a little more rambly.</p><p>A few months back when I started writing up content, the blog platform software that I use to run the blog, Ghost, started not-so-subtly letting me know that the latest version of the software was available. With a catch: it was not compatible with my current database. When I spun up the blog originally, MariaDB was the more performant option, with little to no difference from the original MySQL. MariaDB has been making great strides in continuing to improve performance. 
Although, this means the feature set is going to continue to drift. I saw something a few months back suggesting that the next version of MySQL was going to be fundamentally different enough that MariaDB would no longer be a drop-in replacement. (I didn&apos;t save the link because it didn&apos;t seem relevant at the time &#x2639;&#xFE0F;.)</p><p>(drake meme, with ghost logo, MariaDB vs MySQL)</p><p>Ghost seems to rely on certain MySQL features for newer functionality that doesn&apos;t exist in MariaDB. That means, if I want to keep my platform updated and secure, I&apos;ll have to put in the work to do this migration. Migrating and upgrading also means a chance to change some underlying things and redesign workflows.</p><hr><h3 id="current-stack-vs-new-stack">Current stack vs New stack</h3><p>Now, fundamentally, there are no major changes. </p><p>The current stack is as follows:</p><!--kg-card-begin: markdown--><p>Docker via docker-compose</p>
<ul>
<li>Traefik (webserver)</li>
<li>mariadb (db)</li>
<li>ghost (website)</li>
<li>watchtower (updater)</li>
<li>Matomo (monitoring and analytics)</li>
</ul>
<!--kg-card-end: markdown--><p>Now, I liked the stack when I originally used it, but for other projects Traefik felt more like a slog. Matomo never worked properly the way I wanted it to. Everything else has been stable.</p><p>The new stack:</p><!--kg-card-begin: markdown--><p>Docker via Ansible</p>
<ul>
<li>Caddy (webserver)</li>
<li>mysql (db for ghost)</li>
<li>ghost (website)</li>
<li>watchtower (container updater)</li>
<li>shynet (analytics)</li>
<li>postgres (db for shynet)</li>
<li>telegraf/grafana (monitoring)</li>
<li>giscus (comments)</li>
</ul>
<!--kg-card-end: markdown--><p>There are a few new faces involved. Some of them I&apos;ll go into in more detail in other posts, like using Ansible instead of compose, along with some other new tricks I picked up through this process.</p><p>Monitoring via telegraf and grafana is still in flux; I&apos;ll eventually narrow down exactly what I want and will cover it at a later date.</p><hr><p>The second-biggest difference, while the smallest in amount of work, is the switch from Traefik to Caddy. Don&apos;t get me wrong, Traefik works wonderfully, but the config was finicky in my experience. It&apos;s designed with a Docker or Kubernetes focus first, which made it hard for me to implement back when I was running a mix of LXC instances and containers outside the VPS the blog runs on. At the time, there also wasn&apos;t a clean way to do DNS challenges for the certs. I think the config has grown a bit and some new features have been added to improve QoL.</p><p>I chose Caddy over Nginx with certbot mostly because I wanted the cert renewal and everything more integrated. Personally, I prefer using nginx as an ingress controller or in a more traditional environment setup. Besides, the Caddyfile makes it really easy to spin up sites with reverse proxies:</p><pre><code>blog.jonnagel.us {
  encode gzip
  
  reverse_proxy $container_name:2368
}</code></pre><p>That&apos;s it! Just add more blocks as needed for different domains. (Note that <code>proxy_pass</code> is nginx&apos;s directive; Caddy&apos;s equivalent is <code>reverse_proxy</code>.) DNS auth for Let&apos;s Encrypt works slightly differently, but not differently enough to cover right now. Their <a href="https://caddyserver.com/docs/caddyfile?ref=blog.jonnagel.us">documentation</a> is top-notch in my opinion. I have a script that I use for my other VPSs, which I&apos;ll share in a different post, that somewhat automates the DNS setup with Caddy installed and managed by systemd. It could easily be modified to work with a docker environment.</p><hr><p>Now, the most exotic thing on that list is probably giscus. Back in 2017, Disqus made it possible to remove scripts and ads from their system only for a fee, originally $10/month, now $11/month. While that might seem pretty affordable, I don&apos;t have nearly enough traffic per month to justify it, or to get a cut of the ad revenue. Ads and injected scripts are known to carry malware, and I personally do not feel comfortable potentially exposing others to that kind of risk.</p><p>That&apos;s where giscus comes in. It&apos;s an interesting app that allows you to leverage GitHub Discussions as a comment system. That means you don&apos;t need to build and manage your own user database, or deal with spam or bots. Authentication is managed by GitHub, making it simple to use. There&apos;s a bit of configuration to get it set up, but adding it onto the template for the posts was quite simple.</p><p>This is all that&apos;s needed:</p><pre><code class="language-html">&lt;script src=&quot;https://giscus.app/client.js&quot;
            data-repo=&quot;org/reponame&quot;
            data-repo-id=&quot;XXXXXXXXXXXX&quot;
            data-category=&quot;Q&amp;A&quot;
            data-category-id=&quot;XXXXXXXXXXX&quot;
            data-mapping=&quot;pathname&quot;
            data-strict=&quot;0&quot;
            data-reactions-enabled=&quot;1&quot;
            data-emit-metadata=&quot;0&quot;
            data-input-position=&quot;top&quot;
            data-theme=&quot;preferred_color_scheme&quot;
            data-lang=&quot;en&quot;
            crossorigin=&quot;anonymous&quot;
            async&gt;
&lt;/script&gt;</code></pre><p>This will get generated by the <a href="https://giscus.app/?ref=blog.jonnagel.us">app&apos;s site</a>, which will for the most part walk you through what&apos;s needed to get it set up. I just added some additional CSS to get it mostly styled the way I wanted. I might do more configuration later so I don&apos;t need as many modifications to get it to look how I want. For now, it looks good enough for my needs.</p><hr><p>Now, for the meat of the migration: moving from MariaDB to MySQL. Normally, such a migration <em>should</em> be a piece of cake in theory, but there were 2 major things I needed to take care of as part of the migration.</p><ol><li>Collation schema updates</li><li>Managing the additional memory that MySQL uses compared to MariaDB.</li></ol><p>The first one was an unfortunate issue. Like I mentioned, the steps to migrate are quite simple. </p><ol><li>Stop Ghost</li><li>Take a mysqldump backup.</li><li>Spin down the MariaDB container and delete the volume.</li><li>Create the new MySQL container and volume, with the backup script mounted as part of the initdb.d folder.</li><li>Wait a little bit and start Ghost again.</li></ol><hr><p>Unfortunately, due to the collation changes, I was prompted with errors like the following:</p><pre><code>MigrationScriptError: alter table `posts_products` add constraint `posts_products_post_id_foreign` foreign key (`post_id`) references `posts` (`id`) on delete CASCADE - UNKNOWN_CODE_PLEASE_REPORT: Referencing column &apos;post_id&apos; and referenced column &apos;id&apos; in foreign key constraint &apos;posts_products_post_id_foreign&apos; are incompatible.</code></pre><p>It took some googling, but I was able to parse the error. 
MariaDB 10 normally defaults to the <code>latin1_swedish_ci</code> collation, but since the original database setup was done by an initialization script as part of Ghost, the collation was set to <code>utf8mb4_general_ci</code> when I did my initial installation. MySQL 8 changed its default utf8 collation to <code>utf8mb4_0900_ai_ci</code>. </p><p>So, what is one to do? Do a fresh install, migrate the posts one by one, and update the dates somehow? Or update the script from the SQL dump to have the correct collation value? I prefer to work smarter rather than harder &#x1F643;, so I went with updating the script, similar to how <a href="https://www.ajfriesen.com/ghost-migration-fails-how-to-migrate-the-default-collation-from-mysql-5-to-mysql-8/?ref=blog.jonnagel.us">Andrej did theirs</a>. Instead of using sed, I leveraged the global replace feature in vi (something like <code>:%s/utf8mb4_general_ci/utf8mb4_0900_ai_ci/g</code>). Afterwards, starting up the database, and consequently Ghost, went smoothly. Or so I thought&#x2026;</p><hr><p>For reasons, I have been running the blog on a simple $5/month <a href="https://m.do.co/c/802f3a4f4a8c?ref=blog.jonnagel.us">DigitalOcean droplet</a>, and it&apos;s all been running fine for 4 years. During the process of upgrading the OS to the latest LTS and deploying the stack, everything started to hang. Looking at htop in a different terminal, I could see that load was low, but memory was completely filled. Since there&apos;s normally no swap space on droplets, the OOM killer was killing things intermittently (hooray for <code>restart: unless-stopped</code> &#x1F926;). A look at docker stats showed MySQL using nearly 550 MB of memory! That would not do, so my options were either to increase the size of the droplet or to try some memory tuning. This time, working smarter meant memory tuning, plus, perhaps naively, putting a memory limit on the container. 
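A cap like that can be declared right in the compose file. Here is a minimal sketch, with an assumed service name and image tag rather than my actual stack:

```yaml
# Hypothetical compose fragment: hard-cap the database container.
services:
  db:
    image: mysql:8.0
    # With a hard limit, a runaway MySQL gets OOM-killed inside its
    # own cgroup instead of starving the whole droplet.
    mem_limit: 512m
    restart: unless-stopped
```

Newer Compose schemas can express the same cap as `deploy.resources.limits.memory` instead.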
I had docker set the memory limit to 512M.</p><p>It&apos;s probably not the most efficient config, but here&apos;s what I added to the my.cnf that gets applied:</p><pre><code>connect_timeout         = 5
wait_timeout            = 600
max_allowed_packet      = 16M
thread_cache_size       = 0
sort_buffer_size        = 32K
bulk_insert_buffer_size = 0
tmp_table_size          = 1K
max_heap_table_size     = 16K

myisam_recover_options = BACKUP
key_buffer_size         = 2M
table_open_cache        = 400
myisam_sort_buffer_size = 512M
concurrent_insert       = 2
read_buffer_size        = 16K
read_rnd_buffer_size    = 16K

slow_query_log_file     = /var/log/mysql/mariadb-slow.log
long_query_time = 10

binlog_expire_logs_seconds        = 259200
max_binlog_size         = 100M

default_storage_engine  = InnoDB
# you can&apos;t just change log file size, requires special procedure

#innodb_log_file_size   = 50M
innodb_buffer_pool_size = 64M
innodb_log_buffer_size  = 2M
innodb_file_per_table   = 1
innodb_open_files       = 400
innodb_io_capacity      = 400
innodb_flush_method     = O_DIRECT

[client]
socket              = /var/run/mysqld/mysqld.sock

[mysqldump]
quick
quote-names
max_allowed_packet      = 16M
</code></pre><p>I&apos;ll probably tweak it the longer it runs, but for now it seems to tame the beast enough. Memory for the container appears to max out around 240 MB, and when everything is idle it settles down to around 70 MB.</p><p>The total memory usage now is around 750 MB at its peak.</p><hr><p>The only other new thing in the stack is an application called <a href="https://github.com/milesmcc/shynet?ref=blog.jonnagel.us">Shynet</a>. It&apos;s a very modern, privacy-focused analytics tool. It doesn&apos;t use cookies, and it respects the DNT setting in browsers. From the GitHub page, here are the exact metrics that it will collect:</p><!--kg-card-begin: markdown--><pre><code>* Hits &#x2014; how many pages on your site were opened/viewed
* Sessions &#x2014; how many times your site was visited (essentially a collection of hits)
* Page load time &#x2014; how long the pages on your site take to load
* Bounce rate &#x2014; the percentage of visitors who left after just one page
* Duration &#x2014; how long visitors stayed on the site
* Referrers &#x2014; the links visitors followed to get to your site
* Locations &#x2014; the relative popularity of all the pages on your site
* Operating system &#x2014; your visitors&apos; OS (from user agent)
* Browser &#x2014; your visitors&apos; browser (from user agent)
* Geographic location &amp; network &#x2014; general location of your visitors (from IP)
* Device type &#x2014; whether your visitors are using a desktop, tablet, or phone (from user agent)
</code></pre>
<!--kg-card-end: markdown--><p>Looking at how it&apos;s configured to work, it uses an invisible pixel to get the basic information about your session (IP for geolocation, browser type, and OS/device type), combined with a simple script to track load times and everything else. Now, I personally have it set so that your IP is <strong><u>not tracked</u></strong> even if you don&apos;t have DNT set. I don&apos;t need that information; knowing roughly where you might be from and that you checked out my blog is more than enough.</p>]]></content:encoded></item><item><title><![CDATA[Automating the creation of the virtual Kubernetes cluster - Part 1]]></title><description><![CDATA[<p>Progress has been finally made with the next step of building out my new server.</p><p>It took a while to get to this point, mostly because there&apos;s no clear documentation to do all the things that I wanted to piece together.</p><p>Let&apos;s take a recap of</p>]]></description><link>https://blog.jonnagel.us/automating-the-creation-of-the-virtual-kubernetes-cluster/</link><guid isPermaLink="false">63a5336bb4900a0001bb710a</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Sat, 28 Jan 2023 18:24:41 GMT</pubDate><content:encoded><![CDATA[<p>Progress has finally been made on the next step of building out my new server.</p><p>It took a while to get to this point, mostly because there&apos;s no clear documentation for all the things that I wanted to piece together.</p><p>Let&apos;s recap what I&apos;m trying to do: a <a href="https://blog.jonnagel.us/automating-server-installs/" rel="noreferrer noopener">year ago I got some new hardware</a> to build a new server to replace my existing one, and a couple of months ago I had <a href="https://blog.jonnagel.us/hiccups-in-automating-the-new-server/" rel="noreferrer noopener">a few hiccups</a> in finishing the build-out. 
This server is to replace my slow and extremely long-in-the-tooth HP Microserver, and to act as a local server and learning playground.</p><p>Next up has been figuring out what was needed to automate the creation of the VMs for the Kubernetes cluster (as well as any other VMs I plan on deploying). Now, there&apos;s a fairly well documented set of tools that I had already used for setting up the libvirt bits, and I had already set up the storage pools and the networking to allow DHCP on my LAN. Unfortunately, that&apos;s where the documentation starts to fall apart. To do the installation of a VM, you use the community.libvirt.virt module.</p><p>Here, you do the following:</p><ol><li>Have a disk already created, and know the path to it.</li><li>Have any additional resources you need created (network, shared disk, etc.)</li><li>Have an XML domain definition file you can feed to the playbook</li></ol><p>That&apos;s all the information that&apos;s given. 1 and 2 are pretty obvious; it&apos;s a similar situation to deploying a cloud VM with something like Terraform. Unfortunately, this leaves you stuck if you don&apos;t know how to make a minimal domain definition file. It gets even more obscure once you try to do this while using Fedora CoreOS as the guest OS.</p><p>That changes today!</p><p>No seriously, I&apos;m going to walk you through how to make a bare-bones definition file, what you need in it to run Fedora CoreOS as the OS, and a bare-bones ignition file to build your VM.</p><hr><h2 id="libvirts-domain-definition">Libvirt&apos;s Domain Definition</h2><p>I unfortunately was not able to find any real &#x201C;here&apos;s a basic XML file you can use&#x201D; or &#x201C;here are the bits you require in the file for it to truly work properly&#x201D; guide for the VM definition. 
There is the whole fat <a href="https://libvirt.org/formatdomain.html?ref=blog.jonnagel.us" rel="noreferrer noopener">domain specification</a>, but that&apos;s too much when you&apos;re starting out. I did find a page that said &#x201C;here, run this role that contains this file&#x201D;, but that&apos;s not good enough for me. So, let&apos;s take a look at <a href="https://gist.github.com/nagelxz/15f201570e4983289d91d377c6102baf?ref=blog.jonnagel.us" rel="noreferrer noopener">what I used</a> and break down what I can.</p><p>Let&apos;s work our way down.</p><pre><code class="language-XML">&lt;domain type=&apos;kvm&apos; xmlns:qemu=&apos;http://libvirt.org/schemas/domain/qemu/1.0&apos;&gt;
  &lt;name&gt;{{item.node_name}}&lt;/name&gt;
  &lt;metadata&gt;
    &lt;libosinfo:libosinfo xmlns:libosinfo=&quot;http://libosinfo.org/xmlns/libvirt/domain/1.0&quot;&gt;
      &lt;libosinfo:os id=&quot;http://fedoraproject.org/coreos/stable&quot;/&gt;
    &lt;/libosinfo:libosinfo&gt;
  &lt;/metadata&gt;
...
&lt;/domain&gt;</code></pre><p><code>&lt;domain&gt;</code> is the top-level tag everything will live under. <code>&lt;name&gt;</code> is the name of the domain (VM); it&apos;s how you reference it with the virsh command. The <code>&lt;metadata&gt;</code> tag is a little less clear; the docs indicate its use depends on what application is creating the domain.</p><pre><code>&lt;memory unit=&apos;MiB&apos;&gt;{{item.vmem}}&lt;/memory&gt;
&lt;vcpu placement=&apos;static&apos;&gt;{{item.vcpus}}&lt;/vcpu&gt;</code></pre><p><code>&lt;memory&gt;</code> is straightforward. It&apos;s the allocation of memory being made available to the VM. <code>unit</code> can be changed between KiB, MiB and GiB (maybe more, but I don&apos;t have enough RAM on the host to test that &#x1F609;) depending on your needs. I still prefer MiB for the most part. <code>&lt;vcpu&gt;</code> is also straightforward in practice. For bare-bones usage, do it the way I have it: set placement to static and define the number of cores to make available. If you&apos;re looking to manage NUMA layouts or core passthrough, I strongly recommend checking <a href="https://libvirt.org/formatdomain.html?ref=blog.jonnagel.us#cpu-allocation" rel="noreferrer noopener">this section of the specification</a>.</p><pre><code>&lt;cpu mode=&apos;host-passthrough&apos; check=&apos;none&apos; migratable=&apos;on&apos;/&gt;
&lt;os&gt;
  &lt;type arch=&apos;x86_64&apos; machine=&apos;pc-q35-6.2&apos;&gt;hvm&lt;/type&gt;
  &lt;boot dev=&apos;hd&apos;/&gt;
&lt;/os&gt;</code></pre><p><code>&lt;cpu&gt;</code> is for when you want to emulate a different CPU model from the one you&apos;re running on. For my use, host-passthrough is the basic mode; it says &#x201C;pass all the features available on the host CPU through&#x201D;. This is the section you would use to define an emulated ARM CPU (or an x86 CPU while on ARM). The <code>&lt;os&gt;</code> tag is also where you would define most of these specifications for further guest customization. I didn&apos;t dive deep into this; I figured out that I needed arch set to &apos;x86_64&apos;, machine set to &apos;pc-q35-6.2&apos;, and the inner tag set to hvm, mostly because that&apos;s how the cockpit-machines package and virt-install set it up on my system.</p><pre><code>&lt;features&gt;
  &lt;acpi/&gt;
  &lt;apic/&gt;
  &lt;vmport state=&apos;off&apos;/&gt;
&lt;/features&gt;
&lt;clock offset=&apos;utc&apos;&gt;
  &lt;timer name=&apos;rtc&apos; tickpolicy=&apos;catchup&apos;/&gt;
  &lt;timer name=&apos;pit&apos; tickpolicy=&apos;delay&apos;/&gt;
  &lt;timer name=&apos;hpet&apos; present=&apos;no&apos;/&gt;
&lt;/clock&gt;</code></pre><p><code>&lt;features&gt;</code> lets the hypervisor know what features are available. <code>&lt;acpi&gt;</code> allows for better power management, while <code>&lt;apic&gt;</code> allows interrupts to be sent. Other useful tags would be <code>&lt;pae&gt;</code> for larger memory addresses (useful for 8 GB of RAM on a 32-bit OS) and <code>&lt;hyperv&gt;</code> if you&apos;re dealing with Windows guests.</p><p><code>&lt;clock&gt;</code> mimics passing the motherboard&apos;s internal clock through to the VM, allowing you to define your own timezone offset based on the time on your system.</p><pre><code>&lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
&lt;on_reboot&gt;restart&lt;/on_reboot&gt;
&lt;on_crash&gt;destroy&lt;/on_crash&gt;
&lt;pm&gt;
  &lt;suspend-to-mem enabled=&apos;no&apos;/&gt;
  &lt;suspend-to-disk enabled=&apos;no&apos;/&gt;
&lt;/pm&gt;</code></pre><p>The <code>&lt;on_poweroff&gt;</code>, <code>&lt;on_reboot&gt;</code>, <code>&lt;on_crash&gt;</code> and <code>&lt;pm&gt;</code> tags define how the host OS should handle the guest&apos;s power states, as well as whether the guest should have the ability to sleep.</p><h3 id="devices">Devices</h3><p>Everything under <code>&lt;devices&gt;</code> is what I decided is needed at a minimum. Some, like <code>memballoon</code>, are required for KVM/QEMU guests.</p><p>A <code>&lt;disk&gt;</code> entry is required for every disk you want to add to your guest. For each disk you need to give the location (whether a block device, ISO, raw file, or qcow2 image file) and an address. Even though I originally wanted to use LVM devices, I chose to switch to qcow2 files for now, at least until I can figure out better automation for LVM devices (nearly all the tools are designed around qcow2 or IMG files right now, with LVM and other block devices being considered advanced features).</p><p><code>&lt;controller&gt;</code> shouldn&apos;t be needed since we&apos;re doing basic device design, allowing libvirt to pick properly, but I left it in since there was no reason to remove it. <code>&lt;channel&gt;</code> is used for host-guest communication. It&apos;s not explicitly required, but it makes graceful shutdown and sharing devices or directories with the guest easier.</p><p>A network <code>&lt;interface&gt;</code>, like disk, is defined for each network interface you want added to your guest OS. These are relatively straightforward as well, and the <a href="https://libvirt.org/formatdomain.html?ref=blog.jonnagel.us#network-interfaces">documentation</a> has some great examples of different use cases. I&apos;m using libvirt networks, hence the source uses a network type. 
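For reference, a minimal network-type interface block looks something like this (a sketch; "default" is libvirt's out-of-the-box NAT network, and the virtio model is my assumption for a typical Linux guest):

```xml
<interface type='network'>
  <!-- attach to a libvirt-managed network by name -->
  <source network='default'/>
  <!-- virtio is the usual model choice for Linux guests -->
  <model type='virtio'/>
</interface>
```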
If you wanted to, you could predefine the MAC address the interface will use; if it&apos;s not defined, libvirt generates one automatically.</p><p><code>&lt;console&gt;</code> is required if you want to be able to access the VM using the command <code>virsh console vm-name-here</code>; I chose to keep it as PTY instead of switching to TTY or virtio-serial. <code>&lt;input&gt;</code> with mouse and keyboard will allow you to interact with the VM over the console. The <code>&lt;video&gt;</code> tag more than likely is not needed; I haven&apos;t played around with leaving it out. It&apos;s definitely required if you want to be able to connect over VNC using SPICE graphics, which is overkill for a headless VM. <code>&lt;rng&gt;</code> is also not needed, but it makes random number generation quicker by using pass-through.</p><p>Now, for the pi&#xE8;ce de r&#xE9;sistance, the magic sauce to get Fedora CoreOS (or Red Hat CoreOS, or Flatcar Linux) working: define <code>&lt;qemu:commandline&gt;</code> and add the <code>-fw_cfg</code> argument, whose second line, <code>name=opt/com.coreos/config,file=/path/to/ignition/file</code>, injects the Ignition config through QEMU&apos;s firmware config device. Here <code>/path/to/ignition/file</code> is a path to an Ignition file on your system.</p><h3 id="ansible-jinja-templating-and-disk-creation">Ansible Jinja Templating and disk creation.</h3><p>So, if you noticed, in a few places I have things like <code>{{item.node_name}}</code> or <code>{{item.vcpus}}</code> inside the example above. That&apos;s because I turned this config into a Jinja template, so I can use the same template for all my cluster VMs. To make life easier, inside my vars file I added something that looks like this:</p><pre><code class="language-YAML">k8s_node:
  - node_name: master1
    vcpus: 2
    vmem: 4096
    disk:
      pool: bulk
      size: 20
    networks:
      - external
      - default
    is_master: true</code></pre><p>In the playbook, I loop through <code>k8s_node</code>.</p><p><code>is_master</code> is currently not being used. Right now, it&apos;s acting more as a mental placeholder for when I get to actually setting up the cluster install.</p><p>For networks, I loop through the list in the template to create a block for both external and default.</p><p>I&apos;m not currently doing more than one disk, but I&apos;d handle it similarly to networks after a bit of massaging.</p><p>As for the creation of the boot disk, this took a little time to figure out properly. The VM installation of Fedora CoreOS requires the actual VM image, or at least a copy of it. Following <a href="https://docs.fedoraproject.org/en-US/fedora-coreos/provisioning-libvirt/?ref=blog.jonnagel.us#_launching_a_vm_instance" rel="noreferrer noopener">this section</a> from the Fedora docs while looking up how virt-install works, I realized that the backing-store part of the command creates a copy of the disk and resizes it to the supplied value.</p><p>To mimic this behavior, I added 2 separate tasks to the playbook: the first makes a copy of the base image I fetch, and the second resizes it to match the value inside the vars for that VM.</p><hr><h2 id="the-ignition-file">The Ignition File</h2><p>An Ignition file is what you use to define the setup and configuration of the OS, similar to how cloud-init works. In there you can define users, advanced network config (like static IPs), repos, and disk configurations.</p><p>Getting this working as part of the VM is a 2-part process: writing a Butane file, then generating the Ignition file from it.</p><h3 id="butane">Butane</h3><p>Butane is a YAML-based config format. You can write the files without anything special, but to validate them and convert them into an Ignition file you will need the application. 
There are various ways to <a href="https://docs.fedoraproject.org/en-US/fedora-coreos/producing-ign/?ref=blog.jonnagel.us#_getting_butane">install it</a>, but my preferred way is to run it via podman/docker.</p><p>The simplest Butane config you can create is one that defines just the user and an ssh key.</p><pre><code>variant: fcos
version: 1.4.0
passwd:
  users:
    - name: somename
      ssh_authorized_keys:
        - ssh-rsa AAAA...

</code></pre><p>I&apos;m taking things a step further: I want everything required for running something like kubespray in place. That means I need definitions for CRI-O, the Kubernetes repo, and a few other things. I didn&apos;t figure this out on my own; that <a href="https://dev.to/carminezacc/creating-a-kubernetes-cluster-with-fedora-coreos-36-j17?ref=blog.jonnagel.us">credit goes to Carmine</a>.</p><pre><code>variant: fcos
version: 1.4.0
passwd:
  users:
    - name: nagelxz
      ssh_authorized_keys:
        - ssh-rsa ...
      home_dir: /home/nagelxz
      password_hash: ...
      groups:
        - wheel
      shell: /bin/bash
kernel_arguments:
  should_exist:
    # Order is significant, so group both arguments into the same list entry.
    - console=tty0 console=ttyS0,115200n8
storage:
  files:
    # CRI-O DNF module
    - path: /etc/dnf/modules.d/cri-o.module
      mode: 0644
      overwrite: true
      contents:
        inline: |
          [cri-o]
          name=cri-o
          stream=1.17
          profiles=
          state=enabled
    # YUM repository for kubeadm, kubelet and kubectl
    - path: /etc/yum.repos.d/kubernetes.repo
      mode: 0644
      overwrite: true
      contents:
        inline: |
          [kubernetes]
          name=Kubernetes
          baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
          enabled=1
          gpgcheck=1
          repo_gpgcheck=1
          gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
            https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    # configuring automatic loading of br_netfilter on startup
    - path: /etc/modules-load.d/br_netfilter.conf
      mode: 0644
      overwrite: true
      contents:
        inline: br_netfilter
    # setting kernel parameters required by kubelet
    - path: /etc/sysctl.d/kubernetes.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          net.bridge.bridge-nf-call-iptables=1
          net.ipv4.ip_forward=1
    - path: /etc/hostname
      overwrite: true
      contents:
          inline: master-potato-fries</code></pre><hr><p>After you convert that Butane file into an Ignition file and run the playbook, it&apos;ll take about 2&#x2013;5 minutes for the VMs to be generated and become available on their newly assigned IPs. There&apos;s one additional piece I still need: having Ansible generate the inventory file for the cluster. But I&apos;m leaving that for the tweaks that come with the next part, running kubespray as part of the playbook.</p>]]></content:encoded></item><item><title><![CDATA[Charity Streams, Learning, and Improvements for the future]]></title><description><![CDATA[<p>Today is the 6th of November.</p><p>Last night, I survived the toughest thing I will do this year.</p><p>I streamed for 25 hours in support of <a href="https://www.extra-life.org/index.cfm?fuseaction=cms.home&amp;ref=blog.jonnagel.us" rel="noreferrer noopener">Extra Life</a> on Extra Life Day. It&apos;s not the toughest thing because raising money is hard, or streaming is hard. Those can</p>]]></description><link>https://blog.jonnagel.us/charity-streams-learning-and-improvements-for-the-future/</link><guid isPermaLink="false">636870d0f305090001c24e0a</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Sun, 06 Nov 2022 17:40:00 GMT</pubDate><media:content url="https://blog.jonnagel.us/content/images/2022/11/ExtraLife21_PageHeader_2550x526_detail-1-.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.jonnagel.us/content/images/2022/11/ExtraLife21_PageHeader_2550x526_detail-1-.jpg" alt="Charity Streams, Learning, and Improvements for the future"><p>Today is the 6th of November.</p><p>Last night, I survived the toughest thing I will do this year.</p><p>I streamed for 25 hours in support of <a href="https://www.extra-life.org/index.cfm?fuseaction=cms.home&amp;ref=blog.jonnagel.us" rel="noreferrer noopener">Extra Life</a> on Extra Life Day. 
It&apos;s not the toughest thing because raising money is hard, or streaming is hard. Those can be as easy as you want them.</p><p>It&apos;s the toughest thing because of the energy needed to put everything together, to make the stream worth tuning in to for new or returning supporters. A full-day stream is no different from a marathon. The concern becomes less about what you&apos;re going to do when the camera is rolling and more about how you&apos;re going to sustain yourself. Twenty-five hours takes more than just snacks and energy drinks to keep pushing forward; you&apos;re awake for a minimum of 4 decent meals. You need to take care of your body.</p><p>You have to plan how you&apos;re going to fill your time; are you just going to sit there and play games and not interact with anyone who tunes in? Did you come up with incentives and milestones to entice supporters to tune in and donate?</p><p>The real linchpin is doing it with a group of people and friends to keep you moving forward, even when you&apos;re at your lowest in the 22nd hour and wondering if you can push through.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jonnagel.us/content/images/2022/11/the-team.png" class="kg-image" alt="Charity Streams, Learning, and Improvements for the future" loading="lazy" width="805" height="206" srcset="https://blog.jonnagel.us/content/images/size/w600/2022/11/the-team.png 600w, https://blog.jonnagel.us/content/images/2022/11/the-team.png 805w" sizes="(min-width: 720px) 720px"><figcaption>The team for 2022 - Ghost Hunters Anonymous (and Ronnie, he didn&apos;t have a page but was with us and supporting us the entire time)</figcaption></figure><p>Even though I call the experience the toughest thing I&apos;ll do, even though I may not hit my personal goal for money raised, the amount of satisfaction that I get taking this on to raise money makes the entire event worth doing and looking forward to every year I can do it in the future.</p><hr><p>This 
year, outside of the expected stuff (unfortunately, no cooking stream this time; no room in my current kitchen) I thought about what I could do to try and increase my reach. As anyone who has tried knows, it&apos;s nearly impossible to be found on Twitch randomly, especially if you are a new or small-time streamer. The only way to grow is to branch out into other platforms and leverage them for growth.</p><p>I decided to try and take that somewhat literally.</p><p>I wanted to stream to Twitch and YouTube simultaneously. Sounds fairly simple, just run 2 copies of OBS Studio, no? Well, it can be that simple, if the internet into your home/apartment/dorm is strong enough to handle it, and the device you are streaming from has enough horsepower to do that.</p><p>The usual answer is to use something like <a href="https://restream.io/?ref=blog.jonnagel.us" rel="noreferrer noopener">Restream</a>. It&apos;s free to use for up to 2 platforms at once, but if you want to remove the Restream branding from your video, it&apos;s $16/month. Personally, I wanted no extra branding on what I&apos;m uploading, and that feels expensive for a once-a-year thing. So, I decided to try and run my own restreamer.</p><hr><p>How did it go?</p><p>Surprisingly well, actually. The setup is pretty straightforward to just get rocking right away. YouTube gives you a short delay by default if the stream signal disconnects; usually you&apos;re able to reconnect to the same session without having to change the stream&apos;s URL. Twitch has an additional &quot;Disconnect Protection&quot; setting that you can turn on that gives you a pretty decent window (it&apos;s supposed to be about 90 seconds, but thanks to a bug my stream stayed live until Nginx was restarted). Overall, I&apos;d definitely set this back up for next year&apos;s stream. I&apos;ll probably be doing a few more throughout the year, so stay tuned for more information!</p><hr><p>So, you want to try to set up the same thing? 
How does it work? Where do I get started?</p><p>Let&apos;s start with how the whole thing works:</p><p>OBS Studio uses a protocol known as <a href="https://en.wikipedia.org/wiki/Real-Time_Messaging_Protocol?ref=blog.jonnagel.us" rel="noreferrer noopener">RTMP</a> to broadcast the data to whatever streaming platform or endpoint you want to be &quot;live&quot; at.</p><p>There are quite a few projects out there that allow you to set up your own RTMP server, but nearly all of them (from what I&apos;ve been able to find) boil down to a <a href="https://github.com/arut/nginx-rtmp-module?ref=blog.jonnagel.us" rel="noreferrer noopener">single module for nginx</a>. That&apos;s awesome, we&apos;ll just install the module and start going!</p><p>.... Woah, pump the brakes there, speedy. It&apos;s unfortunately a little more complicated than that, depending on your platform of choice. On Ubuntu, it&apos;s as simple as running <code>sudo apt install -y nginx nginx-rtmp-module</code>; RPM-based systems don&apos;t have a compiled package; Arch can find it in the AUR; and there are Docker images, but most had not been updated in at least a year at the time of the stream (the RTMP module itself hasn&apos;t had updates since May 2021, though).</p><p>The best place to get started is more than likely the same place I started. I had set up my restreamer by following <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-a-video-streaming-server-using-nginx-rtmp-on-ubuntu-20-04?ref=blog.jonnagel.us" rel="noreferrer noopener">this post</a> on DigitalOcean. I ended up automating the setup using Ansible, so I could set it up consistently for future streams while only paying for what I need.</p><p>If you&#x2019;ve tried that post, and now you want to use something that&#x2019;s a little more automated, here&apos;s the repo <a href="https://github.com/nagelxz/restreamer-playbook?ref=blog.jonnagel.us" rel="noreferrer noopener">containing the playbook</a>. A few notes about it. 
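</p><p>For context, the rendered config boils down to something like the sketch below (the IP, application name, and stream keys are placeholders I made up, but the directives themselves come from the nginx-rtmp module):</p><pre><code>rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # only accept ingest from my own IP(s)
            allow publish 203.0.113.10;
            deny publish all;
            # relay the single incoming stream to each platform
            push rtmp://live.twitch.tv/app/live_XXXXXXXXXXXX;
            push rtmp://a.rtmp.youtube.com/live2/XXXX-XXXX-XXXX-XXXX;
        }
    }
}</code></pre><p>OBS then streams once to <code>rtmp://your-server/live</code>, and Nginx fans it out to every <code>push</code> target.</p><p>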
I have group_vars set up, normally as a vault file, to keep my secrets configured. The secrets are the IPs that limit who can stream to it, and the services that are configured to be streamed to, with their respective API keys.</p><p>The <code>nginx.conf</code> is templated to build out based on the allowed_ips and the services I want to stream to.</p><hr><p>The things I hate:</p><ol><li>I have no statistics endpoint for stats on the stream. I was unable to get it working in time for the stream, either with the playbook or by hand.</li><li>I&apos;d prefer my end to be RTMPS (TLS for encrypting the stream).</li><li>A real nice-to-have would be being able to pick which services I push to dynamically. I&apos;d have to build something to handle this, but it&apos;s not a priority.</li><li>A dockerized version, so I&apos;m not tied to just spinning up an Ubuntu host. I could use or repurpose one of the existing images, and I might, but I want to get the rest working first, including RTMPS.</li></ol><p>I&apos;ll do an update or a follow-up post once I get some of these fixed and working!</p><hr><hr><p>If you&apos;ve made it this far, thanks for finding this post. It means a lot that, as niche as this information is, there&apos;s someone out there trying to do the same things. Glad I can help, even if it&apos;s just future me.</p>]]></content:encoded></item><item><title><![CDATA[Hiccups in automating the new server]]></title><description><![CDATA[<p>There&apos;s some good and bad news with the new server....</p><p>Let&apos;s start with the good. I&apos;ve automated 80%, got it stood up and have migrated <em>most</em> of my data over to it. 
All the important, everyday bits are being run from it instead of my old</p>]]></description><link>https://blog.jonnagel.us/hiccups-in-automating-the-new-server/</link><guid isPermaLink="false">6381576012916e0001b78c5f</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Mon, 10 Oct 2022 03:36:42 GMT</pubDate><content:encoded><![CDATA[<p>There&apos;s some good and bad news with the new server....</p><p>Let&apos;s start with the good. I&apos;ve automated 80%, got it stood up and have migrated <em>most</em> of my data over to it. All the important, everyday bits are being run from it instead of my old microserver.</p><hr><p>The bad, well, it&apos;s more like tepid, stale water. Let&apos;s go in order of the things that went wrong or were skipped.</p><h2 id="the-bad">The Bad</h2><h3 id="kickstart">Kickstart</h3><p>Yeah, the core of building the server got axed. Attempts were made, but for some reason I could not get the server to recognize the boot option, or when I did, it complained about issues that I did not have when testing on VMs. Instead of dwelling on this one, I&apos;ve chosen to skip it indefinitely. It was more of a nice-to-have than a need. The disk layout and config options aren&apos;t complex enough to have to worry about messing it up if I have to redo from scratch for some reason.</p><h3 id="snapshots">Snapshots</h3><p>Yeah... Now we&apos;re crossing into breaking core designs, but the good news is, this is just skipped &quot;for now&quot;. This is partially because I originally wanted to base the system on openSUSE before switching back to Fedora. openSUSE makes it <strong><em>incredibly</em></strong> easy to set up and use snapper out of the box. On Fedora, it&apos;s easy with a giant asterisk. It&apos;s the biggest pain in the world to automate on Fedora 36. Whatever you&apos;re imagining the biggest pain to be, double it. 
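</p><p>For a sense of what&apos;s involved, the manual version boils down to roughly this (a sketch only; the subvolume ID is a placeholder and the exact steps vary by release, so treat the guide linked below as the real reference):</p><pre><code># tell grub it&apos;s allowed to boot from btrfs snapshots
echo &apos;SUSE_BTRFS_SNAPSHOT_BOOTING=&quot;true&quot;&apos; &gt;&gt; /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

# make the snapshot-aware subvolume the default, then reboot
btrfs subvolume set-default &lt;subvol-id&gt; /
reboot</code></pre><p>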
There is a fairly <a href="https://sysguides.com/install-fedora-36-with-snapper-and-grub-btrfs/?ref=blog.jonnagel.us">straightforward guide</a> to follow, but I have not had luck automating it the way it needs to be done to get it to work.</p><p>The issue comes from the need to make very specific kernel modifications because of the filesystem defaults in Fedora. First, <code>SUSE_BTRFS_SNAPSHOT_BOOTING=true</code> needs to be added to the grub config, and the boot kernel needs to be rebuilt to support this change. Next, you need to change the subvolume IDs for the /.snapshot volume so that you can control how the system boots. If these steps are in any way wrong (including trying to skip rebooting to set part of it), you lose access to the system. </p><p>I&apos;m not skipping it forever; eventually I will get this part done, even if it&apos;s unfortunately done manually. Mainly, this is going to be needed when I start doing system upgrades, doing more ad-hoc work with containers, and backing up the VMs in the cluster.</p><h3 id="plex-harware-encoding">Plex Hardware Encoding</h3><p>This one is confusing. Plex is being run inside a Podman container. As near as I can tell, the official Plex container doesn&apos;t play nicely with Podman. It mostly has to do with how Podman does the hardware passthrough compared to Docker. Docker passes the device through directly, while Podman uses hooks into the device to share its abilities. For some reason, even when the device is successfully passed through, running <code>nvidia-smi</code> returns command not found, and Plex doesn&apos;t see the option for hardware encoding. </p><p>I need more time to investigate this, mostly at a time when Plex can be down and not interrupting anything (which sucks, because that&apos;s how I&apos;ve been watching most of my content lately).</p><h3 id="updates-kinda">Updates? Kinda</h3><p>I&apos;ll be honest, I didn&apos;t plan exactly how I would be running updates on things. 
In order, this is what needs to be done:</p><ol><li>Update the core system (security or all)</li><li>Update containers (currently Plex and Jellyfin)</li><li>VM upgrades (separate handling for cluster VMs and ad-hoc ones, but this is something I haven&apos;t spent any time figuring out, since I haven&apos;t built the cluster part yet).</li></ol><p>A week or so ago, I did make a general playbook to do the updates to the system, along with tags to differentiate between security updates and full updates that will have a reboot. That went mostly fine, but it did show me that I had an issue with the containers and the systemd files that were created. I haven&apos;t spent time figuring out how I would fix them yet, but it&apos;s on my list of things to do before the end of the year.</p><p>For the VM part, I have thoughts about how I&apos;m going to update the cluster, but for other VMs, I&apos;m either going to use the same playbook as for the core system, or I&apos;m going to build in running updates via console somehow (I doubt it; I don&apos;t want to deal with expect if I can avoid it).</p><h3 id="the-cluster-itself">The Cluster itself</h3><p>This one is more of a cop-out than anything &#x1F609;. I have the files needed to make the XML domain definitions, but I haven&apos;t had the time to tweak them for what I need and then build out the variables. I should also be able to leverage some info from some playbooks at work if I get stuck. Again, it&apos;s more a matter of time than an issue of getting it working.</p><hr><p>It&apos;s slow going, mostly due to the time I spend working on it, but the system is building out mostly as planned.</p>]]></content:encoded></item><item><title><![CDATA[New Server and Automating the Install]]></title><description><![CDATA[<p>Last summer (2021) I came into some new hardware to replace my current home server. With 5x the cores (and 10x the threads), and with nearly 6x the performance, I wanted to rethink how I leverage the server. 
I had gotten used to either running things on the</p>]]></description><link>https://blog.jonnagel.us/automating-server-installs/</link><guid isPermaLink="false">61ad0c1ade816b000119c295</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Thu, 01 Sep 2022 19:27:23 GMT</pubDate><content:encoded><![CDATA[<p>Last summer (2021) I came into some new hardware to replace my current home server. With 5x the cores (and 10x the threads), and with nearly 6x the performance, I wanted to rethink how I leverage the server. I had gotten used to either running things on the core system, or spinning up an LXC container. I&apos;ve also wanted to start playing with a small Kubernetes cluster without buying new hardware, and while running something simple like k3s in a container is an option, it&apos;s not fully indicative of the full experience. Since I haven&apos;t had <a href="https://blog.jonnagel.us/part-1-of-getting-kubernetes-in-lxd/">much</a> <a href="https://blog.jonnagel.us/part-2-of-kubernetes-in-lxd/">luck</a> in getting any Kubernetes running in LXC, I figured it was time for a change.</p><hr><p>So far I&apos;ve been playing with using Fedora Workstation as the base, leveraging Podman for containerization and QEMU as the virtualization platform. So far it&apos;s been great: incredibly performant and exactly what I&apos;m looking for.</p><p>First up is being able to reinstall the system within an hour and have it configured and ready to go. I decided to take a page out of work&apos;s playbook: automate the installer with a kickstart file, then finish off with Salt or Ansible for anything that can&apos;t be handled in the %post step.</p><p>Kickstart is a configuration file created by Red Hat around the time of Red Hat Linux 6.2 (possibly older, but this is the first reference I can find) to help automate the installer. Back then, adding the kickstart file would create what was called a Kick-Me disk. 
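</p><p>These days the file is usually handed to the installer with a kernel boot argument (assuming Fedora&apos;s Anaconda installer, whose boot-options docs describe <code>inst.ks</code>; the hostname and paths here are placeholders):</p><pre><code># fetch the kickstart from a webserver
inst.ks=https://example.com/server.ks

# ... or read it from a partition on a local disk/USB stick
inst.ks=hd:sdb1:/server.ks</code></pre><p>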
The setup has gone through a few revisions since then, and most distributions support it, but the concept is the same - include a kickstart file, either on disk, on another piece of removable media, or on a webserver. Most notably, the kickstart configuration is used as part of PXE boot environments. </p><p>In the kickstart file, you can define everything from disk partitioning and network setup to what packages you want installed by default on first boot. There are also steps that can run either pre- or post-install. You can find all the option references in this <a href="https://docs.fedoraproject.org/en-US/fedora/rawhide/install-guide/appendixes/Kickstart_Syntax_Reference/?ref=blog.jonnagel.us">guide</a>. </p><hr><p>The kickstart file I&apos;m going to leverage will be a modified version of the one that was generated by the Anaconda installer from my initial setup of the server. To test the configuration, I&apos;ve been creating VMs in QEMU and modifying the ISO I want to use as I go. There have been 2 major revisions so far. The first was a basic config that also tried to install and configure snapper; the second worked on setting up the partitions exactly, without any post scripts.</p><p>Below is the test kickstart file I was using ... Let&apos;s break it down.</p><hr><pre><code>url --mirrorlist=&quot;https://mirrors.fedoraproject.org/metalink?repo=fedora-34&amp;arch=x86_64&quot;

# Use graphical install
graphical

# Keyboard layouts
keyboard --xlayouts=&apos;us&apos;
# System language
lang en_US.UTF-8
# System timezone
timezone America/New_York --utc

# Run the Setup Agent on first boot
firstboot --enable</code></pre><p>Instead of pulling from the ISO, I chose to pull from the default mirror for Fedora. So that I could watch the process in case it got stuck (which happened a lot in the beginning), I left it in graphical mode. The next bit is just your normal language and timezone settings. Finally, there&apos;s firstboot. This allows you to go through the initial setup after boot (think of KDE environments, where you configure your initial account <em>after</em> the install is complete). I&apos;ll be able to remove this in my final product, since the user will be created differently.</p><pre><code># Generated using Blivet version 3.3.3
ignoredisk --only-use=vda
# Partition clearing information
clearpart --all --initlabel
# Disk partitioning information

part btrfs.1 --fstype=&quot;btrfs&quot; --ondisk=vda --size=15360
part btrfs.2 --fstype=&quot;btrfs&quot; --ondisk=vda --size=59389
part /boot --fstype=&quot;ext4&quot; --ondisk=vda --size=2048 --label=boot
part biosboot --fstype=&quot;biosboot&quot; --ondisk=vda --size=2
btrfs /vol_root --label=root_vol btrfs.1
btrfs /home_pool --label=fs_pool btrfs.2
btrfs / --subvol --name=@root LABEL=root_vol
btrfs /home --subvol --name=@home LABEL=fs_pool</code></pre><p>Now we&apos;re getting to the disk setup part. We only want the installer to worry about vda, the disk that I created to install the VM onto. If I&apos;m reusing a disk, I&apos;ll want to wipe it by using the clearpart command. This will wipe the disk and reinitialize it for use.</p><p>Now we&apos;re into the juice of the partitioning. I created the VM disk to be 75GB, to test giving a small root partition and a separate home partition on different btrfs volumes for testing snapper rollbacks. Being a VM, I have to set up the biosboot partition. There&apos;s also an issue with setting up /boot on a btrfs volume and interacting with grub currently, especially when snapshots are involved. I decided to expose the core btrfs volumes to make the snapper setup easier <a href="POST TBD">later</a>.</p><pre><code>#Root password
rootpw --lock

user --name=nagel --password=XXXXXXXXXXXXX --groups=wheel</code></pre><p>Simplest section. Login to the root user is locked, and my user is defined. There&apos;s an additional setting to add an SSH key for logging in to the account that&apos;s created, which I&apos;m probably going to add.</p><pre><code>%packages
@^server-product-environment
@headless-management
@container-management
@server-hardware-support
@virtualization
ansible
openssh-server
vim
git
python3-dnf-plugin-snapper
snapper
%end</code></pre><p>Next is the packages section. This allows me to define both which Fedora install I want (mostly due to the nature of using the url step instead of cdrom), as well as the packages I want included from the start. Everything that starts with an <code>@</code> symbol is a group of packages to install. headless-management adds packages to support Cockpit, container-management adds all the packages needed for Podman, server-hardware-support adds things like lm_sensors and other utils to monitor the health of the server, virtualization installs QEMU and supporting packages, and the rest of the packages are things I want as part of the setup of my install. This list will change as I finalize the core install.</p><p>There&apos;s also the option for pre- and post-install scripts. <code>%pre</code> is useful if you have specific partition schemes you want set up that are not easily done with the default partitioning commands. <code>%post</code> is there for anything you want in place after the installation is finished. I tried to use it for the snapper config, but even running outside of the chroot, I could not get it to work.</p><hr><p>I&apos;ll keep tweaking the kickstart file as I go, and start building the Ansible or Salt configuration to finalize the setup before moving on to the VM setups and other items.</p>]]></content:encoded></item><item><title><![CDATA[Free or Open Source software, and the 100k lb gorillas in the room]]></title><description><![CDATA[<p>A couple days ago, I was listening to Coder Radio episode <a href="https://coder.show/448?ref=blog.jonnagel.us">448</a>, and they mentioned the whole faker.js and colors.js fiasco that went down last week. 
Work&apos;s kept me quite busy this past week or so, so I missed the whole thing as it was unfolding.</p>]]></description><link>https://blog.jonnagel.us/free-or-open-source-software-and-the-100k-lb-gorillas-in-the-room/</link><guid isPermaLink="false">61e443decd16d400019689e6</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Sun, 16 Jan 2022 20:23:39 GMT</pubDate><content:encoded><![CDATA[<p>A couple days ago, I was listening to Coder Radio episode <a href="https://coder.show/448?ref=blog.jonnagel.us">448</a>, and they mentioned the whole faker.js and colors.js fiasco that went down last week. Work&apos;s kept me quite busy this past week or so, so I missed the whole thing as it was unfolding.</p><p>Short recap:</p><p>The developer and creator of 2 popular JavaScript libraries updated both in drastically different ways. For faker.js, he removed the code and replaced it with a readme that says &quot;What really happened with Aaron Swartz?&quot;; for colors.js, the version number was bumped to 6.6.6 and the code entered an infinite loop, printing garbage data into the console after 3 lines of &quot;LIBERTY LIBERTY LIBERTY&quot;. npm reverted the version, and his GitHub account was suspended (access was restored later). News articles from the likes of Forbes, and other people in online circles, resorted to calling him a terrorist for these actions.</p><hr><p>These actions that he performed on his own code do not make him a terrorist.</p><h2 id="full-stop">FULL STOP.</h2><p>He&apos;s done some things in his past that <em>can</em> classify him as a potential terrorist (if you want to learn what, that&apos;s on you to figure out. 
This is a story of a larger problem, not a man&apos;s need for specialized help), but it does not make it right to label his current actions as such.</p><hr><p>faker.js and colors.js were packages made by this person; he is technically within his rights to do whatever he wants with the code up on GitHub and what he pushes to npm.</p><p>The issue, as I see it, is that companies like Amazon have used his projects both internally and as part of AWS development kits for prototyping things to run on Lambda. The burden rolls down to the creator, who starts to feel burned out on top of other issues going on in his life. </p><p>I&apos;m with him on this. If a project like this is so <strong><em>integral </em></strong>to how they, or their services, function, there should be compensation. I don&apos;t think a 6-figure contract is fair, but definitely more than the couple thousand (if anything) thrown their way.</p><p>The same thing could be said about all the development efforts behind log4j. With the 0-day vulnerability that showed up back in December, thousands of companies and applications were made vulnerable. Even the main developer of that project had gotten little in terms of donations prior to the 0-day. Before the 0-day was announced, there were <a href="https://web.archive.org/web/20211210170913/https://github.com/sponsors/rgoers">no sponsors</a>; that slowly changed after the release of the CVE on the 10th of December. Now, they have <a href="https://github.com/sponsors/rgoers?ref=blog.jonnagel.us">74 individual sponsors</a>, but that doesn&apos;t help secure the future of an integral piece of code that thousands of companies are using in their applications. In the case of Log4j, there are about 6 companies supporting his efforts on the sponsor page, but that&apos;s not nearly enough. 
The team I was formerly on supported various development teams that were using Log4j in their projects, and while I can&apos;t be sure, there&apos;s a good chance that nobody at the company, nor the company itself, has made a donation. While <a href="https://github.com/rgoers?ref=blog.jonnagel.us">Ralph Goers</a> is employed as a Software Architect, that <em>should not</em> diminish the fact that it takes time and effort to write software.</p><p>The faker.js creator shares a similar story. There were only 10-12 consistent sponsors prior to the incident on the 7th of January. There are 48 now, but none of the public sponsors are corporations.</p><hr><h2 id="what-should-you-as-an-individual-do">What should you, as an individual, do?</h2><p>If you can afford to, you should donate to any open-source developers or projects that are integral to your everyday workflow. Any large video game streamer should seriously consider donating to the <a href="https://obsproject.com/contribute?ref=blog.jonnagel.us">OBS Project</a> on the regular. If you can&apos;t afford to, go out of your way to try and support the developer or project in any way you can. It could be as simple as a thank you, helping fix a bug, or helping them improve documentation. Start being the change in the cycle.</p><hr><h2 id="what-should-coporations-be-doing">What should corporations be doing?</h2><p><strong>Donating a lot more than they do now</strong>, and hiring integral contributors and maintainers of projects that their company relies on. Some companies already do this. Facebook employs one of the developers of BTRFS. wolfSSL employs the <em>sole</em> founder and lead developer of cURL. Amazon contributes back to the Rust language. Those are the ones I know off the top of my head. </p><p>As an individual inside a large company, you should try and push for a change in what kind of donations they make. 
It won&apos;t be easy to change the norm.</p><p>The name Free and Open Source Software is a blessing and a curse at the same time. This model has allowed people like me to excel in our professional careers, letting us learn on and off the job without being locked into a specific company&apos;s way of doing things. But it&apos;s also the bane of anyone who&apos;s written a popular piece of software, even if it&apos;s not integral to a large corporation. Many open-source developers end up experiencing some form of burnout when the projects get too large and users ask too much of them.</p><p>We, as users, should try to be more respectful of the time and effort these individuals put into these projects.</p>]]></content:encoded></item><item><title><![CDATA[Discovering limitations of Postgres...]]></title><description><![CDATA[<p>This is probably going to come off more as a rant than anything, apologies now...</p><hr><p>On and off for the last 5 years I&apos;ve had a personal project (eventually I&apos;ll go into detail) I pick up for a bit when I decide it&apos;s time</p>]]></description><link>https://blog.jonnagel.us/discovering-limitations-of-postgres/</link><guid isPermaLink="false">60fdf2bbeeb58900013edd2f</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Mon, 26 Jul 2021 00:27:40 GMT</pubDate><content:encoded><![CDATA[<p>This is probably going to come off more as a rant than anything, apologies now...</p><hr><p>On and off for the last 5 years I&apos;ve had a personal project (eventually I&apos;ll go into detail) I pick up for a bit when I decide it&apos;s time to give it another shot at completion.</p><p>The first rendition was done with a traditional LAMP stack, with CodeIgniter, Twig and a few other things. It took me about 4 months to get the project to 70%, putting in a couple hours here and there. A vast majority of that time was spent learning a lot of the stack, to the point I could do something functional with it. 
Unfortunately, I didn&apos;t have the bandwidth to bring it to completion, as I&apos;m allergic to JavaScript and doing what I wanted was outside of my skill level at the time.</p><p>Fast-forward to 2019. I&apos;ve started a new job, I&apos;m living on my own, and I find just enough free time to try again, but differently. This time I chose the Python framework Falcon with a Postgres database, mostly based off of performance benchmarks I&apos;ve seen (<em><a href="https://blog.miguelgrinberg.com/post/ignore-all-web-performance-benchmarks-including-this-one?ref=blog.jonnagel.us">all benchmarks are lies</a>, don&apos;t listen to them</em>), while finally embracing that Docker lifestyle, and made it probably 40% of the way to completion. I had inserting and fetching via API endpoints working, but little else. This time I decided to only focus on backend work instead of trying to make a partially working frontend as well. This time it was shelved due to personal things keeping me from wanting to explore more of what I was writing.</p><p>That brings us to the current day. I&apos;ve been spending a lot of my work time dealing with <a href="https://blog.jonnagel.us/functional-api-documentation">APIs and their design</a> and decided I should practice what I preach and start over. Starting over was mostly because trying to pick up the Falcon framework again after working in Bottle was a pain, and the performance of Bottle had mostly caught up to Falcon, to the point that it&apos;s not a big difference. I kept the same backend library I had been writing, so everything was mostly plug and play, and I got back to the same 40% in less than a day. And that&apos;s when the fire nation attacked........</p><figure class="kg-card kg-image-card"><img src="https://blog.jonnagel.us/content/images/2021/07/tenor-1-.gif" class="kg-image" alt loading="lazy" width="500" height="374"></figure><p>Just kidding. 
I spent whatever free time and energy I had improving the backend code, reaching what I would consider feature parity with other similar projects I know of. Part of this work includes a type of expiration table, to be driven by a stored procedure that routinely deletes data based on values in the main table. This was <strong><u>always</u></strong> in the design of the project and is probably crucial to the end product. I hadn&apos;t done the stored procedure as part of the LAMP stack because I was focused on getting the frontend where I wanted it instead of that functionality, but I was familiar with how to do what I wanted, so it was a non-issue.</p><hr><p>This is when I started to hit my issues. Stored procedure to delete items? Easy Peasy. Option to schedule the stored procedure? <strong>404</strong>... </p><p>Wait, what? Yes, by default, Postgres doesn&apos;t contain the option to schedule a stored procedure internally. Missing this when I decided to pick Postgres is on me, but in my defense, I never dreamed that I&apos;d have to include &quot;can schedule stored procedures&quot; in my search for RDBMS options outside of MySQL. 
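</p><p>To be clear, the procedure half of this is trivial; a simplified sketch of the idea (table and column names made up for illustration) looks like:</p><pre><code>CREATE OR REPLACE PROCEDURE purge_expired()
LANGUAGE plpgsql
AS $$
BEGIN
    -- delete anything whose expiration entry has lapsed
    DELETE FROM items
    WHERE id IN (SELECT item_id FROM expirations WHERE expires_at &lt; now());
END;
$$;</code></pre><p>The missing piece was purely a built-in way to run <code>CALL purge_expired();</code> on a schedule from inside the database.</p><p>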
I had worked with MSSQL and assumed anything in this day and age would allow this (besides SQLite, but that&apos;s expected by design, since it&apos;s not a client-server database).</p><p>I spent the time researching my options if I wanted to stick with Postgres:</p><ul><li>try to bake a cron to run inside the Postgres container</li><li>create a function inside my backend code to run on a cron schedule in a different process thread to trigger the stored procedure</li><li>ditch the concept of scheduling, and make the application delete the rows on fetch if the criteria are met (slow and expensive)</li><li>install the <a href="https://github.com/citusdata/pg_cron?ref=blog.jonnagel.us">pg_cron extension</a> as part of the Postgres image and bake the schedule into the initial SQL that runs when spun up with docker-compose</li></ul><p>I decided to go with the last option and bake pg_cron into the image; keeping everything inside the docker-compose setup felt the most portable. Honestly, it went great until I tried to activate the schedule.</p><figure class="kg-card kg-code-card"><pre><code>2021-07-25 22:29:49.825 UTC [84] ERROR:  can only create extension in database postgres
2021-07-25 22:29:49.825 UTC [84] DETAIL:  Jobs must be scheduled from the database configured in cron.database_name, since the pg_cron background worker reads job descriptions from this database.
2021-07-25 22:29:49.825 UTC [84] HINT:  Add cron.database_name = &apos;database_name&apos; in postgresql.conf to use the current database.
2021-07-25 22:29:49.825 UTC [84] CONTEXT:  PL/pgSQL function inline_code_block line 4 at RAISE
2021-07-25 22:29:49.825 UTC [84] STATEMENT:  CREATE EXTENSION pg_cron;
psql:/docker-entrypoint-initdb.d/init.sql:3: ERROR:  can only create extension in database postgres
DETAIL:  Jobs must be scheduled from the database configured in cron.database_name, since the pg_cron background worker reads job descriptions from this database.
HINT:  Add cron.database_name = &apos;database_name&apos; in postgresql.conf to use the current database.</code></pre><figcaption>error log from starting up the container</figcaption></figure><p>Well ..... Shit.</p><p>For most installs of Postgres, that doesn&apos;t seem to be that big of a deal. Just use the default database as your database in the container, and off to the races, right? Well, it used to be considered bad practice to build your application in the default database, hence me defaulting to using database_name. What about having your initial script create database_name, switch to it, and then set up the tables and whatnot?</p><figure class="kg-card kg-image-card"><img src="https://blog.jonnagel.us/content/images/2021/07/tenor-1--1.gif" class="kg-image" alt loading="lazy" width="320" height="320"></figure><p>Postgres doesn&apos;t have or understand the USE command. Everything talks about connecting with psql and doing <code>\c database_name</code> before running the script, or embedding the switch and the script you want to run inside another script. </p><p>No thank you... Not after spending 2 hours on getting a working stored procedure and going down the rabbit hole to find this out.</p><hr><p>I&apos;ll say it again: these limitations are things I should have discovered much earlier in the process, and it&apos;s my fault for not doing the footwork until most of my design was set in stone.</p><p>For this project I&apos;ll be giving up Postgres, but only because I don&apos;t want to jump through these hoops to keep using it. I&apos;m sure I&apos;ll have other things I&apos;ll work on where it makes perfect sense. So, for now, goodbye Postgres ... 
it was kinda fun working with you.</p>]]></content:encoded></item><item><title><![CDATA[Functional API Design and Documentation]]></title><description><![CDATA[<p></p><p>Whether you&apos;re working on a traditional REST API or building something out using AWS Gateway, eventually <em>(read: you should be doing it from the start)</em> you may need to start documenting your functions and endpoints.</p><p>At work we&apos;ve been building out an API for standardizing various automation</p>]]></description><link>https://blog.jonnagel.us/functional-api-documentation/</link><guid isPermaLink="false">60c6acf256b4170001739238</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Sat, 10 Jul 2021 20:30:37 GMT</pubDate><content:encoded><![CDATA[<p></p><p>Whether you&apos;re working on a traditional REST API or building something out using AWS Gateway, eventually <em>(read: you should be doing it from the start)</em> you may need to start documenting your functions and endpoints.</p><p>At work we&apos;ve been building out an API for standardizing various automation activities and building actions for our internal chatbot. The setup of the REST API has mostly been done by a few members of our team as a lift and shift from various modules inside our NodeJS / Hubot framework. Some of the functions are simple, like comparing versions of applications against healthcheck endpoints in environments; others are more complex, like redeploying all applications in an environment.</p><p>As of right now, there&apos;s no standardization on the request or response structures, nor are there any example requests to help new team members pick up and change the functions, just examples of how to use them from the chat platform. The best way to both fix the missing examples and standardize (at least) the requests is to create a REST API specification document. While there are a few different REST API specifications out there, the most common is Swagger / OpenAPI. 
This specification can be easily designed and tested in something like <a href="https://swagger.io/tools/swagger-ui/?ref=blog.jonnagel.us">Swagger UI</a>, <a href="https://insomnia.rest/?ref=blog.jonnagel.us">Insomnia</a>, or other REST design tools.</p><hr><p>What&apos;s required to build out a specification document? A basic one is quite simple to set up. Let&apos;s take a look:</p><pre><code class="language-yaml">openapi: 3.0.3
info:
  title: just a simple api spec
  version: 0.0.1
  description: can be whatever you want, usually I expand on the title
  contact: 
    email: some.email@example.com
servers:
  - url: some.url.example.com
    description: if you are using something like Swagger, this will give you a place to test against.
paths: 
  /say_hi:
    get:
      description: returns hello
      responses:
        &apos;200&apos;:
          description: a successful hello response
          content:
            text/plain:
              schema:
                type: string
                example: &quot;hello&quot;</code></pre><p>In this simple example you can see that there is a single endpoint path described, <code>/say_hi</code>. When you hit the URL along with the path (some.url.example.com/say_hi), you can expect to get back &quot;hello&quot;. That&apos;s really all that&apos;s happening here.</p><p>Let&apos;s take it a step further and say you have a function behind your API that could be hit to get the status of a bunch of healthchecks at once from your servers in the NYC datacenter. The response you&apos;d expect would be a JSON array listing each host name and whether it&apos;s up or down, as follows:</p><pre><code class="language-json">[
	{&quot;host_1&quot; : &quot;UP&quot;},
	{&quot;host_2&quot; : &quot;UP&quot;},
	{&quot;host_3&quot; : &quot;DOWN&quot;}
]</code></pre><p>Let&apos;s see what the path description would be:</p><pre><code class="language-yaml">paths:
  /healthchecks/{datacenter}:
    get:
      description: return the healthcheck status for the provided datacenter.
      parameters:
        - in: path
          name: datacenter
          schema:
            type: string
          required: true
      responses:
        &apos;200&apos;:
          description: successful call for datacenter healthcheck
          content:
            application/json:
              schema:
                type: object
                properties:
                  hostname:
                    type: string
                    example: &quot;UP&quot;
        &apos;404&apos;:
          description: datacenter was not found
          content:
            application/json:
              schema:
                type: object
                properties:
                  datacenter:
                    type: string
                    example: &quot;San_Diego&quot;
                  message:
                    type: string
                    example: &quot;Could not find datacenter&quot;</code></pre><p>It takes some time to plan out what your response objects may be, so don&apos;t be afraid to change your spec as you go along. In this case we&apos;ve also defined what the object might look like if you want a different response for when the datacenter is not found.</p><hr><p>There are a few things you need to keep in mind as you are building your API and designing the responses:</p><!--kg-card-begin: markdown--><ul>
<li>
<p>Try to work on the path documentation as you&apos;re going along. It doesn&apos;t need to be finished right from the start. If you know the POST body is going to be a JSON object, but don&apos;t know how many fields it&apos;s going to have, you can leave it commented out and build it as you go along.</p>
</li>
<li>
<p>Your input and output may change! Don&apos;t feel stuck just because the input or output didn&apos;t end up looking the way you envisioned. Let your specification grow with your API.</p>
<ul>
<li>At work our API was originally designed to drive the complex functions for the chatbot, and thus the response objects were originally designed to be easier to inject inside the chatbot framework. As we decided it was easier to leverage the API for some Jenkins jobs instead of duplicating functionality, we&apos;ve had to adapt and are now serving responses based on header information.</li>
</ul>
</li>
<li>
<p>Descriptions of what each path is for are <strong>immensely</strong> helpful. Someone who is looking at your Swagger page, or the actual document itself, should be able to figure out exactly what is going on and what is being returned.</p>
</li>
<li>
<p>Just as important, be consistent with your naming conventions. If you have one path / function whose inputs are environment and service, and another that takes just environment, don&apos;t call the input <code>env</code> in one and <code>env_name</code> in the other. Keep them the same!</p>
</li>
</ul>
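That last point about naming is easy to check mechanically. Here&apos;s a rough sketch of the idea in Python; the paths, parameter names, and helper functions below are all hypothetical, and the spec is assumed to already be loaded into a plain dict the way a YAML parser would produce it. It flags parameter names that look like variants of each other:

```python
# Sketch: catch inconsistent parameter naming across an OpenAPI-style spec.
# The spec is represented as a plain dict; paths and names are hypothetical.

def parameter_names(spec_paths):
    """Collect every path-parameter name used anywhere in the spec."""
    names = set()
    for item in spec_paths.values():
        for param in item.get("parameters", []):
            if param.get("in") == "path":
                names.add(param["name"])
    return names

def find_near_duplicates(names):
    """Flag names that look like variants of each other (env vs env_name)."""
    dupes = set()
    for a in names:
        for b in names:
            # env_name is env plus a suffix, so both are suspects
            if a != b and b.startswith(a + "_"):
                dupes.update({a, b})
    return dupes

paths = {
    "/deploy/{env}": {"parameters": [{"in": "path", "name": "env"}]},
    "/status/{env_name}": {"parameters": [{"in": "path", "name": "env_name"}]},
}

print(sorted(find_near_duplicates(parameter_names(paths))))
# prints ['env', 'env_name'] -- two names for the same concept
```

Running a check like this as the spec grows is a cheap way to catch `env` / `env_name` style drift before anyone builds against it.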
<!--kg-card-end: markdown--><p>This is probably a very cursory approach, but this was the process I tried to follow as I created the one for my team.</p>]]></content:encoded></item><item><title><![CDATA[Docker Escalation Privileges and Remediation]]></title><description><![CDATA[<p></p><p>This isn&apos;t exactly the post I planned on writing to break my writer&apos;s block, but it&apos;s the one I needed.</p><hr><p>I was listening to the podcast <em>Linux Unplugged</em> episode <a href="https://linuxunplugged.com/395?ref=blog.jonnagel.us">#395</a> and they had a little hacker challenge to see who could get root access with a</p>]]></description><link>https://blog.jonnagel.us/docker-escalation-privileges-and-remediation/</link><guid isPermaLink="false">60766179b686e40001491bda</guid><dc:creator><![CDATA[Jon Nagel]]></dc:creator><pubDate>Sun, 06 Jun 2021 03:23:45 GMT</pubDate><content:encoded><![CDATA[<p></p><p>This isn&apos;t exactly the post I planned on writing to break my writer&apos;s block, but it&apos;s the one I needed.</p><hr><p>I was listening to the podcast <em>Linux Unplugged</em> episode <a href="https://linuxunplugged.com/395?ref=blog.jonnagel.us">#395</a> and they had a little hacker challenge to see who could get root access from a non-privileged user the quickest. The winner realized the user had misconfigured Docker access, ran <code>docker ps -a</code>, and mounted / into a container to gain the access needed to complete the challenge.</p><p>Listening to this, I realized I&apos;m just as guilty of configuring my Docker access the same way. So I wanted to test the escalation method and play with the different ways to secure access to the Docker socket (other than switching to Podman). The conventional recommendation for granting access to Docker is to add users to the <code>docker</code> group that gets created with Docker&apos;s installation. 
This should really only be done with users who already have access to <code>sudo</code> and understand the risks that this entails. My team at work is guilty of giving access to the docker group on our sandbox server instead of using a proper method.</p><hr><p>To prepare for this, let&apos;s set the scene:</p><p>There&apos;s a server inside a company that runs the company website using nginx and PHP in Docker. There&apos;s a developer named Jeff who has an account on that server, but he has made the cardinal sin... His password is <code>password</code>. Jeff was granted access to the folder /data/website as well as the docker group to manage the website. His personal machine was compromised, and a bad actor has used the bad password to gain access to the server.</p><p>The sysadmin for the server has also left a bunch of corporate passwords in a place they feel is safe, /root, but was smart enough to make sure the file was only available to the root user and nobody else.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jonnagel.us/content/images/2021/06/image.png" class="kg-image" alt loading="lazy" width="904" height="88" srcset="https://blog.jonnagel.us/content/images/size/w600/2021/06/image.png 600w, https://blog.jonnagel.us/content/images/2021/06/image.png 904w" sizes="(min-width: 720px) 720px"><figcaption>In case you were thinking I was tricking you on the permissions</figcaption></figure><hr><p>Each of the examples will be done and shown on both Debian- and RHEL-based systems unless otherwise specified.</p><p>Let&apos;s go from easiest to less easy through the things the bad actor can do with Jeff&apos;s docker access. 
Let&apos;s steal the corporate passwords.</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.jonnagel.us/content/images/2021/06/deb-steal-pass.png" width="959" height="866" loading="lazy" alt srcset="https://blog.jonnagel.us/content/images/size/w600/2021/06/deb-steal-pass.png 600w, https://blog.jonnagel.us/content/images/2021/06/deb-steal-pass.png 959w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://blog.jonnagel.us/content/images/2021/06/rhel-steal-pass.png" width="959" height="866" loading="lazy" alt srcset="https://blog.jonnagel.us/content/images/size/w600/2021/06/rhel-steal-pass.png 600w, https://blog.jonnagel.us/content/images/2021/06/rhel-steal-pass.png 959w" sizes="(min-width: 720px) 720px"></div></div></div><figcaption>Your sweet passwords are now mine</figcaption></figure><p>The bad actor got a tad lucky when mounting /root and finding those passwords. They could have easily just mounted <code>/</code> and found that file sitting in /root.</p><p>This behavior is exactly what is expected when you create a container and mount a path. 
The Docker daemon runs as root, so processes inside a container run as root on the host by default, and any container with a mounted folder can read any file in it, no matter what the file&apos;s permissions are on the host.</p><hr><p>Gaining access to the system and mucking around isn&apos;t much more difficult, but depending on your server&apos;s base OS, the behavior is different.</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.jonnagel.us/content/images/2021/06/rhel-chroot.png" width="990" height="866" loading="lazy" alt srcset="https://blog.jonnagel.us/content/images/size/w600/2021/06/rhel-chroot.png 600w, https://blog.jonnagel.us/content/images/2021/06/rhel-chroot.png 990w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://blog.jonnagel.us/content/images/2021/06/deb-chroot.png" width="990" height="474" loading="lazy" alt srcset="https://blog.jonnagel.us/content/images/size/w600/2021/06/deb-chroot.png 600w, https://blog.jonnagel.us/content/images/2021/06/deb-chroot.png 990w" sizes="(min-width: 720px) 720px"></div></div></div><figcaption>Interestingly things fail differently on Ubuntu</figcaption></figure><p>By using <code>chroot</code> on the container you&apos;re able to get a root shell into the host system. Interestingly enough (I was not aware of this prior to testing), the network stack isn&apos;t properly set up in the chroot shell, so depending on what&apos;s cached you may not be able to install a package. 
Though that probably won&apos;t really stop someone trying to compromise your system.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jonnagel.us/content/images/2021/06/one-echo-away.png" class="kg-image" alt loading="lazy" width="1472" height="608" srcset="https://blog.jonnagel.us/content/images/size/w600/2021/06/one-echo-away.png 600w, https://blog.jonnagel.us/content/images/size/w1000/2021/06/one-echo-away.png 1000w, https://blog.jonnagel.us/content/images/2021/06/one-echo-away.png 1472w" sizes="(min-width: 720px) 720px"><figcaption>DEB - an echo a day definitely doesn&apos;t keep the bad actors away</figcaption></figure><p>But unfortunately that&apos;s not all someone can do with that chroot session. With it, they&apos;d be able to create themselves new users, or even change the root password to give themselves easier access to the system.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jonnagel.us/content/images/2021/06/rhell-root-own-2.png" class="kg-image" alt loading="lazy" width="717" height="800" srcset="https://blog.jonnagel.us/content/images/size/w600/2021/06/rhell-root-own-2.png 600w, https://blog.jonnagel.us/content/images/2021/06/rhell-root-own-2.png 717w"><figcaption>RHEL - I can now become root ... FEAR ME .... <em>chokes</em></figcaption></figure><p><em>See? Jeff really didn&apos;t have permission.</em></p><hr><hr><p>Using the docker group, you are given an extremely wide berth to do nefarious things if you know what you are doing. But leveraging Docker doesn&apos;t have to be all doom and gloom. There are ways to make modifications to your process instead of just accepting this is the only way to do it.</p><p>For this next bit, we&apos;re only going to use our RHEL-based system. We&apos;re going to remove Jeff&apos;s access to the docker group and set up ACL rules for him to interact with the docker socket. 
Unfortunately, this still has the privilege escalation problem and would allow you to muck around with the system. </p><p>Depending on your use case, which should always be taken into account when deciding access to tools, you may want to leave Docker available only to root, which breaks the ability to use the docker group. This will require people to be in the sudoers group (either passwordless or regular) to be able to interact with the docker socket. With this restriction, you can also then give specialized access to specific commands via scripts, so people like Jeff can still work.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jonnagel.us/content/images/2021/06/gimped-jeff.png" class="kg-image" alt loading="lazy" width="1478" height="563" srcset="https://blog.jonnagel.us/content/images/size/w600/2021/06/gimped-jeff.png 600w, https://blog.jonnagel.us/content/images/size/w1000/2021/06/gimped-jeff.png 1000w, https://blog.jonnagel.us/content/images/2021/06/gimped-jeff.png 1478w" sizes="(min-width: 720px) 720px"><figcaption>it&apos;s all work and no play for Jeff going forward :(</figcaption></figure><hr><hr><p>Docker now does have the option to run as <a href="https://docs.docker.com/engine/security/rootless/?ref=blog.jonnagel.us">rootless</a>, but it&apos;s a bit more involved to get up and running. The next time you visit, I&apos;ll show how you can migrate your workflow from Docker to Podman, including using docker-compose.</p>]]></content:encoded></item></channel></rss>