<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Devco.Net</title>
    <link>https://www.devco.net/</link>
    <description>Recent content on Devco.Net</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Tue, 25 Nov 2025 00:00:00 +0100</lastBuildDate><atom:link href="https://www.devco.net/feed/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Choria Hierarchical Data</title>
      <link>https://www.devco.net/posts/2025/11/30/choria_hierarchical_data/</link>
      <pubDate>Tue, 25 Nov 2025 00:00:00 +0100</pubDate>
      
      <guid>https://www.devco.net/posts/2025/11/30/choria_hierarchical_data/</guid>
<description>&lt;p&gt;As most are aware, I created the widely used Hiera system in Puppet. I &lt;a href=&#34;https://www.devco.net/archives/2011/06/05/hiera_a_pluggable_hierarchical_data_store.php&#34;&gt;introduced it in 2011&lt;/a&gt;, and it has since become essentially the only way to use Puppet in any meaningful fashion. Given its widespread adoption, I donated the code to Puppet, and it was integrated into Puppet core.&lt;/p&gt;
&lt;p&gt;Unfortunately, during this integration we lost some key features: the command line, and the ability to use Hiera in scripts and elsewhere.&lt;/p&gt;
&lt;p&gt;Meanwhile, our world is changing, and we are ever more focussed on small, single-purpose compute. I intend to create a new kind of Configuration Management system aimed at these small, single-purpose needs, filling a gap that has always existed: how to manage our applications rather than our systems, an area where Puppet has always been weak.&lt;/p&gt;
&lt;p&gt;There is still a need for hierarchical data, and given the flexibility of a completely fresh start, I am at most taking some inspiration from Hiera.&lt;/p&gt;
&lt;p&gt;So today I&amp;rsquo;ll introduce a new tool called Choria Hierarchical Data, current code name &lt;code&gt;tinyhiera&lt;/code&gt;, though that might change.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; This post was first posted &lt;a href=&#34;https://choria.io/blog/post/2025/11/25/hierarchical_data/&#34;&gt;on the Choria blog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Read on for the details.&lt;/p&gt;
&lt;p&gt;The big difference is that this new system uses just one data file: you express a data structure that is the complete outcome of the query, then configure in the same file how that data is overridden and extended.&lt;/p&gt;
&lt;p&gt;Let&amp;rsquo;s look at it. I&amp;rsquo;ll show the different sections separately; in reality it&amp;rsquo;s all one file.&lt;/p&gt;
&lt;p&gt;This is the data we wish to manage:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nt&#34;&gt;data&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;log_level&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;INFO&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;packages&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;- &lt;span class=&#34;l&#34;&gt;httpd&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;web&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;listen_port&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;m&#34;&gt;80&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;tls&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Let&amp;rsquo;s define the hierarchy for overriding this data; this should be familiar to Puppet users:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nn&#34;&gt;---&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;hierarchy&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;merge&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;deep&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;c&#34;&gt;# or first&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;order&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;- &lt;span class=&#34;l&#34;&gt;env:{{ lookup(&amp;#39;env&amp;#39;) }}&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;- &lt;span class=&#34;l&#34;&gt;role:{{ lookup(&amp;#39;role&amp;#39;) }}&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;- &lt;span class=&#34;l&#34;&gt;host:{{ lookup(&amp;#39;hostname&amp;#39;) }}&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;And finally the overrides, where we specify how the data will be extended:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nt&#34;&gt;overrides&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;env:prod&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;log_level&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;WARN&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;role:web&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;packages&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;- &lt;span class=&#34;l&#34;&gt;nginx&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;web&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;listen_port&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;m&#34;&gt;443&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;tls&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;host:web01&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;log_level&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;TRACE&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;web&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;listen_port&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;m&#34;&gt;8080&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;We can now call our CLI tool to resolve this entire structure.&lt;/p&gt;
&lt;p&gt;When we query the data with the &lt;code&gt;role&lt;/code&gt; fact set to &lt;code&gt;web&lt;/code&gt;, the data from the &lt;code&gt;role:web&lt;/code&gt; section is merged into the data from the &lt;code&gt;data&lt;/code&gt; section:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-nohighlight&#34; data-lang=&#34;nohighlight&#34;&gt;$ tinyhiera parse data.yaml role=web
{
  &amp;#34;log_level&amp;#34;: &amp;#34;INFO&amp;#34;,
  &amp;#34;packages&amp;#34;: [
    &amp;#34;httpd&amp;#34;,
    &amp;#34;nginx&amp;#34;
  ],
  &amp;#34;web&amp;#34;: {
    &amp;#34;listen_port&amp;#34;: 443,
    &amp;#34;tls&amp;#34;: true
  }
}
$ tinyhiera parse data.yaml role=web --query web.listen_port
443
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;features-and-differences&#34;&gt;Features and Differences&lt;/h2&gt;
&lt;p&gt;That is the basic feature set, similar to what you might know from Puppet but differing in a few areas:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It&amp;rsquo;s all one file and one data structure; it&amp;rsquo;s not key-lookup orientated. The entire outcome is rendered in one query&lt;/li&gt;
&lt;li&gt;Expressions used in overrides and data lookups use a full query language called &lt;a href=&#34;https://expr-lang.org&#34;&gt;expr-lang&lt;/a&gt;; this means we can call many functions, generate data, derive data and easily extend this over time&lt;/li&gt;
&lt;li&gt;The data is typed: if your facts hold rich data, the results can also be rich data; if &lt;code&gt;value&lt;/code&gt; is an integer, array or map, the result of &lt;code&gt;x=&amp;quot;{{ lookup(&amp;quot;value&amp;quot;) }}&amp;quot;&lt;/code&gt; will have the same type&lt;/li&gt;
&lt;li&gt;The CLI has built-in system facts and can take facts on the CLI, read them from JSON and YAML files, parse your environment variables as facts, or all of these at the same time&lt;/li&gt;
&lt;li&gt;The CLI can emit data as environment variables for use in scripts&lt;/li&gt;
&lt;li&gt;There is a Golang library to use it in your own code&lt;/li&gt;
&lt;/ul&gt;
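&lt;p&gt;To make the merge behaviour concrete, here is a minimal sketch in Python of how a deep merge of the &lt;code&gt;role:web&lt;/code&gt; override into the &lt;code&gt;data&lt;/code&gt; section could work; this illustrates only the semantics implied by the example above, it is not &lt;code&gt;tinyhiera&lt;/code&gt;&amp;rsquo;s actual (Go) implementation:&lt;/p&gt;

```python
# Illustrative sketch only: maps merge key by key, lists append,
# scalars are replaced by the override value.
def deep_merge(base, override):
    if isinstance(base, dict) and isinstance(override, dict):
        merged = dict(base)
        for key, value in override.items():
            merged[key] = deep_merge(merged[key], value) if key in merged else value
        return merged
    if isinstance(base, list) and isinstance(override, list):
        return base + override
    return override

data = {"log_level": "INFO", "packages": ["httpd"],
        "web": {"listen_port": 80, "tls": False}}
role_web = {"packages": ["nginx"], "web": {"listen_port": 443, "tls": True}}

print(deep_merge(data, role_web))
# -> {'log_level': 'INFO', 'packages': ['httpd', 'nginx'],
#    'web': {'listen_port': 443, 'tls': True}}
```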
&lt;h2 id=&#34;environment-variable-output&#34;&gt;Environment Variable Output&lt;/h2&gt;
&lt;p&gt;We want this to be usable in scripts; the easiest approach might be to dig into the data and get a single value:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-nohighlight&#34; data-lang=&#34;nohighlight&#34;&gt;PORT=$(tinyhiera parse data.yaml role=web --query web.listen_port)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;But this involves one call to the command per value; we can instead emit all the data as variables.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-nohighlight&#34; data-lang=&#34;nohighlight&#34;&gt;$ tinyhiera parse data.yaml role=web --env
HIERA_LOG_LEVEL=INFO
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The data is emitted as environment variables that you can source straight into your shell script; obviously this use case benefits from flat data.&lt;/p&gt;
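&lt;p&gt;One plausible way variable names could be derived is to upper-case keys and join nested keys with underscores; the sketch below is only a guess at the idea, the real CLI&amp;rsquo;s naming scheme may differ:&lt;/p&gt;

```python
# Illustrative sketch: flatten nested data into NAME=value lines
# suitable for sourcing in a shell; not the actual CLI's code.
def to_env(data, prefix="HIERA"):
    lines = []
    for key, value in data.items():
        name = f"{prefix}_{key.upper()}"
        if isinstance(value, dict):
            lines.extend(to_env(value, name))  # nested keys: HIERA_WEB_TLS=...
        elif isinstance(value, list):
            lines.append(f"{name}={','.join(str(v) for v in value)}")
        else:
            lines.append(f"{name}={value}")
    return lines

print("\n".join(to_env({"log_level": "INFO", "web": {"tls": True}})))
# HIERA_LOG_LEVEL=INFO
# HIERA_WEB_TLS=True
```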
&lt;h2 id=&#34;built-in-facts&#34;&gt;Built-in facts&lt;/h2&gt;
&lt;p&gt;We have many ways to get facts into it, such as command-line arguments, environment variables, and JSON and YAML files. The CLI also includes its own mini fact source called &lt;code&gt;system&lt;/code&gt; which returns facts about the host it is running on.&lt;/p&gt;
&lt;p&gt;To view the fully resolved facts, we can run the following. I am removing much of the detail here, but you can see this offers enough to make most required decisions:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-nohighlight&#34; data-lang=&#34;nohighlight&#34;&gt;$ tinyhiera facts --system-facts
{
  &amp;#34;host&amp;#34;: {
    &amp;#34;info&amp;#34;: {
      &amp;#34;hostname&amp;#34;: &amp;#34;grime.local&amp;#34;,
      &amp;#34;uptime&amp;#34;: 1843949,
      &amp;#34;bootTime&amp;#34;: 1762258234,
      &amp;#34;procs&amp;#34;: 711,
      &amp;#34;os&amp;#34;: &amp;#34;darwin&amp;#34;,
      &amp;#34;platform&amp;#34;: &amp;#34;darwin&amp;#34;,
      &amp;#34;platformFamily&amp;#34;: &amp;#34;Standalone Workstation&amp;#34;,
      &amp;#34;platformVersion&amp;#34;: &amp;#34;26.1&amp;#34;,
      &amp;#34;kernelVersion&amp;#34;: &amp;#34;25.1.0&amp;#34;,
      &amp;#34;kernelArch&amp;#34;: &amp;#34;arm64&amp;#34;
    }
  },
  &amp;#34;memory&amp;#34;: {
    &amp;#34;swap&amp;#34;: { },
    &amp;#34;virtual&amp;#34;: { }
  },
  &amp;#34;network&amp;#34;: {
    &amp;#34;interfaces&amp;#34;: [ ]
  },
  &amp;#34;partition&amp;#34;: {
    &amp;#34;partitions&amp;#34;: [ ],
    &amp;#34;usage&amp;#34;: [ ]
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;long-term-view&#34;&gt;Long-term view&lt;/h2&gt;
&lt;h3 id=&#34;autonomous-agents&#34;&gt;Autonomous Agents&lt;/h3&gt;
&lt;p&gt;The goal here is to incorporate this into various other places. First we pull it into &lt;a href=&#34;https://choria.io/docs/autoagents/&#34;&gt;Choria Autonomous Agents&lt;/a&gt;; these agents own the full lifecycle of an application:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Deploy dependencies&lt;/li&gt;
&lt;li&gt;Configure the application&lt;/li&gt;
&lt;li&gt;Run the application&lt;/li&gt;
&lt;li&gt;Monitor the application&lt;/li&gt;
&lt;li&gt;Restart the application on failure&lt;/li&gt;
&lt;li&gt;Orchestrate rolling upgrades&lt;/li&gt;
&lt;li&gt;Present APIs for interacting with the management layer&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Agents can fetch data from a Key-Value store, but then every agent sees the same data. With Hiera integrated into the KV feature, we get:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;watch_tag&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;type&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;kv&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;interval&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;10s&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;success_transition&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;regional_update&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;state_match&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;p&#34;&gt;[&lt;/span&gt;&lt;span class=&#34;l&#34;&gt;RUN, RESTART]&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;properties&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;bucket&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;NATS&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;key&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;config&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;hiera_config&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Here the autonomous agent will check KV every 10 seconds for data, fully resolve it using Hiera and save the result into the Autonomous Agent internal data store. If the KV data changes, or the referenced facts change to the extent that the resolved data changes, the machine will update the stored data and fire the &lt;code&gt;regional_update&lt;/code&gt; transition.&lt;/p&gt;
&lt;p&gt;This way we can serve role-specific data from the same Key-Value key.&lt;/p&gt;
&lt;h3 id=&#34;configuration-management&#34;&gt;Configuration Management&lt;/h3&gt;
&lt;p&gt;I want to create a new CM system that takes the model we are used to in Puppet but brings it to scripts and reusable APIs.&lt;/p&gt;
&lt;p&gt;The script use case would essentially be standalone commands:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-nohighlight&#34; data-lang=&#34;nohighlight&#34;&gt;$ marionette package ensure zsh --version 1.2.3
$ marionette service ensure ssh-server --enable --running
$ marionette service info httpd
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This should be familiar to Puppet users; we&amp;rsquo;re basically pulling the resources and RAL into standalone commands. The commands will be fully idempotent like Puppet and support multiple Operating Systems.&lt;/p&gt;
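&lt;p&gt;Idempotent here means the check-then-act pattern Puppet resources use: inspect the current state first and only act when it differs from the desired state. A minimal sketch of that pattern, with a hypothetical in-memory package state standing in for the real package database:&lt;/p&gt;

```python
# Sketch of the idempotent "ensure" pattern; `installed` is a
# hypothetical in-memory stand-in for the real package database.
def ensure_package(name, version, installed):
    current = installed.get(name)
    if current == version:
        return "unchanged"        # already converged, take no action
    installed[name] = version     # install, or move to the new version
    return "installed" if current is None else "upgraded"

state = {}
print(ensure_package("zsh", "1.2.3", state))  # -> installed
print(ensure_package("zsh", "1.2.3", state))  # -> unchanged (idempotent)
```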
&lt;p&gt;For integration with other languages, you can also use a JSON-in, JSON-out style:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-nohighlight&#34; data-lang=&#34;nohighlight&#34;&gt;# Can be driven by JSON in and returns JSON out instead
$ echo &amp;#39;{&amp;#34;name&amp;#34;:&amp;#34;zsh&amp;#34;, &amp;#34;version&amp;#34;:&amp;#34;1.2.3&amp;#34;}&amp;#39;|marionette package apply
{
....
}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;I do not want to create something as huge as Puppet, just enough to enable the package-config-service trio pattern. Ultimately a manifest would look like this:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;l&#34;&gt;cat &amp;lt;&amp;lt;EOF&amp;gt;manifest.yaml&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;data&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;resources&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;package&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;myapp&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;ensure&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;present&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;service&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;myapp&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;ensure&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;running&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;enabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;hierarchy&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;merge&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;deep&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;order&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;- &lt;span class=&#34;l&#34;&gt;platform:{{ lookup(&amp;#39;host.info.platform&amp;#39;) }}&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;- &lt;span class=&#34;l&#34;&gt;hostname:{{ lookup(&amp;#39;host.info.hostname&amp;#39;) }}&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;overrides&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;platform:darwin&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;resources&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;package&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;                &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;ensure&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;MyApp-1.2.3.dmg&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;l&#34;&gt;EOF&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;l&#34;&gt;$ marionette apply manifest.yaml --system-facts&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Pull this into Autonomous Agents and you have self-contained, self-managing applications, similar to Kubernetes Operators but runnable anywhere.&lt;/p&gt;
&lt;p&gt;Use this in scripts, simple setups, during container creation or orchestration, or even in CI pipelines.&lt;/p&gt;
&lt;p&gt;We should even be able to compile this manifest, or a full autonomous agent, into a single static binary that you can just run as &lt;code&gt;./setup&lt;/code&gt; or &lt;code&gt;./manage&lt;/code&gt;, managing the entire lifecycle with zero dependencies.&lt;/p&gt;
&lt;h2 id=&#34;availability-and-status&#34;&gt;Availability and Status&lt;/h2&gt;
&lt;p&gt;You can get the command from &lt;a href=&#34;https://github.com/choria-io/tinyhiera&#34;&gt;GitHub choria-io/tinyhiera&lt;/a&gt;. We are pretty far along, but expect some more breaking changes, including a potential name change.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Lab Infra Rebuild Part 6</title>
      <link>https://www.devco.net/posts/2024/07/31/lab-infra-rebuild-6/</link>
      <pubDate>Wed, 31 Jul 2024 01:00:00 +0000</pubDate>
      
      <guid>https://www.devco.net/posts/2024/07/31/lab-infra-rebuild-6/</guid>
      <description>&lt;p&gt;This is the final in a series of posts about rebuilding my lab infrastructure, see the initial post &lt;a href=&#34;https://www.devco.net/posts/2024/03/20/lab-infra-rebuild-1/&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Today we&amp;rsquo;ll wrap things up with a look at the SaaS tools and the small utilities I use to bring it all together.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been enjoying my summer for the last three months, hence the hiatus in posts.&lt;/p&gt;
&lt;h2 id=&#34;email&#34;&gt;Email&lt;/h2&gt;
&lt;p&gt;Long ago I used to run my own Zimbra, but I gave up on that a few years ago. I&amp;rsquo;ve been with &lt;a href=&#34;https://fastmail.com&#34;&gt;Fastmail&lt;/a&gt;
ever since and am really happy with their service.&lt;/p&gt;
&lt;p&gt;Delivering email from my 20+ VMs all over the world is tricky though; getting all the various DKIM and other
settings right for a big set of IP addresses and ranges quickly becomes a maintenance nightmare. But there&amp;rsquo;s a
constant trickle of stuff from them - cron jobs, monitoring, backup statuses and more.&lt;/p&gt;
&lt;p&gt;After some looking around at options I found &lt;a href=&#34;https://www.smtp2go.com&#34;&gt;SMTP2GO&lt;/a&gt;, who have a very generous free tier of
1000 emails a month. This is usually fine for me, but I ended up paying them anyway for an annual account. This way I
have just one egress point to consider in my various email policy setups, and thus far, for delivering system emails,
this has been a great time saver.&lt;/p&gt;
&lt;p&gt;Read on about DNS, Git, SSO and more.&lt;/p&gt;
&lt;h2 id=&#34;dns&#34;&gt;DNS&lt;/h2&gt;
&lt;p&gt;Until this rebuild I hosted my own DNS; I&amp;rsquo;ve had a set of quite stable DNS servers on the same IP addresses for over
a decade, so it&amp;rsquo;s worked well.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve always wanted to be rid of hosting this myself, but most services bill per request, and I always felt
this would not go well since you have no way of controlling how many DNS requests you get.&lt;/p&gt;
&lt;p&gt;After some research I found &lt;a href=&#34;https://www.cloudns.net&#34;&gt;ClouDNS&lt;/a&gt;, who have a 50 zone / 2000 record plan that allows 200
million queries a month for $5/month. This is plenty and incredibly cheap. They have Geo DNS servers, their support is
responsive and knowledgeable, and they have a decent API.&lt;/p&gt;
&lt;p&gt;For comparison DNSimple is $30/month and charges per zone and 10c per million queries a month. That&amp;rsquo;s crazy.&lt;/p&gt;
&lt;p&gt;My usage is around 20 million queries a month, so I am very comfortable within this level of service, and for $5/month
there is simply no way to compete with this with any kind of self-hosted setup.&lt;/p&gt;
&lt;p&gt;They have a BIND zone file import feature, and I use their API to do daily backups of all the zones into my local Git
server using a little tool I wrote called &lt;a href=&#34;https://github.com/ripienaar/cloudns-backup&#34;&gt;CloudDNS Backup&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;git&#34;&gt;Git&lt;/h2&gt;
&lt;p&gt;Speaking of Git hosting, I quite like &lt;a href=&#34;https://about.gitea.com&#34;&gt;Gitea&lt;/a&gt;, though I should probably move to &lt;a href=&#34;https://forgejo.org&#34;&gt;Forgejo&lt;/a&gt;. The
split from Gitea happened just as I was building things up, so I am still on that.&lt;/p&gt;
&lt;p&gt;Gitea is pretty great, it does a reasonable job of being a GitHub facsimile and uses &lt;a href=&#34;https://github.com/nektos/act&#34;&gt;act&lt;/a&gt;
(also great for local action testing) to provide reasonably compatible self hosted GitHub Actions.&lt;/p&gt;
&lt;p&gt;Gitea and Forgejo are both single binary, single process tools, so they are really easy to get going; for my needs even their
SQLite support is ideal.&lt;/p&gt;
&lt;h2 id=&#34;sso&#34;&gt;SSO&lt;/h2&gt;
&lt;p&gt;Getting everything authenticated and managing users is a nightmare. I use the Okta free tier to front almost all my HTTP
stuff and it&amp;rsquo;s been great. With Apache &lt;code&gt;mod_auth_openidc&lt;/code&gt; it&amp;rsquo;s easy to stick that in front of most things, even
static sites.&lt;/p&gt;
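&lt;p&gt;For flavour, fronting a site with &lt;code&gt;mod_auth_openidc&lt;/code&gt; takes only a handful of directives - this is a minimal sketch with hypothetical client IDs and hostnames, not my actual config:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# mod_auth_openidc pointed at an OIDC provider such as Okta
OIDCProviderMetadataURL https://example.okta.com/.well-known/openid-configuration
OIDCClientID my-client-id
OIDCClientSecret my-client-secret
OIDCRedirectURI https://www.example.net/redirect_uri
OIDCCryptoPassphrase some-random-passphrase

# require a login for the whole site, even static files
&amp;lt;Location /&amp;gt;
  AuthType openid-connect
  Require valid-user
&amp;lt;/Location&amp;gt;
&lt;/code&gt;&lt;/pre&gt;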
&lt;h2 id=&#34;object-storage&#34;&gt;Object Storage&lt;/h2&gt;
&lt;p&gt;I used to use DigitalOcean&amp;rsquo;s S3-compatible storage (still do a bit tbh), but I am slowly moving all my use of that
over to a private &lt;a href=&#34;https://minio.io&#34;&gt;Minio&lt;/a&gt; instance. I do not need it to be highly available, but I do need the data
on it to be secure, so this runs on my Hetzner backup server with its 6 disk redundant storage setup.&lt;/p&gt;
&lt;p&gt;To be honest, I think Minio is just unusable. It&amp;rsquo;s not that the tool is bad - it&amp;rsquo;s really great, and I really want to love
it - it&amp;rsquo;s that the project is just crazy with releases. It&amp;rsquo;s in everyone&amp;rsquo;s interest to upgrade and run the latest software,
but with 50 releases THIS YEAR ALONE (IT IS AUGUST!), I do not know how anyone uses this with any seriousness. This is not
software made to be used by real teams in the real world with real pressures on their time. Further, their release
notes are of the &lt;code&gt;git log --oneline&lt;/code&gt; variety, which does not help; commit logs are developer UX, not end user UX.&lt;/p&gt;
&lt;p&gt;So I am probably looking at an alternative soon. This is partly why my migration to it has stalled until I can
figure out what to do about this.&lt;/p&gt;
&lt;h2 id=&#34;security&#34;&gt;Security&lt;/h2&gt;
&lt;p&gt;We all have to stay on top of security alerts. In the good old days you&amp;rsquo;d just read &lt;a href=&#34;https://en.wikipedia.org/wiki/Bugtraq&#34;&gt;Bugtraq&lt;/a&gt;
and know all there is to know. These days though, things are much more complex given the number of things we run and how
much of it is on the internet.&lt;/p&gt;
&lt;p&gt;I use &lt;a href=&#34;https://www.opencve.io&#34;&gt;OpenCVE&lt;/a&gt; to track and alert me of any CVEs on the main tools I care about.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve also, for the first time, set my Enterprise Linux machines to auto update themselves. It&amp;rsquo;s a bit scary to be
honest, and I exclude kernel updates, but so far it&amp;rsquo;s been ok. Once an httpd update messed me around a bit, but Puppet
soon fixed that on the next run, so it was a small inconvenience. On the whole I&amp;rsquo;d strongly recommend using this.&lt;/p&gt;
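&lt;p&gt;On EL systems this is typically done with &lt;code&gt;dnf-automatic&lt;/code&gt;; a minimal sketch of the sort of config I mean - the exact exclude placement is an assumption, so check your distro&amp;rsquo;s docs:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/dnf/automatic.conf
[commands]
apply_updates = yes

[base]
# keep kernels out of unattended updates
exclude = kernel*
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then enable it with &lt;code&gt;systemctl enable --now dnf-automatic.timer&lt;/code&gt;.&lt;/p&gt;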
&lt;h2 id=&#34;monitoring&#34;&gt;Monitoring&lt;/h2&gt;
&lt;p&gt;Apart from the obvious Grafana/Prometheus pair that I self-host, I use a few other things. I use Graphite to store my
IoT data in; it just seems a bit more suitable, though alas it&amp;rsquo;s in a sad state and mostly dying. I might need to
revisit what I do there in time.&lt;/p&gt;
&lt;p&gt;I have deployed &lt;a href=&#34;https://github.com/caronc/apprise&#34;&gt;Apprise&lt;/a&gt; everywhere and it&amp;rsquo;s really great: it can notify a huge
list of services, and having it on every machine ready to use is very handy. I have it integrated in a reusable Gitea
action so any failing builds result in alerts.&lt;/p&gt;
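&lt;p&gt;A taste of what that looks like from a shell or CI step - the notification URLs below are placeholders; the real schemes and tokens come from Apprise&amp;rsquo;s own documentation:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# notify one or more services with a single command
apprise -t 'Build failed' -b 'nightly build on host1 failed' \
  'slack://TokenA/TokenB/TokenC' \
  'pover://user@token'
&lt;/code&gt;&lt;/pre&gt;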
&lt;p&gt;Apprise sends some alerts and statuses to a dedicated private Mastodon account, others to Slack, and others to
VictorOps and Pushover. This is well worth looking into; getting Slack/Mastodon reach-outs from
cron or other tools is really good.&lt;/p&gt;
&lt;p&gt;VictorOps I use for my Prometheus alerts and some other things. I like its ability to silence alerts and to give me a clear
view of the current state of things when I am not around computers. There are probably better options now, but it&amp;rsquo;s cheap and
just works.&lt;/p&gt;
&lt;p&gt;I use &lt;a href=&#34;https://pulsetic.com&#34;&gt;Pulsetic&lt;/a&gt; to do external checks on my websites. They have a very generous free tier,
though I did recently upgrade to a paid plan. They allow you to make great externally visible or private dashboards,
like this one for my &lt;a href=&#34;https://status.choria.io&#34;&gt;Choria Project Infrastructure&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;letsencrypt&#34;&gt;Letsencrypt&lt;/h2&gt;
&lt;p&gt;I use LE for TLS like more or less everyone else. ClouDNS is supported as a DNS authenticator in &lt;a href=&#34;https://acme.sh&#34;&gt;acme.sh&lt;/a&gt;,
so that&amp;rsquo;s a good fit.&lt;/p&gt;
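&lt;p&gt;Issuing a certificate that way is a one-liner once the ClouDNS API credentials are exported - the variable names below are from memory of acme.sh&amp;rsquo;s ClouDNS hook, so verify them against its dnsapi docs:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# ClouDNS API credentials for the dns_cloudns hook
export CLOUDNS_AUTH_ID=1234
export CLOUDNS_AUTH_PASSWORD=secret

# issue via DNS-01, covering the apex and a wildcard
acme.sh --issue --dns dns_cloudns -d example.net -d '*.example.net'
&lt;/code&gt;&lt;/pre&gt;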
&lt;p&gt;I have a difficult problem in that I want globally redundant hosting of some static files on the same name,
but I do not want to pay for a GSLB. So I ended up making an action that manages those certificates daily, renews
them and commits them to my Puppet repository, from where they get rolled out to the webservers. This works great.&lt;/p&gt;
&lt;h2 id=&#34;awesome-lists&#34;&gt;Awesome Lists&lt;/h2&gt;
&lt;p&gt;I&amp;rsquo;ll take the liberty of plugging my &lt;a href=&#34;https://free-for.dev/#/&#34;&gt;Free-for-dev&lt;/a&gt; list, which currently has 1500+ services
listed. It focuses on services that provide generous free tiers, generally in the devopsey world, many of them suited for
home labs. Most of the ones above came from this list - this is literally why I maintain it. Currently it is the
4th most popular Awesome List, with nearly 90 000 stars, built collaboratively on GitHub by 1600+ people.&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;re not aware of it, I also strongly recommend subscribing to &lt;a href=&#34;https://selfh.st/newsletter/&#34;&gt;This Week in Self-Hosted&lt;/a&gt;;
if home labs are your thing this will be invaluable.&lt;/p&gt;
&lt;p&gt;In general, if you are not yet on board with the whole Awesome List movement, you really should be; check out the giant
list at &lt;a href=&#34;https://github.com/sindresorhus/awesome&#34;&gt;sindresorhus/awesome&lt;/a&gt; and you&amp;rsquo;ll find many tools to track,
discover, search and more.&lt;/p&gt;
&lt;p&gt;These are the new Yahoo at a grass-roots level; it&amp;rsquo;s amazing and one of the most real and relevant resources out there
today.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Well that about sums it up.&lt;/p&gt;
&lt;p&gt;After this we&amp;rsquo;ll get back into some general blogging. There&amp;rsquo;s a fair bit I did not get into, like my recent intro to
3D printing, but this series on my home lab build has more or less run its course.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Lab Infra Rebuild Part 5</title>
      <link>https://www.devco.net/posts/2024/04/25/lab-infa-rebuild-5/</link>
      <pubDate>Thu, 25 Apr 2024 06:00:00 +0000</pubDate>
      
      <guid>https://www.devco.net/posts/2024/04/25/lab-infa-rebuild-5/</guid>
      <description>&lt;p&gt;This is an ongoing post about rebuilding my lab infrastructure, see the initial post &lt;a href=&#34;https://www.devco.net/posts/2024/03/20/lab-infra-rebuild-1/&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Today let&amp;rsquo;s look at VMs, bare metal and operating systems.&lt;/p&gt;
&lt;h2 id=&#34;virtual-machines&#34;&gt;Virtual Machines&lt;/h2&gt;
&lt;p&gt;As I mentioned, I&amp;rsquo;ve been a Linode customer since essentially their day one and have had many hundreds of machines there for personal and client needs.&lt;/p&gt;
&lt;p&gt;I soured on them quite significantly after they botched their Kubernetes release by initially not having Europe-side support past basic triage,
which led to multiple multi-hour outages, and so I have been making some changes.&lt;/p&gt;
&lt;p&gt;My most recent incarnation had about 5 or 6 Linode machines - Puppet Server, Choria Repos * 3, 2 x DNS, General Services machine.&lt;/p&gt;
&lt;p&gt;Today I have 2 Linode machines left. I was reluctant to move them as they were DNS servers for many domains, but I changed my way of hosting domains
during this rebuild, so that is less of a concern now.&lt;/p&gt;
&lt;p&gt;They are now just RPM/Deb repos for Choria, and I will move those elsewhere soon also. That&amp;rsquo;ll be the first time I am without Linode machines since
basically 2003 - such a shame. One is in the US to provide a nearby mirror there; I might keep it and just scale it down to a lower spec. But with the
recent changes at Linode it feels a bit like it&amp;rsquo;s time to consider alternatives.&lt;/p&gt;
&lt;p&gt;Previously I had Digital Ocean droplets x 3 for my Kubernetes cluster, as discussed that&amp;rsquo;s all gone now too.&lt;/p&gt;
&lt;p&gt;I used to have quite a selection of Vultr machines. I don&amp;rsquo;t really recall why I left them; I think I felt Linode was just the Rolls-Royce of this kind
of cloud provider and so consolidated to simplify my business accounting etc.&lt;/p&gt;
&lt;h2 id=&#34;baremetal&#34;&gt;Baremetal&lt;/h2&gt;
&lt;p&gt;In the previous iteration I had only 1 hosted physical machine, my backups machine running Bacula on a Hetzner SX64 (64GB RAM Ryzen 5 3600)
with 4 x 16 TB SATA Enterprise HDD 7200rpm. I do not need much from this machine: it wakes up, does backups, then sleeps again till tomorrow. So the spinning
rust is fine for that; I just need lots of it. I rebuilt on a new one just to get a hardware and OS refresh.&lt;/p&gt;
&lt;p&gt;Of course I do still use Virtual Machines just managed by Cockpit as per the &lt;a href=&#34;https://www.devco.net/posts/2024/03/21/lab-infra-rebuild-2/&#34;&gt;2nd part in this series&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I got a pair of Hetzner AX41-NVMe (64GB RAM Ryzen 5 3600) with 2 x 512 NVMe SSDs each. I was expecting to add a 3rd, but really these 2 plus the Ryzen at
my office turn out to be plenty for my needs. They have some upgrades available - more RAM, extra disks, SATA can be added, etc. I don&amp;rsquo;t know if Hetzner
supports upgrading running machines, but this is a nice little platform. At EUR37 per machine that puts them between a 4GB and 8GB shared CPU Linode.
You really can&amp;rsquo;t complain. I might get a 3rd one just for the sake of it and spread my development machines out more. Something for after the summer.&lt;/p&gt;
&lt;p&gt;Performance wise, moving from my Droplets and other VMs to these machines has been amazing; for an investment equalling 1 Linode I can run several VMs with
no additional cost to expand to another VM or to shuffle memory allocations - or just to allocate more, since 64GB is way more than I need. This really
is a no brainer for my needs.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve also been a Hetzner customer for a very long time - it&amp;rsquo;s not clear how long, but it feels like maybe 2008 or so. They&amp;rsquo;ve had their ups and downs:
dodgy datacenters, dodgy connectivity and dodgy hardware, bad English support. But in the past few years I think they&amp;rsquo;ve really firmed up quite nicely, and
my machines there have not given me trouble, so I felt it&amp;rsquo;s safe to lean on them a bit more. A few months in now and I&amp;rsquo;ve not had one minute of problems.&lt;/p&gt;
&lt;p&gt;Read on about Operating Systems and more.&lt;/p&gt;
&lt;h2 id=&#34;operating-system&#34;&gt;Operating System&lt;/h2&gt;
&lt;p&gt;A bit of history is needed, I guess. I started with &lt;a href=&#34;https://en.wikipedia.org/wiki/Softlanding_Linux_System&#34;&gt;Softlanding Linux System&lt;/a&gt; around 1993; after
installing it on a 20MB HDD I removed from a Novell machine, the disk promptly died. A few hits and misses later I eventually got it quite happy.&lt;/p&gt;
&lt;p&gt;Then I moved to Slackware, which was the successor to SLS of course. I tried a few things, but once RedHat released their first preview on Halloween,
October 1994, it was pretty much just RedHat all the way from there. I&amp;rsquo;ve used all the versions, even the 5.x series that was a total disaster after they moved
to ELF, but I kept at it.&lt;/p&gt;
&lt;p&gt;When RedHat removed their free editions I dabbled with Debian, but I don&amp;rsquo;t like the inconsistencies, the community, the approach, or really anything at all about Debian.&lt;/p&gt;
&lt;p&gt;Luckily of course CentOS came to the rescue; I even donated hardware to them - an IBM BladeCenter full of blades. Of course then, for some reason, they decided
to join RedHat; I never did understand the thinking there. Needless to say, that went about as well as it was obvious to anyone it would.&lt;/p&gt;
&lt;p&gt;This left a few choices, mainly Rocky Linux and AlmaLinux. I initially went with Rocky Linux; it seemed to be on a good track and had the CentOS founder
on board, though they did seem to lose steam a bit and had a few fits and starts. It now seems fairly solid, so I wouldn&amp;rsquo;t have qualms using them, but I settled
on AlmaLinux. AlmaLinux seemed at the time to have a few more backers, a bit more polish, and just worked better for my needs.&lt;/p&gt;
&lt;p&gt;Then of course RedHat removed public access to the RHEL source repos, leaving only CentOS Stream, which made things quite difficult. Rocky Linux then stated they are going to get their
SRPMs from a bunch of places to keep the bug-for-bug EL rebuild thing going, by means which I really do not think are sustainable. From using paid-for cloud instances
and getting SRPMs through those, to using the UBI container images etc., they basically go spelunking all over the place to find source RPMs and then
somehow promise to be a complete like-for-like distribution of RHEL. I am not convinced.&lt;/p&gt;
&lt;p&gt;AlmaLinux went another way: they decided to drop the 1:1 RHEL promise and instead moved to &lt;a href=&#34;https://almalinux.org/blog/future-of-almalinux/&#34;&gt;promising ABI/binary compatibility with RHEL&lt;/a&gt;.
They probably also get their SRPMs from some interesting places, but they are forging a path of innovation, which has already resulted in them returning some
old hardware support, for example. I am quite happy with my choice.&lt;/p&gt;
&lt;p&gt;To complicate matters a bit more - or simplify them, I guess - the &lt;a href=&#34;https://openela.org/&#34;&gt;Open Enterprise Linux Association&lt;/a&gt; was created to be a
source of SRPMs for EL rebuilders like Alma and Rocky. Their mission is to deliver &lt;code&gt;All sources necessary to achieve a 1:1 / bug-for-bug compatible version of EL which will be distributed via Git, encouraging community collaboration&lt;/code&gt;. Hopefully this will help matters along. It is supported
by Oracle and SUSE. Oracle of course have their own Oracle Enterprise Linux that was dunked into the same problem - and is no doubt the source of
all the headaches for RedHat.&lt;/p&gt;
&lt;p&gt;Anyway, so a lot of history. I like EL. I will keep using EL even if it&amp;rsquo;s a slightly different EL. I don&amp;rsquo;t think it&amp;rsquo;s inherently better or worse
than the alternatives - and I do not care. The best tool for the job is often the one you know best; I have 30 years of EL experience and
that works for me. It&amp;rsquo;s all just SystemdOS now anyway.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;That&amp;rsquo;s about it really for hardware and OS. The consolidation, moving to bare metal, etc. has ended up saving quite a bit of money.
Even if I got a 3rd VM host at Hetzner I&amp;rsquo;d still be more than $100 down on monthly expenses, with a lot more to show for it.&lt;/p&gt;
&lt;p&gt;I have some travel coming up in the next few weeks so the next installment might be a while.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Lab Infra Rebuild Part 4</title>
      <link>https://www.devco.net/posts/2024/04/11/lab-infra-rebuild-4/</link>
      <pubDate>Thu, 11 Apr 2024 09:00:00 +0000</pubDate>
      
      <guid>https://www.devco.net/posts/2024/04/11/lab-infra-rebuild-4/</guid>
      <description>&lt;p&gt;This is an ongoing post about rebuilding my lab infrastructure, see the initial post &lt;a href=&#34;https://www.devco.net/posts/2024/03/20/lab-infra-rebuild-1/&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Today I&amp;rsquo;ll talk about my physical office and office hardware.&lt;/p&gt;
&lt;h2 id=&#34;office-space&#34;&gt;Office Space&lt;/h2&gt;
&lt;p&gt;When my son started going to school I did not look forward to all the driving, so I figured an office near his school would be good; I&amp;rsquo;d spend the days there and come home after pick up. I rented a place in a town called &lt;a href=&#34;https://en.wikipedia.org/wiki/Mosta&#34;&gt;Mosta&lt;/a&gt;; it was nice, had ample storage and would have made a really great maker space, as it had about 4 car garages worth of underground storage that was well lit and ventilated.&lt;/p&gt;
&lt;p&gt;Unfortunately this place was opposite a school and parking was absolute hell. I ended up just not using it for months at a time, since I could drive home in 12 minutes or spend 45 minutes finding parking - no thanks. I gave up trying to find a garage to rent around there; it&amp;rsquo;s just crazy.&lt;/p&gt;
&lt;p&gt;When I started looking again, the 2nd place I saw, in &lt;a href=&#34;https://en.wikipedia.org/wiki/%C5%BBebbu%C4%A1&#34;&gt;Żebbuġ&lt;/a&gt;, seemed perfect: a fantastic, bright 7m x 7m office with an underground 1.5 car garage at a reasonable price. I took it and just recently extended my lease to 5 years. My sister-in-law is also moving her business to the same street, and there&amp;rsquo;s a 3D printer shop around the corner - bonus. Parking is always easy, even on the street, and it&amp;rsquo;s like 5 minutes from my house.&lt;/p&gt;
&lt;p&gt;Here I am able to set up 3 workstations: my main desk, one for guests with a little desktop for my son, and a big workstation for soldering, assembling IoT projects and such. I also have a nice 2 seater sofa with a coffee table. With the garage I have space to put things like a laser cutter and more in future - though I am eyeing a small space across the road as a workshop for that kind of thing.&lt;/p&gt;
&lt;p&gt;Location wise I could not want more: it&amp;rsquo;s an easy walk or cycle from my house through a beautiful car-free valley, and the town is pleasant enough with food options and a small corner shop nearby.&lt;/p&gt;
&lt;p&gt;Read on for more about the hardware.&lt;/p&gt;
&lt;h2 id=&#34;desktop--laptop&#34;&gt;Desktop / Laptop&lt;/h2&gt;
&lt;p&gt;I&amp;rsquo;ve been using Apple desktops and laptops almost exclusively since about 2007. I do like the graphical UI but loathe the BSD based shell, so generally my mantra is: &lt;em&gt;MacOS Shells are for SSH to Linux machines&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve had every iMac since the &lt;a href=&#34;https://www.devco.net/archives/2006/01/18/apple_core_duo_imac.php&#34;&gt;very first plastic white ones&lt;/a&gt;, I really liked that form factor, so I just kept buying each model that came out. I especially liked the time when Apple sold visually complimentary displays for these machines and you could have a quite pleasing dual screen setup.&lt;/p&gt;
&lt;p&gt;Alas those days are gone; now, to be honest, every iMac dual screen setup just looks like rubbish, so I just can&amp;rsquo;t with that anymore, and it was time to change. I now have 2 useless old DisplayPort Cinema Displays; I guess they go to the trash.&lt;/p&gt;
&lt;p&gt;I have a MacBook 16&amp;quot; M2 PRO 32GB/1TB SSD, which I think is pretty much perfect - despite coffee damage knocking out all the ports on the right side, it&amp;rsquo;s still awesome.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve always had a disconnect between laptop and desktop speeds, and this time round the stars aligned reasonably well such that one can almost have parity between the 2, so after some looking around I figured a Mac Mini would give me near enough performance and good enough options for displays.&lt;/p&gt;
&lt;p&gt;To be honest I was not too convinced about moving to a Mini - but the iMacs were never really speed demons, so maybe it would be fine? So I picked up a Mac Mini M2 PRO 32GB/1TB SSD, and moving between the 2 really feels exactly the same performance wise. This has been a bigger deal than I anticipated for my general happiness when switching setups - something I do all the time.&lt;/p&gt;
&lt;p&gt;The mini sitting on my desk is a bit annoying, I&amp;rsquo;m considering options to mount it on the wall behind the display.&lt;/p&gt;
&lt;p&gt;For my display I went with an LG 40&amp;quot; Curved UltraWide 5K2K Nano IPS monitor. I&amp;rsquo;ve used a 32 inch ultra wide for a few years; this has been a great upgrade. It&amp;rsquo;s quite pricey at around EUR1200, but it&amp;rsquo;s a really nice display, and as it&amp;rsquo;s a Thunderbolt monitor it just works flawlessly with the Macs. I did buy &lt;a href=&#34;https://github.com/waydabber/BetterDisplay&#34;&gt;BetterDisplay&lt;/a&gt; to get some more control - well worth it, I&amp;rsquo;d say essential if you have a monitor like this. I do wish it was brighter though.&lt;/p&gt;
&lt;p&gt;I used to have a &lt;a href=&#34;https://moodeaudio.org/&#34;&gt;Moode Audio&lt;/a&gt; system with some high end DACs and Amp paired with JBL monitors for audio, I got those before HomePods existed. Now tossed all that out for just 2 HomePod Minis on my desk, they&amp;rsquo;re fantastic.&lt;/p&gt;
&lt;p&gt;I use a Logitech Brio 4K Ultra HD webcam - it works well on MacOS, including its management software. My previous camera did not handle the sync frequency of my new office lights well and so was instantly useless.&lt;/p&gt;
&lt;p&gt;Today I am replacing my Razer Abyssus (2014) with a new Abyssus that doesn&amp;rsquo;t look like it&amp;rsquo;s been to war.&lt;/p&gt;
&lt;p&gt;Keyboard wise I use the MS Natural 4000 and have 4 more in boxes, so I don&amp;rsquo;t need to think about keyboards for years; they last about 3 to 4 years each.&lt;/p&gt;
&lt;h2 id=&#34;storage&#34;&gt;Storage&lt;/h2&gt;
&lt;p&gt;I&amp;rsquo;ve been a Qnap user since forever, like maybe 2005 or so. The first Qnap I bought had some weird ARM CPU, and they ran a patched Linux kernel to get big partition support. When the machine died I had quite a bit of trouble accessing my files, but did get there eventually - it did make me reconsider my backup strategy though.&lt;/p&gt;
&lt;p&gt;Their next gen kit was Intel based and ran stock kernels so I gave them another go. What I got then was a 4 bay TS-439. This machine moved with me from London to Malta and now 14 years later it still got a security update just before I retired it, unbelievable.&lt;/p&gt;
&lt;p&gt;Later I got a TS-451+ as the old one&amp;rsquo;s CPU was a bit slow; the old one moved to my office and I synced to it daily as a backup. But as the old one is now 14 years old, it seems I am just pushing my luck with this box, so I retired it, moved the TS-451+ to the position of backup machine and got a new TS-453E.&lt;/p&gt;
&lt;p&gt;The TS-453E is interesting because it&amp;rsquo;s a kind of LTS hardware that will have full hardware support till 2029 and probably software far past that. I got it with an extended warranty also so this should set me up for 5 years at least on this platform.&lt;/p&gt;
&lt;p&gt;I set the primary NAS up in 2 x RAID-1 configuration and basically perform backups like this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Real time RAID-1 mirror between 2 disks for the primary volume&lt;/li&gt;
&lt;li&gt;Daily sync my Dropbox onto the primary NAS&lt;/li&gt;
&lt;li&gt;Daily sync primary to the backup NAS that is running a RAID-5 setup in another location in Malta&lt;/li&gt;
&lt;li&gt;Daily sync the primary to a 5TB external USB drive that I can take to a 3rd offsite when I leave the office for months over summer&lt;/li&gt;
&lt;li&gt;Monthly I do a full bitrot check across all the files on the primary&lt;/li&gt;
&lt;li&gt;Monthly I do a manual sync between the RAID-1 volumes of the primary NAS&lt;/li&gt;
&lt;li&gt;Monthly I sync the full primary NAS to Finland on a machine with a RAID-6&lt;/li&gt;
&lt;li&gt;My primary desktop does TimeMachine backups to the primary regularly&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This achieves the following redundancy:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;+1 disk redundancy for any change I made now thanks to the first RAID-1&lt;/li&gt;
&lt;li&gt;daily same-country offsite to a RAID-5 storage&lt;/li&gt;
&lt;li&gt;monthly OS-level integrity checks of all files and the chance to restore any to the primary&lt;/li&gt;
&lt;li&gt;monthly on-site backup for long term recovery should I notice an accidental file deletion or corruption&lt;/li&gt;
&lt;li&gt;monthly backup to another country.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In the end every file lands on about 11 disks in duplicate, across 3-4 locations and 2 countries, on different RAID levels, and with bitrot detection at multiple RAID levels - both standard Linux kernel and NAS kernels - and at the OS level. I could make the final Finland archive a 3 month rolling archive, since I have enough space there for it.&lt;/p&gt;
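&lt;p&gt;The daily and monthly syncs above are all plain rsync jobs at heart; a minimal sketch with made-up host and share names:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# daily: primary NAS to the in-country backup NAS
rsync -aH --delete /share/Primary/ backup-nas:/share/Primary/

# monthly: primary NAS to the Finland archive box
rsync -aH --delete /share/Primary/ fi-backup:/archive/Primary/
&lt;/code&gt;&lt;/pre&gt;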
&lt;p&gt;The machine I mention in Finland is a Hetzner SX64 (64GB RAM Ryzen 5 3600) with 4 x 16 TB SATA Enterprise HDD 7200rpm drives; it runs Bacula for my server backups, but for the above I just rsync to it. It&amp;rsquo;s a slow, crappy machine tbh, but for the purpose of mostly just keeping files on mostly idle disks it&amp;rsquo;s perfectly fine, and the price is great. I&amp;rsquo;ve had similar machines for a decade or more; I cycle them every 3 years to new hardware.&lt;/p&gt;
&lt;h2 id=&#34;linux-shell&#34;&gt;Linux Shell&lt;/h2&gt;
&lt;p&gt;I mentioned earlier that I live by: &lt;em&gt;MacOS Shells are for SSH to Linux machines&lt;/em&gt;, so I need a Linux machine.&lt;/p&gt;
&lt;p&gt;I used to use the NUC machines, but as they were ancient and dying I needed something else. Slack suggested I look at the Asus Mini PC PN51, and what a great suggestion that was.&lt;/p&gt;
&lt;p&gt;I have the Ryzen 7 5700U based machine with 64GB RAM and a 1TB M.2 SSD in it; this machine has run CentOS, and later AlmaLinux, flawlessly. I used to use it bare, but that was a bit of a waste; now it runs &lt;a href=&#34;https://www.devco.net/posts/2024/03/21/lab-infra-rebuild-2/&#34;&gt;Cockpit as per my previous post&lt;/a&gt; and I have a 16GB VM there dedicated as my shell box.&lt;/p&gt;
&lt;p&gt;Along with that I have 6-10 VMs elsewhere making up a dev environment.&lt;/p&gt;
&lt;h2 id=&#34;miscellaneous-hardware&#34;&gt;Miscellaneous Hardware&lt;/h2&gt;
&lt;p&gt;At my office I have a few other things:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One of my old NUC machines is a Linux desktop for my son, mainly for Minecraft/YouTube when he visits&lt;/li&gt;
&lt;li&gt;A Krups Nescafe Dolce Gusto Infinissima Touch Black Automatic Coffee Machine - it&amp;rsquo;s rubbish, don&amp;rsquo;t buy these coffee machines&lt;/li&gt;
&lt;li&gt;A bar fridge&lt;/li&gt;
&lt;li&gt;Xiaomi air purifier&lt;/li&gt;
&lt;li&gt;Xiaomi robot vacuum - the nice one with auto bag empty feature so the place keeps clean when I am away for months at a time over summer&lt;/li&gt;
&lt;li&gt;A Prusa MK4 and its many accessories (this is a whole blog post to be made)&lt;/li&gt;
&lt;li&gt;A 15 year old Brother printer that is enormous and has now probably met its end; probably fixable, but I want an excuse to buy a smaller printer&lt;/li&gt;
&lt;li&gt;Ubiquiti Dream Machine and an 8 port switch&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Everything is new except the Brother printer, but we&amp;rsquo;ll fix that soon.&lt;/p&gt;
&lt;p&gt;I try to not clutter the place up, so I keep things to a minimum here, and as mentioned I might move my workshop across the road to another unit and get some more 3D printers, laser cutters and so on.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;That&amp;rsquo;s about it for the physical location and physical hardware. I&amp;rsquo;ll need to redo my home office as I gutted it and brought most things here, but that&amp;rsquo;s for later; it&amp;rsquo;s mainly used now for its real purpose of being a media room with a 100 inch 4K projector and Dolby Atmos audio setup.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Lab Infra Rebuild Part 3</title>
      <link>https://www.devco.net/posts/2024/04/07/lab-infra-rebuild-3/</link>
      <pubDate>Sun, 07 Apr 2024 09:00:00 +0100</pubDate>
      
      <guid>https://www.devco.net/posts/2024/04/07/lab-infra-rebuild-3/</guid>
      <description>&lt;p&gt;This is an ongoing post about rebuilding my lab infrastructure, see the initial post &lt;a href=&#34;https://www.devco.net/posts/2024/03/20/lab-infra-rebuild-1/&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Today I&amp;rsquo;ll talk a bit about Configuration Management having previously mentioned I am &lt;a href=&#34;https://www.devco.net/posts/2024/03/21/lab-infra-rebuild-2/&#34;&gt;ditching Kubernetes&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;server-management&#34;&gt;Server Management&lt;/h2&gt;
&lt;p&gt;The general state of server management is pretty sad: you have Ansible or Puppet and a long tail of things that just don&amp;rsquo;t work or are under such terrible corporate control that you just can&amp;rsquo;t touch them.&lt;/p&gt;
&lt;p&gt;I am, as most people are aware, a very long term Puppet user since almost day 1 and have contributed significant features like Hiera and the design of Data in Modules. I&amp;rsquo;ve not had much need/interest in being involved in that community for ages but I want to like Puppet and I want to keep using it where appropriate.&lt;/p&gt;
&lt;p&gt;I think Puppet is more or less finished - there are bug fixes and stuff of course - but in general core Puppet is stable, mature, does what one wants and has the extension points one needs. There&amp;rsquo;s not really any reason not to use it if it fits your needs/tastes, and should things go pear shaped it&amp;rsquo;s an easy fork. One can essentially stay on a current version for years at this point and it&amp;rsquo;s fine. There used to be some issues around packaging but even this is Apache-2 now. If you have the problem it solves it does so very well, but the industry has moved on so there is not much scope for extending it imo.&lt;/p&gt;
&lt;p&gt;All the action is in the content - modules on the forge. &lt;a href=&#34;https://voxpupuli.org/&#34;&gt;Vox Pupuli&lt;/a&gt; are doing an amazing job, I honestly do not know how they do so much or maintain so many modules, it&amp;rsquo;s really impressive.&lt;/p&gt;
&lt;p&gt;There&amp;rsquo;s a general state of rot though: modules I used to use are almost all abandoned, with a few moved to Vox. I wonder if stats about this are available, but I get the impression that content wise things are taking a huge dive there, with Vox holding everything afloat while Puppet hopes to make money from selling commercial modules - I doubt that will sustain a business their size, but it&amp;rsquo;s a good idea.&lt;/p&gt;
&lt;p&gt;Read on for more about Puppet today in my infra.&lt;/p&gt;
&lt;p&gt;Given the general state of things in the server management world I decided to use Puppet again for this round of infrastructure rebuild. I can&amp;rsquo;t see this lasting, alas. Most people are aware that Puppet has been bought by Perforce and has had a huge shift in people and such. It&amp;rsquo;s inevitable that revenue generation is the main push at the moment.&lt;/p&gt;
&lt;p&gt;Unfortunately the way this plays out is pretty unpleasant. Here&amp;rsquo;s an example: I have &lt;a href=&#34;https://github.com/ripienaar/monitoring-scripts/blob/master/puppet/check_puppet.rb&#34;&gt;an ancient, and crappy, monitoring script&lt;/a&gt; that runs on the node and checks &lt;code&gt;last_run_summary.yaml&lt;/code&gt; to infer the current status of runs - are they failing etc. It&amp;rsquo;s worked for a very long time and &lt;code&gt;last_run_summary.yaml&lt;/code&gt; is a contract that&amp;rsquo;s meant to be maintained.
Something in recent Puppet broke this script (the &lt;code&gt;last_run_summary.yaml&lt;/code&gt; now behaves differently) so I thought I&amp;rsquo;d ask on their Slack if there is something newer/maintained before I fixed mine.&lt;/p&gt;
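&lt;p&gt;For context, &lt;code&gt;last_run_summary.yaml&lt;/code&gt; lives in the agent state directory (typically &lt;code&gt;/opt/puppetlabs/puppet/cache/state/&lt;/code&gt;) and, from memory - check your own agent&amp;rsquo;s output as fields do shift between versions - looks roughly like this trimmed sketch:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;resources:
  changed: 1
  failed: 0
  failed_to_restart: 0
  total: 502
time:
  last_run: 1712476800
version:
  config: 1712476790
  puppet: &#34;8.5.1&#34;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;A check script needs little more than &lt;code&gt;resources.failed&lt;/code&gt; and the age of &lt;code&gt;time.last_run&lt;/code&gt; to infer run health, which is why the file behaving differently quietly breaks such scripts.&lt;/p&gt;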
&lt;p&gt;Immediately from Puppet people you get a message saying to use Puppet Enterprise for this. In a community Slack where people are just asking questions about a 200 line script. The suggestion is to move to a price-not-disclosed product instead of a 200 line script, without even so much as a question about needs or environment or whether the suggestion would be relevant. It&amp;rsquo;s just corporate enshittification. Eventually I got some good answers from the community, despite Puppet&amp;rsquo;s best efforts.&lt;/p&gt;
&lt;p&gt;This outcome is of course entirely predictable, I can only hope Vox doesn&amp;rsquo;t get burned in the inevitable slide into the sewer.&lt;/p&gt;
&lt;h2 id=&#34;managing-puppet&#34;&gt;Managing Puppet&lt;/h2&gt;
&lt;p&gt;I have 2 old EL based machines that could not make the trip to Puppet 8 due to some legacy there; the rest got moved to EL9 with a Puppet Server. I am though a big fan of running &lt;code&gt;puppet apply&lt;/code&gt; based builds and will likely move to that instead of the server. Apply based workflows present a few problems though, primarily how you get the code onto the nodes and what the workflow around that looks like.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;d like a git based flow where I commit a change, CI packages it and puts it in a repo, and the fleet updates to it - ideally asap. Further I want visibility into the runs, node-side monitoring that fits my event based world-view, and central control to trigger runs and do scheduled maintenance.&lt;/p&gt;
&lt;p&gt;So I built a system to orchestrate Puppet that I&amp;rsquo;ll release soon, called &lt;code&gt;Puppet Control&lt;/code&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Provides a nice cli for triggering runs, querying runs etc&lt;/li&gt;
&lt;li&gt;Git+CI based flow handles fetching, validating and deploying code to nodes for &lt;code&gt;apply&lt;/code&gt;, including tamper detection&lt;/li&gt;
&lt;li&gt;Has concurrency control built in for fine-grained resource management of the file servers used to deliver code bundles&lt;/li&gt;
&lt;li&gt;Includes a run scheduler with concurrency controls, for example: only 1 database server out of all the database servers can run Puppet at a time, but webservers can run 10 concurrently&lt;/li&gt;
&lt;li&gt;Can do on-demand runs as soon as possible subject to concurrency control to ensure the shared infra performs at peak&lt;/li&gt;
&lt;li&gt;Has various ways to find nodes in certain states like &lt;code&gt;pctl nodes failing&lt;/code&gt; to find nodes with failing resources&lt;/li&gt;
&lt;li&gt;Can show real-time events of Puppet runs&lt;/li&gt;
&lt;li&gt;Can have maintenance declared that will stop all scheduled Puppet runs&lt;/li&gt;
&lt;li&gt;Exposes run statistics to Prometheus for runtimes, changes, health and more&lt;/li&gt;
&lt;/ul&gt;
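&lt;p&gt;To make the concurrency idea concrete, here is a purely hypothetical sketch of a per-group policy as Hiera style data - every key name below is invented for illustration and the released tool may well look different:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;# illustrative only - invented key names
puppet_control::schedule:
  interval: 30m
puppet_control::concurrency:
  group: databases   # set per node, for example from a fact
  limit: 1           # only one database server runs Puppet at a time
&lt;/code&gt;&lt;/pre&gt;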
&lt;p&gt;There&amp;rsquo;s an optional video below with more details that shows the code release flow etc:&lt;/p&gt;
&lt;div style=&#34;position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;&#34;&gt;
      &lt;iframe allow=&#34;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen&#34; loading=&#34;eager&#34; referrerpolicy=&#34;strict-origin-when-cross-origin&#34; src=&#34;https://www.youtube.com/embed/Bt9K5J3x3ac?autoplay=0&amp;amp;controls=1&amp;amp;end=0&amp;amp;loop=0&amp;amp;mute=0&amp;amp;start=0&#34; style=&#34;position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;&#34; title=&#34;YouTube video&#34;&gt;&lt;/iframe&gt;
    &lt;/div&gt;

&lt;p&gt;I&amp;rsquo;ll release this eventually; it&amp;rsquo;s dependent on some work happening in Choria at the moment.&lt;/p&gt;
&lt;p&gt;The concurrency control is a big deal. Scheduling Puppet runs is quite a difficult task; the usual solution Puppet users reach for is just to spread the runs in cron by some random time distribution.&lt;/p&gt;
&lt;p&gt;That leaves the problem of fast Puppet runs during maintenance windows though. In the past we had a command &lt;code&gt;mco puppet runall 100&lt;/code&gt; which would discover all the nodes then in a loop ask their state and schedule more as some stopped - the goal being to keep as close to 100 running at a time. The choice of 100 nodes is related to the capacity of the Puppet Server infrastructure.&lt;/p&gt;
&lt;p&gt;This worked fine but it was very resource intensive on the Choria/MCollective network, as 1000s of RPC requests had to be made to know the current state, and it was not suited to use in an ongoing fashion. With &lt;code&gt;Puppet Control&lt;/code&gt; every run happens at the desired concurrency, but without a central orchestrator trying to make all the choices. It&amp;rsquo;s significantly cheaper on the network.&lt;/p&gt;
&lt;p&gt;More significantly, by allowing the concurrency group name to be configured on a per node basis one can have different policies by type of machine. This is a big deal: let&amp;rsquo;s say we are using &lt;code&gt;puppet apply&lt;/code&gt; but we do not wish to have our 5 database machines all do concurrent runs and potentially restart things at the same time. By creating a concurrency governor for just those machines, set to 1, we prevent that.&lt;/p&gt;
&lt;p&gt;With this in place, triggering all the nodes to run at their group&amp;rsquo;s configured concurrency is a simple &lt;code&gt;pctl nodes trigger&lt;/code&gt; which takes 2 seconds to complete. From there the nodes will run without overwhelming the Servers.&lt;/p&gt;
&lt;p&gt;Another interesting thing here is that this model also maps well onto Ansible local mode. So in theory (unexplored theory) this same central controller and scheduler could be built for Ansible.&lt;/p&gt;
&lt;p&gt;This is built on &lt;a href=&#34;https://choria.io/docs/streams/governor/&#34;&gt;Choria Concurrency Governors&lt;/a&gt; which is an amazing distributed system building block.&lt;/p&gt;
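&lt;p&gt;The governor model itself is simple: a named entity in Choria Streams with a fixed capacity; a node takes a slot, runs, and releases it. From memory the CLI is along these lines, but treat the exact syntax as an assumption and verify against the linked docs:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-nohighlight&#34; data-lang=&#34;nohighlight&#34;&gt;# capacity 1, slots expire after 5 minutes should a node die mid-run
$ choria governor add DB_PUPPET 1 5m 1
# blocks for a slot, runs the command, then releases the slot
$ choria governor run DB_PUPPET &amp;#39;puppet agent --onetime --no-daemonize&amp;#39;
&lt;/code&gt;&lt;/pre&gt;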
&lt;h2 id=&#34;choria&#34;&gt;Choria&lt;/h2&gt;
&lt;p&gt;No surprise that I am using Choria for a large part of this, with a bit of a twist though. Choria, as released on &lt;a href=&#34;https://choria.io&#34;&gt;choria.io&lt;/a&gt;, is actually a distribution of a much larger system that is tailored for Puppet users.  That official Choria release &lt;strong&gt;requires&lt;/strong&gt; Puppet Agent and will not support unofficial builds or unsupported deviations from that.&lt;/p&gt;
&lt;p&gt;With the writing on the wall for Puppet, though, this leaves me with a problem: I have no easy to use public distribution of Choria for non Puppet users. Puppet provides the following to Choria:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Deployment of the packages and files to nodes&lt;/li&gt;
&lt;li&gt;Management of policies and plugins on nodes&lt;/li&gt;
&lt;li&gt;Certificate Authority with certs on every node&lt;/li&gt;
&lt;li&gt;Optional discovery source of truth in PuppetDB&lt;/li&gt;
&lt;li&gt;Libraries for managing packages and services&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These are actually quite significant hurdles to cross to create a fully Puppet-free Choria distribution.&lt;/p&gt;
&lt;p&gt;That said, for a long time Choria has had another life as a &lt;a href=&#34;https://choria.io/docs/concepts/large_scale/&#34;&gt;large scale orchestrator&lt;/a&gt; that is not Puppet related. This implies:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It can &lt;a href=&#34;https://choria.io/blog/post/2018/08/13/server-provisioner/&#34;&gt;self-provision at a rate of 1000+ nodes / minute&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Deploy its own plugins at a rate of multiple plugins delivered to millions of nodes in minutes&lt;/li&gt;
&lt;li&gt;Manage its own security, either integrated with a CA or using a new JWT+ed25519 based approach&lt;/li&gt;
&lt;li&gt;Integrate with non Puppet data sources using external extension points&lt;/li&gt;
&lt;li&gt;Can upgrade itself in place without any help from Puppet in an Over-The-Air type self-upgrade system&lt;/li&gt;
&lt;li&gt;Has centralised RBAC integrated with tools like Open Policy Agent&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In the last 2 years I was on a related contract where all these components were firmed up and made much more capable, reliable and horizontally scalable, and they have been used in anger in the real world in some quite serious mission critical builds.&lt;/p&gt;
&lt;p&gt;Thus, my Choria infrastructure is actually a hybrid between Puppet and non Puppet. Puppet places the RPMs and the plugins that require Puppet (package, service, legacy plugins), but Choria self provisions everything else and owns the life of the agent and more. I am running the new protocol with Open Policy Agent based RBAC. I&amp;rsquo;ve made some changes to the various Puppet modules to enable this and will start looking for some early adopters.&lt;/p&gt;
&lt;p&gt;This means my many Raspberry Pis - from Xmas lights to sensors and HVAC control - are now all managed by Choria too, since the provisioner caters for machines without Puppet.&lt;/p&gt;
&lt;p&gt;I wrote about the &lt;a href=&#34;https://choria-io.github.io/go-choria/previews/protov2/index.html&#34;&gt;new protocol in an ADR&lt;/a&gt; and you can see there is scope for integration into TPMs and more. This is the future world view of Choria and already in use on some 100s of thousands of machines.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;So that&amp;rsquo;s a bit about managing the machines without Kubernetes or an ISP managing them.&lt;/p&gt;
&lt;p&gt;As I&amp;rsquo;ve been out of active Puppet use for a few years, it&amp;rsquo;s been interesting to come back with some semi-fresh eyes and rethink some of the old things I believed were true when I used it constantly.&lt;/p&gt;
&lt;p&gt;Choria will play a critical role in the path forward as I move much of this into containers managed as per the previous blog post, leaving the problem Puppet solves as quite a thin layer. I&amp;rsquo;ve some thoughts on eventually replacing at least the basic package-config-service trio of CM with something in Choria long term.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Lab Infra Rebuild Part 2</title>
      <link>https://www.devco.net/posts/2024/03/21/lab-infra-rebuild-2/</link>
      <pubDate>Thu, 21 Mar 2024 09:00:00 +0100</pubDate>
      
      <guid>https://www.devco.net/posts/2024/03/21/lab-infra-rebuild-2/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://www.devco.net/posts/2024/03/20/lab-infra-rebuild-1/&#34;&gt;Previously&lt;/a&gt; I blogged about rebuilding my personal infra, focussing on what I had before.&lt;/p&gt;
&lt;p&gt;Today we&amp;rsquo;ll start into what I used to replace the old stuff. It&amp;rsquo;s difficult to know where to start, but I think a bit about VM and Container management is as good a place as any.&lt;/p&gt;
&lt;h2 id=&#34;kubernetes&#34;&gt;Kubernetes&lt;/h2&gt;
&lt;p&gt;My previous build used a 3 node Kubernetes Cluster hosted at Digital Ocean. It hosted:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Public facing websites like this blog (WordPress in the past), a wiki, a few static sites etc&lt;/li&gt;
&lt;li&gt;Monitoring: Prometheus, Grafana, Graphite&lt;/li&gt;
&lt;li&gt;A bridge from &lt;a href=&#34;https://www.thethingsnetwork.org&#34;&gt;The Things Network&lt;/a&gt; for my LoRaWAN devices&lt;/li&gt;
&lt;li&gt;3 x redundant Choria Brokers and AAA&lt;/li&gt;
&lt;li&gt;Container Registry backed by Spaces (Digital Ocean object storage)&lt;/li&gt;
&lt;li&gt;Ingress and Okta integration via Vouch&lt;/li&gt;
&lt;li&gt;Service discovery and automatic generation of configurations for Prom, Ingress etc&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Apart from the core cluster I had about 15 volumes, 3 Spaces, an Ingress load balancer with a static IP and a managed MySQL database.&lt;/p&gt;
&lt;p&gt;I never got around to going full GitOps on this setup; it just seemed too much for a one man infra to both deploy all that and maintain the discipline. Of course I am no stranger to the discipline required, being from the Puppet world, but something about the whole GitOps setup just seemed like A LOT.&lt;/p&gt;
&lt;p&gt;I quite liked all of this, when Kubernetes works it is a pleasant experience, some highlights:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Integration with cloud infra like LBs is an amazing experience&lt;/li&gt;
&lt;li&gt;Integration with volumes to provide movable storage is really great and hard to repeat&lt;/li&gt;
&lt;li&gt;I do not mind YAML and the diffable infrastructure is really great, no surprise there. I hold myself largely to blame for the popularity of YAML in infra tools at large thanks to Hiera etc, so I can&amp;rsquo;t complain.&lt;/li&gt;
&lt;li&gt;Complete abstraction of node complexities is a double-edged sword, but I think in the end I came to appreciate it&lt;/li&gt;
&lt;li&gt;I do like the container workflow and it was compatible with some &lt;a href=&#34;https://www.devco.net/archives/2015/03/30/some-thoughts-on-operating-containers.php&#34;&gt;pre-k8s thoughts I had on this&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Easy integration between CI and infrastructure with the &lt;code&gt;kubectl rollout&lt;/code&gt; abstraction&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Some things I just did not like; I will try to mention some things that are not the usual gripes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Access to managed k8s infra is great, but not knowing how it&amp;rsquo;s put together for the particular cloud can make debugging things hard. I had some Cilium failures that were a real pain&lt;/li&gt;
&lt;li&gt;API deprecations are constant, production software relies on Beta APIs and will just randomly break. I expected this, but over the 3 years it happened more than I expected. You really have to be on top of all the versions of all the things&lt;/li&gt;
&lt;li&gt;The complementary management tooling is quite heavy, like I mentioned around GitOps. Traditional CM had a quick on-ramp and was suitable at small scale; I miss that&lt;/li&gt;
&lt;li&gt;I had to move from Linode K8s to Digital Ocean K8s. The portability promise of pure Kubernetes is lost if you do not take a lot of care&lt;/li&gt;
&lt;li&gt;Logging from the k8s infra is insane, ever-changing, and unusable unless you are really, really into this stuff - like, very deep - and very on top of every version change&lt;/li&gt;
&lt;li&gt;Digital Ocean does forced upgrades of the k8s, this is fine. The implication is that all the nodes will be replaced, so Prometheus polling sources will change with big knock-on effects. The way DO does it though involves 2 full upgrades for every 1 upgrade, doubling the pain&lt;/li&gt;
&lt;li&gt;It just seems like no-one even wants to match the features Hiera has in terms of customization of data&lt;/li&gt;
&lt;li&gt;Helm&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In the end it all just seemed like a lot for my needs and was ever so slightly fragile. I went on a 3 month sabbatical last year and the entire infra went to hell twice, all on its own, because I neglected some upgrades during this time and when Digital Ocean landed their upgrade it all broke. It&amp;rsquo;s a big commitment.&lt;/p&gt;
&lt;p&gt;See the full entry for detail of what I am doing instead.&lt;/p&gt;
&lt;h2 id=&#34;container-management&#34;&gt;Container Management&lt;/h2&gt;
&lt;p&gt;So let&amp;rsquo;s look at what I am working on for container management. My current R&amp;amp;D focus in Choria is around Autonomous Agents that can manage one thing forever, one thing like a Container. So I am dog-fooding some of this work where I need containers and will move things into containers as I progress down this path.&lt;/p&gt;
&lt;p&gt;Looking at my likes list from the Kubernetes section above we can imagine where I am focussing with the tooling I am building, so let&amp;rsquo;s just jump right in with what I have today:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nt&#34;&gt;containers&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;tally&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;image&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;registry.choria.io/choria/tally&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;image_tag&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;latest&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;syslog&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;kv_update&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;restart_files&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;- &lt;span class=&#34;l&#34;&gt;/etc/tally/config/choria.conf&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;volumes&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;- &lt;span class=&#34;l&#34;&gt;/etc/tally/config:/tally/config&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;ports&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;- &lt;span class=&#34;m&#34;&gt;9010&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;m&#34;&gt;8080&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;register_ports&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;protocol&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;http&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;service&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;tally&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;ip&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;%{facts.networking.ip}&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;cluster&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;%{facts.location}&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;port&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;m&#34;&gt;9010&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;priority&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;m&#34;&gt;1&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;annotations&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;          &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;prometheus.io/scrape&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;This is Puppet Hiera data that defines:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A running Container for a Choria related service that passively listens to events and expose metrics to Prometheus&lt;/li&gt;
&lt;li&gt;It will watch a file on the host file system and restart the container if the file changes; Puppet manages the file in question&lt;/li&gt;
&lt;li&gt;It supports rolling upgrades via Key-Value store updates but defaults to &lt;code&gt;latest&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;It exposes the port to service discovery with some port specific annotations&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This creates a &lt;a href=&#34;https://choria.io/docs/autoagents/&#34;&gt;Choria Autonomous Agent&lt;/a&gt; that will forever manage the container. If health checks fail the port will not be published to Service Discovery anymore and remediation will kick in etc.&lt;/p&gt;
&lt;p&gt;Of course Puppet is an implementation detail - anything that can produce the YAML file and place it into Choria can do this. Choria can also download and deploy these automations as plugins at runtime via securely signed artifacts, so this supports a fully CI/CD driven, GitOps like flow that has no Puppet involvement.&lt;/p&gt;
&lt;p&gt;To replace the &lt;code&gt;kubectl rollout&lt;/code&gt; process I support KV updates like &lt;code&gt;choria kv put HOIST container.tally.tag 0.0.4&lt;/code&gt; (we have Go APIs for this also); the container will listen for this KV update and perform an upgrade. Upgrades support rolling strategies like, say, 2 at a time out of a cluster of 20. The &lt;code&gt;kv put&lt;/code&gt; is the only interaction needed to do this and no active orchestration is needed.&lt;/p&gt;
&lt;p&gt;Service discovery is also in a Choria Key-Value bucket and I can query it:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-nohighlight&#34; data-lang=&#34;nohighlight&#34;&gt;$ sd find
[1] tally @ ve2 [http://192.168.1.10:9010]
   prometheus.io/port: 9010
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Or generate Prometheus configurations:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-nohighlight&#34; data-lang=&#34;nohighlight&#34;&gt;$ sd prometheus
- targets:
    - 192.168.1.10:9010
  labels:
    __meta_choria_cluster_name: ve2
    __meta_choria_ip: 192.168.1.10
    __meta_choria_port: &amp;#34;9010&amp;#34;
    __meta_choria_priority: &amp;#34;1&amp;#34;
    __meta_choria_protocol: http
    __meta_choria_service: tally
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;So one can imagine how that integrates with Prometheus file based SD or HTTP based SD. In future I&amp;rsquo;ll add things to manage ingress configurations automatically etc, and of course the Service Discovery -&amp;gt; file flow can be managed using Autonomous Agents also.&lt;/p&gt;
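&lt;p&gt;Wiring that into Prometheus file based SD is a small step: write the &lt;code&gt;sd prometheus&lt;/code&gt; output to a file Prometheus watches and map the meta labels in a scrape config. The output path and the target label names below are my assumptions:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;scrape_configs:
  - job_name: choria-sd
    file_sd_configs:
      - files:
          - /etc/prometheus/choria-sd/*.yaml
    relabel_configs:
      - source_labels: [__meta_choria_service]
        target_label: service
      - source_labels: [__meta_choria_cluster_name]
        target_label: cluster
&lt;/code&gt;&lt;/pre&gt;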
&lt;p&gt;The actual building of containers has not changed much from &lt;a href=&#34;https://www.devco.net/archives/2015/02/24/moving-a-service-from-puppet-to-docker-2.php&#34;&gt;earlier&lt;/a&gt; &lt;a href=&#34;https://www.devco.net/archives/2015/03/30/some-thoughts-on-operating-containers.php&#34;&gt;thoughts&lt;/a&gt; about this, and the above system - called Hoist - will focus on strengthening those thoughts.&lt;/p&gt;
&lt;h2 id=&#34;virtualization&#34;&gt;Virtualization&lt;/h2&gt;
&lt;p&gt;Previously I ran various mixes of KVM things: I had Puppet code to generate &lt;code&gt;libvirt&lt;/code&gt; configuration files; later I got lazy and used the Red Hat graphical machine manager - I just don&amp;rsquo;t change my VMs much, to be honest, so don&amp;rsquo;t need a lot of clever things.&lt;/p&gt;
&lt;p&gt;As I was looking to run 3 or 4 baremetal machines with VMs on top, I wanted something quite nice with a good UI, started looking around, and found numerous YouTubers going on like crazy about Proxmox. I tried Proxmox for a week on some machines and had some thoughts about the experience.&lt;/p&gt;
&lt;p&gt;It is nice, with a broad feature set, and quite a good all round product in this space. I can see that, done right, it would be a formidable tool to use and I would consider it again in future.&lt;/p&gt;
&lt;p&gt;It seems like this is a company that has engineers, paid to engineer, and they will engineer things all day long. Ditto product owners etc. There&amp;rsquo;s a lot there and lots of it feels half-baked, awkward, incomplete or in-progress. Combined with there just being A LOT, it seemed like a mess. I bet this is a company that just loves sprint based work and fully embraces the approach of sprints being quite isolated. It shows in obvious ways.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s Debian based. I have used EL from its first Beta, Slackware before that, and SLS before that (1993!). In Kubernetes you can get away with not caring so much about your hosts, but in a virtualized environment you will need ways to manage updates, backups and such on those baremetals. I do not have any tooling specifically built for Debian, so I really do not want that lift. I also just do not care even remotely for Debian or its community.&lt;/p&gt;
&lt;p&gt;Ultimately I do not need integration with Ceph etc, I don&amp;rsquo;t need (oh hell no do I not need) a &lt;code&gt;database-driven file system developed by Proxmox&lt;/code&gt; and for my needs I do not need SDN. All these things are useful, but it seemed like it would tick some of the boxes I listed in the Kubernetes cons.&lt;/p&gt;
&lt;p&gt;After some looking around at options I came across &lt;a href=&#34;https://cockpit-project.org/&#34;&gt;Cockpit&lt;/a&gt; which comes integrated already into EL based distros and while it&amp;rsquo;s not as full featured as Proxmox I find that to be a feature rather than a shortcoming. It does just the right things and I can easily just not install things I do not want.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devco.net/img/cockpit-vm-create.webp&#34; alt=&#34;Cockpit VM management&#34;&gt;&lt;/p&gt;
&lt;p&gt;I think its firewall management still needs a bunch of work, but that&amp;rsquo;s OK - I do not want to manage firewalls in this manner (more later), so that is just fine. I also do not need its package management - no problem, just uninstall the feature.&lt;/p&gt;
&lt;p&gt;There is really not much to say or complain about here - and having nothing to say is a huge feature. It makes virtual machines, allows me to edit their configurations, see their consoles etc. Just what I need and no more. One annoying thing is that I cannot figure out how to trigger a re-install of a machine, though I did not look too deeply.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;EDIT:&lt;/em&gt; I do have a few things to say about it after all:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It took like 2 minutes to install on an EL machine.&lt;/li&gt;
&lt;li&gt;When you are not using it, there is nothing in the process tree. It&amp;rsquo;s fully based on socket activation. No resources wasted.&lt;/li&gt;
&lt;li&gt;It does not take over and camp on everything. It uses the same commands and APIs that Puppet/Ansible/you do, so you can keep using the CLI tools you know. Or progressively learn.&lt;/li&gt;
&lt;li&gt;It uses your normal system users via PAM so not much to deploy regarding authentication etc.&lt;/li&gt;
&lt;li&gt;It supports EL/Debian/Ubuntu and more.&lt;/li&gt;
&lt;/ul&gt;
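&lt;p&gt;For reference, getting to that state on an EL machine is just a couple of commands - this is a sketch based on the Cockpit docs, so exact package and module names may differ slightly per distro:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# install Cockpit plus the virtual machine management module
sudo dnf install cockpit cockpit-machines

# enable the systemd socket - Cockpit itself only starts on demand
sudo systemctl enable --now cockpit.socket

# then browse to https://your-host:9090 and log in as a normal system user
&lt;/code&gt;&lt;/pre&gt;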
&lt;p&gt;This is huge: it is super lightweight, gets out of your way and does not prescribe much. It does not invent new things and does not invent new terminology. Huge.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;So that is roughly what I am doing for Virtualization and Container management after Kubernetes. Container management is a work in progress, so a lot of my components are for now just stuff running on VMs, but as I improve Hoist a bit I&amp;rsquo;ll gradually move more things into it. This has been something I&amp;rsquo;ve wanted to finish for a long time, so I am glad to get the chance.&lt;/p&gt;
&lt;p&gt;Further, as my Lab really is just that - a place for R&amp;amp;D - using the areas I am focussing on in Choria in anger to solve some real problems has been invaluable. I wanted more of that, and this influenced some of these decisions.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Lab Infra Rebuild Part 1</title>
      <link>https://www.devco.net/posts/2024/03/20/lab-infra-rebuild-1/</link>
      <pubDate>Wed, 20 Mar 2024 09:00:00 +0100</pubDate>
      
      <guid>https://www.devco.net/posts/2024/03/20/lab-infra-rebuild-1/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve been posting on socials a bit about rebuilding my lab and some opinions I had on tools, approaches and more. Some people have asked for a way to keep up with my efforts, so I figured it might be time to post here for the first time since 2018!&lt;/p&gt;
&lt;p&gt;In this post I&amp;rsquo;ll focus on what came before - a bit of a recap of my previous setup. In addition to a general software refresh, I have also been in Malta for 8 years now and a lot of my office hardware was purchased around the time of moving here, so we&amp;rsquo;ll also cover replacing NAS servers and more.&lt;/p&gt;
&lt;p&gt;My previous big OS rebuild was around CentOS 7 days, so that&amp;rsquo;s about 3 years ago now - high time to revisit some choices.&lt;/p&gt;
&lt;p&gt;My infra falls in the following categories:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Development machines: usually 10 or so Virtual Machines that I use mainly in developing &lt;a href=&#34;https://choria.io&#34;&gt;Choria.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Office support equipment: NAS, Printers, Desktops, Laptops etc&lt;/li&gt;
&lt;li&gt;Networking equipment: mainly home networking stuff for my locations&lt;/li&gt;
&lt;li&gt;Hosting publicly visible items: This blog, DNS, Choria Package repos etc&lt;/li&gt;
&lt;li&gt;Management infrastructure: Choria, Puppet, etc&lt;/li&gt;
&lt;li&gt;Monitoring infrastructure: Prometheus and friends&lt;/li&gt;
&lt;li&gt;Backups: for everything&lt;/li&gt;
&lt;li&gt;General all-purpose things like source control etc&lt;/li&gt;
&lt;li&gt;My actual office&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Below I&amp;rsquo;ll do a quick run-through of all the equipment, machines, devices etc. I use regularly. I&amp;rsquo;ve largely replaced it all and will detail that in the following posts. It&amp;rsquo;s not huge infra or anything - all told about 20 to 30 instances in 5 or 6 locations.&lt;/p&gt;
&lt;h2 id=&#34;development-machines&#34;&gt;Development Machines&lt;/h2&gt;
&lt;p&gt;I&amp;rsquo;ve had 3 x Intel NUC machines, each with 512GB solid state and 8GB RAM since 2016. They have served me well and really I could see them going for a while longer.&lt;/p&gt;
&lt;p&gt;They were each a KVM host with 3 to 8 CentOS VMs on them. One had some bigger ones as that was my general shell machine, and the others were mainly there to add some servers to my node count for Choria development - Choria being inherently a distributed system, having some more nodes helps.&lt;/p&gt;
&lt;p&gt;For my needs they were great, but unfortunately they have a hardware problem: their BIOS batteries die, and they are soldered onto the board, so not exactly user serviceable.&lt;/p&gt;
&lt;p&gt;Two of these are just retiring - I will remove the SSDs and see what they can be used for, more on that later - but otherwise they are done.&lt;/p&gt;
&lt;p&gt;One of them is actually a bit newer so it&amp;rsquo;s finding a new home as a little desktop for my 6 y/o mainly for Scratch and Minecraft etc.&lt;/p&gt;
&lt;p&gt;I have been using Red Hat since the version 0.9 Halloween beta release, then used CentOS for ages - even donated hardware - and after they seemed intent on self-destruction I started moving to Alma Linux.&lt;/p&gt;
&lt;h2 id=&#34;public-hosting-and-management&#34;&gt;Public Hosting and Management&lt;/h2&gt;
&lt;p&gt;I host my blog and a few other public bits myself. When I rebuilt all my things 3 years ago I wanted to get some more real Kubernetes experience so I got 3 x 8GB droplets from Digital Ocean with a Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;There I hosted the sites, used their managed MySQL, used their Object store and volumes.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s been fine generally; I don&amp;rsquo;t think I like the complexity for what I needed, but it was good experience - more on this later. This cluster started on Linode&amp;rsquo;s managed Kubernetes, but it was early days for them in Kubernetes and I bailed - their Kubernetes support was terrible at the time.&lt;/p&gt;
&lt;p&gt;This was surprising, as I&amp;rsquo;ve been a Linode customer since almost their day one and they have been consistently amazing. But when they launched managed Kubernetes, it seemed their EU timezone support people were level 1 only, and I always ended up waiting till US time to get things resolved - several times with many-hour outages. It&amp;rsquo;s a sad outcome; at the time I moved everything I felt comfortable moving away from them, fearing it was the start of a downward slide in quality. Apart from things I am reluctant to move, Linode has essentially lost me as a customer.&lt;/p&gt;
&lt;p&gt;So I moved to Digital Ocean. It was fine; I really have only one pain with them - they do forced updates of the Kubernetes version (fine), but the way they do it always results in 2 full upgrades and node replacements. This meant my outgoing IPs for things like Prometheus kept changing, which was extremely annoying.&lt;/p&gt;
&lt;p&gt;The Kubernetes infra also ran a 3 node Choria Broker cluster, Prometheus + Alert Manager and Graphite. Alerts go to VictorOps, which I pay for.&lt;/p&gt;
&lt;p&gt;Apart from that I had Linode machines for Puppet, Name Servers, Choria Package repos and some general use machines that I should have killed years ago.&lt;/p&gt;
&lt;p&gt;I guess the bills for this came to something like $400 a month - it helps to have a company to bill this to.&lt;/p&gt;
&lt;h2 id=&#34;office-equipment&#34;&gt;Office Equipment&lt;/h2&gt;
&lt;p&gt;My main office desktop is one of the last Intel iMacs with an old, like 2009 era, Apple Thunderbolt display. I&amp;rsquo;ve had iMacs since the very first one and they were one of my favourite form factors. It&amp;rsquo;s a bit sad about this one, as it was a really nice machine, but Apple is moving fast to stop supporting Intel, so now there are no more OS updates.&lt;/p&gt;
&lt;p&gt;The screen being ridiculously old and Thunderbolt only - no HDMI - is now just useless.&lt;/p&gt;
&lt;p&gt;The current iMacs do look nice, but the problem is there is no screen I can put next to them that doesn&amp;rsquo;t look like crap, and I want either one big monitor or 2 - using iMacs just does not work for me anymore.&lt;/p&gt;
&lt;p&gt;I have an old tank Brother printer that is maybe 15 to 20 years old. It just refuses to die but I think its paper feed has now dried up so maybe time to go.&lt;/p&gt;
&lt;p&gt;Other than the things that stay in my office, I use a range of newer MacBooks - currently a 16 inch M2 Pro with 32GB RAM.&lt;/p&gt;
&lt;p&gt;For file storage I use Dropbox of course, but QNAP devices for larger storage. I have an old TS-439 that is now 14 years old and still getting security updates (!!!); this is at my office. At home I have a TS-451+. Each provides around 4TB of usable space.&lt;/p&gt;
&lt;p&gt;At home I also have a 34 inch ultra-wide display that I put my Laptop on when I sit at the desk here. I got this 7 years ago now, so it is getting a bit old.&lt;/p&gt;
&lt;h2 id=&#34;networking-equipment&#34;&gt;Networking Equipment&lt;/h2&gt;
&lt;p&gt;I have 4 locations I work from often with bits of equipment scattered everywhere. Some time ago I had a mix of networking equipment but I&amp;rsquo;ve been standardising on Ubiquiti. I am still sad about Apple retiring their Wi-Fi range. Mikrotik was nice enough but I wanted something a bit more polished.&lt;/p&gt;
&lt;p&gt;In Malta I have 3 Dream Machines (the round tube one) and around 16 switches, APs, etc. My house is 450 years old with 1.5 meter thick walls, so every room gets an AP. I like the single vendor system as everything has a single management pane, and generally things like VPNs etc. just work.&lt;/p&gt;
&lt;p&gt;In Latvia I have a Dream Machine Pro, as I have a set of surveillance cameras there. I am considering upgrading to the Pro at my main house too, as I&amp;rsquo;d like to add a camera outside.&lt;/p&gt;
&lt;p&gt;In Malta I used to be on Melita for everything; now I am on Go, who have a really great new all-fibre-to-the-house network. I have one location left to move and then I&amp;rsquo;ll have 1Gb links everywhere. In Latvia I have a 4G link that&amp;rsquo;s actually surprisingly good (190 Mbps down, 11 up).&lt;/p&gt;
&lt;p&gt;Firewalls tended to be a mix of the Ubiquiti devices and iptables.&lt;/p&gt;
&lt;h2 id=&#34;general-infra&#34;&gt;General Infra&lt;/h2&gt;
&lt;p&gt;My mail has been hosted at Fastmail for years and they are amazing. The web UI is a bit of a mess to be honest, but at least it doesn&amp;rsquo;t keep changing. Their mail hosting features are rock solid though.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve used GitHub for everything since they introduced unlimited private repos. I do pay for GitHub though.&lt;/p&gt;
&lt;h2 id=&#34;backups&#34;&gt;Backups&lt;/h2&gt;
&lt;p&gt;I have a 30TB Hetzner machine that runs Bacula. It does daily backups of everything I have and does a full 3 month rotation of Full backups with 1 month of Incrementals onto a RAID-10 disk setup.&lt;/p&gt;
&lt;p&gt;The main QNAP syncs my Dropbox onto its disks daily.&lt;/p&gt;
&lt;p&gt;The main QNAP is set up as 2 x RAID-1 volumes. The main volume is synchronised daily to the office QNAP, and I do monthly manual disk rot checks and a sync between the RAID-1 volumes. This gives me a 1 month old full backup of the entire QNAP at home and daily off-sites. Monthly I also sync the QNAP to the Hetzner machine.&lt;/p&gt;
&lt;p&gt;This means every file hits 9 drives in 3 locations over 2 countries with access to file versions going back 3 months. Ample time to recover from user error like deleting the wrong files and also a lot of redundancy built in.&lt;/p&gt;
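&lt;p&gt;For the curious, a rotation like that maps quite naturally onto Bacula&amp;rsquo;s Schedule and Pool resources - roughly like this sketch, where the names and times are illustrative rather than my actual config:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# one Full per month, Incrementals the rest of the time
Schedule {
  Name = &#34;MonthlyCycle&#34;
  Run = Level=Full 1st sun at 23:05
  Run = Level=Incremental mon-sat at 23:05
}

# keep Fulls for 3 months before their volumes are recycled
Pool {
  Name = &#34;Full-Pool&#34;
  Pool Type = Backup
  Volume Retention = 3 months
  AutoPrune = yes
  Recycle = yes
}

# Incrementals only need to live for a month
Pool {
  Name = &#34;Inc-Pool&#34;
  Pool Type = Backup
  Volume Retention = 1 month
  AutoPrune = yes
  Recycle = yes
}
&lt;/code&gt;&lt;/pre&gt;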
&lt;h2 id=&#34;physical-office&#34;&gt;Physical Office&lt;/h2&gt;
&lt;p&gt;I&amp;rsquo;ve had an office in a town called Mosta here for 4 years now, but I have not used it much because parking around it was hell. On paper it was 5 minutes from school, and I would stay there while the boy is at school - in practice it could take me 40 minutes to find parking and 12 to drive back home. And when I did find parking it could be very far from the office - walking along pavements in Summer is no joke here.&lt;/p&gt;
&lt;p&gt;It just never worked, I ended up working from home most of the time. I should have bailed out of it years ago but just never got around to it.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;So that about rounds it up, keep reading to hear what literally everything is being replaced with and what new things are being added to the mix to boot!&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Part 2 covers &lt;a href=&#34;https://www.devco.net/posts/2024/03/21/lab-infra-rebuild-2/&#34;&gt;Kubernetes, Virtualization and Containers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 3 covers &lt;a href=&#34;https://www.devco.net/posts/2024/04/07/lab-infra-rebuild-3/&#34;&gt;Server Management, Puppet and Choria&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 4 covers &lt;a href=&#34;https://www.devco.net/posts/2024/04/11/lab-infra-rebuild-4/&#34;&gt;My Office and Office Hardware&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 5 covers &lt;a href=&#34;https://www.devco.net/posts/2024/04/25/lab-infa-rebuild-5/&#34;&gt;VMs, Baremetals and Operating Systems&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 6 wraps up the series with a look at &lt;a href=&#34;https://www.devco.net/posts/2024/07/31/lab-infra-rebuild-6/&#34;&gt;SaaS and other tools&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description>
    </item>
    
  </channel>
</rss>
