<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>kubernetes &#8212; A Bit</title>
    <link>https://baez.link/tag:kubernetes</link>
    <description>A little bit of writing by Alejandro </description>
    <pubDate>Sun, 19 Apr 2026 16:33:52 +0000</pubDate>
    <item>
      <title>What Immutable Linux To Use?</title>
      <link>https://baez.link/what-immutable-linux-to-use?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[In more recent years, Linux distributions have become quite interesting. The hypothesis of  immutable Linux have gone from pure thought, to full throttled theory. There exists a plethora of options out in the wild. All from different companies, distributions, and communities.&#xD;&#xA;&#xD;&#xA;Now while many options exists, for me, I been debating on three. Dive so deep I hit bedrock with Nix and NixOS. Accept Kubernetes as the one true OS through Talos. Or drink the orange glowing Kool-Aid of snaps in Ubuntu Core. Bare with me. There&#39;s logic in these. &#xD;&#xA;&#xD;&#xA;!--more--&#xD;&#xA;&#xD;&#xA;The choices &#xD;&#xA;&#xD;&#xA;immutable&#xD;&#xA;&#xD;&#xA;I been debating the choice on what to use as my de-facto immutable OS for a while now. &#xD;&#xA;&#xD;&#xA;iframe src=&#34;https://fosstodon.org/@zeab/109773302609111324/embed&#34; class=&#34;mastodon-embed&#34; style=&#34;max-width: 100%; border: 0&#34; width=&#34;400&#34; allowfullscreen=&#34;allowfullscreen&#34;/iframescript src=&#34;https://fosstodon.org/embed.js&#34; async=&#34;async&#34;/script&#xD;&#xA;&#xD;&#xA;The three I settled (for now) are ones I think make the most sense. To set and forget. Distributions where I can realistically automate the entire thing. Without much fear of the ground falling apart. Because unlike normal Linux distributions, immutable OS versions are the wild west of Linux distros. All choose radically different ways of achieving same goals. Choice is great for ecosystem growth. Terrible for a foundation. &#xD;&#xA;&#xD;&#xA;Ubuntu Core  &#xD;&#xA;&#xD;&#xA;is this linux&#xD;&#xA;&#xD;&#xA;The one I&#39;m sure gets most debate is Ubuntu Core. Even though it&#39;s the most systematically grounded version of an immutable OS. The design is fairly simple. Use snaps. But the genius of this is how Canonical made every part of the OS into an isolated snap. Snaps you can update and change without impacting the &#39;core&#39; of the OS. 
&#xD;&#xA;&#xD;&#xA;You can update the entire OS by changing the snap version of the core you are based on. So jumping from Ubuntu Core 20 to Ubuntu Core 22 is doing something like this. And as Canonical has matured how Ubuntu Core operates, they have also made different layers to this design, using the atomic principle. &#xD;&#xA;&#xD;&#xA;So now there are snaps for the linux kernel. And configuration-based ones called gadget snaps, responsible for handling bootstrapping on specific hardware. All following the same &#39;everything is a snap&#39; principle.&#xD;&#xA;&#xD;&#xA;You can still run any application you want. The difference here is you have to package them differently. Or at least differently from the norm.&#xD;&#xA;&#xD;&#xA;The con with Ubuntu Core is that you have to accept their way of packaging software with snaps. There is no alternative. You must have Canonical as your overseer here. The pro is that everything stated before benefits your ability to have a stable immutable system. It should not be understated: the ease that comes with maintainability this low, with Ubuntu Core, is pretty remarkable. &#xD;&#xA;&#xD;&#xA;Talos&#xD;&#xA;&#xD;&#xA;one does not simply use kubernetes&#xD;&#xA;&#xD;&#xA;Talos differs in a very simple way: the ENTIRE operating system. It is technically Linux because Talos uses the Linux kernel, but it immediately deviates from there. You don&#39;t use an OS with Talos. What you use is Kubernetes. &#xD;&#xA;&#xD;&#xA;Kubernetes is infamous for being complex. I even wrote about my annoyance years ago. After years, that article is still true for self-hosted Kubernetes clusters. So is Kubernetes worth self hosting? I can wholeheartedly say NO. But with an exception. The exception here is if you are using Talos.&#xD;&#xA;&#xD;&#xA;You see, Talos strips away practically everything of the OS. But it does it in a way that makes a lot of sense. You don&#39;t use ssh, because there is no shell to ssh to! 
Instead you use their gRPC API component, apid, to interface with the operating system. Talos has no systemd. It is replaced with their own PID 1, an init system called machined, whose whole purpose is running what Kubernetes needs and the gRPC interface that defines the OS. &#xD;&#xA;&#xD;&#xA;In practice, Talos is actually dead simple to use and administer. It makes using and maintaining Kubernetes strikingly easy. New version released? Run talosctl to upgrade: &#xD;&#xA;&#xD;&#xA;talosctl upgrade-k8s --to 1.28.0&#xD;&#xA;&#xD;&#xA;The same is true for updating Talos itself. Because the OS is atomic, there is very little thought required to handle failures. You roll back like nothing happened. Here, you simply use Kubernetes. &#xD;&#xA;&#xD;&#xA;The con with Talos isn&#39;t actually the use of Talos. It&#39;s the principle of only using Kubernetes for everything. If you are not comfortable with a job orchestrator like it, DO NOT USE. If you somehow like ssh or want to install other things directly on the OS, DO NOT USE. If you don&#39;t want to do literally everything as code, definitely DO NOT USE. &#xD;&#xA;&#xD;&#xA;But if you do value what Talos offers, it&#39;s immensely difficult not to choose it. So many problems are simply non-existent on Talos. The OS makes you question why even bother with the old ways.&#xD;&#xA;&#xD;&#xA;NixOS&#xD;&#xA;&#xD;&#xA;NixOS is the messiah&#xD;&#xA;&#xD;&#xA;The prophecy is written. NixOS is the answer to our Linux administration ways. It will solve all our problems with packaging software. We will know of history as before and after NixOS. And quite frankly, it really does feel this way. &#xD;&#xA;&#xD;&#xA;NixOS shines in the same ways the others on this list shine. It rethinks what a Linux is and could be. If you make absolutely everything atomic, down to the core of how you package and run software, do you even need to care if your OS breaks? NixOS&#39;s potency is the nix programming language. 
Not to be confused with nix the package manager. Or the OS, which is also called nix. &#xD;&#xA;&#xD;&#xA;Unlike the other options on this list, the con with NixOS is immediately apparent. It&#39;s the difficulty of first learning the ways of nix, then learning the ways of using nix. Documentation is quite difficult to come by for nix. Much of your time will be left questioning how anything even functions. There&#39;s also the full upgrade to the nix design called nix flakes. All enough to put incredible friction on nix usage and NixOS adoption. &#xD;&#xA;&#xD;&#xA;However, the moment you pass the hurdles of learning nix, nothing comes even close to its versatility. I personally have migrated most of my software to run with nix. Or at the very least, be built with nix. For work, I have development environments that are strictly nix-based. And the list goes on. &#xD;&#xA;&#xD;&#xA;With NixOS, it&#39;s the same principle. You write your closure and you can be assured it will work. No matter what mess you make of the machine. You can roll back like nothing ever occurred. There&#39;s nothing really like Nix and NixOS. The principle is that you handle the hurdle of defining how your software is built. From then on, it will just run. &#xD;&#xA;&#xD;&#xA;No more conflicts with versions of Python. No issues with running two independently different versions of the same software. Because of the closure design, there&#39;s no need to containerize your applications. They &#39;just&#39; work, with zero conflict, running on the same host. Optimally, NixOS gets you the closest to the promise of Gentoo, but entirely atomic and immutable. &#xD;&#xA;&#xD;&#xA;So what to choose?&#xD;&#xA;&#xD;&#xA;thinking&#xD;&#xA;&#xD;&#xA;I don&#39;t know yet. The reality is, all three options serve very similar ideas of running an immutable OS. They simply attack the problems differently. Talos packages software in Kubernetes manifests, essentially. Ubuntu Core is snaps. 
And NixOS is nix closures. All with different tradeoffs that are far too long to add to this never-ending post. &#xD;&#xA;&#xD;&#xA;For security reasons, I would probably go for Talos. Because of the stripped-down purpose of the OS, there&#39;s a smaller footprint for a security issue. Yes. Even with Kubernetes. &#xD;&#xA;&#xD;&#xA;For maintainability, I think Ubuntu Core is prime. Canonical has been doing Linux distributions for decades now. They know what it means to make something function. Every Ubuntu Core release has a maintenance window of up to ten years. Meaning, if I want to just run my thing, with no fuss, this will be it.&#xD;&#xA;&#xD;&#xA;For customization, nothing gets even close to what NixOS promises and delivers. I would be able to take all the Nix flakes I&#39;ve been writing for myself and run them straight on NixOS. True &#34;it works on my machine&#34; on all machines.   &#xD;&#xA;&#xD;&#xA;[1]: https://i.snap.as/zZjSmqQW.jpg&#xD;&#xA;[2]: https://i.snap.as/fHhCjwvb.jpg&#xD;&#xA;[3]: https://fosstodon.org/@zeab/109773302609111324&#xD;&#xA;[4]: https://ubuntu.com/core&#xD;&#xA;[5]: https://snapcraft.io/docs/snapcraft&#xD;&#xA;[6]: https://ubuntu.com/core/docs/kernel-building&#xD;&#xA;[7]: https://ubuntu.com/core/docs/gadget-snaps&#xD;&#xA;[8]: https://snapcraft.io/docs&#xD;&#xA;[9]: https://i.snap.as/ihtrKyXX.jpg&#xD;&#xA;[10]: https://www.talos.dev/&#xD;&#xA;[11]: https://kubernetes.io/&#xD;&#xA;[12]: https://baez.link/the-a-z-stack&#xD;&#xA;[13]: https://www.talos.dev/v1.5/learn-more/components/#machined&#xD;&#xA;[14]: https://www.talos.dev/v1.5/learn-more/components/#apid&#xD;&#xA;[15]: https://www.talos.dev/v1.5/kubernetes-guides/upgrading-kubernetes/&#xD;&#xA;[16]: https://i.snap.as/Ps69vAxn.jpg&#xD;&#xA;[17]: https://nixos.org/&#xD;&#xA;[18]: https://zero-to-nix.com/&#xD;&#xA;[19]: https://nixos.org/manual/nix/stable/&#xD;&#xA;[20]: https://search.nixos.org/packages&#xD;&#xA;[21]: https://linuxunplugged.com/524&#xD;&#xA;[22]: 
https://zero-to-nix.com/concepts/closures&#xD;&#xA;[23]: https://www.gentoo.org/get-started/about/&#xD;&#xA;[24]: https://stackoverflow.com/questions/55130795/what-is-a-kubernetes-manifest&#xD;&#xA;[25]: https://ubuntu.com/about/release-cycle&#xD;&#xA;[26]: https://i.snap.as/dx74mIQc.jpg&#xD;&#xA;&#xD;&#xA;#linux #immutableos #ubuntucore #talos #nixos #atomic #ubuntu #kubernetes&#xD;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>In recent years, Linux distributions have become quite interesting. The hypothesis of immutable Linux has gone from pure thought to full-throttled theory. There exists a plethora of options out in the wild, all from different companies, distributions, and communities.</p>

<p>Now while many options exist, I&#39;ve been debating between three. Dive so deep I hit bedrock with Nix and NixOS. Accept Kubernetes as the one true OS through Talos. Or drink the orange glowing Kool-Aid of snaps in Ubuntu Core. Bear with me. There&#39;s logic in these.</p>



<h1 id="the-choices">The choices</h1>

<p><img src="https://i.snap.as/fHhCjwvb.jpg" alt="immutable"/></p>

<p>I&#39;ve been debating what to use as my de facto immutable OS for a while now.</p>

<p><iframe src="https://fosstodon.org/@zeab/109773302609111324/embed" class="mastodon-embed" style="max-width: 100%; border: 0" width="400" allowfullscreen="allowfullscreen"></iframe><script src="https://fosstodon.org/embed.js" async="async"></script></p>

<p>The three I settled on (for now) are the ones I think make the most sense. To set and forget. Distributions where I can realistically automate the entire thing. Without <em>much</em> fear of the ground falling apart. Because unlike normal Linux distributions, immutable OSes are the wild west of Linux distros. All choose radically different ways of achieving the same goals. Choice is great for ecosystem growth. Terrible for a foundation.</p>

<h2 id="ubuntu-core">Ubuntu Core</h2>

<p><img src="https://i.snap.as/zZjSmqQW.jpg" alt="is this linux"/></p>

<p>The one I&#39;m sure gets the most debate is <a href="https://ubuntu.com/core">Ubuntu Core</a>. Even though it&#39;s the most systematically grounded version of an immutable OS. The design is fairly simple. Use <a href="https://snapcraft.io/docs">snaps</a>. But the genius of this is how Canonical made every part of the OS into an isolated snap. Snaps you can update and change without impacting the &#39;core&#39; of the OS.</p>

<p>You can update the entire OS by changing the snap version of the core you are based on. So jumping from Ubuntu Core 20 to Ubuntu Core 22 is doing something like this. And as Canonical has matured how Ubuntu Core operates, they have also made different layers to this design, using the atomic principle.</p>
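<p>As a rough sketch of what that looks like in practice. The base snap refreshes like any other snap, while moving between Core releases goes through snapd&#39;s remodeling mechanism (the model file name here is hypothetical):</p>

<pre><code class="language-bash"># Refresh the current base snap in place:
sudo snap refresh core20

# Jump to a new Ubuntu Core release by remodeling to a model
# assertion that declares the core22 base (hypothetical file):
sudo snap remodel my-device-core22.model
</code></pre>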

<p>So now there are snaps for the <a href="https://ubuntu.com/core/docs/kernel-building">linux kernel</a>. And configuration-based ones called <a href="https://ubuntu.com/core/docs/gadget-snaps">gadget snaps</a>, responsible for handling bootstrapping on specific hardware. All following the same &#39;everything is a snap&#39; principle.</p>

<p>You can still run any application you want. The difference here is you <a href="https://snapcraft.io/docs/snapcraft">have to package</a> them differently. Or at least differently from the norm.</p>
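<p>To give a feel for that packaging, here is a minimal, hypothetical <code>snapcraft.yaml</code> (the snap name and contents are illustrative, not from any real snap):</p>

<pre><code class="language-yaml">name: hello-baez        # hypothetical snap name
base: core22            # Ubuntu Core release to build against
version: '0.1'
summary: A tiny example snap
description: Packages an ordinary shell script the snap way.
grade: stable
confinement: strict

parts:
  hello:
    plugin: nil
    override-build: |
      mkdir -p $CRAFT_PART_INSTALL/bin
      printf '#!/bin/sh\necho hello\n' > $CRAFT_PART_INSTALL/bin/hello
      chmod +x $CRAFT_PART_INSTALL/bin/hello

apps:
  hello:
    command: bin/hello
</code></pre>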

<p>The con with Ubuntu Core is that you have to accept their way of packaging software with snaps. There is no alternative. You <strong>must</strong> have Canonical as your overseer here. The pro is that everything stated before benefits your ability to have a stable immutable system. It should not be understated: the ease that comes with maintainability this low, with Ubuntu Core, is pretty remarkable.</p>

<h2 id="talos">Talos</h2>

<p><img src="https://i.snap.as/ihtrKyXX.jpg" alt="one does not simply use kubernetes"/></p>

<p><a href="https://www.talos.dev/">Talos</a> differs in a very simple way: the <strong>ENTIRE</strong> operating system. It is technically Linux because Talos uses the Linux kernel, but it immediately deviates from there. You don&#39;t use an OS with Talos. What you use is <a href="https://kubernetes.io/">Kubernetes</a>.</p>

<p>Kubernetes is infamous for being complex. I even <a href="https://baez.link/the-a-z-stack">wrote about my annoyance</a> years ago. After years, that article is still true for self-hosted Kubernetes clusters. So is Kubernetes worth self hosting? I can wholeheartedly say NO. <em>But</em> with an exception. The exception here is if you are using Talos.</p>

<p>You see, Talos strips away practically everything of the OS. But it does it in a way that makes a lot of sense. You don&#39;t use ssh, because there is no shell to ssh to! Instead you use their gRPC API component, <a href="https://www.talos.dev/v1.5/learn-more/components/#apid">apid</a>, to interface with the operating system. Talos has no systemd. It is replaced with their own PID 1, an init system called <a href="https://www.talos.dev/v1.5/learn-more/components/#machined">machined</a>, whose whole purpose is running what Kubernetes needs and the gRPC interface that defines the OS.</p>
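<p>Day-to-day inspection happens over that same API with <code>talosctl</code>. A short sketch, with a made-up node address:</p>

<pre><code class="language-bash"># Everything below talks gRPC to apid; no shell is ever opened.
talosctl --nodes 10.0.0.2 services     # what machined is running
talosctl --nodes 10.0.0.2 dmesg       # kernel log, without ssh
talosctl --nodes 10.0.0.2 logs kubelet # per-service logs
</code></pre>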

<p>In practice, Talos is actually dead simple to use and administer. It makes using and maintaining Kubernetes strikingly easy. New version released? Run <a href="https://www.talos.dev/v1.5/kubernetes-guides/upgrading-kubernetes/">talosctl to upgrade</a>:</p>

<pre><code class="language-bash">talosctl upgrade-k8s --to 1.28.0
</code></pre>

<p>The same is true for updating Talos itself. Because the OS is atomic, there is very little thought required to handle failures. You roll back like nothing happened. Here, you simply use Kubernetes.</p>
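<p>A hedged sketch of what that looks like (node address and version tag are illustrative):</p>

<pre><code class="language-bash"># Upgrade the Talos OS on one node:
talosctl upgrade --nodes 10.0.0.2 \
  --image ghcr.io/siderolabs/installer:v1.5.3

# If the new image misbehaves, boot back into the previous one:
talosctl rollback --nodes 10.0.0.2
</code></pre>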

<p>The con with Talos isn&#39;t actually the use of Talos. It&#39;s the principle of only using Kubernetes for everything. If you are not comfortable with a job orchestrator like it, DO NOT USE. If you somehow like ssh or want to install other things directly on the OS, DO NOT USE. If you don&#39;t want to do literally everything as code, definitely DO NOT USE.</p>

<p>But if you do value what Talos offers, it&#39;s immensely difficult not to choose it. So many problems are simply non-existent on Talos. The OS makes you question why even bother with the old ways.</p>

<h2 id="nixos">NixOS</h2>

<p><img src="https://i.snap.as/Ps69vAxn.jpg" alt="NixOS is the messiah"/></p>

<p>The prophecy is written. <a href="https://nixos.org/">NixOS</a> is the answer to our Linux administration ways. It will solve all our problems with packaging software. We will know of history as before and after NixOS. And quite frankly, it really does feel this way.</p>

<p>NixOS shines in the same ways the others on this list shine. It rethinks what a Linux is and could be. If you make absolutely everything atomic, down to the core of how you package and run software, do you even need to care if your OS breaks? NixOS&#39;s potency is the <a href="https://nixos.org/manual/nix/stable/">nix programming language</a>. Not to be confused with <a href="https://search.nixos.org/packages">nix the package manager</a>. Or the OS, which is also called nix.</p>

<p>Unlike the other options on this list, the con with NixOS is immediately apparent. It&#39;s the difficulty of first learning the ways of nix, <em>then</em> learning the ways of using nix. Documentation is quite difficult to come by for nix. Much of your time will be left questioning how anything even functions. There&#39;s also the full upgrade to the nix design called <a href="https://zero-to-nix.com/">nix flakes</a>. All enough to put incredible friction on nix usage and NixOS adoption.</p>

<p><strong>However</strong>, the moment you pass the hurdles of learning nix, nothing comes even close to its versatility. I personally have migrated most of my software to run with nix. Or at the very least, be built with nix. For work, I have development environments that are strictly nix-based. And the list goes on.</p>

<p>With NixOS, it&#39;s the same principle. You write your <a href="https://zero-to-nix.com/concepts/closures">closure</a> and you can be assured it will work. No matter what <a href="https://linuxunplugged.com/524">mess you make of the machine</a>. You can roll back like nothing ever occurred. There&#39;s nothing really like Nix and NixOS. The principle is that you handle the hurdle of defining how your software is built. From then on, it will just run.</p>
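<p>As a taste of that declarative style, a minimal, hypothetical fragment of a machine definition (package choices are illustrative):</p>

<pre><code class="language-nix"># /etc/nixos/configuration.nix -- the whole machine as one expression
{ pkgs, ... }: {
  services.openssh.enable = true;
  environment.systemPackages = [ pkgs.git pkgs.htop ];
  system.stateVersion = "23.05";
}
</code></pre>

<p>Applying it with <code>nixos-rebuild switch</code> builds a new generation atomically, and <code>nixos-rebuild switch --rollback</code> returns to the previous one.</p>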

<p>No more conflicts with versions of Python. No issues with running two independently different versions of the same software. Because of the closure design, there&#39;s no need to containerize your applications. They &#39;just&#39; work, with zero conflict, running on the same host. Optimally, NixOS gets you the closest to the promise of <a href="https://www.gentoo.org/get-started/about/">Gentoo</a>, but entirely atomic and immutable.</p>
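<p>For instance, a sketch of pulling two Python versions into one throwaway environment, no containers involved (the package attribute names assume nixpkgs as of this writing):</p>

<pre><code class="language-bash">nix-shell -p python39 python311 \
  --run 'python3.9 --version; python3.11 --version'
</code></pre>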

<h1 id="so-what-to-choose">So what to choose?</h1>

<p><img src="https://i.snap.as/dx74mIQc.jpg" alt="thinking"/></p>

<p>I don&#39;t know yet. The reality is, all three options serve very similar ideas of running an immutable OS. They simply attack the problems differently. Talos packages software in <a href="https://stackoverflow.com/questions/55130795/what-is-a-kubernetes-manifest">Kubernetes manifests</a>, essentially. Ubuntu Core is snaps. And NixOS is nix closures. All with different tradeoffs that are far too long to add to this never-ending post.</p>

<p>For security reasons, I would probably go for Talos. Because of the stripped-down purpose of the OS, there&#39;s a smaller footprint for a security issue. Yes. Even with Kubernetes.</p>

<p>For maintainability, I think Ubuntu Core is prime. Canonical has been doing Linux distributions for decades now. They know what it means to make something function. Every Ubuntu Core release has a maintenance window of <a href="https://ubuntu.com/about/release-cycle">up to ten years</a>. Meaning, if I want to just run my thing, with no fuss, this will be it.</p>

<p>For customization, nothing gets even close to what NixOS promises and delivers. I would be able to take all the Nix flakes I&#39;ve been writing for myself and run them straight on NixOS. True “it works on my machine” on all machines.</p>

<p><a href="https://baez.link/tag:linux" class="hashtag"><span>#</span><span class="p-category">linux</span></a> <a href="https://baez.link/tag:immutableos" class="hashtag"><span>#</span><span class="p-category">immutableos</span></a> <a href="https://baez.link/tag:ubuntucore" class="hashtag"><span>#</span><span class="p-category">ubuntucore</span></a> <a href="https://baez.link/tag:talos" class="hashtag"><span>#</span><span class="p-category">talos</span></a> <a href="https://baez.link/tag:nixos" class="hashtag"><span>#</span><span class="p-category">nixos</span></a> <a href="https://baez.link/tag:atomic" class="hashtag"><span>#</span><span class="p-category">atomic</span></a> <a href="https://baez.link/tag:ubuntu" class="hashtag"><span>#</span><span class="p-category">ubuntu</span></a> <a href="https://baez.link/tag:kubernetes" class="hashtag"><span>#</span><span class="p-category">kubernetes</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/what-immutable-linux-to-use</guid>
      <pubDate>Fri, 08 Sep 2023 16:22:30 +0000</pubDate>
    </item>
    <item>
      <title>The A-Z Stack</title>
      <link>https://baez.link/the-a-z-stack?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Let&#39;s say you wanted to write your own application platform from scratch in the modern workflow. &#xA;&#xA;!--more--&#xA;&#xA;Not too long ago Kelsey Hightower posted a tweet about what you may require to run an application platform. While his list may look large, it&#39;s not all encompassing. It&#39;s just an example of how ridiculous our abstractions have gotten. Even taking one or two service on each category in the landscape.cncf.io page, would not give the full list of dependencies. So I started thinking. What would a full encompassing list of the &#39;recommended stack&#39; look like?&#xA;&#xA;I&#39;ve been running a few posts now on striking a balance between what you do to run your software in a sustainable and manageable way. This post is not that. The design here is the A-Z stack. A platform of everything I could think of, from on top of my head, you may need for a modern application platform. Please don&#39;t try to implement the stack here at work. If you already have, I&#39;m sorry and I know your pain. There&#39;s consoling we can probably get for our insanity. &#xA;&#xA;First, you need a cloud provider. If we want to be realistic, this would certainly be AWS. A high chance you and your team are already AWS Architects with decades of experience collectively. &#xA;&#xA;Next, you need an infrastructure as code software. The list can get quite large here on implementation. For the purpose of the A-Z stack list, let&#39;s just say you use Hashicorp&#39;s Terraform. With a team of N   1, you most certainly will be working on the code in terraform together. In other words, you need two parts to make terraform work properly here. One, you must be using version control. The famous single options are git and github. The second part is you need to have a backend that&#39;s not local for your terraform state. Since we using AWS, might as well use S3 with DynamoDB for state locking. 
&#xA;&#xA;With your cloud provider and IaC ready, the next section is the OS. You should be running Linux, let&#39;s say CentOS. Make sure you have SELinux enabled and actually enforcing here. While the trend is to disable it, don&#39;t. There are too many reasons to count why not. If you&#39;re doing it to be fast, you are hurting yourself and your company far more later down the road. Beyond the OS security, you need something to actually provision the Linux of choice. Let&#39;s stay with Red Hat and go with Ansible. The provisioning alone can be its own series of lists of things you need, but we&#39;ll just keep it as is.&#xA;&#xA;Ok, here we go with the container orchestrator. Sticking to the norm, and the standard being Red Hat, let&#39;s say Openshift. Openshift comes with a lot of batteries included, so the list would be much larger with what isn&#39;t set by default. However, for peace of mind, I&#39;m bringing up cri-o for CRI, Flannel for CNI, and CoreDNS for service discovery. CSI here probably means Ceph with Rook. Secret management is a must and should be Vault.  &#xA;&#xA;Now comes networking. I already gave Flannel for CNI and CoreDNS, but you still need an ingress controller. The popular one here would probably be ambassador. Don&#39;t forget your services in this platform need to be able to communicate with one another. That means a service mesh is necessary. Here I&#39;m going to cheat and use Linkerd. By cheat, I mean it covers the service proxy sidecar and the controller. Otherwise, we most certainly would then have to add Envoy. However, if your service does have proxy requirements not covered by the Linkerd sidecar, then Envoy is still required. &#xA;&#xA;Here comes the home stretch! The actual application requirements. First is the container registry, harbor. With it come Notary and Clair for build image security. Next, you need to deploy to Kubernetes using something structured like Helm. 
If not already brought up before, you need a key value store like etcd. You also need a CI/CD solution for your applications. So staying with the trends, Argo would suffice. Don&#39;t forget to monitor your application and platform. That most likely means Prometheus for metrics, Loki for logs, and Jaeger for tracing.&#xA;&#xA;Now you can finally begin to write your application. Just so the point is made clear, here&#39;s the list of all that you must know, like the creators of the software themselves, to get this type of modern day application platform working:&#xA;&#xA;1. Cloud Provider: AWS&#xA;2. IaC: Terraform&#xA;3. Version Control: Git&#xA;4. Version Control Host: Github&#xA;5. Terraform state backend: S3 and DynamoDB for state locking&#xA;6. Linux Distribution: CentOS&#xA;7. SELinux enabled and enforcing&#xA;8. Linux Provisioner: Ansible&#xA;9. Container Orchestrator: Kubernetes, Openshift&#xA;10. CRI: CRI-O&#xA;11. CNI: Flannel&#xA;12. Service Discovery: CoreDNS&#xA;13. CSI: Ceph with Rook&#xA;14. Secret Management: Vault&#xA;15. Ingress Controller: Ambassador&#xA;16. Service Mesh: Linkerd&#xA;17. Service Proxy: Envoy&#xA;18. Container Registry: Harbor&#xA;19. Version Security Management: Notary&#xA;20. Build container image security: Clair&#xA;21. Kubernetes deployment: Helm&#xA;22. Key Value store: etcd&#xA;23. Continuous Integration and Delivery: Argo&#xA;24. Metrics Observability: Prometheus&#xA;25. Logs Observability: Loki&#xA;26. 
Tracing Observability: Jaeger &#xA; &#xA;&#xA;#100DaysToOffload #Day12 #Kubernetes #PaaS &#xA;&#xA;[1]: https://twitter.com/kelseyhightower/status/1245886920443363329&#xA;[2]: https://landscape.cncf.io/&#xA;[3]: https://baez.link/tag:StrikingABalance&#xA;[4]: https://www.digitalocean.com/&#xA;[5]: https://www.digitalocean.com/&#xA;[6]: https://www.terraform.io/&#xA;[7]: https://git-scm.com/&#xA;[8]: https://github.com/&#xA;[9]: https://www.terraform.io/docs/backends/types/index.html&#xA;[10]: https://www.terraform.io/docs/backends/state.html&#xA;[11]: https://www.terraform.io/docs/backends/types/s3.html&#xA;[12]: https://www.centos.org/&#xA;[13]: https://selinuxproject.org/page/Main_Page&#xA;[14]: https://www.ansible.com/&#xA;[15]: https://www.openshift.com/&#xA;[16]: https://cri-o.io/&#xA;[17]: https://github.com/coreos/flannel&#xA;[18]: https://rook.io/&#xA;[19]: https://www.vaultproject.io/&#xA;[20]: https://coredns.io/&#xA;[21]: https://www.getambassador.io/&#xA;[22]: https://linkerd.io/&#xA;[23]: https://www.envoyproxy.io/&#xA;[24]: https://goharbor.io/&#xA;[25]: https://coreos.com/clair/docs/latest/&#xA;[26]: https://github.com/theupdateframework/notary&#xA;[27]: https://helm.sh/&#xA;[28]: https://github.com/etcd-io&#xA;[29]: https://argoproj.github.io/&#xA;[30]: https://prometheus.io/&#xA;[31]: https://grafana.com/oss/loki/]]&gt;</description>
      <content:encoded><![CDATA[<p>Let&#39;s say you wanted to write your own application platform from scratch in the modern workflow.</p>



<p>Not too long ago Kelsey Hightower posted a tweet about what you may require to run an <a href="https://twitter.com/kelseyhightower/status/1245886920443363329">application platform</a>. While his list may look large, it&#39;s not all-encompassing. It&#39;s just an example of how ridiculous our abstractions have gotten. Even taking one or two services in each category on the <a href="https://landscape.cncf.io/">landscape.cncf.io page</a> would not give the full list of dependencies. So I started thinking. What would a fully encompassing list of the &#39;recommended stack&#39; look like?</p>

<p>I&#39;ve been running a few posts now on <a href="https://baez.link/tag:StrikingABalance">striking a balance</a> between what you do to run your software in a sustainable and manageable way. This post is <strong>not</strong> that. The design here is the A-Z stack. A platform of everything I could think of, off the top of my head, that you may need for a modern application platform. Please don&#39;t try to implement the stack here at work. If you already have, I&#39;m sorry and I know your pain. There&#39;s consoling we can probably get for our insanity.</p>

<p>First, you need a cloud provider. If we want to be realistic, this would certainly be AWS. A high chance you and your team are already AWS Architects with decades of experience collectively.</p>

<p>Next, you need infrastructure as code software. The list can get quite large here on implementation. For the purpose of the A-Z stack list, let&#39;s just say you use Hashicorp&#39;s <a href="https://www.terraform.io/">Terraform</a>. With a team of <code>N &gt; 1</code>, you most certainly will be working on the terraform code together. In other words, you need two parts to make terraform work properly here. One, you must be using version control. The de facto options are <a href="https://git-scm.com/">git</a> and <a href="https://github.com/">github</a>. The second part is you need a <a href="https://www.terraform.io/docs/backends/state.html">backend that&#39;s not local</a> for your <a href="https://www.terraform.io/docs/backends/types/index.html">terraform state</a>. Since we&#39;re using AWS, might as well use <a href="https://www.terraform.io/docs/backends/types/s3.html">S3 with DynamoDB</a> for state locking.</p>
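<p>That backend is a few lines of configuration. A sketch, with hypothetical bucket and table names:</p>

<pre><code class="language-hcl"># backend.tf -- remote state in S3, locked through DynamoDB
terraform {
  backend "s3" {
    bucket         = "my-team-tfstate"   # hypothetical
    key            = "platform/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"   # hypothetical
    encrypt        = true
  }
}
</code></pre>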

<p>With your cloud provider and IaC ready, the next section is the OS. You should be running Linux, let&#39;s say <a href="https://www.centos.org/">CentOS</a>. Make sure you have SELinux enabled and actually enforcing here. While the trend is to disable it, don&#39;t. There are too many reasons to count why not. If you&#39;re doing it to be fast, you are hurting yourself and your company far more later down the road. Beyond the OS security, you need something to actually provision the Linux of choice. Let&#39;s stay with Red Hat and go with <a href="https://www.ansible.com/">Ansible</a>. The provisioning alone can be its own series of lists of things you need, but we&#39;ll just keep it as is.</p>
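<p>A hedged sketch of that provisioning step as a minimal playbook (host group, package choices, and file name are all hypothetical):</p>

<pre><code class="language-yaml"># provision.yml -- keep SELinux enforcing while laying the baseline
- hosts: all
  become: true
  tasks:
    - name: Keep SELinux enabled and enforcing
      ansible.posix.selinux:
        policy: targeted
        state: enforcing

    - name: Install baseline packages
      ansible.builtin.dnf:
        name: [chrony, firewalld]
        state: present
</code></pre>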

<p>Ok, here we go with the container orchestrator. Sticking to the norm, and the standard being Red Hat, let&#39;s say <a href="https://www.openshift.com/">Openshift</a>. Openshift comes with a lot of batteries included, so the list would be much larger with what isn&#39;t set by default. However, for peace of mind, I&#39;m bringing up <a href="https://cri-o.io/">cri-o</a> for CRI, <a href="https://github.com/coreos/flannel">Flannel</a> for CNI, and <a href="https://coredns.io/">CoreDNS</a> for service discovery. CSI here probably means <a href="https://rook.io/">Ceph with Rook</a>. Secret management is a must and should be <a href="https://www.vaultproject.io/">Vault</a>.</p>

<p>Now comes networking. We already have <a href="https://github.com/coreos/flannel">Flannel</a> for the CNI and <a href="https://coredns.io/">CoreDNS</a> for service discovery, but you still need an ingress controller. The popular one here would probably be <a href="https://www.getambassador.io/">Ambassador</a>. Don&#39;t forget the services in this platform need to be able to communicate with one another, which means a service mesh is necessary. Here I&#39;m going to cheat and use <a href="https://linkerd.io/">Linkerd</a>. By cheat, I mean it covers both the service proxy sidecar and the control plane; otherwise we would most certainly have to add <a href="https://www.envoyproxy.io/">Envoy</a>. However, if your service has proxy requirements not covered by the Linkerd sidecar, then Envoy is still required.</p>
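<p>For a sense of how the ingress piece fits, here is a sketch of an Ambassador <code>Mapping</code> routing a URL prefix to a hypothetical in-cluster Service (the name, prefix, and port are made up for illustration):</p>

```yaml
# Sketch: route requests under /api/ through Ambassador
# to a hypothetical backend Service on port 8080.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: api-mapping
spec:
  prefix: /api/
  service: api-backend:8080
```

<p>On the mesh side, Linkerd joins that backend to the mesh once its pod template carries the <code>linkerd.io/inject: enabled</code> annotation, which is what adds the proxy sidecar.</p>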

<p>Here comes the home stretch: the actual application requirements. First is the container registry, <a href="https://goharbor.io/">Harbor</a>. With it come <a href="https://github.com/theupdateframework/notary">Notary</a> and <a href="https://coreos.com/clair/docs/latest/">Clair</a> for image signing and vulnerability scanning. Next, you need deployments to Kubernetes done with something structured like <a href="https://helm.sh/">Helm</a>. If not already brought up before, you need a key value store like <a href="https://github.com/etcd-io">etcd</a>. You also need a CI/CD solution for your applications; staying with the trends, <a href="https://argoproj.github.io/">Argo</a> would suffice. Don&#39;t forget to monitor your application and platform. That most likely means <a href="https://prometheus.io/">Prometheus</a> for metrics, <a href="https://grafana.com/oss/loki/">Loki</a> for logs, and <a href="https://www.jaegertracing.io/">Jaeger</a> for tracing.</p>
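<p>Tying the deployment pieces together, a hedged sketch of an Argo CD <code>Application</code> that continuously syncs a Helm chart from Git might look like this (the repository URL, chart path, and namespaces are all placeholders):</p>

```yaml
# Sketch: GitOps deployment of a Helm chart via Argo CD.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/platform-charts.git
    targetRevision: HEAD
    path: charts/my-app          # Helm chart directory in the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true                # remove resources deleted from Git
```

<p>The point of this shape is that a merged pull request, not a person at a terminal, is what changes the cluster.</p>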

<p>Now you can finally begin to write your application. Just so the point is made clear, here&#39;s the list of everything you must know, nearly as well as the creators of the software themselves, to get this kind of modern-day application platform working:</p>
<ol><li>Cloud Provider : AWS</li>
<li>IaC : <a href="https://www.terraform.io/">Terraform</a></li>
<li>Version Control: <a href="https://git-scm.com/">Git</a></li>
<li>Version Control Host: <a href="https://github.com/">GitHub</a></li>
<li>Terraform State Backend: <a href="https://www.terraform.io/docs/backends/types/s3.html">S3 with DynamoDB state locking</a></li>
<li>Linux Distribution: <a href="https://www.centos.org/">CentOS</a></li>
<li><a href="https://selinuxproject.org/page/Main_Page">SELinux enabled and enforcing</a></li>
<li>Linux Provisioner: <a href="https://www.ansible.com/">Ansible</a></li>
<li>Container Orchestrator: Kubernetes via <a href="https://www.openshift.com/">OpenShift</a></li>
<li>CRI: <a href="https://cri-o.io/">CRI-O</a></li>
<li>CNI: <a href="https://github.com/coreos/flannel">Flannel</a></li>
<li>Service Discovery: <a href="https://coredns.io/">CoreDNS</a></li>
<li>CSI: <a href="https://rook.io/">Ceph with Rook</a></li>
<li>Secret Management: <a href="https://www.vaultproject.io/">Vault</a></li>
<li>Ingress Controller: <a href="https://www.getambassador.io/">Ambassador</a></li>
<li>Service Mesh: <a href="https://linkerd.io/">Linkerd</a></li>
<li>Service Proxy: <a href="https://www.envoyproxy.io/">Envoy</a></li>
<li>Container Registry: <a href="https://goharbor.io/">Harbor</a></li>
<li>Image Signing: <a href="https://github.com/theupdateframework/notary">Notary</a></li>
<li>Image Vulnerability Scanning: <a href="https://coreos.com/clair/docs/latest/">Clair</a></li>
<li>Kubernetes deployment: <a href="https://helm.sh/">Helm</a></li>
<li>Key Value store: <a href="https://github.com/etcd-io">etcd</a></li>
<li>Continuous Integration and Delivery: <a href="https://argoproj.github.io/">Argo</a></li>
<li>Metrics Observability: <a href="https://prometheus.io/">Prometheus</a></li>
<li>Logs Observability: <a href="https://grafana.com/oss/loki/">Loki</a></li>
<li>Tracing Observability: <a href="https://www.jaegertracing.io/">Jaeger</a></li></ol>

<p><a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:Day12" class="hashtag"><span>#</span><span class="p-category">Day12</span></a> <a href="https://baez.link/tag:Kubernetes" class="hashtag"><span>#</span><span class="p-category">Kubernetes</span></a> <a href="https://baez.link/tag:PaaS" class="hashtag"><span>#</span><span class="p-category">PaaS</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/the-a-z-stack</guid>
      <pubDate>Sun, 10 May 2020 04:14:04 +0000</pubDate>
    </item>
  </channel>
</rss>