<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>StrikingABalance &#8212; A Bit</title>
    <link>https://baez.link/tag:StrikingABalance</link>
    <description>A little bit of writing by Alejandro </description>
    <pubDate>Sun, 03 May 2026 15:10:30 +0000</pubDate>
    <item>
      <title>The Ways Of the Past</title>
      <link>https://baez.link/the-ways-of-the-past?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[If you been living outside of the burn cycle in production, you may not know what is the fad of app containers and container orchestration. However, if you had, you may forget why we use them now. &#xA;&#xA;!--more-- &#xA;&#xA;In the following post, continuing the series of #StrikingABalance, we will explore how we would run a service, in legacy infrastructure. Brightening the shadow we&#39;ve made using app containers and the actual simplicity container orchestration brings.&#xA;&#xA;Container orchestrations are an answer to a problem created from the technological introduction of app containers. App containers are an excellent form of making reproducible artifacts. However, with its use, a requirement for something to run these new artifacts arises. While running locally is quite trivial. Especially with tools like docker-compose,  running a container image in any form on the cloud will most certainly not be. The reason is fairly easy to describe, but not so easy to implement. You need a way to be able to make your deployment ephemeral and idempotent.  &#xA;&#xA;Let&#39;s take the approach of running an app container without a container orchestration. &#xA;&#xA;The first you have to account for is how your app container is going to run. Let&#39;s say you will use a VM instance to run the container. Yet, before you can even answer how you will set up that instance, you need to figure out what is going to run the container image consistently. The easiest approach here probably be to use an init daemon. A simple systemd service unit file to keep the app container running can suffice. Allowing you to retrieve logs and status of the service, fairly quickly for the app container&#39;s runtime. &#xA;&#xA;Now, back to the how of the app container&#39;s runtime will function. You now need to provision the VM instance before you can reach the stage of running the app container on systemd.  A simple BASH script could work here, but remember the end goal here is for something that&#39;s idempotent and ephemeral. If your VM instance shuts down, you need a way to get back the setup you had prior exactly as it was before. Or if you introduce changes, you need a way to have proper configuration drift resolution. Writing an idempotent BASH script is non-trivial. Probably can make any grown man cry at the sight of its existence.&#xA;&#xA;Most certainly, the second complexity introduced is some configuration manager like Ansible, Chef, or Salt to cover the provisioning of the instance. Take note, once you&#39;ve completed your provision, you are now half way to your end goal of running an app container. The next stage here is now how are you going to retrieve the app container image for your runtime. The options can grow quite large. However, to keep it as simple, while skipping massive chunks of the implementation details, you can create a continuous deliver pipeline to run your configuration manager. &#xA;&#xA;The continuous delivery pipeline would run a configuration manager, which fetches your container image, applies the systemd unit file, and starts up the service. One of the requirements to make the systemd runtime work is you need to run a container registry and another VM instance running said container registry. You will also need a CI/CD service as hinted before, if you don&#39;t already have one. &#xA;&#xA;Lastly, you need to manage all of the VM instances you spun up for that one single app container you want to run on the cloud. 
You are now managing both an entire operating system to run that single app container and a fleet of VM instances to manage that app container runtime. The complexity doesn&#39;t stop there. You also need to track many other portions, like ssh privilege, security group isolation, system resource management, and service management. &#xA;&#xA;The past way worked when we required only a few instances to run for our services. It becomes completely unmanageable when you have full fleet you require to run. Container orchestration allows you to take all of the complexity described here and apply it to a single standard structure on how you define an app container to run. Allowing for better abstractions, but also keeping the level of complexity built prior at a hopeful minimum. &#xA;&#xA;#Day9 #100DaysToOffload #StrikingABalance &#xA;&#xA;[1]: https://baez.link/builds-and-sanity&#xA;[2]: https://docs.docker.com/compose/&#xA;[3]: https://www.mankier.com/5/systemd.unit&#xA;[4]: https://www.ansible.com/&#xA;[5]: https://www.saltstack.com/&#xA;[6]: https://en.wikipedia.org/wiki/Init&#xA;[7]: https://www.chef.io/&#xA;[8]: https://landscape.cncf.io/category=container-registry&amp;format=card-mode&amp;grouping=category&#xA;[9]: https://landscape.cncf.io/category=continuous-integration-delivery&amp;format=card-mode&amp;grouping=category&#xA;[10]: https://blog.newrelic.com/engineering/container-orchestration-explained/]]&gt;</description>
      <content:encoded><![CDATA[<p>If you have been living outside of the production burn cycle, you may not know about the fad of app containers and container orchestration. However, if you have been in it, you may have forgotten why we use them now.</p>

 

<p>In the following post, continuing the <a href="https://baez.link/tag:StrikingABalance" class="hashtag"><span>#</span><span class="p-category">StrikingABalance</span></a> series, we will explore how we would run a service on legacy infrastructure, shedding light on the shadow we&#39;ve cast with app containers and on the actual simplicity container orchestration brings.</p>

<p><a href="https://blog.newrelic.com/engineering/container-orchestration-explained/">Container orchestrations</a> are an answer to a problem created from the technological introduction of <a href="https://baez.link/builds-and-sanity">app containers</a>. App containers are an excellent form of making reproducible artifacts. However, with its use, a requirement for something to run these new artifacts arises. While running locally is quite trivial. Especially with tools like <a href="https://docs.docker.com/compose/">docker-compose</a>,  running a container image in any form on the cloud will most certainly not be. The reason is fairly easy to describe, but not so easy to implement. You need a way to be able to make your deployment ephemeral and idempotent.</p>

<p>Let&#39;s take the approach of running an app container <em>without</em> a container orchestrator.</p>

<p>The first thing you have to account for is <em>how</em> your app container is going to run. Let&#39;s say you will use a VM instance to run the container. Yet, before you can even answer how you will set up that instance, you need to figure out what is going to run the container image consistently. The easiest approach here is probably to use an <a href="https://en.wikipedia.org/wiki/Init">init daemon</a>. A <a href="https://www.mankier.com/5/systemd.unit">simple systemd service unit file</a> to keep the app container running can suffice, allowing you to retrieve the logs and status of the service fairly quickly for the app container&#39;s runtime.</p>
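
<p>As a sketch of that idea, a unit file along these lines would do it. The image name, registry, and service name are assumptions for illustration, not a recommendation.</p>

<pre><code># /etc/systemd/system/my-app.service (hypothetical example)
[Unit]
Description=Run the my-app container
After=docker.service
Requires=docker.service

[Service]
# Clean up any leftover container, then run the image in the foreground.
ExecStartPre=-/usr/bin/docker rm -f my-app
ExecStart=/usr/bin/docker run --rm --name my-app registry.example.com/my-app:1.0.0
ExecStop=/usr/bin/docker stop my-app
Restart=always

[Install]
WantedBy=multi-user.target
</code></pre>

<p><code>systemctl status my-app</code> and <code>journalctl -u my-app</code> then cover the status and log retrieval mentioned above.</p>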

<p>Now, back to how the app container&#39;s runtime will function. You need to provision the VM instance before you can reach the stage of running the app container under systemd. A simple Bash script could work here, but remember that the end goal is something idempotent and ephemeral. If your VM instance shuts down, you need a way to get the setup back exactly as it was before. And if you introduce changes, you need proper configuration drift resolution. Writing an idempotent Bash script is non-trivial, and the result can probably make any grown man cry at the sight of its existence.</p>

<p>Almost certainly, the second complexity introduced is a configuration manager like <a href="https://www.ansible.com/">Ansible</a>, <a href="https://www.chef.io/">Chef</a>, or <a href="https://www.saltstack.com/">Salt</a> to cover the provisioning of the instance. Take note: once you&#39;ve completed your provisioning, you are only halfway to your end goal of running an app container. The next stage is how you are going to retrieve the app container image for your runtime. The options can grow quite large. However, to keep it simple, while skipping massive chunks of the implementation details, you can create a continuous delivery pipeline that runs your configuration manager.</p>
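
<p>A provisioning step of that sort might look roughly like the following Ansible tasks. The package name, file paths, and host group are assumptions for the sketch and will differ per distribution and setup.</p>

<pre><code># provision-app-host.yml (hypothetical Ansible playbook)
- hosts: app_hosts
  become: true
  tasks:
    - name: Install the container runtime
      ansible.builtin.package:
        name: docker.io
        state: present

    - name: Install the systemd unit for the app container
      ansible.builtin.copy:
        src: files/my-app.service
        dest: /etc/systemd/system/my-app.service

    - name: Enable and start the service
      ansible.builtin.systemd:
        name: my-app
        enabled: true
        state: started
        daemon_reload: true
</code></pre>

<p>Because each task describes a desired state rather than a sequence of commands, rerunning the playbook resolves drift instead of compounding it, which is exactly what the hand-written Bash script struggles to do.</p>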

<p>The continuous delivery pipeline would run the configuration manager, which fetches your container image, applies the systemd unit file, and starts the service. One of the requirements to make the systemd runtime work is a <a href="https://landscape.cncf.io/category=container-registry&amp;format=card-mode&amp;grouping=category">container registry</a>, and with it another VM instance to run said container registry. You will also need a <a href="https://landscape.cncf.io/category=continuous-integration-delivery&amp;format=card-mode&amp;grouping=category">CI/CD service</a>, as hinted before, if you don&#39;t already have one.</p>
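
<p>Sketched as a generic pipeline definition, the delivery side could be as small as the following. The stage names, registry URL, and playbook are placeholders, and GitLab CI syntax is used here only as one arbitrary example of a CI/CD service.</p>

<pre><code># .gitlab-ci.yml (hypothetical delivery pipeline)
stages:
  - build
  - deploy

build-image:
  stage: build
  script:
    - docker build -t registry.example.com/my-app:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/my-app:$CI_COMMIT_SHORT_SHA

deploy-vm:
  stage: deploy
  script:
    - ansible-playbook -i inventory.ini provision-app-host.yml
</code></pre>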

<p>Lastly, you need to manage all of the VM instances you spun up for that one single app container you want to run on the cloud. You are now managing both an entire operating system to run that single app container and a fleet of VM instances to support that app container&#39;s runtime. The complexity doesn&#39;t stop there. You also need to <a href="https://baez.link/design-to-fail">track many other pieces</a>, like SSH privileges, security group isolation, system resource management, and service management.</p>

<p>The old way worked when we required only a few instances to run our services. It becomes completely unmanageable when you have a full fleet to run. Container orchestration lets you take all of the complexity described here and fold it into a single standard structure for defining how an app container runs, allowing for better abstractions while keeping the level of complexity built up before it at a hopeful minimum.</p>
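
<p>For comparison, that single standard structure under one orchestrator could be a Kubernetes Deployment like the sketch below; the names, image, and port are placeholders. It stands in for the unit file, the provisioning playbook, and most of the fleet management described above.</p>

<pre><code># my-app-deployment.yaml (hypothetical Kubernetes manifest)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                 # the orchestrator keeps two copies alive
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0
          ports:
            - containerPort: 8080
</code></pre>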

<p><a href="https://baez.link/tag:Day9" class="hashtag"><span>#</span><span class="p-category">Day9</span></a> <a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:StrikingABalance" class="hashtag"><span>#</span><span class="p-category">StrikingABalance</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/the-ways-of-the-past</guid>
      <pubDate>Tue, 05 May 2020 02:55:51 +0000</pubDate>
    </item>
    <item>
      <title>Design To Fail</title>
      <link>https://baez.link/design-to-fail?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Never build your software in the belief it will run correctly. It won&#39;t.&#xA;&#xA;!--more--&#xA;&#xA;Modern day software design always tries to make the software we run close to stable as possible. Deadlines and limitations the teams will always have make it next to impossible to ship something bug free. Most likely than not, a 1.0.0 release will be a 1.0.1 in the same week or even day the version is available. The infrastructure that runs said software is no different. &#xA;&#xA;In this post of #StrikingABalance, I&#39;ll be focusing on the work that goes into the infrastructure design and what can be done to make it more manageable. Infrastructure design should always be thought of as something you want to let crash. The whole idea of treat your services as as cattle rather than pets is with this precise notion. Letting things die, but having automation to bring it back alive. &#xA;&#xA;Designing to fail means designing your software and infrastructure to gracefully manage itself. Ever had the moment that you get a call at 2:00 AM,  because something is on fire on production? If you design your system to fail, that call would come by with the commonality of lightning striking twice.   &#xA;&#xA;So how do you go about in designing to fail? You could go with the route of always adding tests. Similar to unit and integration tests in software design, you add unit and integration tests to your infrastructure design. But the test then need to record what goes wrong. A way track events of your infrastructure&#39;s failures. Those tracked events would then trigger some automation tooling, you most likely have made, to do the response you want it to do for your infrastructure. &#xA;&#xA;At the same time, due to the almost certain countless of moving parts in your infrastructure, you probably cannot test the whole platform in a silo. It may simply take too long, cost too much, or just not be feasible with what you have available. Meaning, you need to learn how to let infrastructure fail in production. You end up having load balancers, canary deployments, roll back releases, injected load tests, structured automated service rerouting, event triggered automation processing, database vertical scaling, horizontal service scaling, and countless other practices and paradigms. &#xA;&#xA;No right way for how to design to fail exists. What you need above all else is to be comfortable with letting the infrastructure crash. Have a way to learn from those crashes. Then, make practices that can catch these crashes and recover before you have to be involved. So when those bugs creep up, you don&#39;t need to be ready, because your infrastructure is built to fail.     &#xA;&#xA;#100DaysToOffload #StrikingABalance #Day5]]&gt;</description>
      <content:encoded><![CDATA[<p>Never build your software in the belief it will run correctly. It won&#39;t.</p>



<p>Modern software design always tries to make the software we run as close to stable as possible. The deadlines and limitations teams will always have make it next to impossible to ship something bug-free. More likely than not, a <code>1.0.0</code> release will become a <code>1.0.1</code> within the same week, or even the same day, the version is available. The infrastructure that runs said software is no different.</p>

<p>In this post of <a href="https://baez.link/tag:StrikingABalance" class="hashtag"><span>#</span><span class="p-category">StrikingABalance</span></a>, I&#39;ll be focusing on the work that goes into infrastructure design and what can be done to make it more manageable. Infrastructure design should always be thought of as something you want to let crash. The whole idea of treating your services as cattle rather than pets comes from this precise notion: let things die, but have automation to bring them back to life.</p>

<p>Designing to fail means designing your software and infrastructure to gracefully manage themselves. Ever had the moment when you get a call at 2:00 AM because something is on fire in production? If you design your system to fail, that call comes around about as often as lightning striking twice.</p>

<p>So how do you go about designing to fail? You could take the route of always adding tests. Similar to unit and integration tests in software design, you add unit and integration tests to your infrastructure design. But the tests then need to record what goes wrong: a way to track the events of your infrastructure&#39;s failures. Those tracked events would then trigger some automation tooling, most likely something you have built yourself, to carry out the response you want for your infrastructure.</p>
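
<p>As one concrete flavor of that, here is a minimal Prometheus-style alerting rule; the job name, threshold, and monitoring stack are assumptions for illustration, not something this series prescribes.</p>

<pre><code># alerts.yml (hypothetical Prometheus alerting rule)
groups:
  - name: my-app
    rules:
      - alert: MyAppDown
        expr: up{job="my-app"} == 0    # the scrape target stopped answering
        for: 2m                        # only fire after two minutes of failure
        labels:
          severity: page
        annotations:
          summary: "my-app has been unreachable for 2 minutes"
</code></pre>

<p>The fired alert is the tracked event; whatever automation you have made, whether a webhook receiver, a runbook job, or a simple restart hook, owns the response from there.</p>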

<p>At the same time, due to the almost certainly countless moving parts in your infrastructure, you probably cannot test the whole platform in a silo. It may simply take too long, cost too much, or just not be feasible with what you have available. Meaning you need to learn how to let infrastructure fail in production. You end up with load balancers, canary deployments, release rollbacks, injected load tests, structured automated service rerouting, event-triggered automation processing, database vertical scaling, horizontal service scaling, and countless other practices and paradigms.</p>

<p>There is no single right way to design to fail. What you need above all else is to be comfortable with letting the infrastructure crash. Have a way to learn from those crashes. Then build practices that can catch those crashes and recover before you have to be involved. So when those bugs creep up, you don&#39;t need to be ready, because your infrastructure is built to fail.</p>

<p><a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:StrikingABalance" class="hashtag"><span>#</span><span class="p-category">StrikingABalance</span></a> <a href="https://baez.link/tag:Day5" class="hashtag"><span>#</span><span class="p-category">Day5</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/design-to-fail</guid>
      <pubDate>Thu, 30 Apr 2020 02:43:12 +0000</pubDate>
    </item>
    <item>
      <title>Builds And Sanity</title>
      <link>https://baez.link/builds-and-sanity?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[The whole tech world is wrapped on the notion we need repeatable builds. There&#39;s sound logic to it, but the how has always become contradictory to what are the needs. &#xD;&#xA;&#xD;&#xA;!--more--&#xD;&#xA;&#xD;&#xA;In the first of the #StrikingABalance series, I&#39;ll discuss over what is the most common ways to do a repeatable build. Then touch a little on what we all could do better, to do more with less.   &#xD;&#xA;&#xD;&#xA;It is almost a given, but the most common way of doing a repeatable build that should be laid out is none other than an app container. An app container in the following context is one described by the open container initiative (OCI) image-spec. I don&#39;t believe I need to explain what they are, with their commonality in the space thanks to Docker and Kubernetes. However, there is one thing I would like to point out in its implementation. With an app container, the way to actually build has stayed using practices of old. Essentially we make snapshots of a chroot environment with a set of bash scripts in the hopes of making our software run how we required it to run. It may sound like its reproducible and it can be, but the trend tends to be far from the goal. &#xD;&#xA;&#xD;&#xA;The interesting part of creating an app container image though, is in what you contain for your app container image. It&#39;s the tools you add to the container image, which will become the container you run. Ideally, you only add applications that are required to run your software and that&#39;s it. But in practice, rarely do individuals take into make an app container, an app container. Instead, trending to garnishing an app container to be no different than that of a system container like LXD. Not to say system containers don&#39;t have their place. They very much do and play infinitesimally more important role as we move further into the cloud and abstractions. But a system container, is not what an app container should be.  &#xD;&#xA; &#xD;&#xA;There&#39;s a reason why members of the world are still reluctant to jump on doing everything in app containers. The benefit don&#39;t necessarily resolve the problem of making a repeatable and isolated build all the time. In many cases, an app container ends up becoming more complex than that of using the works of package management technologies like DEB and RPMs.  &#xD;&#xA;&#xD;&#xA;An app container should only run and contain the software you require it to run and that&#39;s it. Best case scenario is that you build an app container in a way that the source and its dependencies are the source of truth for the app container. A way to describe the app container&#39;s snapshots in a way that are completely identified by the source of the software you are trying to run. There is a trend for this, and it&#39;s definitely growing. &#xD;&#xA;&#xD;&#xA;In the past couple of years, strides of experimentation have been made to get us to a place where the source is the truth for how an app container is built and what it runs within that container. Two projects that have works doing so are Nix and Habitat. Both projects approach building an app container similarly. In essence, making an app container by assigning an artifact from builds, which are instrumented using a source dependencies for said software. In doing so, an app container&#39;s image, holds only the source and libraries required to run the software, while also using extremely advance dependency tree mapping available from a full package manager.    
&#xD;&#xA;&#xD;&#xA;We still have time to grow on creating app container images. if used properly we can have our abstractions for builds to be manageable. So we can focus on what matters, the code we want to run. &#xD;&#xA;&#xD;&#xA;&#xD;&#xA;[1]: https://www.docker.com/resources/what-container&#xD;&#xA;[2]: https://nixos.org/nix/&#xD;&#xA;[3]: https://www.habitat.sh/docs/&#xD;&#xA;[4]: https://github.com/opencontainers/image-spec/blob/master/spec.md&#xD;&#xA;[5]: https://en.wikipedia.org/wiki/Chroot&#xD;&#xA;[6]: https://www.docker.com/&#xD;&#xA;[7]: https://kubernetes.io/&#xD;&#xA;&#xD;&#xA;#100DaysToOffload #Day4 #StrikingABalance&#xD;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>The whole tech world is wrapped up in the notion that we need repeatable builds. There&#39;s sound logic to it, but the how has always ended up contradicting the actual needs.</p>



<p>In the first post of the <a href="https://baez.link/tag:StrikingABalance" class="hashtag"><span>#</span><span class="p-category">StrikingABalance</span></a> series, I&#39;ll discuss the most common way to do a repeatable build, then touch a little on what we could all do better to do more with less.</p>

<p>It is almost a given, but the most common way of doing a repeatable build worth laying out is none other than an <a href="https://www.docker.com/resources/what-container">app container</a>. An app container in the following context is one described by the <a href="https://github.com/opencontainers/image-spec/blob/master/spec.md">Open Container Initiative (OCI) image-spec</a>. I don&#39;t believe I need to explain what they are, given how common they have become thanks to <a href="https://www.docker.com/">Docker</a> and <a href="https://kubernetes.io/">Kubernetes</a>. However, there is one thing I would like to point out about their implementation. With an app container, the way we actually build has stuck with the practices of old. Essentially, we make snapshots of a <a href="https://en.wikipedia.org/wiki/Chroot">chroot</a> environment with a set of Bash scripts in the hope of making our software run how we require it to run. It may sound reproducible, and it can be, but the trend tends to be far from that goal.</p>
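
<p>A sketch of that pattern is below, with a hypothetical base image and package list. Each instruction is effectively a shell step snapshotted into a layer, and nothing ties the result back to the exact source it serves.</p>

<pre><code># Dockerfile (hypothetical example of the "snapshot a chroot" style of build)
FROM ubuntu:20.04

# Each RUN is a shell script layered onto the image; apt pulls whatever
# versions exist at build time, so two builds rarely come out identical.
RUN apt-get update &amp;&amp; apt-get install -y python3 python3-pip

COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt

CMD ["python3", "app.py"]
</code></pre>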

<p>The interesting part of creating an app container image, though, is what you put into it. It is the tools you add to the container image that become the container you run. Ideally, you only add the applications required to run your software and nothing else. But in practice, individuals rarely take the care to make an app container an app container. Instead, they tend to garnish an app container until it is no different from a <a href="https://www.docker.com/">system container like LXD</a>. That is not to say system containers don&#39;t have their place. They very much do, and they play an ever more important role as we move further into the cloud and its abstractions. But a system container is not what an app container should be.</p>

<p>There&#39;s a reason why much of the world is still reluctant to jump into doing everything in app containers. The benefits don&#39;t necessarily solve the problem of making a repeatable and isolated build every time. In many cases, an app container ends up more complex than using established package management technologies like DEBs and RPMs.</p>

<p>An app container should only run and contain the software you require it to run, and that&#39;s it. The best-case scenario is building an app container in a way where the source and its dependencies are the source of truth for the app container: a way to describe the app container&#39;s snapshots such that they are completely identified by the source of the software you are trying to run. There is a trend toward this, and it&#39;s definitely growing.</p>

<p>In the past couple of years, strides of experimentation have been made to get us to a place where the source is the truth for how an app container is built and what runs within that container. Two projects doing this work are <a href="https://nixos.org/nix/">Nix</a> and <a href="https://www.habitat.sh/docs/">Habitat</a>. Both approach building an app container similarly: in essence, they make an app container by assembling an artifact from builds that are instrumented using the source dependencies of said software. In doing so, the app container&#39;s image holds only the software and libraries required to run it, while also using the extremely advanced dependency-tree mapping available from a full package manager.</p>
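
<p>As a rough sketch of the Nix side, an image can be declared directly from the package graph rather than from shell snapshots. The attribute names follow nixpkgs&#39; dockerTools, but the package, names, and exact attributes are placeholders and vary between nixpkgs versions.</p>

<pre><code># image.nix (hypothetical sketch using nixpkgs dockerTools)
{ pkgs ? import &lt;nixpkgs&gt; {} }:

pkgs.dockerTools.buildImage {
  name = "my-app";
  tag = "1.0.0";

  # Only the package and its closure of runtime dependencies land in the
  # image: no base distribution, no package manager, no shell scripts.
  copyToRoot = pkgs.buildEnv {
    name = "my-app-env";
    paths = [ pkgs.hello ];   # placeholder: swap in the derivation for your app
  };

  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
</code></pre>

<p>Because the image is a function of its inputs, the hash of those inputs identifies the result, which is the source-as-the-source-of-truth property described above.</p>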

<p>We still have room to grow in how we create app container images. If used properly, our build abstractions can stay manageable, so we can focus on what matters: the code we want to run.</p>

<p><a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:Day4" class="hashtag"><span>#</span><span class="p-category">Day4</span></a> <a href="https://baez.link/tag:StrikingABalance" class="hashtag"><span>#</span><span class="p-category">StrikingABalance</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/builds-and-sanity</guid>
      <pubDate>Wed, 29 Apr 2020 04:06:08 +0000</pubDate>
    </item>
    <item>
      <title>Striking A Balance</title>
      <link>https://baez.link/striking-a-balance?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Writing software is hard. Packaging software is harder. Running said software is hardest.&#xA;&#xA;We spend quite a bit of time developing software to run software.&#xA;&#xA;!--more--&#xA;It is is not to say we can&#39;t run software proficiently. There&#39;s a reason why a trillion dollar company like Alphabet spent time writing container orchestration tools like Borg and Kubernetes. But the level of complexity that we adhere to make an abstraction for something we want repeatable and automated, can get ridiculous. &#xA;&#xA;I spend my days writing code to run code. The effort is non-trivial and at times can feel quite daunting. You spend hours or even days optimizing a set of abstractions just to keep things just slightly more sane than the last set of changes. However the practice can also be grandly satisfying. When you do get those optimizations in place, your software tends to become more performant and more stable. Meaning you get to sleep more and enjoy life a little bit more. Always looking for ways to lessen the workload by decreasing complexity, but also increasing abstractions.&#xA;&#xA;I think a strong a balance on what need to run for your software and tooling you write to run that software. With that said, will write a small series focusing mostly on  software development builds and their runtime. &#xA;&#xA;#100DaysToOffload #Day3 #StrikingABalance&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>Writing software is hard. Packaging software is harder. Running said software is hardest.</p>

<p>We spend quite a bit of time developing software to run software.</p>



<p>This is not to say we can&#39;t run software proficiently. There&#39;s a reason a trillion-dollar company like Alphabet spent time writing container orchestration tools like Borg and Kubernetes. But the level of complexity we take on to build an abstraction for something we want repeatable and automated can get ridiculous.</p>

<p>I spend my days writing code to run code. The effort is non-trivial and at times can feel quite daunting. You spend hours or even days optimizing a set of abstractions just to keep things slightly more sane than the last set of changes. However, the practice can also be grandly satisfying. When you do get those optimizations in place, your software tends to become more performant and more stable, meaning you get to sleep more and enjoy life a little bit more. You are always looking for ways to lessen the workload by decreasing complexity while increasing abstraction.</p>

<p>I think there is a balance to strike between what needs to run for your software and the tooling you write to run that software. With that said, I will write a small series focusing mostly on software builds and their runtime.</p>

<p><a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:Day3" class="hashtag"><span>#</span><span class="p-category">Day3</span></a> <a href="https://baez.link/tag:StrikingABalance" class="hashtag"><span>#</span><span class="p-category">StrikingABalance</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/striking-a-balance</guid>
      <pubDate>Tue, 28 Apr 2020 03:45:41 +0000</pubDate>
    </item>
  </channel>
</rss>