<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>100DaysToOffload &#8212; A Bit</title>
    <link>https://baez.link/tag:100DaysToOffload</link>
    <description>A little bit of writing by Alejandro </description>
    <pubDate>Thu, 16 Apr 2026 16:10:01 +0000</pubDate>
    <item>
      <title>Write Once for Web Assembly, Run On Everything</title>
      <link>https://baez.link/write-once-for-web-assembly-run-on-everything?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[If you ever heard the phrase &#39;write once and run everywhere&#39; then you know there&#39;s definitely difficulties in writing for multi architectures. Web Assembly may be the actual reality of running everywhere with one target. &#xA;&#xA;!--more--&#xA;&#xA;Anyone who has written software for multiple architectures know it is never plainly, &#34;compile for your target and it will just work.&#34; The reality is, there are always some exception you must take account for. It makes sense why. Any difference in architecture means entirely different ISA or even revision. Which translates to different rules for memory, instruction sets available, operational cost analyst, and much more. So taking your C code and just compiling it for AArch64 wouldn&#39;t cut it.  &#xA;&#xA;The same can be said for a process virtual machine used in Java. In theory, using the virtual machine meant you would target only the virtual machine and not have have to worry about the architecture the virtual machine would run under. However, in practice it has not been the case. Almost always, you still have to be aware of the architecture you are targeting due to bound operational differences like the list given prior.   &#xA;&#xA;This is where Web Assembly (wasm) is starting to shine. Wasm initially started as a way to run bytecode faster on the browser. Allowing for running heavier logic not possible with ECMAScript. To accomplish this, wasm was set up to use a process virtual machine like the JVM, but that targets a very specific ISA. So the VM can execute the ISA differently, but still requires to be able to run without modification. This results in the bytecode having exactly the same assembly definition regardless of physical architecture. Which means the VM can execute the bytecode on how it would be better optimized for the architecture. Giving way to focusing on only writing for a single target and only optimizing for that target. 
Leaving the VM that will run the wasm binary to take care of the heavy lifting as it should.&#xA;&#xA;Right now, the prospect of wasm use in running on everything is not there just yet. It was only December 2019 the wasm specification was even agreed upon by W3C. Yet, you can see the ability starting to creep up. Projects like WASI from Mozilla for portability and security, Krustlet for kubelet running wasm instead of containers on kubernetes, and Cloudflare&#39;s Workers for running on the edge with GEO distribution using wasm. &#xA;&#xA;I&#39;m excited to see web assembly&#39;s potential. Its progress may result in a saner stack. Maybe even only requiring one runtime and nothing else, web assembly. &#xA;&#xA;#Day14 #100DaysToOffload #Wasm #WebAssembly&#xA;&#xA;[1]: https://en.wikipedia.org/wiki/Instructionsetarchitecture&#xA;[2]: https://en.wikipedia.org/wiki/Virtualmachine#Processvirtual_machines&#xA;[4]: https://en.wikichip.org/wiki/arm/aarch64&#xA;[5]: https://webassembly.org/&#xA;[6]: https://en.wikipedia.org/wiki/ECMAScript&#xA;[7]: https://www.w3.org/TR/wasm-core-1/&#xA;[8]: https://wasi.dev/&#xA;[9]: https://github.com/deislabs/krustlet]]&gt;</description>
      <content:encoded><![CDATA[<p>If you have ever heard the phrase &#39;write once, run everywhere,&#39; then you know there are real difficulties in writing for multiple architectures. WebAssembly may be the actual realization of running everywhere with a single target.</p>



<p>Anyone who has written software for multiple architectures knows it is <strong>never</strong> simply a matter of “compile for your target and it will just work.” The reality is, there are always exceptions you must account for. It makes sense why: any difference in architecture means an entirely different <a href="https://en.wikipedia.org/wiki/Instruction_set_architecture">ISA</a>, or even a different revision of one. That translates to different rules for memory, different available instructions, different operational cost analyses, and much more. So taking your C code and just compiling it for <a href="https://en.wikichip.org/wiki/arm/aarch64">AArch64</a> won&#39;t cut it.</p>

<p>The same can be said for the <a href="https://en.wikipedia.org/wiki/Virtual_machine#Process_virtual_machines">process virtual machine</a> used by Java. In theory, using the virtual machine meant you would target only the virtual machine and not have to worry about the architecture the virtual machine runs on. In practice, however, that has not been the case. Almost always, you still have to be aware of the architecture you are targeting because of operational differences like those listed above.</p>

<p>This is where <a href="https://webassembly.org/">WebAssembly</a> (wasm) is starting to shine. Wasm started as a way to run bytecode faster in the browser, allowing heavier logic than is practical in <a href="https://en.wikipedia.org/wiki/ECMAScript">ECMAScript</a>. To accomplish this, wasm uses a process virtual machine like the JVM, but one that targets a single, precisely specified ISA. The VM may execute that ISA however it likes, but the bytecode must run without modification. The result is that a module has exactly the same assembly definition regardless of the physical architecture, and the VM is free to execute it however is best optimized for the hardware underneath. You focus on writing and optimizing for a single target, leaving the VM that runs the wasm binary to do the heavy lifting, as it should.</p>
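<p>To make the single-target idea concrete, here is a tiny module in WebAssembly&#39;s standard text format (a sketch of my own for illustration, not taken from any project mentioned here). The same compiled bytes run unmodified on any spec-compliant runtime, inside or outside the browser:</p>

```wat
(module
  ;; export one function: add(a, b) -> a + b on 32-bit integers
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
```

<p>A runtime such as a browser engine or wasmtime translates those same instructions into whatever native code suits the host CPU.</p>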

<p>Right now, the prospect of wasm running on everything is not quite there yet. It was only in December 2019 that the <a href="https://www.w3.org/TR/wasm-core-1/">wasm specification</a> became a W3C recommendation. Yet you can already see the capability creeping up, in projects like <a href="https://wasi.dev/">WASI</a> from Mozilla for portability and security, <a href="https://github.com/deislabs/krustlet">Krustlet</a>, a kubelet that runs wasm instead of containers on Kubernetes, and <a href="https://workers.cloudflare.com/">Cloudflare&#39;s Workers</a> for running geo-distributed code on the edge with wasm.</p>

<p>I&#39;m excited to see WebAssembly&#39;s potential. Its progress may result in a saner stack. Maybe even one requiring only a single runtime and nothing else: WebAssembly.</p>

<p><a href="https://baez.link/tag:Day14" class="hashtag"><span>#</span><span class="p-category">Day14</span></a> <a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:Wasm" class="hashtag"><span>#</span><span class="p-category">Wasm</span></a> <a href="https://baez.link/tag:WebAssembly" class="hashtag"><span>#</span><span class="p-category">WebAssembly</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/write-once-for-web-assembly-run-on-everything</guid>
      <pubDate>Sun, 17 May 2020 03:56:40 +0000</pubDate>
    </item>
    <item>
      <title>Potential of Infrastructure as Code Without Boilerplate</title>
      <link>https://baez.link/potential-of-infrastructure-as-code-without-boilerplate?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Ever had a case where you look at your infrastructure as code work and think, &#34;why was I crazy enough to try to automate with this junk?&#34;  &#xA;&#xA;!--more--&#xA;&#xA;If you haven&#39;t, then you haven&#39;t had a codebase at the multi thousand lines of boilerplate. You just naturally end up writing so much logic gates, just to get around the limitations of what you are using. This is true in software development, but it is especially true in a Domain Specific Language (DSL) for Infrastructure as Code (IaC). My setup has always been to use the declarative nature of Hashicorp&#39;s Terraform and AWS Cloudformation for their respective jobs. The IaC tools have worked great. I&#39;ve been a heavy user of both. But it can get pretty ridiculous how much you have to do to get one single resource up and running (correctly anyway). &#xA;&#xA;Moreover, if you try to do the same with a provisioner or ochestrator, then you are definitely in for multiples levels of hell. Not to say you can&#39;t go the route of using Kubernetes for your IaC implementation or Ansible in a declarative fashion. The problem is, you end up writing too much boilerplate before you ever get to what you wanted to do. Your goal ends being forever further away from your intentions.&#xA;&#xA;What terraform and cloudformation get right is you need a full set of primitives to your infrastructure. It makes declarative state the goal of what you want, rather than the how you get to that state. If you try to use libraries for cloud providers directly, like boto or godo, with a programming language of choice, you end essentially building an entire in-house IaC provisioner. Requiring no difference in the level of boilerplate you write. Probably more so as you now need code to define basic primitives in a declarative fashion before you can create what you want.&#xA;&#xA;However, using a generic programming language for IaC can have some strong upsides. 
You can get precise logic and structures that are very well defined to what you need the infrastructure to be. You can also do some classical test driven development to better define your logic. So in the past few months, I have been thinking on how to resolve the problems of having to constantly write so much boilerplate, make maintenance more manageable, and have better abstractions for core primitives I want to create on a cloud provider. Instead of trying to do this again with cloudformation and terraform, I have begun working with pulumi and AWS CDK for the ambitious goal. &#xA;&#xA;I&#39;m still early in my venture, but so far I have learned both provide much simpler definitions for resources to create. The boilerplate is extremely minimal as the tools are designed for you to create modules or packages that you extend for your needs. With both, my codebases have gone done considerably. Making maintenance actually feasible. I&#39;m still discovering of their usage. Still, I&#39;m really liking the ability to use a full generic programming language to do everything I need to in my IaC.   &#xA;&#xA;#100DaysToOffload #Day13 #Infrastructure #IaC #Declarative&#xA;&#xA;[1]: https://www.terraform.io/&#xA;[2]: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html&#xA;[3]: https://en.wikipedia.org/wiki/Domain-specificlanguage&#xA;[4]: https://github.com/digitalocean/godo&#xA;[5]: https://github.com/boto/boto3&#xA;[6]: https://www.pulumi.com/&#xA;[7]: https://docs.aws.amazon.com/cdk/latest/guide/home.html&#xA;[8]: https://kops.sigs.k8s.io/&#xA;[9]: https://www.ansible.com/]]&gt;</description>
      <content:encoded><![CDATA[<p>Ever had a case where you look at your infrastructure as code work and think, “why was I crazy enough to try to automate with this junk?”</p>



<p>If you haven&#39;t, then you haven&#39;t had a codebase with multiple thousands of lines of boilerplate. You naturally end up writing so many logic gates just to get around the limitations of what you are using. This is true in software development generally, but it is especially true in a <a href="https://en.wikipedia.org/wiki/Domain-specific_language">Domain Specific Language</a> (DSL) for Infrastructure as Code (IaC). My setup has always been to use the declarative nature of <a href="https://www.terraform.io/">Hashicorp&#39;s Terraform</a> and <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html">AWS CloudFormation</a> for their respective jobs. These IaC tools have worked great. I&#39;ve been a heavy user of both. But it can get pretty ridiculous how much you have to do to get one single resource up and running (correctly, anyway).</p>

<p>Moreover, if you try to do the same with a provisioner or orchestrator, then you are definitely in for multiple levels of hell. Not to say you can&#39;t go the route of using <a href="https://kops.sigs.k8s.io/">Kubernetes for your IaC</a> implementation or <a href="https://www.ansible.com/">Ansible</a> in a declarative fashion. The problem is, you end up writing too much boilerplate before you ever get to what you wanted to do. Your goal ends up forever further away from your intentions.</p>

<p>What Terraform and CloudFormation get right is providing a full set of primitives for your infrastructure. They make the declarative state, the what you want, the goal, rather than the how you get to that state. If you try to use cloud provider libraries directly, like <a href="https://github.com/boto/boto3">boto</a> or <a href="https://github.com/digitalocean/godo">godo</a>, with a programming language of your choice, you end up essentially building an entire in-house IaC provisioner. That requires no less boilerplate than before. Probably more, since you now need code to define basic primitives in a declarative fashion before you can create what you want.</p>
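<p>The declarative model those tools implement can be sketched in a few lines of plain Python. This is my own toy illustration, not any real tool&#39;s API: you state <em>what</em> should exist, and an engine diffs it against current state to derive the <em>how</em>.</p>

```python
# Toy illustration of declarative IaC: the user declares the desired set of
# resources; the engine computes the actions by diffing desired vs. current.
def plan(current: set, desired: set) -> dict:
    """Return the actions needed to move `current` to `desired`."""
    return {
        "create": sorted(desired - current),
        "delete": sorted(current - desired),
    }

actions = plan(
    current={"vpc", "legacy-bucket"},
    desired={"vpc", "bucket", "database"},
)
print(actions)
# {'create': ['bucket', 'database'], 'delete': ['legacy-bucket']}
```

<p>Real provisioners add dependency ordering, updates, and drift detection on top, but the diff-against-desired-state core is the same.</p>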

<p>However, using a generic programming language for IaC can have some strong upsides. You get precise logic and structures that are well defined to what you need the infrastructure to be. You can also do some classical <a href="https://en.wikipedia.org/wiki/Test-driven_development">test driven development</a> to better define your logic. So in the past few months, I have been thinking about how to resolve the problems of constantly writing so much boilerplate, make maintenance more manageable, and have better abstractions for the core primitives I want to create on a cloud provider. Instead of trying to do this again with CloudFormation and Terraform, I have begun working with <a href="https://www.pulumi.com/">Pulumi</a> and the <a href="https://docs.aws.amazon.com/cdk/latest/guide/home.html">AWS CDK</a> toward that ambitious goal.</p>

<p>I&#39;m still early in my venture, but so far I have learned that both provide much simpler definitions for the resources you create. The boilerplate is extremely minimal, as the tools are designed for you to create modules or packages that you extend for your needs. With both, my codebases have gone down considerably, making maintenance actually feasible. I&#39;m still discovering their usage patterns. Still, I&#39;m really liking the ability to use a full generic programming language to do everything I need to in my IaC.</p>
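<p>The extend-a-module pattern can be sketched in plain Python. To be clear, every name here is hypothetical, none of this is real Pulumi or CDK API; it only shows the shape: define a resource abstraction once with your defaults, then extend it instead of repeating boilerplate.</p>

```python
# Hypothetical sketch of the "extend a module" pattern that Pulumi and the
# AWS CDK encourage -- these classes are illustrative, not real library API.
class Bucket:
    """A bare storage-bucket definition with every knob explicit."""
    def __init__(self, name: str, versioned: bool = False, encrypted: bool = False):
        self.name = name
        self.versioned = versioned
        self.encrypted = encrypted

class CompliantBucket(Bucket):
    """Org-standard bucket: versioning and encryption on, no per-call boilerplate."""
    def __init__(self, name: str):
        super().__init__(name, versioned=True, encrypted=True)

logs = CompliantBucket("team-logs")
print(logs.versioned, logs.encrypted)
# True True
```

<p>Because these are ordinary classes, you can also unit test them like any other code, which is exactly the test-driven angle mentioned above.</p>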

<p><a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:Day13" class="hashtag"><span>#</span><span class="p-category">Day13</span></a> <a href="https://baez.link/tag:Infrastructure" class="hashtag"><span>#</span><span class="p-category">Infrastructure</span></a> <a href="https://baez.link/tag:IaC" class="hashtag"><span>#</span><span class="p-category">IaC</span></a> <a href="https://baez.link/tag:Declarative" class="hashtag"><span>#</span><span class="p-category">Declarative</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/potential-of-infrastructure-as-code-without-boilerplate</guid>
      <pubDate>Wed, 13 May 2020 03:45:45 +0000</pubDate>
    </item>
    <item>
      <title>The A-Z Stack</title>
      <link>https://baez.link/the-a-z-stack?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Let&#39;s say you wanted to write your own application platform from scratch in the modern workflow. &#xA;&#xA;!--more--&#xA;&#xA;Not too long ago Kelsey Hightower posted a tweet about what you may require to run an application platform. While his list may look large, it&#39;s not all encompassing. It&#39;s just an example of how ridiculous our abstractions have gotten. Even taking one or two service on each category in the landscape.cncf.io page, would not give the full list of dependencies. So I started thinking. What would a full encompassing list of the &#39;recommended stack&#39; look like?&#xA;&#xA;I&#39;ve been running a few posts now on striking a balance between what you do to run your software in a sustainable and manageable way. This post is not that. The design here is the A-Z stack. A platform of everything I could think of, from on top of my head, you may need for a modern application platform. Please don&#39;t try to implement the stack here at work. If you already have, I&#39;m sorry and I know your pain. There&#39;s consoling we can probably get for our insanity. &#xA;&#xA;First, you need a cloud provider. If we want to be realistic, this would certainly be AWS. A high chance you and your team are already AWS Architects with decades of experience collectively. &#xA;&#xA;Next, you need an infrastructure as code software. The list can get quite large here on implementation. For the purpose of the A-Z stack list, let&#39;s just say you use Hashicorp&#39;s Terraform. With a team of N   1, you most certainly will be working on the code in terraform together. In other words, you need two parts to make terraform work properly here. One, you must be using version control. The famous single options are git and github. The second part is you need to have a backend that&#39;s not local for your terraform state. Since we using AWS, might as well use S3 with DynamoDB for state locking. 
&#xA;&#xA;With your cloud provider and IaC ready, the next section is the OS. You should be running Linux, let&#39;s say CentOS. Make sure you have SELinux enabled and actually enforcing here. While the trend is to disable, don&#39;t. There are too many reasons to count why not. If you doing it to be fast, you are hurting yourself and your company far more later down the road. Moreover the OS security, you need something to actually provision the Linux of choice. Let&#39;s stay with Red Hat and go with Ansible. The provisioning alone can be its own series on lists of things you need, but we&#39;ll just keep it as is.&#xA;&#xA;Ok, here we go with the container orchestrator. Sticking to the norm and the standard being Red Hat, let&#39;s say openshift. Openshift comes with a lot of batteries included, so the list would be much larger with what isn&#39;t set by default. However, for peace of mind, bringing up cri-o for CRI, Flannel for CNI, and CoreDNS for service discovery. For CSI here means probably Ceph with Rook. Secret management is a must and should be Vault.  &#xA;&#xA;Now comes networking. Already gave Flannel for CNI and CoreDNS, but you still need an ingress controller. The popular one here would probably be ambassador. Don&#39;t forget your services in this platform need to be able to communicate with one another. That means a service mesh is necessary. Here I&#39;m going to cheat and use Linkerd. By cheat, I mean it covers the service proxy sidecar and the controller. Otherwise, we most certainly would then have to add Envoy. However, if your service does have proxy requirements not covered by the  linkered sidecar, then Envoy is still required. &#xA;&#xA;Here comes the home stretch! The actual application requirements. First is the Container registry, harbor. With it comes, Notary and Clair for build image security. Next, you need the deployments to Kubernetes using something structured like Helm. 
If not already brought up before, you need a key value store like etcd. You also need a CI/CD solution for your applications. So staying with the trends, Argo would suffice. Don&#39;t forget to monitor your application and platform. Most likely means Prometheus for metrics, Loki for logs, and Jaeger for tracing.&#xA;&#xA;Now you can finally begin to write your application. Just so the point is made clear, here&#39;s the list of all that you must know, like the creators of the software themselves, to get this type of modern day application platform working:&#xA;&#xA;Cloud Provider : AWS&#xA;IaC : Terraform&#xA;Version Control: Git&#xA;Version Control Host: Github&#xA;S3 and dynamodb for state locking on terraform&#xA;Linux Distribution: CentOS&#xA;SELinux enabled and enforcing&#xA;Linux Provisioner: Ansible&#xA;Container Orchestrator: Kubernetes, Openshift&#xA;10. CRI: CRI-O&#xA;11. CNI: Flannel&#xA;12. Service Discovery: CoreDNS&#xA;12. CSI: Ceph with Rook&#xA;13. Secret Management: Vault&#xA;14. Ingress Controller: Ambassador&#xA;15. Service Mesh: Linkerd&#xA;16. Service Proxy: Envoy&#xA;17. Container Registry: Harbor&#xA;18. Version Security Management: Notary&#xA;19. Build container image security: Clair&#xA;20. Kubernetes deployment: Helm&#xA;21. Key Value store: etcd&#xA;22. Continuous Integration and Delivery: Argo&#xA;23. Metrics Observability: Prometheus&#xA;24. Logs Observability: Loki&#xA;25. 
Tracing Observability: Jaeger &#xA; &#xA;&#xA;#100DaysToOffload #Day12 #Kubernetes #PaaS &#xA;&#xA;[1]: https://twitter.com/kelseyhightower/status/1245886920443363329&#xA;[2]: https://landscape.cncf.io/&#xA;[3]: https://baez.link/tag:StrikingABalance&#xA;[4]: https://www.digitalocean.com/&#xA;[5]: https://www.digitalocean.com/&#xA;[6]: https://www.terraform.io/&#xA;[7]: https://git-scm.com/&#xA;[8]: https://github.com/&#xA;[9]: https://www.terraform.io/docs/backends/types/index.html&#xA;[10]: https://www.terraform.io/docs/backends/state.html&#xA;[11]: https://www.terraform.io/docs/backends/types/s3.html&#xA;[12]: https://www.centos.org/&#xA;[13]: https://selinuxproject.org/page/Main_Page&#xA;[14]: https://www.ansible.com/&#xA;[15]: https://www.openshift.com/&#xA;[16]: https://cri-o.io/&#xA;[17]: https://github.com/coreos/flannel&#xA;[18]: https://rook.io/&#xA;[19]: https://www.vaultproject.io/&#xA;[20]: https://coredns.io/&#xA;[21]: https://www.getambassador.io/&#xA;[22]: https://linkerd.io/&#xA;[23]: https://www.envoyproxy.io/&#xA;[24]: https://goharbor.io/&#xA;[25]: https://coreos.com/clair/docs/latest/&#xA;[26]: https://github.com/theupdateframework/notary&#xA;[27]: https://helm.sh/&#xA;[28]: https://github.com/etcd-io&#xA;[29]: https://argoproj.github.io/&#xA;[30]: https://prometheus.io/&#xA;[31]: https://grafana.com/oss/loki/]]&gt;</description>
      <content:encoded><![CDATA[<p>Let&#39;s say you wanted to write your own application platform from scratch in the modern workflow.</p>



<p>Not too long ago Kelsey Hightower posted a tweet about what you may require to run an <a href="https://twitter.com/kelseyhightower/status/1245886920443363329">application platform</a>. While his list may look large, it&#39;s not all encompassing. It&#39;s just an example of how ridiculous our abstractions have gotten. Even taking one or two services from each category on the <a href="https://landscape.cncf.io/">landscape.cncf.io page</a> would not give the full list of dependencies. So I started thinking: what would a fully encompassing list of the &#39;recommended stack&#39; look like?</p>

<p>I&#39;ve been running a few posts now on <a href="https://baez.link/tag:StrikingABalance">striking a balance</a> between what you do to run your software in a sustainable and manageable way. This post is <strong>not</strong> that. The design here is the A-Z stack: a platform of everything, off the top of my head, that you may need for a modern application platform. Please don&#39;t try to implement the stack here at work. If you already have, I&#39;m sorry and I know your pain. There&#39;s counseling we can probably get for our insanity.</p>

<p>First, you need a cloud provider. If we want to be realistic, this would certainly be AWS. There&#39;s a high chance you and your team are already AWS Architects with decades of experience collectively.</p>

<p>Next, you need infrastructure as code software. The list of options here can get quite large. For the purpose of the A-Z stack list, let&#39;s just say you use Hashicorp&#39;s <a href="https://www.terraform.io/">Terraform</a>. With a team of <code>N &gt; 1</code>, you most certainly will be working on the Terraform code together. In other words, you need two parts to make Terraform work properly here. One, you must be using version control; the de facto options are <a href="https://git-scm.com/">git</a> and <a href="https://github.com/">GitHub</a>. Two, you need a <a href="https://www.terraform.io/docs/backends/state.html">backend that&#39;s not local</a> for your <a href="https://www.terraform.io/docs/backends/types/index.html">Terraform state</a>. Since we&#39;re using AWS, might as well use <a href="https://www.terraform.io/docs/backends/types/s3.html">S3 with DynamoDB</a> for state locking.</p>
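<p>For reference, the remote-state wiring is a small stanza in Terraform&#39;s own configuration language. The bucket and table names below are placeholders of mine, not recommendations:</p>

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"   # placeholder: your state bucket
    key            = "platform/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-terraform-locks"   # placeholder: lock table
    encrypt        = true
  }
}
```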

<p>With your cloud provider and IaC ready, the next section is the OS. You should be running Linux, let&#39;s say <a href="https://www.centos.org/">CentOS</a>. Make sure you have SELinux enabled and actually enforcing here. While the trend is to disable it, don&#39;t; there are too many reasons to count. If you&#39;re doing it to move fast, you are hurting yourself and your company far more down the road. Beyond OS security, you need something to actually provision the Linux distribution of choice. Let&#39;s stay with Red Hat and go with <a href="https://www.ansible.com/">Ansible</a>. Provisioning alone could be its own series of lists, but we&#39;ll keep it as is.</p>

<p>Ok, here we go with the container orchestrator. Sticking to the norm, and staying with Red Hat, let&#39;s say <a href="https://www.openshift.com/">OpenShift</a>. OpenShift comes with a lot of batteries included; otherwise this list would be much larger. Still, for peace of mind: <a href="https://cri-o.io/">CRI-O</a> for the CRI, <a href="https://github.com/coreos/flannel">Flannel</a> for the CNI, and <a href="https://coredns.io/">CoreDNS</a> for service discovery. The CSI here probably means <a href="https://rook.io/">Ceph with Rook</a>. Secret management is a must and should be <a href="https://www.vaultproject.io/">Vault</a>.</p>

<p>Now comes networking. I already gave <a href="https://github.com/coreos/flannel">Flannel</a> for the CNI and <a href="https://coredns.io/">CoreDNS</a> for service discovery, but you still need an ingress controller. The popular one here would probably be <a href="https://www.getambassador.io/">Ambassador</a>. Don&#39;t forget your services in this platform need to be able to communicate with one another, which means a service mesh is necessary. Here I&#39;m going to cheat and use <a href="https://linkerd.io/">Linkerd</a>. By cheat, I mean it covers both the service-proxy sidecar and the control plane; otherwise we would most certainly have to add <a href="https://www.envoyproxy.io/">Envoy</a>. However, if your service has proxy requirements the Linkerd sidecar doesn&#39;t cover, Envoy is still required.</p>

<p>Here comes the home stretch: the actual application requirements. First is the container registry, <a href="https://goharbor.io/">Harbor</a>. With it come <a href="https://github.com/theupdateframework/notary">Notary</a> and <a href="https://coreos.com/clair/docs/latest/">Clair</a> for image signing and vulnerability scanning. Next, you need deployments to Kubernetes using something structured like <a href="https://helm.sh/">Helm</a>. If not already brought up before, you need a key value store like <a href="https://github.com/etcd-io">etcd</a>. You also need a CI/CD solution for your applications; staying with the trends, <a href="https://argoproj.github.io/">Argo</a> would suffice. Don&#39;t forget to monitor your application and platform. That most likely means <a href="https://prometheus.io/">Prometheus</a> for metrics, <a href="https://grafana.com/oss/loki/">Loki</a> for logs, and <a href="https://www.jaegertracing.io/">Jaeger</a> for tracing.</p>

<p>Now you can finally begin to write your application. Just so the point is made clear, here&#39;s the full list of what you must know, practically as well as the software&#39;s creators do, to get this type of modern-day application platform working:</p>
<ol><li>Cloud Provider: AWS</li>
<li>IaC: <a href="https://www.terraform.io/">Terraform</a></li>
<li>Version Control: <a href="https://git-scm.com/">Git</a></li>
<li>Version Control Host: <a href="https://github.com/">Github</a></li>
<li><a href="https://www.terraform.io/docs/backends/types/s3.html">S3 and dynamodb for state locking on terraform</a></li>
<li>Linux Distribution: <a href="https://www.centos.org/">CentOS</a></li>
<li><a href="https://selinuxproject.org/page/Main_Page">SELinux enabled and enforcing</a></li>
<li>Linux Provisioner: <a href="https://www.ansible.com/">Ansible</a></li>
<li>Container Orchestrator: Kubernetes, <a href="https://www.openshift.com/">Openshift</a></li>
<li>CRI: <a href="https://cri-o.io/">CRI-O</a></li>
<li>CNI: <a href="https://github.com/coreos/flannel">Flannel</a></li>
<li>Service Discovery: <a href="https://coredns.io/">CoreDNS</a></li>
<li>CSI: <a href="https://rook.io/">Ceph with Rook</a></li>
<li>Secret Management: <a href="https://www.vaultproject.io/">Vault</a></li>
<li>Ingress Controller: <a href="https://www.getambassador.io/">Ambassador</a></li>
<li>Service Mesh: <a href="https://linkerd.io/">Linkerd</a></li>
<li>Service Proxy: <a href="https://www.envoyproxy.io/">Envoy</a></li>
<li>Container Registry: <a href="https://goharbor.io/">Harbor</a></li>
<li>Image Signing: <a href="https://github.com/theupdateframework/notary">Notary</a></li>
<li>Container Image Scanning: <a href="https://coreos.com/clair/docs/latest/">Clair</a></li>
<li>Kubernetes deployment: <a href="https://helm.sh/">Helm</a></li>
<li>Key Value store: <a href="https://github.com/etcd-io">etcd</a></li>
<li>Continuous Integration and Delivery: <a href="https://argoproj.github.io/">Argo</a></li>
<li>Metrics Observability: <a href="https://prometheus.io/">Prometheus</a></li>
<li>Logs Observability: <a href="https://grafana.com/oss/loki/">Loki</a></li>
<li>Tracing Observability: <a href="https://www.jaegertracing.io/">Jaeger</a></li></ol>

<p><a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:Day12" class="hashtag"><span>#</span><span class="p-category">Day12</span></a> <a href="https://baez.link/tag:Kubernetes" class="hashtag"><span>#</span><span class="p-category">Kubernetes</span></a> <a href="https://baez.link/tag:PaaS" class="hashtag"><span>#</span><span class="p-category">PaaS</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/the-a-z-stack</guid>
      <pubDate>Sun, 10 May 2020 04:14:04 +0000</pubDate>
    </item>
    <item>
      <title>Mercurial is simply too good</title>
      <link>https://baez.link/mercurial-is-simply-too-good?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[It has always bothered me how much our ways of being are based on copying and using what&#39;s popular, rather than learning and doing what actually works. &#xA;&#xA;!--more--&#xA;&#xA;Back in &#39;05 when Linus Torvalds decided to release git for the Linux kernel, a week later mercurial was released with the same purpose. Git was built in a way that you would expect a man like Linus Torvalds would make version control system. Something utterly complex, but absolutely brilliant if you understand everything about its internals. Mercurial was designed instead to be user friendly first and simple enough to to get the job done.&#xA;&#xA;Mercurial most certainly lost to git due to its own simplicity. You can get up and running faster on mercurial than you ever could/can with git. It has very intuitive verb commands that do precisely what you would believe they  would do. Mercurial also has extremely powerful search capabilities with revsets expressions. It has built in safeguards, which makes sure you don&#39;t shoot yourself in the foot in every command. Mercurial even has its own web server baked in. And that&#39;s the problem. &#xA;&#xA;The tool was so correctly designed, it didn&#39;t give way for something like github to be required for it to work. You only needed someone with an IP and port, and you can get a running host. Have a team near you? Run hg serve  and you can collaborate immediately. No need to sign up to some site now owned by a trillion dollar company. You can just work and do what you need to do. This is precisely the problem. The tool actually solved the problems you needed. So there just simply was no real reason why you would need to spend money on a web host or forge for mercurial. &#xA;&#xA;So now we are all stuck with three options: git (github), git (gitlab), and git (bitbucket). Good job, mercurial. You beat git so well, you kicked yourself out the fight. &#xA;&#xA;However, there is some hope. 
In recent years, there&#39;s been a slow but steady resurgence of the tool. Facebook still very much uses it for their terabyte sized repository. A full rust rewrite of mercurial&#39;s internals have been in the works to fully make it just that much better. New hosting forges like heptapod and sourcehut have come into the picture. &#xA;&#xA;Mercurial will probably never reach the heights of git. But for people who want to get their work done, rather than fight with their tool, it will find a nice and sane home.   &#xA;&#xA;#Mercurial #Git #100DaysToOffload #Day11&#xA;&#xA;[1]: https://www.mercurial-scm.org/&#xA;[2]: https://www.mercurial-scm.org/doc/hg.1.html#commands&#xA;[3]: https://www.mercurial-scm.org/doc/hg.1.html#specifying-revisions&#xA;[4]: https://www.mercurial-scm.org/doc/hg.1.html#serve&#xA;[5]: https://www.mercurial-scm.org/repo/hg/file/tip/rust&#xA;[6]: https://heptapod.net/]]&gt;</description>
      <content:encoded><![CDATA[<p>It has always bothered me how much our ways of being are based on copying and using what&#39;s popular, rather than learning and doing what actually works.</p>



<p>Back in &#39;05, when Linus Torvalds decided to release git for the Linux kernel, <a href="https://www.mercurial-scm.org/">mercurial</a> was released a week later with the same purpose. Git was built the way you would expect a man like Linus Torvalds to build a version control system: something utterly complex, but absolutely brilliant if you understand everything about its internals. Mercurial was instead designed to be user friendly first and simple enough to get the job done.</p>

<p>Mercurial most certainly lost to git because of its own simplicity. You can get up and running faster on mercurial than you ever could with git. It has very <a href="https://www.mercurial-scm.org/doc/hg.1.html#commands">intuitive verb commands</a> that do precisely what you would expect them to do. Mercurial also has extremely powerful <a href="https://www.mercurial-scm.org/doc/hg.1.html#specifying-revisions">search capabilities through revset expressions</a>. It has built-in safeguards that keep you from shooting yourself in the foot with every command. Mercurial even has its <a href="https://www.mercurial-scm.org/doc/hg.1.html#serve">own web server baked in</a>. And that&#39;s the problem.</p>

<p>The tool was so well designed that it never required something like github to work. You only needed someone with an IP and a port, and you had a running host. Have a team near you? Run <code>hg serve</code> and you can collaborate immediately. No need to sign up to some site now owned by a trillion dollar company. You can just work and do what you need to do. This is precisely the problem. The tool actually solved the problems you had, so there simply was no real reason to spend money on a web host or forge for mercurial.</p>

<p>So now we are all stuck with three options: git (github), git (gitlab), and git (bitbucket). Good job, mercurial. You beat git so well, you kicked yourself out of the fight.</p>

<p>However, there is some hope. In recent years, there&#39;s been a slow but steady resurgence of the tool. Facebook still very much uses it for their terabyte-sized repository. A full <a href="https://www.mercurial-scm.org/repo/hg/file/tip/rust">rust rewrite</a> of mercurial&#39;s internals has been in the works to make it just that much better. New hosting forges like <a href="https://heptapod.net/">heptapod</a> and <a href="https://hg.sr.ht/">sourcehut</a> have come into the picture.</p>

<p>Mercurial will probably never reach the heights of git. But for people who want to get their work done rather than fight with their tool, it will remain a nice and sane home.</p>

<p><a href="https://baez.link/tag:Mercurial" class="hashtag"><span>#</span><span class="p-category">Mercurial</span></a> <a href="https://baez.link/tag:Git" class="hashtag"><span>#</span><span class="p-category">Git</span></a> <a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:Day11" class="hashtag"><span>#</span><span class="p-category">Day11</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/mercurial-is-simply-too-good</guid>
      <pubDate>Fri, 08 May 2020 03:29:33 +0000</pubDate>
    </item>
    <item>
      <title>Use Asynchronuous Messaging</title>
      <link>https://baez.link/use-asynchronuous-messaging?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[The fad of moving to instant messaging chat tools gives the impression they are the only way to communicate on the internet. Instantaneous form of communication can be anxiety inducing, stressful, and overall time consuming. However, if you instead like to use your time to relax and compose your thoughts, think of using asynchronous messaging.  &#xA;&#xA;!--more--&#xA;&#xA;Asynchronous messaging is everywhere and you most certainly already are familiar with. Especially with the most common one, your email. While, there are many examples of its abuse (looking at you Google), email is still prevalent everywhere for a reason. Not only is email decentralized and thankfully not yet owned by single a entity (again, looking at you Google), but it is also excellent for working remote. With an email, you can  reply to conversations at your own leisure, rather than someone&#39;s else. That empowering moment to reflect of what you will say is very much important to how you collect your thoughts.&#xA;&#xA;Even with the way people communicate with one another on the internet changes due to these delays between messages. That&#39;s not to say asynchronous messaging is the only route to use. Sometimes you do need that instant mode of sensory input. But those type of modes of communication should be used after you&#39;ve expressed a thread of a conversation in an async manner. Which can also mean your instantaneous mode of communication become more fruitful, as both you and your colleagues would have more insight into what the conversation would be about.&#xA;&#xA;The best part of asynchronous messaging only shines brighter when you have a team that is distributed. The further you are between co-workers, the more it is vital you understand that your teammates are not available at your time. Meaning, you can recognize their time is just as precious as your own. They are only available, when its their work hours hopefully. 
Sometimes not even the same region or even a near timezone as you. So being mindful and expressive in your mode of communication is key.  So you can go and spend less time trying communicate what you need and actually communicate what you need.   &#xA;&#xA;So here&#39;s to using asynchronous messaging. Use it first when you can. Only after do you then leave the spam that is instant messaging to come in, if needed.&#xA;&#xA;#100DaysToOffload #Day10 #Email&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>The fad of moving to instant messaging chat tools gives the impression they are the only way to communicate on the internet. Instantaneous form of communication can be anxiety inducing, stressful, and overall time consuming. However, if you instead like to use your time to relax and compose your thoughts, think of using asynchronous messaging.</p>



<p>Asynchronous messaging is everywhere, and you are most certainly already familiar with it, especially the most common form: email. While there are <em>many</em> examples of its abuse (looking at you, Google), email is still prevalent everywhere for a reason. Not only is email decentralized and thankfully not yet owned by a single entity (again, looking at you, Google), but it is also excellent for working remotely. With email, you can reply to conversations at your own leisure, rather than someone else&#39;s. That empowering moment to reflect on what you will say is vital to how you collect your thoughts.</p>

<p>Even the way people communicate with one another on the internet changes due to these delays between messages. That&#39;s not to say asynchronous messaging is the only route to use. Sometimes you do need that instant mode of sensory input. But those modes of communication should come after you&#39;ve expressed a thread of conversation in an async manner. This can also make your instantaneous communication more fruitful, as both you and your colleagues will have more insight into what the conversation is about.</p>

<p>The best part of asynchronous messaging shines even brighter when you have a distributed team. The farther apart you and your co-workers are, the more vital it is to understand that your teammates are not available on your schedule, and to recognize that their time is just as precious as your own. They are only available during their work hours, hopefully, and sometimes they are not even in the same region or a nearby timezone as you. So being mindful and expressive in your mode of communication is key: you can spend less time trying to communicate what you need and more time actually communicating it.</p>

<p>So here&#39;s to using asynchronous messaging. Use it first when you can. Only afterwards let the spam that is instant messaging come in, if needed.</p>

<p><a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:Day10" class="hashtag"><span>#</span><span class="p-category">Day10</span></a> <a href="https://baez.link/tag:Email" class="hashtag"><span>#</span><span class="p-category">Email</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/use-asynchronuous-messaging</guid>
      <pubDate>Thu, 07 May 2020 03:00:05 +0000</pubDate>
    </item>
    <item>
      <title>The Ways Of the Past</title>
      <link>https://baez.link/the-ways-of-the-past?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[If you been living outside of the burn cycle in production, you may not know what is the fad of app containers and container orchestration. However, if you had, you may forget why we use them now. &#xA;&#xA;!--more-- &#xA;&#xA;In the following post, continuing the series of #StrikingABalance, we will explore how we would run a service, in legacy infrastructure. Brightening the shadow we&#39;ve made using app containers and the actual simplicity container orchestration brings.&#xA;&#xA;Container orchestrations are an answer to a problem created from the technological introduction of app containers. App containers are an excellent form of making reproducible artifacts. However, with its use, a requirement for something to run these new artifacts arises. While running locally is quite trivial. Especially with tools like docker-compose,  running a container image in any form on the cloud will most certainly not be. The reason is fairly easy to describe, but not so easy to implement. You need a way to be able to make your deployment ephemeral and idempotent.  &#xA;&#xA;Let&#39;s take the approach of running an app container without a container orchestration. &#xA;&#xA;The first you have to account for is how your app container is going to run. Let&#39;s say you will use a VM instance to run the container. Yet, before you can even answer how you will set up that instance, you need to figure out what is going to run the container image consistently. The easiest approach here probably be to use an init daemon. A simple systemd service unit file to keep the app container running can suffice. Allowing you to retrieve logs and status of the service, fairly quickly for the app container&#39;s runtime. &#xA;&#xA;Now, back to the how of the app container&#39;s runtime will function. You now need to provision the VM instance before you can reach the stage of running the app container on systemd.  
A simple BASH script could work here, but remember the end goal here is for something that&#39;s idempotent and ephemeral. If your VM instance shuts down, you need a way to get back the setup you had prior exactly as it was before. Or if you introduce changes, you need a way to have proper configuration drift resolution. Writing an idempotent BASH script is non-trivial. Probably can make any grown man cry at the sight of its existence.&#xA;&#xA;Most certainly, the second complexity introduced is some configuration manager like Ansible, Chef, or Salt to cover the provisioning of the instance. Take note, once you&#39;ve completed your provision, you are now half way to your end goal of running an app container. The next stage here is now how are you going to retrieve the app container image for your runtime. The options can grow quite large. However, to keep it as simple, while skipping massive chunks of the implementation details, you can create a continuous deliver pipeline to run your configuration manager. &#xA;&#xA;The continuous delivery pipeline would run a configuration manager, which fetches your container image, applies the systemd unit file, and starts up the service. One of the requirements to make the systemd runtime work is you need to run a container registry and another VM instance running said container registry. You will also need a CI/CD service as hinted before, if you don&#39;t already have one. &#xA;&#xA;Lastly, you need to manage all of the VM instances you spun up for that one single app container you want to run on the cloud. You are now managing both an entire operating system to run that single app container and a fleet of VM instances to manage that app container runtime. The complexity doesn&#39;t stop there. You also need to track many other portions, like ssh privilege, security group isolation, system resource management, and service management. 
&#xA;&#xA;The past way worked when we required only a few instances to run for our services. It becomes completely unmanageable when you have full fleet you require to run. Container orchestration allows you to take all of the complexity described here and apply it to a single standard structure on how you define an app container to run. Allowing for better abstractions, but also keeping the level of complexity built prior at a hopeful minimum. &#xA;&#xA;#Day9 #100DaysToOffload #StrikingABalance &#xA;&#xA;[1]: https://baez.link/builds-and-sanity&#xA;[2]: https://docs.docker.com/compose/&#xA;[3]: https://www.mankier.com/5/systemd.unit&#xA;[4]: https://www.ansible.com/&#xA;[5]: https://www.saltstack.com/&#xA;[6]: https://en.wikipedia.org/wiki/Init&#xA;[7]: https://www.chef.io/&#xA;[8]: https://landscape.cncf.io/category=container-registry&amp;format=card-mode&amp;grouping=category&#xA;[9]: https://landscape.cncf.io/category=continuous-integration-delivery&amp;format=card-mode&amp;grouping=category&#xA;[10]: https://blog.newrelic.com/engineering/container-orchestration-explained/]]&gt;</description>
      <content:encoded><![CDATA[<p>If you been living outside of the burn cycle in production, you may not know what is the fad of app containers and container orchestration. However, if you had, you may forget why we use them now.</p>

 

<p>In the following post, continuing the <a href="https://baez.link/tag:StrikingABalance" class="hashtag"><span>#</span><span class="p-category">StrikingABalance</span></a> series, we will explore how we would run a service on legacy infrastructure, shedding light on why we use app containers and on the actual simplicity container orchestration brings.</p>

<p><a href="https://blog.newrelic.com/engineering/container-orchestration-explained/">Container orchestration</a> is an answer to a problem created by the technological introduction of <a href="https://baez.link/builds-and-sanity">app containers</a>. App containers are an excellent way of making reproducible artifacts. However, with their use arises a requirement for something to run these new artifacts. While running locally is quite trivial, especially with tools like <a href="https://docs.docker.com/compose/">docker-compose</a>, running a container image in any form on the cloud will most certainly not be. The reason is fairly easy to describe, but not so easy to implement: you need a way to make your deployment ephemeral and idempotent.</p>

<p>Let&#39;s take the approach of running an app container <em>without</em> a container orchestrator.</p>

<p>The first thing you have to account for is <em>how</em> your app container is going to run. Let&#39;s say you will use a VM instance to run the container. Yet, before you can even answer how you will set up that instance, you need to figure out what is going to run the container image consistently. The easiest approach here would probably be to use an <a href="https://en.wikipedia.org/wiki/Init">init daemon</a>. A <a href="https://www.mankier.com/5/systemd.unit">simple systemd service unit file</a> to keep the app container running can suffice, allowing you to retrieve logs and the status of the service fairly quickly for the app container&#39;s runtime.</p>
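<p>A minimal sketch of such a unit file, assuming Docker as the container runtime (the service and image names are hypothetical, not from any real setup):</p>

```ini
# /etc/systemd/system/myapp.service (hypothetical example)
[Unit]
Description=Run the myapp container
After=network-online.target docker.service
Requires=docker.service

[Service]
# clean up any stale container, then run attached so systemd supervises the process
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp -p 8080:80 registry.example.com/myapp:latest
ExecStop=/usr/bin/docker stop myapp
Restart=always

[Install]
WantedBy=multi-user.target
```

<p>With a unit like this in place, <code>systemctl status myapp</code> and <code>journalctl -u myapp</code> give you the status and logs mentioned above.</p>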

<p>Now, back to how the app container&#39;s runtime will function. You need to provision the VM instance before you can reach the stage of running the app container under systemd. A simple BASH script could work here, but remember the end goal is something idempotent and ephemeral. If your VM instance shuts down, you need a way to get back the setup you had prior, exactly as it was before. And if you introduce changes, you need a way to have proper configuration drift resolution. Writing an idempotent BASH script is non-trivial; one could probably make any grown man cry at the sight of its existence.</p>

<p>The second complexity introduced is most certainly a configuration manager like <a href="https://www.ansible.com/">Ansible</a>, <a href="https://www.chef.io/">Chef</a>, or <a href="https://www.saltstack.com/">Salt</a> to cover the provisioning of the instance. Take note: once you&#39;ve completed your provisioning, you are only halfway to your end goal of running an app container. The next stage is deciding how you are going to retrieve the app container image for your runtime. The options can grow quite large. However, to keep it simple, while skipping massive chunks of the implementation details, you can create a continuous delivery pipeline to run your configuration manager.</p>

<p>The continuous delivery pipeline would run the configuration manager, which fetches your container image, applies the systemd unit file, and starts up the service. One of the requirements to make the systemd runtime work is a <a href="https://landscape.cncf.io/category=container-registry&amp;format=card-mode&amp;grouping=category">container registry</a>, plus another VM instance to run said container registry. You will also need a <a href="https://landscape.cncf.io/category=continuous-integration-delivery&amp;format=card-mode&amp;grouping=category">CI/CD service</a> as hinted before, if you don&#39;t already have one.</p>
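<p>The pipeline&#39;s job could be sketched as a handful of Ansible tasks (a sketch with hypothetical names and registry, not a complete playbook):</p>

```yaml
# fetch the image, install the unit file, restart the service
- name: Pull the app container image
  command: docker pull registry.example.com/myapp:latest

- name: Install the systemd unit file
  copy:
    src: myapp.service
    dest: /etc/systemd/system/myapp.service

- name: Reload systemd and (re)start the service
  systemd:
    name: myapp
    state: restarted
    daemon_reload: yes
```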

<p>Lastly, you need to manage all of the VM instances you spun up for that one single app container you want to run on the cloud. You are now managing both an entire operating system to run that single app container and a fleet of VM instances to manage that app container&#39;s runtime. The complexity doesn&#39;t stop there. You also need to <a href="https://baez.link/design-to-fail">track many other concerns</a>, like ssh privileges, security group isolation, system resource management, and service management.</p>

<p>The past way worked when we required only a few instances to run our services. It becomes completely unmanageable when you have a full fleet to run. Container orchestration allows you to take all of the complexity described here and fold it into a single standard structure for defining how an app container runs, allowing for better abstractions while hopefully keeping the level of complexity built prior to a minimum.</p>

<p><a href="https://baez.link/tag:Day9" class="hashtag"><span>#</span><span class="p-category">Day9</span></a> <a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:StrikingABalance" class="hashtag"><span>#</span><span class="p-category">StrikingABalance</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/the-ways-of-the-past</guid>
      <pubDate>Tue, 05 May 2020 02:55:51 +0000</pubDate>
    </item>
    <item>
      <title>Add Recovery To Your Pop!_OS</title>
      <link>https://baez.link/add-recovery-to-your-pop-_os?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[For anyone who already has a none default install of Pop!OS, there exist a way for you to get the recovery partition of System76&#39;s distribution.&#xA;&#xA;!--more-- &#xA;&#xA;I didn&#39;t know about the recovery recovery partition until I ended up purchasing a System76 laptop and using the clean install procedure. Pop!OS has its own Recovery mode. The recovery mode is essentially a partition with a live install version of the beautiful distribution. It allows you to do full install changes, upgrades, and any other recovery requirements you may have. &#xA;&#xA;If you like me, fellow reader, you may be using a more custom partition setup. Unfortunately, anything other than the default clean install on Pop!OS means you do not get the automatically generated recovery partition of goodness. Here&#39;s a small guide over how to get the setup on an already existing install. &#xA;&#xA;First, make sure you have all the tools required here:&#xA;&#xA;mkfs.vfat : required for creating the recovery filesystem.&#xA;parted or gdisk :  resize and create partition.&#xA;pop-upgrade : available by default on POP!OS.&#xA;a text editor of choice.&#xA;&#xA;Make the recovery partition&#xA;&#xA;You now need to make sure you have 4.4 GiB of unallocated storage available on your primary disk drive. The unallocated storage space will be used to make a new FAT32 partition on your partition table.&#xA;&#xA;Note, if you are using btrfs, make sure to resize the filesystem before shrinking the storage available. A required step as you may have allocations used in locations you will be removing. Hence a small rebalance for those data blocks:&#xA;&#xA;btrfs filesystem resize -4.4g /&#xA;Use your favorite tool to resize the disk. Personally, I tend to just use gdisk, but you can use parted or any other tool for the task. 
Once you have resized, make the 4.4 GiB partition, if you haven&#39;t already, and create the FAT32 filesystem on the new partition: &#xA;&#xA;the label RECOVERY is to make easier to define later&#xA;mkfs.vfat -n RECOVERY /dev/{{ recoveryparitionid }}&#xA;Next, you need the new partition mounted on /recovery. &#xA;&#xA;mount -L RECOVERY /recovery&#xA;Now comes the tricky part. You need to run pop-upgrade tool to install the Pop!OS ISO into the recovery partition. Before you do, make sure you have open on a tail of the logs for pop-upgrade: &#xA;&#xA;journalctl -flu pop-upgrade&#xA;The pop-upgrade tool has a tendency to fail and the error messages are not exactly descriptive. Having the logs available can be quite helpful in debugging what went wrong.  Once you do have the logs tailed, on a separate terminal run the following: &#xA;&#xA;use 20.04 as of this writing for Focal version&#xA;pop-upgrade recovery upgrade from-release 20.04&#xA;If you have a /tmp directory with less storage available than the full ISO image size of POP!OS, you will have troubles installing. Before this becomes an issue, make sure you have available at least 2.5 GiB of disk storage on /tmp. If you don&#39;t, you can always do a bind mount to filesystem that does temporarily. Allowing you to download the ISO. Once pop-upgrade procedure is complete, simply unbind the bind mount.&#xA;&#xA;If by any chance your recovery installation fails on pop-upgrade, head over to your /etc/fstab and comment out the auto generated entry for you /recovery partition. The tool currently has an issue where it will fail if the entry exists on your filesystem table file.&#xA;&#xA;Add the recovery.conf file&#xA;&#xA;Ok, so now that you have the recovery partition, you need to add the recovery.conf file. 
First run the command below to create the file in /recovery/recovery.conf filepath with the following template:&#xA;&#xA; EFIUUID={{ bootefiuuid }}&#xA;HOSTNAME=pop-os&#xA;KBDLAYOUT=us&#xA;KBDMODEL=&#xA;KBDVARIANT=&#xA;LANG=enUS.UTF-8&#xA;LUKSUUID=&#xA;OEMMODE=0&#xA;RECOVERYUUID={{ poprecoveryuuid }}&#xA;ROOTUUID=UUID={{ poprootuuid }}&#xA;UPGRADE=1&#xA;Edit all the {{ }} with the correct partition&#39;s UUID. To keep this easier to identify, use the PARTUUID of your partitions for all of the entries like the example below:&#xA;&#xA;ROOTUUID=PARTUUID=6fee8edb-1e18-485b-95aa-4e36f1abaa4e&#xA;Create the systemd-boot entry&#xA;&#xA;Congrats, you now up to the final stretch. You only need two things here, the actual ID for your recovery partition and the partuuid of the partition. Make an entry on your systemd-boot loader entry: &#xA;&#xA;title Pop!OS recovery&#xA;linux /EFI/Recovery-{{ recoverypartitionuuid }}/vmlinuz.efi&#xA;initrd /EFI/Recovery-{{ recoverypartitionuuid }}/initrd.gz&#xA;options boot=casper hostname=recovery userfullname=Recovery username=recovery live-media-path=/casper-{{ recoverypartitionuuid }} live-media=/dev/disk/by-partuuid/{{ recoverypartitionpartuuid }} noprompt&#xA;All that&#39;s left is to test the new boot entry. Simply do a restart and see if you can actually boot into the new Pop!OS recovery entry. If all went well, you now you should have a fully functioning Pop!OS recovery mode available on boot. If you had any troubles during this setup, you can look through the journal logs for pop-upgrade to see if you can resolve.   &#xA;&#xA;While this may seem like a lot, it is due to the fact pop-upgrade tool doesn&#39;t yet support setting up post install. If you had the luxury of doing your custom setup before any of this, you can simply add a 4.4 GiB FAT32 partition and Pop!OS install will take care of the rest. 
&#xA;&#xA;[1]: https://support.system76.com/articles/pop-recovery/&#xA;[2]: https://baez.link/whats-in-your-etc-fstab&#xA;[3]: https://system76.com/&#xA;[4]: https://pop.system76.com/&#xA;[5]: https://www.mankier.com/8/parted&#xA;[6]: https://www.mankier.com/8/gdisk&#xA;[7]: https://btrfs.wiki.kernel.org/index.php/Main_Page&#xA;[8]: https://www.mankier.com/8/mkfs.fat&#xA;[9]: https://github.com/pop-os/upgrade&#xA;[10]: https://www.mankier.com/5/loader.conf&#xA;&#xA;#100DaysToOffload #Day8 #PopOS #System76 ]]&gt;</description>
      <content:encoded><![CDATA[<p>For anyone who already has a none default install of Pop!_OS, there exist a way for you to get the recovery partition of System76&#39;s distribution.</p>

 

<p>I didn&#39;t know about the recovery partition until I ended up purchasing a <a href="https://system76.com/">System76</a> laptop <em>and</em> using the clean install procedure. <a href="https://pop.system76.com/">Pop!_OS</a> has its own <a href="https://support.system76.com/articles/pop-recovery/">recovery mode</a>. The recovery mode is essentially a partition with a live install version of the beautiful distribution. It allows you to do full install changes, upgrades, and any other recovery tasks you may have.</p>

<p>If you&#39;re like me, fellow reader, you may be using a <a href="https://baez.link/whats-in-your-etc-fstab">more custom partition setup</a>. Unfortunately, anything other than the default clean install of Pop!_OS means you do not get the automatically generated recovery partition of goodness. Here&#39;s a small guide on how to get the setup on an already existing install.</p>

<p>First, make sure you have all the tools required here:</p>
<ul><li><a href="https://www.mankier.com/8/mkfs.fat">mkfs.vfat</a>: required for creating the recovery filesystem.</li>
<li><a href="https://www.mankier.com/8/parted">parted</a> or <a href="https://www.mankier.com/8/gdisk">gdisk</a>: to resize and create partitions.</li>
<li><a href="https://github.com/pop-os/upgrade">pop-upgrade</a>: available by default on Pop!_OS.</li>
<li>a text editor of choice.</li></ul>

<h2 id="make-the-recovery-partition">Make the recovery partition</h2>

<p>You now need to make sure you have 4.4 GiB of unallocated storage available on your primary disk drive. The unallocated storage space will be used to make a new FAT32 partition on your partition table.</p>

<p>Note, if you are using <a href="https://btrfs.wiki.kernel.org/index.php/Main_Page">btrfs</a>, make sure to resize the filesystem before shrinking the storage available. This step is required because you may have allocations in the locations you will be removing, hence a small rebalance for those data blocks:</p>

<pre><code>btrfs filesystem resize -4.4g /
</code></pre>

<p>Use your favorite tool to resize the disk. Personally, I tend to just use <a href="https://www.mankier.com/8/gdisk">gdisk</a>, but you can use <a href="https://www.mankier.com/8/parted">parted</a> or any other tool for the task. Once you have resized, make the 4.4 GiB partition, if you haven&#39;t already, and create the FAT32 filesystem on the new partition:</p>

<pre><code># the label RECOVERY makes the partition easier to reference later
mkfs.vfat -n RECOVERY /dev/{{ recovery_parition_id }}
</code></pre>

<p>Next, create the mount point and mount the new partition on <code>/recovery</code>.</p>

<pre><code>mkdir -p /recovery
mount -L RECOVERY /recovery
</code></pre>

<p>Now comes the tricky part. You need to run the <code>pop-upgrade</code> tool to install the Pop!_OS ISO into the recovery partition. Before you do, make sure you have a tail of the logs open for <code>pop-upgrade</code>:</p>

<pre><code>journalctl -flu pop-upgrade
</code></pre>

<p>The <code>pop-upgrade</code> tool has a tendency to fail, and the error messages are not exactly descriptive. Having the logs available can be quite helpful in debugging what went wrong. Once you do have the logs tailed, run the following in a separate terminal:</p>

<pre><code># as of this writing, use 20.04 for the Focal release
pop-upgrade recovery upgrade from-release 20.04
</code></pre>

<p>If your <code>/tmp</code> directory has less storage available than the full ISO image size of Pop!_OS, you will have trouble installing. Before this becomes an issue, make sure you have at least 2.5 GiB of disk storage available on <code>/tmp</code>. If you don&#39;t, you can always temporarily bind mount a filesystem that does, allowing you to download the ISO. Once the <code>pop-upgrade</code> procedure is complete, simply remove the bind mount.</p>
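<p>The bind mount workaround might look like this (the spare-space path is illustrative; run as root):</p>

```shell
# lend /tmp some space from a filesystem that has room, just for the upgrade
mkdir -p /var/cache/pop-iso
mount --bind /var/cache/pop-iso /tmp
# ... run the pop-upgrade command ...
umount /tmp
```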

<p>If by any chance your recovery installation fails on <code>pop-upgrade</code>, head over to your <code>/etc/fstab</code> and comment out the auto-generated entry for your <code>/recovery</code> partition. The tool currently has an issue where it will fail if the entry exists in your filesystem table file.</p>

<h2 id="add-the-recovery-conf-file">Add the recovery.conf file</h2>

<p>Ok, so now that you have the recovery partition, you need to add the <code>recovery.conf</code> file. Create the file at the <code>/recovery/recovery.conf</code> filepath with the following template:</p>

<pre><code>EFI_UUID={{ boot_efi_uuid }}
HOSTNAME=pop-os
KBD_LAYOUT=us
KBD_MODEL=
KBD_VARIANT=
LANG=en_US.UTF-8
LUKS_UUID=
OEM_MODE=0
RECOVERY_UUID={{ pop_recovery_uuid }}
ROOT_UUID=UUID={{ pop_root_uuid }}
UPGRADE=1
</code></pre>

<p>Edit all the <code>{{ }}</code> entries with the correct partition UUIDs. To keep them easier to identify, use the <code>PARTUUID</code> of your partitions for all of the entries, like the example below:</p>

<pre><code>ROOT_UUID=PARTUUID=6fee8edb-1e18-485b-95aa-4e36f1abaa4e
</code></pre>
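<p>If you need to look these values up, <code>lsblk</code> from util-linux can print each partition&#39;s <code>PARTUUID</code> next to its mount point:</p>

<pre><code>lsblk --output NAME,PARTUUID,MOUNTPOINT
</code></pre>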

<h2 id="create-the-systemd-boot-entry">Create the systemd-boot entry</h2>

<p>Congrats, you are now in the final stretch. You only need two things here: the UUID of your recovery partition and the PARTUUID of that same partition. Add an entry to your <a href="https://www.mankier.com/5/loader.conf">systemd-boot loader entries</a>:</p>

<pre><code>title Pop!_OS recovery
linux /EFI/Recovery-{{ recovery_partition_uuid }}/vmlinuz.efi
initrd /EFI/Recovery-{{ recovery_partition_uuid }}/initrd.gz
options boot=casper hostname=recovery userfullname=Recovery username=recovery live-media-path=/casper-{{ recovery_partition_uuid }} live-media=/dev/disk/by-partuuid/{{ recovery_partition_partuuid }} noprompt
</code></pre>

<p>All that&#39;s left is to test the new boot entry. Simply restart and see if you can actually boot into the new Pop!_OS recovery entry. If all went well, you should now have a fully functioning Pop!_OS recovery mode available on boot. If you had any troubles during this setup, you can look through the journal logs for <code>pop-upgrade</code> to see if you can resolve the issue.</p>

<p>While this may seem like a lot, that is because the <code>pop-upgrade</code> tool doesn&#39;t yet support setting up recovery after installation. If you had the luxury of doing your custom setup <em>before</em> any of this, you could simply add a 4.4 GiB FAT32 partition and the Pop!_OS installer would take care of the rest.</p>

<p><a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:Day8" class="hashtag"><span>#</span><span class="p-category">Day8</span></a> <a href="https://baez.link/tag:PopOS" class="hashtag"><span>#</span><span class="p-category">PopOS</span></a> <a href="https://baez.link/tag:System76" class="hashtag"><span>#</span><span class="p-category">System76</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/add-recovery-to-your-pop-_os</guid>
      <pubDate>Sun, 03 May 2020 04:10:35 +0000</pubDate>
    </item>
    <item>
      <title>Tiling Managers Are Life</title>
      <link>https://baez.link/tiling-managers-are-life?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I can&#39;t believe I thought I could go back to the window manager ways. &#xA;&#xA;!--more--&#xA;&#xA;I don&#39;t even know how I convinced myself to try to use a machine without a Tiling manager, but that was the case back when I first tried out System76&#39;s Pop!OS. Before I get to the why for Pop, let me give a little back story. &#xA;&#xA;I been using Tiling Managers for almost a full decade. It all started with an obsession of a small, but definitely strong, embedded language called Lua.&#xA;&#xA;Since the moment I learned of the language, I became absolutely obsessed with it. So much so, I wanted my whole software suite to be based on Lua. Everything from what I wrote personally, to what I wrote professionally, was in this language in one shape or another. It became such a large part of my way of being, to this day, Lua is the first thing I have installed on every machine that I can install it to. Be it work, home, servers, or things that compute, Lua is always there ready to help. &#xA;&#xA;So when I was looking for some sort of desktop environment using the language, I learned of a tiling manager written in Lua; Awesome. The Awesome tiling manager is what would happen if you tried to mix ease of use with customization you can die for. You end up being in a place where you can make your desktop experience actually function as your desktop experience. A truly custom tailored tiling manager to your precise needs. &#xA;&#xA;I was happy with Awesome. But being a polyglot by nature, I became in love with a different language in the coming years; Rust. Very much like my fascination with Lua, I ended up having a Rust written something, everywhere. Thus, when System76 announced Pop!OS, they also announced it would have parts of its internals written in Rust. Nuff said. I ended up installing it on my machine and wait for it... actually like Gnome 3. 
&#xA;&#xA;With the new love, my heart still ached from missing the tiling manager life of old. Ended up trying all the Gnome tiling manager extensions, but they just didn&#39;t feel right. Still, I stuck with gTile as the extension of choice for imitative Tiling management on Gnome 3.&#xA;&#xA;However, with Pop&#39;s new release of 20.04, a full integrator level extension to gnome 3 has been made by the lovely lovely team in system76 for tiling management. Now it&#39;s not like Awesome, but I mean  how could it be? Awesome is awesome for a reason. No, this released extension for Pop!OS is based on i3. But boy is it outstanding. I definitely have a home with System76&#39;s work for years to come. Still learning the key bindings and tricks, yet you can very much see there was so much thought put in the tiling manager work. For crying out loud, the crazy team are even making a keyboard designed slightly different to augment the tiling manager use with their keyboard shortcuts. I&#39;m absolutely sold. &#xA;&#xA;If you still deciding whether you want to run a Tiling manager or not, stop what you doing. Install Pop!OS. You&#39;ll Thank me later.&#xA;&#xA;#100DaysToOffload #Day7 &#xA;&#xA;[1]: https://system76.com/&#xA;[2]: https://pop.system76.com/&#xA;[3]: http://www.lua.org/&#xA;[4]: https://www.rust-lang.org/&#xA;[5]: https://extensions.gnome.org/extension/28/gtile/&#xA;[6]: https://www.youtube.com/watch?v=-fltwBKsMY0&#xA;[7]: https://i3wm.org/&#xA;[8]: https://blog.system76.com/post/612874398967513088/making-a-keyboard-the-system76-approach]]&gt;</description>
      <content:encoded><![CDATA[<p>I can&#39;t believe I thought I could go back to the window manager ways.</p>



<p>I don&#39;t even know how I convinced myself to try to use a machine without a Tiling manager, but that was the case back when I first tried out <a href="https://system76.com/">System76</a>&#39;s <a href="https://pop.system76.com/">Pop!_OS</a>. Before I get to the why for Pop, let me give a little back story.</p>

<p>I have been using tiling managers for almost a full decade. It all started with an obsession with a small, but definitely strong, embedded language called <a href="http://www.lua.org/">Lua</a>.</p>

<p>Since the moment I learned of the language, I became absolutely obsessed with it. So much so, I wanted my whole software suite to be based on Lua. Everything from what I wrote personally, to what I wrote professionally, was in this language in one shape or another. It became such a large part of my way of being, to this day, Lua is the first thing I have installed on every machine that I can install it to. Be it work, home, servers, or things that compute, Lua is always there ready to help.</p>

<p>So when I was looking for some sort of desktop environment using the language, I learned of a tiling manager written in Lua: <a href="https://awesomewm.org/">Awesome</a>. The Awesome tiling manager is what would happen if you mixed ease of use with customization to die for. You end up in a place where you can make your desktop experience actually function as your desktop experience. A tiling manager truly custom tailored to your precise needs.</p>

<p>I was happy with Awesome. But being a polyglot by nature, I fell in love with a different language in the coming years: <a href="https://www.rust-lang.org/">Rust</a>. Very much like my fascination with Lua, I ended up having something written in Rust <strong>everywhere</strong>. Thus, when System76 announced Pop!_OS, they also announced it would have parts of its internals written in Rust. Nuff said. I ended up installing it on my machine and, wait for it... actually <em>liked</em> Gnome 3.</p>

<p>With the new love, my heart still ached from missing the tiling manager life of old. I ended up trying all the Gnome tiling manager extensions, but they just didn&#39;t feel right. Still, I stuck with <a href="https://extensions.gnome.org/extension/28/gtile/">gTile</a> as the extension of choice for imitative tiling management on Gnome 3.</p>

<p>However, with Pop&#39;s new release of 20.04, a fully integrated extension to Gnome 3 has been made by the lovely team at System76 for <a href="https://www.youtube.com/watch?v=-fltwBKsMY0">tiling management</a>. Now, it&#39;s not like Awesome, but how could it be? Awesome is awesome for a reason. No, this released extension for Pop!_OS is based on <a href="https://i3wm.org/">i3</a>. But boy, is it outstanding. I definitely have a home with System76&#39;s work for years to come. I&#39;m still learning the key bindings and tricks, yet you can very much see how much thought was put into the tiling manager work. For crying out loud, the crazy team is even <a href="https://blog.system76.com/post/612874398967513088/making-a-keyboard-the-system76-approach">making a keyboard</a> designed slightly differently to augment the tiling manager use with its <a href="https://www.youtube.com/watch?v=aqj0cRTZaVE">keyboard shortcuts</a>. I&#39;m absolutely sold.</p>

<p>If you are still deciding whether you want to run a tiling manager or not, stop what you are doing. Install Pop!_OS. You&#39;ll thank me later.</p>

<p><a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:Day7" class="hashtag"><span>#</span><span class="p-category">Day7</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/tiling-managers-are-life</guid>
      <pubDate>Sat, 02 May 2020 03:37:29 +0000</pubDate>
    </item>
    <item>
      <title>Take Your Rest And Sleep</title>
      <link>https://baez.link/take-your-rest-and-sleep?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[One of the most essential parts of your day is the time you spend sleeping.&#xA;&#xA;!--more--&#xA;&#xA;On average, humans spend about 26 years sleeping. However, that&#39;s only the average. Not necessarily the norm. In fields with high stress work loads, sleep deprivation tends to escalate, along with degradation of time and focus. Sometimes these environments&#39; workload may put you, in a position to remove your moments of rest, to finish work set out for the day. You not only are sacrificing the circadian rhythm for sleep, you end up becoming void of your memory of that day. &#xA;&#xA;It turns out sleep plays a forever growing role in the ability to record memory and learn. When you remove your time of rest, you end up removing your ability to learn and most importantly remember.&#xA;&#xA;From an anecdotal perspective, I have been able to adapt and learn more with succumbing to sleep than when I&#39;ve stripped away. The realization of its importance has made me appreciate the necessity of the hours I lay on my bed.&#xA;&#xA;The interesting part of sleep is that there are multiple different patterns towards getting a good rest in the night. It&#39;s not all just one block of eight hours in your bed. Learning above the different types, I have become adapted to the biphasal segmented sleep pattern. Essentially the idea is you sleep in two 3.5 hour cycles. For each cycle, you wake up and do something of light brain activity. So when you enter the next core sleep cycle, you end allowing yourself to have four cycles total of uninterrupted REM sleep for the night. In doing so, waking up gives sort of jolt of energy for the day ahead. &#xA;&#xA;Biphasal segmented sleep has helped me immensely absorb, learn, and recover from the activity I&#39;ve had in the day past. Yet, that&#39;s not to say the pattern is for everyone. Others may end having better rest with single blocks of sleep. 
Some  may even require to have a siesta in their day to truly feel energized.  &#xA;&#xA;Whatever is the pattern for your sleep, it is what helps you actually achieve the goals you set out to do. Not the overwork or deprivation of bodily necessities. &#xA;&#xA;So take the rest as important as you should, and sleep. &#xA;&#xA;[1]: https://www.reference.com/science/many-years-life-spend-sleeping-8f04fc7719fa8eb3&#xA;[2]: https://en.wikipedia.org/wiki/Circadianrhythm&#xA;[3]: https://en.wikipedia.org/wiki/Sleepandmemory&#xA;[4]: https://en.wikipedia.org/wiki/Biphasicandpolyphasicsleep#SegmentedSchedule&#xA;[5]: https://en.wikipedia.org/wiki/Slow-wavesleep&#xA;[6]: https://en.wikipedia.org/wiki/Rapideyemovement_sleep&#xA;[7]: https://en.wikipedia.org/wiki/Siesta&#xA;&#xA;#100DaysToOffload #Day6]]&gt;</description>
      <content:encoded><![CDATA[<p>One of the most essential parts of your day is the time you spend sleeping.</p>



<p>On average, humans spend about <a href="https://www.reference.com/science/many-years-life-spend-sleeping-8f04fc7719fa8eb3">26 years sleeping</a>. However, that&#39;s only the average, not necessarily the norm. In fields with high-stress workloads, sleep deprivation tends to escalate, along with degradation of time and focus. Sometimes these environments&#39; workloads may put you in a position to give up your moments of rest to finish the work set out for the day. Not only are you sacrificing the <a href="https://en.wikipedia.org/wiki/Circadian_rhythm">circadian rhythm</a> of sleep, you end up becoming void of your memory of that day.</p>

<p>It turns out sleep plays an ever-growing role in the <a href="https://en.wikipedia.org/wiki/Sleep_and_memory">ability to record memory and learn</a>. When you remove your time of rest, you remove your ability to learn and, most importantly, to remember.</p>

<p>From an anecdotal perspective, I have been able to adapt and learn more when giving in to sleep than when I&#39;ve stripped it away. Realizing its importance has made me appreciate the necessity of the hours I lay in my bed.</p>

<p>The interesting part of sleep is that there are multiple different patterns for getting a good night&#39;s rest. It&#39;s not all just one block of eight hours in your bed. Learning about the different types, I have adapted to the <a href="https://en.wikipedia.org/wiki/Biphasic_and_polyphasic_sleep#Segmented_Schedule">biphasic segmented sleep pattern</a>. Essentially, the idea is that you sleep in two 3.5 hour cycles. Between the cycles, you wake up and do something of light brain activity. So when you enter the next core sleep cycle, you end up allowing yourself four total cycles of uninterrupted <a href="https://en.wikipedia.org/wiki/Rapid_eye_movement_sleep">REM sleep</a> for the night. In doing so, waking up gives a sort of jolt of energy for the day ahead.</p>

<p>Biphasic segmented sleep has helped me immensely to absorb, learn, and recover from the activity of the day past. Yet, that&#39;s not to say the pattern is for everyone. Others may end up having better rest with a single block of sleep. Some may even require a <a href="https://en.wikipedia.org/wiki/Siesta">siesta</a> in their day to truly feel energized.</p>

<p>Whatever the pattern for your sleep is, it is what helps you actually achieve the goals you set out to do. Not overwork or the deprivation of bodily necessities.</p>

<p>So take the rest as important as you should, and sleep.</p>

<p><a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:Day6" class="hashtag"><span>#</span><span class="p-category">Day6</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/take-your-rest-and-sleep</guid>
      <pubDate>Fri, 01 May 2020 03:30:41 +0000</pubDate>
    </item>
    <item>
      <title>Design To Fail</title>
      <link>https://baez.link/design-to-fail?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Never build your software in the belief it will run correctly. It won&#39;t.&#xA;&#xA;!--more--&#xA;&#xA;Modern day software design always tries to make the software we run close to stable as possible. Deadlines and limitations the teams will always have make it next to impossible to ship something bug free. Most likely than not, a 1.0.0 release will be a 1.0.1 in the same week or even day the version is available. The infrastructure that runs said software is no different. &#xA;&#xA;In this post of #StrikingABalance, I&#39;ll be focusing on the work that goes into the infrastructure design and what can be done to make it more manageable. Infrastructure design should always be thought of as something you want to let crash. The whole idea of treat your services as as cattle rather than pets is with this precise notion. Letting things die, but having automation to bring it back alive. &#xA;&#xA;Designing to fail means designing your software and infrastructure to gracefully manage itself. Ever had the moment that you get a call at 2:00 AM,  because something is on fire on production? If you design your system to fail, that call would come by with the commonality of lightning striking twice.   &#xA;&#xA;So how do you go about in designing to fail? You could go with the route of always adding tests. Similar to unit and integration tests in software design, you add unit and integration tests to your infrastructure design. But the test then need to record what goes wrong. A way track events of your infrastructure&#39;s failures. Those tracked events would then trigger some automation tooling, you most likely have made, to do the response you want it to do for your infrastructure. &#xA;&#xA;At the same time, due to the almost certain countless of moving parts in your infrastructure, you probably cannot test the whole platform in a silo. It may simply take too long, cost too much, or just not be feasible with what you have available. 
Meaning, you need to learn how to let infrastructure fail in production. You end up having load balancers, canary deployments, roll back releases, injected load tests, structured automated service rerouting, event triggered automation processing, database vertical scaling, horizontal service scaling, and countless other practices and paradigms. &#xA;&#xA;No right way for how to design to fail exists. What you need above all else is to be comfortable with letting the infrastructure crash. Have a way to learn from those crashes. Then, make practices that can catch these crashes and recover before you have to be involved. So when those bugs creep up, you don&#39;t need to be ready, because your infrastructure is built to fail.     &#xA;&#xA;#100DaysToOffload #StrikingABalance #Day5]]&gt;</description>
      <content:encoded><![CDATA[<p>Never build your software in the belief it will run correctly. It won&#39;t.</p>



<p>Modern-day software design always tries to make the software we run as close to stable as possible. The deadlines and limitations teams will always have make it next to impossible to ship something bug-free. More likely than not, a <code>1.0.0</code> release will become a <code>1.0.1</code> in the same week, or even the same day, the version is available. The infrastructure that runs said software is no different.</p>

<p>In this post of <a href="https://baez.link/tag:StrikingABalance" class="hashtag"><span>#</span><span class="p-category">StrikingABalance</span></a>, I&#39;ll be focusing on the work that goes into infrastructure design and what can be done to make it more manageable. Infrastructure should always be thought of as something you want to let crash. The whole idea of treating your services as cattle rather than pets comes from this precise notion: letting things die, but having automation to bring them back alive.</p>
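<p>One small, concrete form of that automation is a process supervisor. For instance, a systemd unit like the sketch below (the service name and binary path are hypothetical) resurrects its process whenever it dies:</p>

<pre><code>[Unit]
Description=A cattle-style service that is allowed to crash

[Service]
ExecStart=/usr/local/bin/example-service
Restart=on-failure
RestartSec=5
</code></pre>

<p>With <code>Restart=on-failure</code>, systemd brings the process back five seconds after any failed exit, with no human paged.</p>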

<p>Designing to fail means designing your software and infrastructure to gracefully manage themselves. Ever had the moment where you get a call at 2:00 AM because something is on fire in production? If you design your system to fail, that call would come about as often as lightning striking twice.</p>

<p>So how do you go about designing to fail? You could go the route of always adding tests. Similar to unit and integration tests in software design, you add unit and integration tests to your infrastructure design. But the tests then need to record what goes wrong: a way to track your infrastructure&#39;s failure events. Those tracked events would then trigger some automation tooling, most likely of your own making, to perform the response you want for your infrastructure.</p>
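<p>To make that concrete, here is a minimal shell sketch (every name in it is hypothetical) of the record-then-react loop: run a probe, record the failure event, then kick off a recovery action:</p>

<pre><code># check a service with an arbitrary probe command; on failure, record
# the event and run a recovery step (a stand-in for a real restart)
check() {
    service="$1"; shift
    if ! "$@" >/dev/null 2>&1; then
        echo "$(date -u) $service unhealthy" >> health.log  # the tracked event
        echo "recovering $service"                          # the automated response
    fi
}

# e.g.: check web curl --fail --silent http://localhost:8080/healthz
</code></pre>

<p>In a real setup, the logged events would feed your monitoring, and the recovery step would be your own automation tooling rather than an <code>echo</code>.</p>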

<p>At the same time, due to the almost certainly countless moving parts in your infrastructure, you probably cannot test the whole platform in a silo. It may simply take too long, cost too much, or just not be feasible with what you have available. Meaning, you need to learn how to let infrastructure fail in production. You end up having load balancers, canary deployments, release rollbacks, injected load tests, structured automated service rerouting, event-triggered automation processing, database vertical scaling, horizontal service scaling, and countless other practices and paradigms.</p>

<p>No single right way to design to fail exists. What you need above all else is to be comfortable with letting the infrastructure crash. Have a way to learn from those crashes. Then, build practices that can catch these crashes and recover before you have to be involved. So when those bugs creep up, you don&#39;t need to be ready, because your infrastructure is built to fail.</p>

<p><a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:StrikingABalance" class="hashtag"><span>#</span><span class="p-category">StrikingABalance</span></a> <a href="https://baez.link/tag:Day5" class="hashtag"><span>#</span><span class="p-category">Day5</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/design-to-fail</guid>
      <pubDate>Thu, 30 Apr 2020 02:43:12 +0000</pubDate>
    </item>
  </channel>
</rss>