<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>A Bit</title>
    <link>https://baez.link/</link>
    <description>A little bit of writing by Alejandro </description>
    <pubDate>Sun, 05 Apr 2026 20:35:33 +0000</pubDate>
    <item>
      <title>My Default Apps at the End of 2024</title>
      <link>https://baez.link/my-default-apps-at-the-end-of-2024?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Here&#39;s a list of what apps I used as my defaults on 2024.&#xA;&#xA;!--more--&#xA;&#xA;Both Kev and Mike posted recently about their app defaults for the year. And it got me thinking. I want do that too! Here&#39;s a little of dust cleaning on this blog. Sharing the apps I been using this year.&#xA;&#xA;✉️ Mail service: Fastmail&#xA;📫️ Mail Client: Fastmail and Thunderbird&#xA;📝 Notes: Logseq&#xA;✅ Todo: Todoist and LogSeq&#xA;🗓️ Calendar App: Fastmail&#xA;📅 Calendar Organizer: Morgen&#xA;👤 Contacts: Fastmail&#xA;📖 RSS reader: Readwise &#xA;☁️ Cloud Storage: Syncthing and DigitalOcean spaces &#xA;🖼️ Photo Library: Flickr&#xA;🌐 Web Browser: Firefox&#xA;🗨️ Chat: Delta.chat and Line&#xA;🔖 Bookmarks: Firefox&#xA;📖 Read Later: Readwise&#xA;📕 Reading: Readwise&#xA;📜 Word Processor: LaTex&#xA;📈 Spreadsheets: Airtable&#xA;🛒 Shopping Lists: Todoist&#xA;💰️ Personal Finance: YNAB&#xA;🎵 Music: Amazon Music&#xA;🎤 Podcast: Pocket Casts&#xA;🔐 Password Manager: 1password&#xA;🧑‍🚒 Social Media: Fediverse&#xA;🔎 Search: DuckDuckGo&#xA;🤖 AI search: Kagi&#xA;⌨️ Code Editor: Helix&#xA;&#xA;[1]: https://fosstodon.org/@kev/113742766808830378&#xA;[2]: https://fosstodon.org/@mike/113743488196308966&#xA;[3]: https://www.fastmail.com/&#xA;[4]: https://www.thunderbird.net/en-US/&#xA;[5]: https://logseq.com/&#xA;[6]: https://todoist.com&#xA;[7]: https://www.morgen.so/&#xA;[8]: https://read.readwise.io/&#xA;[9]: https://syncthing.net/&#xA;[10]: https://www.digitalocean.com/products/spaces&#xA;[12]: https://flickr.com/&#xA;[13]: http://firefox.com/&#xA;[14]: https://delta.chat&#xA;[15]: https://www.line.me&#xA;[16]: https://www.latex-project.org/get/&#xA;[17]: https://www.airtable.com/&#xA;[18]: https://www.ynab.com/&#xA;[19]: https://www.amazon.com/music&#xA;[20]: https://pocketcasts.com/&#xA;[21]: https://1password.com&#xA;[22]: https://fedidb.org/software&#xA;[23]: https://duckduckgo.com/&#xA;[24]: https://kagi.com/&#xA;[25]: 
https://helix-editor.com/&#xA;]]&gt;</description>
<content:encoded><![CDATA[<p>Here&#39;s a list of the apps I used as my defaults in 2024.</p>



<p>Both <a href="https://fosstodon.org/@kev/113742766808830378">Kev</a> and <a href="https://fosstodon.org/@mike/113743488196308966">Mike</a> recently posted about their app defaults for the year. And it got me thinking: I want to do that too! So here&#39;s a little dusting off of this blog, sharing the apps I&#39;ve been using this year.</p>
<ul><li>✉️ Mail service: <a href="https://www.fastmail.com/">Fastmail</a></li>
<li>📫️ Mail Client: <a href="https://www.fastmail.com/">Fastmail</a> and <a href="https://www.thunderbird.net/en-US/">Thunderbird</a></li>
<li>📝 Notes: <a href="https://logseq.com/">Logseq</a></li>
<li>✅ Todo: <a href="https://todoist.com">Todoist</a> and <a href="https://logseq.com/">Logseq</a></li>
<li>🗓️ Calendar App: <a href="https://www.fastmail.com/">Fastmail</a></li>
<li>📅 Calendar Organizer: <a href="https://www.morgen.so/">Morgen</a></li>
<li>👤 Contacts: <a href="https://www.fastmail.com/">Fastmail</a></li>
<li>📖 RSS reader: <a href="https://read.readwise.io/">Readwise</a></li>
<li>☁️ Cloud Storage: <a href="https://syncthing.net/">Syncthing</a> and <a href="https://www.digitalocean.com/products/spaces">DigitalOcean spaces</a></li>
<li>🖼️ Photo Library: <a href="https://flickr.com/">Flickr</a></li>
<li>🌐 Web Browser: <a href="http://firefox.com/">Firefox</a></li>
<li>🗨️ Chat: <a href="https://delta.chat">Delta.chat</a> and <a href="https://www.line.me">Line</a></li>
<li>🔖 Bookmarks: <a href="http://firefox.com/">Firefox</a></li>
<li>📖 Read Later: <a href="https://read.readwise.io/">Readwise</a></li>
<li>📕 Reading: <a href="https://read.readwise.io/">Readwise</a></li>
<li>📜 Word Processor: <a href="https://www.latex-project.org/get/">LaTeX</a></li>
<li>📈 Spreadsheets: <a href="https://www.airtable.com/">Airtable</a></li>
<li>🛒 Shopping Lists: <a href="https://todoist.com">Todoist</a></li>
<li>💰️ Personal Finance: <a href="https://www.ynab.com/">YNAB</a></li>
<li>🎵 Music: <a href="https://www.amazon.com/music">Amazon Music</a></li>
<li>🎤 Podcast: <a href="https://pocketcasts.com/">Pocket Casts</a></li>
<li>🔐 Password Manager: <a href="https://1password.com">1Password</a></li>
<li>🧑‍🚒 Social Media: <a href="https://fedidb.org/software">Fediverse</a></li>
<li>🔎 Search: <a href="https://duckduckgo.com/">DuckDuckGo</a></li>
<li>🤖 AI search: <a href="https://kagi.com/">Kagi</a></li>
<li>⌨️ Code Editor: <a href="https://helix-editor.com/">Helix</a></li></ul>
]]></content:encoded>
      <guid>https://baez.link/my-default-apps-at-the-end-of-2024</guid>
      <pubDate>Tue, 31 Dec 2024 02:39:46 +0000</pubDate>
    </item>
    <item>
      <title>I&#39;m Choosing Ubuntu Core, Let Me Explain</title>
      <link>https://baez.link/im-choosing-ubuntu-core-let-me-explain?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[What benefits do I get with 4-5 different distributions, being one human managing and orchestrating it all? Turns out, no benefit. Just pain.&#xD;&#xA;&#xD;&#xA;Ever since the whole re-licensing from Hashicorp, I been looking at what to self-host. I started with rethinking what&#39;s the OS I&#39;m using. And why. &#xD;&#xA;&#xD;&#xA;So after thinking about the choices, I finally settled on a victor; Ubuntu Core.&#xD;&#xA;&#xD;&#xA;!--more--&#xD;&#xA;&#xD;&#xA;ubuntu core choice&#xD;&#xA;&#xD;&#xA;Full Disclaimer, the choice is solely my own. It does not mean all of the many many many options out there are not viable. Far from it. I wrote a post debating the options for &#39;just&#39; three and couldn&#39;t decide then. This OS simply seems to be working for me. &#xD;&#xA;&#xD;&#xA;Canonical&#39;s Ubuntu Core became preferred for a primary reason. I don&#39;t want to care of the host OS. Long are the days I have time to customize everything. Creating custom kernels to get higher clock speeds. Now? I barely get to modify my editor. &#xD;&#xA;&#xD;&#xA;From past usage, Ubuntu Core stayed out of my way. Basically, it was there for running LXD system containers of Hashicorp&#39;s Nomad, Consul, and Vault. All LXD containers. Easily built using Hashicorp&#39;s Packer. &#xD;&#xA;&#xD;&#xA;My use case spent little on Ubuntu Core itself. And that&#39;s precisely the point. I noticed I almost never thought of the OS I was running. I did think of it every once in while. But anything configuration related was of how I could strip LXD completely. Building snaps instead for all the Hashicorp tools. Yet the setup with LXD worked too darn good, to fix what wasn&#39;t broken.&#xD;&#xA;&#xD;&#xA;I spent most of my time focusing not on the host OS. Not on the LXD cluster. Nope, not even &#34;that much&#34; on Hashicorp services and tools. &#xD;&#xA;&#xD;&#xA;My time was on applications. I was not worried about what Ubuntu Core did. Including updates. 
Everything actually worked automatically. Fully hands off when it came to the host OS. It was quite brilliant. &#xD;&#xA;&#xD;&#xA;That same was not so for regular distributions. And surprisingly, not true with other immutable OS offerings. I spent a lot of time worrying about the host OS then. How to configure my startup settings to get things absurdly working. What to do about updates. Then, how to handle changes of updates when things broke. More time was spent provisioning and managing the host OS than on what was running the applications. Let alone the applications themselves. &#xD;&#xA;&#xD;&#xA;I&#39;ll try Ubuntu Core for a few. If it ends up working out like I think it will, then great. If not, I can always wave back at the others. &#xD;&#xA;&#xD;&#xA;#linux #UbuntuCore #distro&#xD;&#xA;&#xD;&#xA;&#xD;&#xA;[1]: https://ubuntu.com/core&#xD;&#xA;[2]: https://nixos.org/&#xD;&#xA;[3]: https://www.talos.dev/&#xD;&#xA;[4]: https://baez.link/what-immutable-linux-to-use&#xD;&#xA;[5]: https://i.snap.as/w05hZ601.jpg&#xD;&#xA;[6]: https://ubuntu.com/lxd&#xD;&#xA;[7]: https://www.nomadproject.io/&#xD;&#xA;[8]: https://www.consul.io/&#xD;&#xA;[9]: https://www.vaultproject.io/&#xD;&#xA;[10]: https://www.packer.io/&#xD;&#xA;[11]: https://www.talos.dev/&#xD;&#xA;[12]: https://kubernetes.io/&#xD;&#xA;[13]: https://snapcraft.io/&#xD;&#xA;[14]: https://i.snap.as/z5NaZGQh.jpeg&#xD;&#xA;[15]: https://landscape.cncf.io/&#xD;&#xA;[16]: https://developer.hashicorp.com/nomad/docs/drivers&#xD;&#xA;[17]: https://nixos.org/manual/nixpkgs/stable/#sec-pkgs-snapTools&#xD;&#xA;[18]: https://zero-to-nix.com/&#xD;&#xA;[19]: https://github.com/canonical/microcloud&#xD;&#xA;[20]: https://snapcraft.io/microceph&#xD;&#xA;[21]: https://canonical-microovn.readthedocs-hosted.com/en/latest/&#xD;&#xA;[22]: https://snapcraft.io/docs/the-snap-format&#xD;&#xA;[23]: https://developer.hashicorp.com/nomad/docs/drivers/exec&#xD;&#xA;[24]: 
https://developer.hashicorp.com/nomad/docs/drivers/raw_exec&#xD;&#xA;]]&gt;</description>
<content:encoded><![CDATA[<p>What benefits do I get from 4-5 different distributions when I&#39;m one human managing and orchestrating it all? Turns out, no benefit. Just pain.</p>

<p>Ever since the whole re-licensing from Hashicorp, I&#39;ve been looking at what to self-host. I started by rethinking which OS I&#39;m using. And why.</p>

<p>So after thinking about the choices, I finally settled on a victor: Ubuntu Core.</p>



<p><img src="https://i.snap.as/w05hZ601.jpg" alt="ubuntu core choice"/></p>

<p>Full disclaimer: the choice is solely my own. It does not mean the many, many, many options out there are not viable. Far from it. I wrote a <a href="https://baez.link/what-immutable-linux-to-use">post debating the options</a> for &#39;just&#39; three and couldn&#39;t decide then. This OS simply seems to be working for me.</p>

<p>Canonical&#39;s <a href="https://ubuntu.com/core">Ubuntu Core</a> became my preference for one primary reason: I don&#39;t want to care about the host OS. Long gone are the days when I had time to customize everything. Building custom kernels to chase higher clock speeds. Now? I barely get to modify my editor.</p>

<p>From past usage, Ubuntu Core stayed out of my way. Basically, it was there for running <a href="https://ubuntu.com/lxd">LXD system containers</a> of <a href="https://www.nomadproject.io/">Hashicorp&#39;s Nomad</a>, <a href="https://www.consul.io/">Consul</a>, and <a href="https://www.vaultproject.io/">Vault</a>. All LXD containers. Easily built using <a href="https://www.packer.io/">Hashicorp&#39;s Packer</a>.</p>

<p>My use case involved little of Ubuntu Core itself. And that&#39;s precisely the point. I noticed I almost never thought about the OS I was running. I did think of it every once in a while. But anything configuration-related was about how I could strip out LXD completely, building <a href="https://snapcraft.io/">snaps</a> instead for all the Hashicorp tools. Yet the setup with LXD worked too darn well to fix what wasn&#39;t broken.</p>

<p>I spent most of my time focusing <em>not</em> on the host OS. Not on the LXD cluster. Nope, not even “that much” on Hashicorp services and tools.</p>

<p>My time was on applications. I was not worried about what Ubuntu Core did. Including updates. Everything actually worked automatically. Fully hands-off when it came to the host OS. It was quite brilliant.</p>

<p>The same was not true of regular distributions. And surprisingly, not of other immutable OS offerings either. There I spent a lot of time worrying about the host OS. How to configure my startup settings just to get things working. What to do about updates. Then, how to handle the changes updates brought when things broke. More time was spent provisioning and managing the host OS than on what was running the applications. Let alone the applications themselves.</p>

<p>I&#39;ll try Ubuntu Core for a while. If it ends up working out like I think it will, then great. If not, I can always wave back at the others.</p>

<p><a href="https://baez.link/tag:linux" class="hashtag"><span>#</span><span class="p-category">linux</span></a> <a href="https://baez.link/tag:UbuntuCore" class="hashtag"><span>#</span><span class="p-category">UbuntuCore</span></a> <a href="https://baez.link/tag:distro" class="hashtag"><span>#</span><span class="p-category">distro</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/im-choosing-ubuntu-core-let-me-explain</guid>
      <pubDate>Mon, 25 Sep 2023 16:11:49 +0000</pubDate>
    </item>
    <item>
      <title>What Immutable Linux To Use?</title>
      <link>https://baez.link/what-immutable-linux-to-use?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[In more recent years, Linux distributions have become quite interesting. The hypothesis of  immutable Linux have gone from pure thought, to full throttled theory. There exists a plethora of options out in the wild. All from different companies, distributions, and communities.&#xD;&#xA;&#xD;&#xA;Now while many options exists, for me, I been debating on three. Dive so deep I hit bedrock with Nix and NixOS. Accept Kubernetes as the one true OS through Talos. Or drink the orange glowing Kool-Aid of snaps in Ubuntu Core. Bare with me. There&#39;s logic in these. &#xD;&#xA;&#xD;&#xA;!--more--&#xD;&#xA;&#xD;&#xA;The choices &#xD;&#xA;&#xD;&#xA;immutable&#xD;&#xA;&#xD;&#xA;I been debating the choice on what to use as my de-facto immutable OS for a while now. &#xD;&#xA;&#xD;&#xA;iframe src=&#34;https://fosstodon.org/@zeab/109773302609111324/embed&#34; class=&#34;mastodon-embed&#34; style=&#34;max-width: 100%; border: 0&#34; width=&#34;400&#34; allowfullscreen=&#34;allowfullscreen&#34;/iframescript src=&#34;https://fosstodon.org/embed.js&#34; async=&#34;async&#34;/script&#xD;&#xA;&#xD;&#xA;The three I settled (for now) are ones I think make the most sense. To set and forget. Distributions where I can realistically automate the entire thing. Without much fear of the ground falling apart. Because unlike normal Linux distributions, immutable OS versions are the wild west of Linux distros. All choose radically different ways of achieving same goals. Choice is great for ecosystem growth. Terrible for a foundation. &#xD;&#xA;&#xD;&#xA;Ubuntu Core  &#xD;&#xA;&#xD;&#xA;is this linux&#xD;&#xA;&#xD;&#xA;The one I&#39;m sure gets most debate is Ubuntu Core. Even though it&#39;s the most systematically grounded version of an immutable OS. The design is fairly simple. Use snaps. But the genius of this is how Canonical made every part of the OS into an isolated snap. Snaps you can update and change without impacting the &#39;core&#39; of the OS. 
&#xD;&#xA;&#xD;&#xA;You can update the entire OS with changing the snap version of the core you are based from. So jumping from Ubuntu core 20 to Ubuntu core 22 was doing something like this. And as Canonical has matured with how Ubuntu core operates, they have also made different layers to this design. Using the atomic principal. &#xD;&#xA;&#xD;&#xA;So now there are snaps for the linux kernel. And configuration based ones called gadget snaps. Responsible for handling bootstrapping on a specific hardware. All following the same everything is a snap.&#xD;&#xA;&#xD;&#xA;You can still run any application you want. The difference here is you have to package them differently. Or at least differently from the norm.&#xD;&#xA;&#xD;&#xA;The con with Ubuntu core is that you have to accept their way of packaging software with snaps. There is no alternative. You must have Canonical as your overseer here. The pro is everything stated before benefits your ability to have a stable immutable system. It should not be understated. The ease that comes with having maintainability so low, with ubuntu core, is pretty remarkable. &#xD;&#xA;&#xD;&#xA;Talos&#xD;&#xA;&#xD;&#xA;one does not simply use kubernetes&#xD;&#xA;&#xD;&#xA;Talos differs in a very simple way. The ENTIRE operating system. It is Linux technically because Talos uses the Linux kernel. It immediately deviates from there. You don&#39;t use an OS with Talos. What you use is Kubernetes. &#xD;&#xA;&#xD;&#xA;Kubernetes is infamous at being complex. I even wrote about my annoyance years ago. After years, that article is still true for self hosted Kubernetes cluster. So is Kubernetes worth self hosting? I can wholeheartedly say NO. But with an exception. The exception here is if you are using Talos.&#xD;&#xA;&#xD;&#xA;You see, Talos strips away practically everything of the OS. But it does it in a way that makes a lot of sense. You don&#39;t use ssh, because there is no shell to ssh to! 
Instead you use their gRPC API component; apid. Interfacing with the operating system. Talos has no systemd. Instead, replaced with their own PID 1. An init system called machined. With the whole purpose running what kubernetes needs and the gRPC interface to define the OS. &#xD;&#xA;&#xD;&#xA;In practice, Talos is actually dead simple to use and administer. It makes using and maintaining kubernetes strikingly easy. New version release? Run talosctl to upgrade: &#xD;&#xA;&#xD;&#xA;talosctl upgrade-k8s --to 1.28.0&#xD;&#xA;&#xD;&#xA;Same is true for updating Talos itself. Because the OS is atomic, there is very little thought process required to handling failures. You rollback like nothing happened. Here, you simply use kubernetes. &#xD;&#xA;&#xD;&#xA;The con with talos isn&#39;t actually the use of talos. It&#39;s the principal of only using kubernetes for everything. If you not comfortable with a job orchestration like it, DO NOT USE. If you somehow like ssh or want to install other things directly on the OS, DO NOT USE. If you don&#39;t want to do literally everything as code, definitely DO NOT USE. &#xD;&#xA;&#xD;&#xA;But if you do value what Talos offers, it&#39;s immensely difficult not to choose it. So many problems are simply non-existent on Talos. The OS makes you question why even bother with the old ways.&#xD;&#xA;&#xD;&#xA;NixOS&#xD;&#xA;&#xD;&#xA;NixOS is the messiah&#xD;&#xA;&#xD;&#xA;The prophecy is written. NixOS is the answer to our Linux administration ways. It will solve all our problems with packaging software. We will know of history as before and after NixOS. And quite frankly, it really does feel this way. &#xD;&#xA;&#xD;&#xA;NixOS shines in the same ways the others in this list shine. It rethinks what a Linux is and could be. If you make absolutely everything atomic, to the core of how you package and run software, then do you even need to care if your OS breaks? NixOS potency is the nix programming language. 
Not to be confused with the nix as a package manager. Or the OS who is also called nix. &#xD;&#xA;&#xD;&#xA;Unlike the other options on this list, the con with NixOS is quite immediately apparent. It&#39;s the difficulty you first have learning the ways of then learning the ways of using nix. Documentation is quite difficult to come by for nix. Much of your time will be left questioning how anything even functions. There&#39;s also then the full upgrade to nix design called nix flakes. All enough to really put incredible friction. Friction on nix usage and nixos adoption. &#xD;&#xA;&#xD;&#xA;However, the moment you passed the hurdles of learning nix, nothing comes even close to its versatility. I personally have migrated most of my software to run with nix. Or at the very least, build with nix. For work, I have development environments strictly nix based. And the list goes on. &#xD;&#xA;&#xD;&#xA;With NixOS, it&#39;s the same principal. You write your closure and you can be assured it will work. No matter what mess you do to the machine. You can roll back like nothing ever occurred. There&#39;s nothing really like Nix and NixOS. The principal is that you handle the hurdle of defining how your software is built. From then on, it will just run. &#xD;&#xA;&#xD;&#xA;No more conflicts with versions of python. No issues with running two independently different versions of the same software. Because of the closure design, there&#39;s no need to containerize your applications. They &#39;just&#39; work. With zero conflict running on the same host. Optimally, NixOS gets you the closest to the promise of Gentoo, but entirely atomic and immutable. &#xD;&#xA;&#xD;&#xA;So what to choose?&#xD;&#xA;&#xD;&#xA;thinking&#xD;&#xA;&#xD;&#xA;I don&#39;t know yet. The reality is, all three options serve very similar ideas of running an immutable OS. They simply attack the problems differently. Talos packages software in kubernetes manifest essentially. Ubuntu Core is snaps. 
And NixOS is nix closures. All with different tradeoffs that are far too long to add to this never ending post. &#xD;&#xA;&#xD;&#xA;For security reasons, I would probably go for Talos. Because of the stripped purpose of the OS, there&#39;s a smaller footprint to a security issue. Yes. Even with Kubernetes. &#xD;&#xA;&#xD;&#xA;For maintainability, I think Ubuntu Core is prime. Canonical has been doing Linux distributions for decades now. They know what it means to make something function. Every Ubuntu core release has a maintenance window of up to ten years. Meaning, if I want to just run my thing, with no fuss, this will be it.&#xD;&#xA;&#xD;&#xA;For customization, nothing gets even close to what NixOS promises and delivers. I would be able to take all the Nix flakes I been writing for myself and run straight on NixOS. True &#34;it works on my machine&#34; on all machines.   &#xD;&#xA;&#xD;&#xA;[1]: https://i.snap.as/zZjSmqQW.jpg&#xD;&#xA;[2]: https://i.snap.as/fHhCjwvb.jpg&#xD;&#xA;[3]: https://fosstodon.org/@zeab/109773302609111324&#xD;&#xA;[4]: https://ubuntu.com/core&#xD;&#xA;[5]: https://snapcraft.io/docs/snapcraft&#xD;&#xA;[6]: https://ubuntu.com/core/docs/kernel-building&#xD;&#xA;[7]: https://ubuntu.com/core/docs/gadget-snaps&#xD;&#xA;[8]: https://snapcraft.io/docs&#xD;&#xA;[9]: https://i.snap.as/ihtrKyXX.jpg&#xD;&#xA;[10]: https://www.talos.dev/&#xD;&#xA;[11]: https://kubernetes.io/&#xD;&#xA;[12]: https://baez.link/the-a-z-stack&#xD;&#xA;[13]: https://www.talos.dev/v1.5/learn-more/components/#machined&#xD;&#xA;[14]: https://www.talos.dev/v1.5/learn-more/components/#apid&#xD;&#xA;[15]: https://www.talos.dev/v1.5/kubernetes-guides/upgrading-kubernetes/&#xD;&#xA;[16]: https://i.snap.as/Ps69vAxn.jpg&#xD;&#xA;[17]: https://nixos.org/&#xD;&#xA;[18]: https://zero-to-nix.com/&#xD;&#xA;[19]: https://nixos.org/manual/nix/stable/&#xD;&#xA;[20]: https://search.nixos.org/packages&#xD;&#xA;[21]: https://linuxunplugged.com/524&#xD;&#xA;[22]: 
https://zero-to-nix.com/concepts/closures&#xD;&#xA;[23]: https://www.gentoo.org/get-started/about/&#xD;&#xA;[24]: https://stackoverflow.com/questions/55130795/what-is-a-kubernetes-manifest&#xD;&#xA;[25]: https://ubuntu.com/about/release-cycle&#xD;&#xA;[26]: https://i.snap.as/dx74mIQc.jpg&#xD;&#xA;&#xD;&#xA;#linux #immutableos #ubuntucore #talos #nixos #atomic #ubuntu #kubernetes&#xD;&#xA;]]&gt;</description>
<content:encoded><![CDATA[<p>In recent years, Linux distributions have become quite interesting. The hypothesis of immutable Linux has gone from pure thought to full-throttle theory. There exists a plethora of options out in the wild. All from different companies, distributions, and communities.</p>

<p>Now, while many options exist, I&#39;ve been debating three. Dive so deep I hit bedrock with Nix and NixOS. Accept Kubernetes as the one true OS through Talos. Or drink the orange glowing Kool-Aid of snaps in Ubuntu Core. Bear with me. There&#39;s logic in these.</p>



<h1 id="the-choices">The choices</h1>

<p><img src="https://i.snap.as/fHhCjwvb.jpg" alt="immutable"/></p>

<p>I&#39;ve been debating the choice of my de-facto immutable OS for a while now.</p>

<p><iframe src="https://fosstodon.org/@zeab/109773302609111324/embed" class="mastodon-embed" style="max-width: 100%; border: 0" width="400" allowfullscreen="allowfullscreen"></iframe><script src="https://fosstodon.org/embed.js" async="async"></script></p>

<p>The three I settled on (for now) are the ones I think make the most sense. To set and forget. Distributions where I can realistically automate the entire thing. Without <em>much</em> fear of the ground falling apart. Because unlike normal Linux distributions, immutable OS versions are the wild west of Linux distros. All choose radically different ways of achieving the same goals. Choice is great for ecosystem growth. Terrible for a foundation.</p>

<h2 id="ubuntu-core">Ubuntu Core</h2>

<p><img src="https://i.snap.as/zZjSmqQW.jpg" alt="is this linux"/></p>

<p>The one I&#39;m sure gets the most debate is <a href="https://ubuntu.com/core">Ubuntu Core</a>. Even though it&#39;s the most systematically grounded version of an immutable OS. The design is fairly simple. Use <a href="https://snapcraft.io/docs">snaps</a>. But the genius of this is how Canonical made every part of the OS into an isolated snap. Snaps you can update and change without impacting the &#39;core&#39; of the OS.</p>

<p>You can update the entire OS by changing the snap version of the core you are based on. So jumping from Ubuntu Core 20 to Ubuntu Core 22 is little more than swapping out that base. And as Canonical has matured in how Ubuntu Core operates, they have also added different layers to this design. Using the atomic principle.</p>
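
<p>As a hedged sketch of what that jump can look like: on Ubuntu Core, a cross-release jump is a &#39;remodel&#39;, driven by handing the device a new signed model assertion. The assertion filename below is hypothetical, not from a real device:</p>

<pre><code class="language-bash"># Sketch only: moving a device to a new base (e.g. core20 -> core22)
# means remodeling it with a new signed model assertion.
snap remodel my-core22-model.assert   # hypothetical assertion file
snap changes                          # watch the transition progress
</code></pre>

<p>The neat part is that the base itself is just another snap, so the same refresh-and-rollback machinery applies to it as to any application.</p>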

<p>So now there are snaps for the <a href="https://ubuntu.com/core/docs/kernel-building">linux kernel</a>. And configuration-based ones called <a href="https://ubuntu.com/core/docs/gadget-snaps">gadget snaps</a>, responsible for handling bootstrapping on specific hardware. All following the same everything-is-a-snap principle.</p>

<p>You can still run any application you want. The difference here is you <a href="https://snapcraft.io/docs/snapcraft">have to package</a> them differently. Or at least differently from the norm.</p>

<p>The con with Ubuntu Core is that you have to accept their way of packaging software with snaps. There is no alternative. You <strong>must</strong> have Canonical as your overseer here. The pro is that everything stated before benefits your ability to have a stable immutable system. It should not be understated. The ease that comes with maintainability being so low, with Ubuntu Core, is pretty remarkable.</p>

<h2 id="talos">Talos</h2>

<p><img src="https://i.snap.as/ihtrKyXX.jpg" alt="one does not simply use kubernetes"/></p>

<p><a href="https://www.talos.dev/">Talos</a> differs in a very simple way: the <strong>ENTIRE</strong> operating system. Technically it is Linux, because Talos uses the Linux kernel. It immediately deviates from there. You don&#39;t use an OS with Talos. What you use is <a href="https://kubernetes.io/">Kubernetes</a>.</p>

<p>Kubernetes is infamous for being complex. I even <a href="https://baez.link/the-a-z-stack">wrote about my annoyance</a> years ago. After all this time, that article is still true for self-hosted Kubernetes clusters. So is Kubernetes worth self-hosting? I can wholeheartedly say NO. <em>But</em> with an exception. The exception here is if you are using Talos.</p>

<p>You see, Talos strips away practically everything of the OS. But it does so in a way that makes a lot of sense. You don&#39;t use ssh, because there is no shell to ssh to! Instead you use their gRPC API component, <a href="https://www.talos.dev/v1.5/learn-more/components/#apid">apid</a>, to interface with the operating system. Talos has no systemd. It is replaced with their own PID 1, an init system called <a href="https://www.talos.dev/v1.5/learn-more/components/#machined">machined</a>, whose whole purpose is running what Kubernetes needs and the gRPC interface that defines the OS.</p>

<p>In practice, Talos is actually dead simple to use and administer. It makes using and maintaining Kubernetes strikingly easy. New version released? Run <a href="https://www.talos.dev/v1.5/kubernetes-guides/upgrading-kubernetes/">talosctl to upgrade</a>:</p>

<pre><code class="language-bash">talosctl upgrade-k8s --to 1.28.0
</code></pre>

<p>The same is true for updating Talos itself. Because the OS is atomic, there is very little thought required to handle failures. You roll back like nothing happened. Here, you simply use Kubernetes.</p>
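
<p>For completeness, a sketch of the OS-side commands. The node address and installer version below are illustrative, not from my cluster:</p>

<pre><code class="language-bash"># Upgrade the Talos image on one node -- all over the gRPC API, no ssh
talosctl upgrade --nodes 10.0.0.2 --image ghcr.io/siderolabs/installer:v1.5.0
# If the new image misbehaves, boot back into the previous one
talosctl rollback --nodes 10.0.0.2
</code></pre>

<p>That rollback is what makes the atomic design feel safe: the previous image is still on disk, one command away.</p>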

<p>The con with Talos isn&#39;t actually the use of Talos. It&#39;s the principle of using only Kubernetes for everything. If you&#39;re not comfortable with a job orchestrator like it, DO NOT USE. If you somehow like ssh or want to install other things directly on the OS, DO NOT USE. If you don&#39;t want to do literally everything as code, definitely DO NOT USE.</p>

<p>But if you do value what Talos offers, it&#39;s immensely difficult not to choose it. So many problems are simply non-existent on Talos. The OS makes you question why even bother with the old ways.</p>

<h2 id="nixos">NixOS</h2>

<p><img src="https://i.snap.as/Ps69vAxn.jpg" alt="NixOS is the messiah"/></p>

<p>The prophecy is written. <a href="https://nixos.org/">NixOS</a> is the answer to our Linux administration ways. It will solve all our problems with packaging software. We will know of history as before and after NixOS. And quite frankly, it really does feel this way.</p>

<p>NixOS shines in the same ways the others on this list shine. It rethinks what a Linux distribution is and could be. If you make absolutely everything atomic, down to the core of how you package and run software, then do you even need to care if your OS breaks? NixOS&#39;s potency is the <a href="https://nixos.org/manual/nix/stable/">nix programming language</a>. Not to be confused with <a href="https://search.nixos.org/packages">nix the package manager</a>. Or the OS, which is also called nix.</p>

<p>Unlike the other options on this list, the con with NixOS is immediately apparent. It&#39;s the difficulty of first learning the ways of nix, <em>then</em> learning the ways of using nix. Documentation is quite difficult to come by for nix. Much of your time will be spent questioning how anything even functions. There&#39;s also the full redesign of nix called <a href="https://zero-to-nix.com/">nix flakes</a>. All enough to create incredible friction. Friction on nix usage and NixOS adoption.</p>

<p><strong>However</strong>, the moment you pass the hurdles of learning nix, nothing comes even close to its versatility. I personally have migrated most of my software to run with nix. Or at the very least, build with nix. For work, I have development environments that are strictly nix-based. And the list goes on.</p>

<p>With NixOS, it&#39;s the same principle. You write your <a href="https://zero-to-nix.com/concepts/closures">closure</a> and you can be assured it will work. No matter what <a href="https://linuxunplugged.com/524">mess you make of the machine</a>. You can roll back like nothing ever occurred. There&#39;s really nothing like Nix and NixOS. The principle is that you take on the hurdle of defining how your software is built. From then on, it will just run.</p>
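
<p>That roll-back is a one-liner with the stock tooling. A sketch of the standard commands, nothing exotic:</p>

<pre><code class="language-bash"># Every activation creates a new immutable system generation
sudo nixos-rebuild switch             # build and activate the configuration
sudo nixos-rebuild switch --rollback  # re-activate the previous generation
</code></pre>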

<p>No more conflicts between versions of Python. No issues with running two independent versions of the same software. Because of the closure design, there&#39;s no need to containerize your applications. They &#39;just&#39; work. With zero conflicts running on the same host. Optimally, NixOS gets you the closest to the promise of <a href="https://www.gentoo.org/get-started/about/">Gentoo</a>, but entirely atomic and immutable.</p>

<h1 id="so-what-to-choose">So what to choose?</h1>

<p><img src="https://i.snap.as/dx74mIQc.jpg" alt="thinking"/></p>

<p>I don&#39;t know yet. The reality is, all three options serve very similar ideas of running an immutable OS. They simply attack the problems differently. Talos packages software in <a href="https://stackoverflow.com/questions/55130795/what-is-a-kubernetes-manifest">kubernetes manifests</a>, essentially. Ubuntu Core is snaps. And NixOS is nix closures. All with different tradeoffs that are far too long to add to this never-ending post.</p>

<p>For security reasons, I would probably go for Talos. Because of the stripped-down purpose of the OS, there&#39;s a smaller attack surface for a security issue. Yes, even with Kubernetes.</p>

<p>For maintainability, I think Ubuntu Core is prime. Canonical has been doing Linux distributions for decades now. They know what it means to make something function. Every Ubuntu Core release has a maintenance window of <a href="https://ubuntu.com/about/release-cycle">up to ten years</a>. Meaning, if I want to just run my thing with no fuss, this will be it.</p>

<p>For customization, nothing gets even close to what NixOS promises and delivers. I would be able to take all the nix flakes I&#39;ve been writing for myself and run them straight on NixOS. True “it works on my machine”, on all machines.</p>

<p><a href="https://baez.link/tag:linux" class="hashtag"><span>#</span><span class="p-category">linux</span></a> <a href="https://baez.link/tag:immutableos" class="hashtag"><span>#</span><span class="p-category">immutableos</span></a> <a href="https://baez.link/tag:ubuntucore" class="hashtag"><span>#</span><span class="p-category">ubuntucore</span></a> <a href="https://baez.link/tag:talos" class="hashtag"><span>#</span><span class="p-category">talos</span></a> <a href="https://baez.link/tag:nixos" class="hashtag"><span>#</span><span class="p-category">nixos</span></a> <a href="https://baez.link/tag:atomic" class="hashtag"><span>#</span><span class="p-category">atomic</span></a> <a href="https://baez.link/tag:ubuntu" class="hashtag"><span>#</span><span class="p-category">ubuntu</span></a> <a href="https://baez.link/tag:kubernetes" class="hashtag"><span>#</span><span class="p-category">kubernetes</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/what-immutable-linux-to-use</guid>
      <pubDate>Fri, 08 Sep 2023 16:22:30 +0000</pubDate>
    </item>
    <item>
      <title>Starting anew without using Hashicorp products</title>
      <link>https://baez.link/starting-anew-without-using-hashicorp-products?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[End of summer 2023 is like many others. Hot. With a spice of drama and outrage. For software communities, the trend unfortunately is the same. &#xD;&#xA;&#xD;&#xA;In mid August, Hashicorp dropped an infinitesimal bomb. All their software would be stripped from their open source license. Replaced with business source license for four years to a release.&#xD;&#xA;&#xD;&#xA;!--more--&#xD;&#xA;&#xD;&#xA;I won&#39;t go into what the license entails. The thousands of community posts throughout the internet likely fill that void.&#xD;&#xA;&#xD;&#xA;Instead, reflecting what a delay in open source means. To collect my thoughts. And I came up with essentially Hashicorp decision is their own. While I can sympathize with their reasoning, it doesn&#39;t mean I agree. Nor do I need to agree with the company.&#xD;&#xA;&#xD;&#xA;For me personally, the license means a time of change. Moving away from almost exclusively Hashicorp software. There are literal thousands of alternatives available. Maybe not as seamlessly interconnected. But none the less, the options exist.&#xD;&#xA;&#xD;&#xA;I already have my eyes set on a bunch of shinny things I wanted to try. But been dragging my feat. Precisely due to ease of use Hashicorp products have provided. Though now, going to actually try something different.&#xD;&#xA;&#xD;&#xA;This time around, I&#39;m going to write out my thoughts on what I end up building. So maybe, the built infrastructure can help others.&#xD;&#xA;&#xD;&#xA;#hashicorp #license #software&#xD;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>End of summer 2023 is like many others. Hot. With a spice of drama and outrage. For software communities, the trend unfortunately is the same.</p>

<p>In mid August, Hashicorp dropped a bomb. All their software would be stripped of its open source license, replaced with the Business Source License, which reverts to open source only four years after a release.</p>



<p>I won&#39;t go into what the license entails. The thousands of community posts throughout the internet likely fill that void.</p>

<p>Instead, I&#39;m reflecting on what a delay in open source means, to collect my thoughts. And what I came up with is essentially that Hashicorp&#39;s decision is their own. While I can sympathize with their reasoning, it doesn&#39;t mean I agree. Nor do I need to agree with the company.</p>

<p>For me personally, the license means a time of change. Moving away from almost exclusively Hashicorp software. There are literally thousands of alternatives available. Maybe not as seamlessly interconnected. But nonetheless, the options exist.</p>

<p>I already have my eyes set on a bunch of shiny things I wanted to try, but I&#39;ve been dragging my feet, precisely due to the ease of use Hashicorp products have provided. Now though, I&#39;m going to actually try something different.</p>

<p>This time around, I&#39;m going to write out my thoughts on what I end up building. So maybe, the built infrastructure can help others.</p>

<p><a href="https://baez.link/tag:hashicorp" class="hashtag"><span>#</span><span class="p-category">hashicorp</span></a> <a href="https://baez.link/tag:license" class="hashtag"><span>#</span><span class="p-category">license</span></a> <a href="https://baez.link/tag:software" class="hashtag"><span>#</span><span class="p-category">software</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/starting-anew-without-using-hashicorp-products</guid>
      <pubDate>Wed, 06 Sep 2023 17:22:34 +0000</pubDate>
    </item>
    <item>
      <title>Absorbing Information In The Spam Notifications Age</title>
      <link>https://baez.link/absorbing-information-in-the-spam-notifications-age?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Tools helping you absorb information are necessities now. Not simply gathering information. Gone are the days you can use only RSS and call it a day. Almost every service under the sun wants to notify you about 14 times a day. Only to make sure you are forever spending your time on them. Mailbrew is the tool I been using to stop the onslaught.&#xA;&#xA;!--more--&#xA;&#xA;I&#39;ve used RSS for a long time. Absorbing was the same as gathering information then. In the past, all it took was to subscribe to a few feeds of some sites or blogs with summaries. Then going about and sitting down to read through those feeds of interests. RSS worked immensely well for the time. However, as more content ended up being produced, even RSS has become very unmanageable. You end up with multiple sources and feeds that are simply never ending. Requiring you to continuously spend your time gathering information rather than absorbing.  &#xA;&#xA;Worse still is with how much social media has changed where you have to gather. It is not enough to read through your RSS. You also now need to go through twitter, reddit, hacker news, countless of mailing lists, and all those blogs you follow. Gathering information can get exhausting. Especially if your focus is to absorb, learn, and move on with your day. Simply put, too much content to gather and too little time to absorb it all.&#xA;&#xA;The way I been tackling this problem is having to reluctantly move away from RSS. Instead using a proprietary tool called Mailbrew. Somewhere around last year, DHH wrote a post on not using twitter directly. And the tool, he ended up focusing on to achieve that was the very same; Mailbrew. So I&#39;ve gone sort of the same channel of thought, but taken it a bit further. I don&#39;t use Mailbrew only for twitter. I use it to gather all of the news I use to absorb. &#xA;&#xA;With Mailbrew, I&#39;m able to use multiple different sources. 
So I can still have all the RSS feeds I had before. But now, I can also add social media to the mix. So yes, twitter, reddit, and all the rest. All gathering information in different digests Mailbrew calls &#34;brews&#34;. Each brew of mine are centered on themes. For example, I have one for all software development news I tend to look for I call Active Software News. The list focuses solely on software development and gathers information from a few source types like hacker news, RSS, and twitter. Another would then be for everything I want to know of web3 I call Digitial Financials. Again same idea of gathering informational themes. &#xA;&#xA;If only gathering information was enough, Mailbrew can help quite a lot. But that isn&#39;t the goal here. The goal is to not just gather information, but also absorb it. So the most important part of Mailbrew is in limiting how much of every source you are provided. &#xA;&#xA;Starting with frequency, Mailbrew can send each one of your brew digests according to the frequency you desire. For Digital Financials, I have Mailbrew send out the brew each day in the morning. But for another public brew of mine called Active Software News, I have it only send a brew weekdays on Monday, Wednesday, and Friday at noon. However if frequency was the only limitation, then you would still have thousands of pieces to drown from when you open those digest.&#xA;&#xA;The next and more important part is the count of each source. This I think is where Mailbrew shines and is why I&#39;ve stuck to using it over alternatives. You see, the frequency is how you limit when you get information, but the count of each source is how you limit the exhaust of those sources. Instead of having every article sent to you waiting to be read in a forever list, you can limit the count by most recent or popular of those sources. &#xA;&#xA;An excellent example of catering counts is the Digital Financials brew. 
Web3 is almost impossible to keep up to date without using twitter. Most of the news is centered around the platform and gathering alone can be extremely time consuming. Instead, I have a few set of lists on the brew to limit how much of each type of source is given. Some have three posts for source types and other just two for any given twitter handle. Along with limiting each post, the brew&#39;s source types also have the count of the source type limited. Meaning, not only how many post per author is given, but limiting how many posts of the source type as well. This allows to be informed, but only enough. So that it takes about 10-20 minutes a day to get informed of what&#39;s going on versus hours drained of my free time. It works well enough that I don&#39;t need to be constantly reading articles and posts the gathered information.&#xA;&#xA;The power of Mailbrew with both limiting frequency and count of digest brews means you are able to focus. You don&#39;t or need to be on top of every piece of news available on those digests. You can still very much go down the rabbit hole of a specific source, but you don&#39;t have to start with everything from the start. Only enough to both gather information and absorb what you are gathering. Not constantly checking your sources, scrolling endless lists of social media, and being able to disable notifications for practically everything. So you can spend your day doing the thousand other things you have to do.&#xA;&#xA;[1]: https://mailbrew.com/&#xA;[2]: https://twitter.com/dhh&#xA;[3]: https://world.hey.com/dhh/not-just-what-you-read-but-how-64648303&#xA;[4]: https://app.mailbrew.com/baez/active-software-news-CFG2pEb8x6Ar&#xA;[5]: https://app.mailbrew.com/baez/digital-financials-kull4zNOGA79&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>Tools helping you absorb information are necessities now. Not simply gathering information. Gone are the days you can use only RSS and call it a day. Almost every service under the sun wants to notify you about 14 times a day. Only to make sure you are forever spending your time on them. Mailbrew is the tool I been using to stop the onslaught.</p>



<p>I&#39;ve used RSS for a long time. Absorbing was the same as gathering information then. In the past, all it took was subscribing to a few feeds of some sites or blogs with summaries, then sitting down to read through those feeds of interest. RSS worked immensely well for the time. However, as more content ended up being produced, even RSS has become unmanageable. You end up with multiple sources and feeds that are simply never ending. Requiring you to continuously spend your time gathering information rather than absorbing it.</p>

<p>Worse still is how much social media has changed where you have to gather. It is not enough to read through your RSS. You now also need to go through twitter, reddit, hacker news, countless mailing lists, and all those blogs you follow. Gathering information can get exhausting. Especially if your focus is to absorb, learn, and move on with your day. Simply put, there is too much content to gather and too little time to absorb it all.</p>

<p>The way I&#39;ve been tackling this problem is by reluctantly moving away from RSS and instead using a proprietary tool called <a href="https://mailbrew.com/">Mailbrew</a>. Somewhere around last year, <a href="https://twitter.com/dhh">DHH</a> wrote a post on <a href="https://world.hey.com/dhh/not-just-what-you-read-but-how-64648303">not using twitter directly</a>. And the tool he ended up focusing on to achieve that was the very same: Mailbrew. So I&#39;ve gone down sort of the same channel of thought, but taken it a bit further. I don&#39;t use Mailbrew only for twitter. I use it to gather <em>all</em> of the news I absorb.</p>

<p>With Mailbrew, I&#39;m able to use multiple different sources. So I can still have all the RSS feeds I had before. But now, I can also add social media to the mix. So yes, twitter, reddit, and all the rest. All gathering information into different digests Mailbrew calls “brews”. Each brew of mine is centered on a theme. For example, I have one for all the software development news I tend to look for, which I call <a href="https://app.mailbrew.com/baez/active-software-news-CFG2pEb8x6Ar">Active Software News</a>. The list focuses solely on software development and gathers information from a few source types like hacker news, RSS, and twitter. Another is for everything I want to know of web3, which I call <a href="https://app.mailbrew.com/baez/digital-financials-kull4zNOGA79">Digital Financials</a>. Again, the same idea of gathering by informational theme.</p>

<p>If only gathering information was enough, Mailbrew can help quite a lot. But that isn&#39;t the goal here. The goal is to not just gather information, but also absorb it. So the most important part of Mailbrew is in limiting how much of every source you are provided.</p>

<p>Starting with frequency, Mailbrew can send each one of your brew digests according to the frequency you desire. For <a href="https://app.mailbrew.com/baez/digital-financials-kull4zNOGA79">Digital Financials</a>, I have Mailbrew send out the brew each day in the morning. But for another public brew of mine called <a href="https://app.mailbrew.com/baez/active-software-news-CFG2pEb8x6Ar">Active Software News</a>, I have it send a brew only on Monday, Wednesday, and Friday at noon. However, if frequency were the only limitation, you would still have thousands of pieces to drown in when you open those digests.</p>

<p>The next and more important part is the count of each source. This, I think, is where Mailbrew shines and why I&#39;ve stuck to using it over alternatives. You see, frequency is how you limit when you get information, but the count of each source is how you limit the output of those sources. Instead of having every article sent to you, waiting to be read in a forever list, you can limit the count to the most recent or most popular items of those sources.</p>

<p>An excellent example of catering counts is the <a href="https://app.mailbrew.com/baez/digital-financials-kull4zNOGA79">Digital Financials</a> brew. Web3 is almost impossible to keep up with without using twitter. Most of the news is centered around the platform, and gathering alone can be extremely time consuming. Instead, I have a few lists on the brew to limit how much of each type of source is given. Some have three posts per source type and others just two for any given twitter handle. Along with limiting each post, the brew&#39;s source types also have their own counts limited. Meaning, not only how many posts per author are given, but how many posts of the source type as well. This allows me to be informed, but only just enough. So it takes about 10-20 minutes a day to learn what&#39;s going on, versus hours drained from my free time. It works well enough that I don&#39;t need to be constantly reading articles and posts beyond the gathered information.</p>

<p>The power of Mailbrew, with limits on both the frequency and the count of digest brews, means you are able to focus. You don&#39;t need to be on top of every piece of news available in those digests. You can still very much go down the rabbit hole of a specific source, but you don&#39;t have to start with everything. Only enough to both gather information and absorb what you are gathering. No constantly checking your sources, no scrolling endless lists on social media, and you can disable notifications for practically everything. So you can spend your day doing the thousand other things you have to do.</p>
]]></content:encoded>
      <guid>https://baez.link/absorbing-information-in-the-spam-notifications-age</guid>
      <pubDate>Mon, 11 Apr 2022 17:16:53 +0000</pubDate>
    </item>
    <item>
      <title>Getting Started Using Nix Flakes As An Elixir Development Environment</title>
      <link>https://baez.link/getting-started-using-nix-flakes-as-an-elixir-development-environment?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Never is a project started from &#39;just&#39; the init. You have to take care of packages you use, CI tools for builds you make, database hookups, development tooling, and countless other parts. All of this takes time. With nix flakes, you may be able to start with all the main components you need immediately. Giving way to actually developing that app you been itching to build, without the days/weeks adventure getting everything you need just right.&#xA;&#xA;!--more--&#xA;&#xA;Now it doesn&#39;t mean that immediately reading this starter guide, you will have everything under the sun set up with Nix Flakes for your development need. But at least, you won&#39;t have to worry about setting up asdf, your weird hacks you need for your machine and the other tiny little things to get elixir started with elixir-ls.&#xA;&#xA;Background&#xA;&#xA;A little background. Nix, for the uninitiated, is a purely functional language for package management. What makes Nix interesting is you can use the purely functional aspect to build out artifacts which are entirely idempotent. Meaning, no matter how many times you run the nix expressions you have, the end result will always be the same regardless of external state of a machine. Its build structure as a package manager has evolved the language to build out guaranteed result for all sorts of software. Yet, Nix itself isn&#39;t exactly easy to learn. Due in part to the ambitions of the project and difficulties, which arose from those ambitions, complexities crept in. Nix Flakes is an answer to some of these complexities and more. &#xA;&#xA;With Nix Flakes, you have the ability to have a very well defined package for a project. Written and using Nix in a way that takes all the learning from its years. Making it easier to define what you want from nix; the build result for a package.  &#xA;&#xA;Getting flakes enabled&#xA;&#xA;So how do you get started with Nix Flakes? 
The first part that you should probably have is nix already installed and some familiarity with the nix language. Run through the guide to your platform needs.&#xA;&#xA;Next is the settings to use nix with flakes. Since flakes is still in development (but relatively stable), you do need to enable the feature on nix. You can do so by enabling the experimental features of both nix-command and flakes through the nix.conf file.&#xA;&#xA;make nix config path if not existent&#xA;mkdir ~/.config/nix/ -p&#xA;add the settings to the file on config path&#xA;echo &#34;experimental-features = nix-command flakes&#34; | tee ~/.config/nix/nix.conf -a&#xA;&#xA;Once the settings are applied, you should be able to validate by running show-config&#xA;&#xA;nix show-config | grep experimental&#xA;&#xA;If all is successful, you should see an output for the same experimental features you wrote on the nix.conf file.&#xA;&#xA;The flake.nix file&#xA;&#xA;So now to get to use nix flake comes the heart of the project, the flake.nix file. The file itself needs three defined keys on the set, description, inputs, and outputs. Each one of them plays a different role in how to define your package you want to build. &#xA;&#xA;description key&#xA;&#xA;The description key is a one liner description of what the flake project is. Helps in giving what the flake is for quick review after you have a few hundred of these built out...&#xA;&#xA;flake.nix, ignoring input and output&#xA;{&#xA;  description = &#34;A description of some kind&#34;;&#xA;}&#xA;&#xA;inputs key&#xA;&#xA;The inputs is how you can import external sources of other flakes into the flake project you have. In other words, any project you may need or tools required to get started, this is where you will define their source.  
Example below is using the standard nixpkgs and a tool called flake-utils, which provides a set of functions to make flake nix packages simpler to set up without external dependencies.&#xA;&#xA;flake.nix, ignoring description and output&#xA;{&#xA;  inputs = {&#xA;    # using unstable branch for the latest packages of nixpkgs&#xA;    nixpkgs = { url = &#34;github:NixOS/nixpkgs/nixpkgs-unstable&#34;; }; &#xA;    flake-utils = { url = &#34;github:numtide/flake-utils&#34;; };&#xA;  };&#xA;}&#xA;&#xA;outputs key&#xA;&#xA;The outputs is where the bulk of your logic for what you will build with flakes. It has quite a numerous of options, but in this use case, we only care of the devShell key, which is what will be used to populate the development environment. While the following may all look like a lot for the output, it&#39;s also everything that would be needed for either building out a nix flake package or use for a development environment: &#xA;&#xA;flake.nix, ignoreing description and input&#xA;{&#xA;  outputs = { self, nixpkgs, flake-utils }:&#xA;   flake-utils.lib.eachDefaultSystem (system:&#xA;      let&#xA;        pkgs = import nixpkgs { inherit system; };&#xA;&#xA;        elixir = pkgs.beam.packages.erlang.elixir;&#xA;        elixir-ls = pkgs.beam.packages.erlang.elixirls;&#xA;        locales = pkgs.glibcLocales;&#xA;      in&#xA;      {&#xA;        devShell = pkgs.mkShell {&#xA;          buildInputs = [&#xA;            elixir&#xA;            locales&#xA;          ]&#xA;        }&#xA;      });&#xA;}&#xA;&#xA;The output key is actually a function which takes the inputs defined on inputs. Hence the set { self, nixpkgs, flake-utils }. All the inputs on the function were defined on the input with self being the flake.nix file itself. The next portion is using a simple but powerful flake-utils function called eachDefaultSystem. 
What the function provides is actually build the development environment for all available platforms currently available for nix as a default. You can see the list by running nix flake show  (after the file is fully written) and you will be provided an output like the following: &#xA;&#xA;└───devShell&#xA;    ├───aarch64-darwin: development environment &#39;nix-shell&#39;&#xA;    ├───aarch64-linux: development environment &#39;nix-shell&#39;&#xA;    ├───i686-linux: development environment &#39;nix-shell&#39;&#xA;    ├───x8664-darwin: development environment &#39;nix-shell&#39;&#xA;    └───x8664-linux: development environment &#39;nix-shell&#39;&#xA;&#xA;Meaning, you do not have to worry about what OS you using, as long as it&#39;s linux or macos. You write your nix flake with the ability to use it with all the supported platforms from the start for your environment. In other words, Write once, run on all the machines. No more of that &#39;it runs on my machine&#39; debacle. &#xA;&#xA;The last section is a let .. in pair to both declare what you will use in the system you are building and devShell itself. The packages use here are simply elixir, elixir-ls for sanity, and locales to make sure elixir is able to use the proper locale settings of the shell environment produced by the nix flake. &#xA;&#xA;Finally, the devShell in this case is the buildInputs wanted for the shell environment and nothing more: &#xA;&#xA;      {&#xA;        devShell = pkgs.mkShell {&#xA;          buildInputs = [&#xA;            elixir&#xA;            locales&#xA;          ]&#xA;      }&#xA;&#xA;Notice while elixir-ls package isn&#39;t directly declared in the mkShell buildInput, it is part of the output on let. Allowing to still have linked access to its packages.&#xA;&#xA;The full nix flake development environment&#xA;&#xA;Finally, putting it all together, getting a nix flake project started for your development environment, all falls down to what packages you need on the devShell.  
With the flake.nix file on the root of your project, you have the ability to have the packages you need and ready for you to work with. &#xA;&#xA;{&#xA;    description = &#34;Development environment&#34;;&#xA;&#xA;  inputs = {&#xA;      nixpkgs = { url = &#34;github:NixOS/nixpkgs/nixpkgs-unstable&#34;; };&#xA;    flake-utils = { url = &#34;github:numtide/flake-utils&#34;; };&#xA;  };&#xA;&#xA;  outputs = { self, nixpkgs, flake-utils }:&#xA;    flake-utils.lib.eachDefaultSystem (system:&#xA;      let&#xA;        inherit (nixpkgs.lib) optional;&#xA;        pkgs = import nixpkgs { inherit system; };&#xA;&#xA;        elixir = pkgs.beam.packages.erlang.elixir;&#xA;        elixir-ls = pkgs.beam.packages.erlang.elixirls;&#xA;        locales = pkgs.glibcLocales;&#xA;      in&#xA;      {&#xA;          devShell = pkgs.mkShell&#xA;          {&#xA;              buildInputs = [&#xA;                elixir&#xA;              locales&#xA;            ];&#xA;          };&#xA;      }&#xA;    );&#xA;}&#xA;#elixir #development #nix #flakes &#xA;&#xA;[1]: https://nixos.org/manual/nix/stable/introduction.html&#xA;[2]: https://nixos.org/manual/nix/unstable/command-ref/new-cli/nix3-flake.html&#xA;[3]: https://nixos.wiki/wiki/Flakes&#xA;[4]: https://peppe.rs/posts/novicenix:flaketemplates/&#xA;[5]: https://nixos.org/guides/install-nix.html&#xA;[6]: https://nixos.org/manual/nix/unstable/command-ref/conf-file.html&#xA;[7]: https://nixos.wiki/wiki/NixExpressionLanguage&#xA;[8]: https://github.com/numtide/flake-utils&#xA;[9]: https://nixos.wiki/wiki/Flakes#Outputschema&#xA;[10]: https://github.com/elixir-lsp/elixir-ls&#xA;&#xA;Bonus: Limiting by platform of choice your nix flake&#xA;&#xA;Glad you made it this far. Well friend, besides the default system listing on flake-utils, there is another route you can go in setting up you development environment. You may have noticed that the run of nix flake show showed multiple platforms and architectures available by default. 
However, it may be you don&#39;t need to use all those platforms and architecture. You may like to target only a set of architectures or platforms you need and that&#39;s it. Speeding up your build process and saving on some storage if you don&#39;t want those extra platforms.&#xA;&#xA;flake-utils has the option with the flake-utils.lib.eachSystem function. The function itself takes an array of systems you want the flake to build out. To use the function, you have to use a let .. in expression to define the array and then it&#39;s use. For my use, I only target aarch64-linux and x8664-linux since those are the only platforms I work with. So I define them on an array. &#xA;&#xA;supportedSystems = [ &#34;x8664-linux&#34; &#34;aarch64-linux&#34; ];&#xA;Then I use that defined array to apply to the the function flake-utils.lib.eachSystem&#xA;&#xA;flake-utils.lib.eachSystem supportedSystems (system: &#xA;  # same as blocks before with flake-utils.lib.eachDefaultSystem&#xA;)&#xA;&#xA;With all of it put together below, you can see how to use with let .. 
in expression:&#xA;&#xA;{&#xA;  description = &#34;Development environment&#34;;&#xA;&#xA;  inputs = {&#xA;    nixpkgs = { url = &#34;github:NixOS/nixpkgs/nixpkgs-unstable&#34;; };&#xA;    flake-utils = { url = &#34;github:numtide/flake-utils&#34;; };&#xA;  };&#xA;&#xA;  outputs = { self, nixpkgs, flake-utils }:&#xA;    let supportedSystems = [ &#34;x8664-linux&#34; &#34;aarch64-linux&#34; ];&#xA;    in&#xA;    flake-utils.lib.eachSystem supportedSystems (system:&#xA;      let&#xA;        inherit (nixpkgs.lib) optional;&#xA;        pkgs = import nixpkgs { inherit system; };&#xA;&#xA;        elixir = pkgs.beam.packages.erlang.elixir;&#xA;        elixir-ls = pkgs.beam.packages.erlang.elixirls;&#xA;        locales = pkgs.glibcLocales;&#xA;      in&#xA;      {&#xA;        devShell = pkgs.mkShell&#xA;          {&#xA;            buildInputs = [&#xA;              elixir&#xA;              locales&#xA;            ];&#xA;          };&#xA;      }&#xA;    );&#xA;}&#xA; &#xA;Now when you run nix flake show the output should be only the platforms you defined: &#xA;&#xA;    ├───aarch64-linux: development environment &#39;nix-shell&#39;&#xA;    └───x86_64-linux: development environment &#39;nix-shell&#39;&#xA;`]]&gt;</description>
      <content:encoded><![CDATA[<p>Never is a project started from &#39;just&#39; the init. You have to take care of packages you use, CI tools for builds you make, database hookups, development tooling, and countless other parts. All of this takes time. With nix flakes, you may be able to start with all the main components you need immediately. Giving way to actually developing that app you been itching to build, without the days/weeks adventure getting everything you need just right.</p>



<p>Now, it doesn&#39;t mean that immediately upon reading this starter guide you will have everything under the sun set up with <a href="https://nixos.org/manual/nix/unstable/command-ref/new-cli/nix3-flake.html">Nix Flakes</a> for your development needs. But at least you won&#39;t have to worry about setting up asdf, the weird hacks you need for your machine, and the other tiny little things to get elixir started with <a href="https://github.com/elixir-lsp/elixir-ls">elixir-ls</a>.</p>

<h2 id="background">Background</h2>

<p>A little background. <a href="https://nixos.org/manual/nix/stable/introduction.html">Nix</a>, for the uninitiated, is a purely functional language for package management. What makes Nix interesting is that you can use the purely functional aspect to build out artifacts which are entirely idempotent. Meaning, no matter how many times you run the nix expressions you have, the end result will always be the same, regardless of the external state of a machine. Its build structure as a package manager has evolved the language to build out guaranteed results for all sorts of software. Yet, Nix itself isn&#39;t exactly easy to learn. Due in part to the ambitions of the project, and the difficulties which arose from those ambitions, complexities crept in. <a href="https://nixos.org/manual/nix/unstable/command-ref/new-cli/nix3-flake.html">Nix Flakes</a> is an answer to some of these complexities and more.</p>

<p>With Nix Flakes, you can have a very well defined package for a project, written in Nix in a way that takes in all the learning from its years. Making it easier to define what you want from nix: the build result for a package.</p>

<h2 id="getting-flakes-enabled">Getting flakes enabled</h2>

<p>So how do you get started with Nix Flakes? The first thing you should have is <a href="https://nixos.org/guides/install-nix.html">nix already installed</a> and some <a href="https://nixos.wiki/wiki/Nix_Expression_Language">familiarity with the nix language</a>. Run through the install guide for your platform&#39;s needs.</p>

<p>Next come the settings to use nix with flakes. Since flakes is still in development (but relatively stable), you do need to enable the feature in nix. You can do so by enabling the experimental features <code>nix-command</code> and <code>flakes</code> through the <a href="https://nixos.org/manual/nix/unstable/command-ref/conf-file.html">nix.conf file</a>.</p>

<pre><code># make the nix config path if it does not exist
mkdir -p ~/.config/nix
# append the settings to the file on the config path
echo &#34;experimental-features = nix-command flakes&#34; | tee -a ~/.config/nix/nix.conf
</code></pre>

<p>Once the settings are applied, you should be able to validate them by running <code>nix show-config</code>:</p>

<pre><code>nix show-config | grep experimental
</code></pre>

<p>If all is successful, you should see an output with the same experimental features you wrote in the <code>nix.conf</code> file.</p>

<h2 id="the-flake-nix-file">The <code>flake.nix</code> file</h2>

<p>Now we get to the heart of using nix flakes: the <code>flake.nix</code> file. The file itself needs three keys defined on its set: <code>description</code>, <code>inputs</code>, and <code>outputs</code>. Each one of them plays a different role in defining the package you want to build.</p>
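<p>As a minimal sketch, an empty but still valid <code>flake.nix</code> with those three keys looks like this (the description text is just a placeholder):</p>

```nix
# flake.nix skeleton: the three keys every flake defines
{
  description = "An empty example flake";
  inputs = { };
  outputs = { self }: { };
}
```

<p>The rest of this guide is filling in those three keys.</p>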

<h3 id="description-key"><code>description</code> key</h3>

<p>The <code>description</code> key is a one-liner describing what the flake project is. It helps you tell at a glance what the flake is for, once you have a few hundred of these built out...</p>

<pre><code># flake.nix, ignoring input and output
{
  description = &#34;A description of some kind&#34;;
}
</code></pre>

<h3 id="inputs-key"><code>inputs</code> key</h3>

<p>The <code>inputs</code> key is how you import external sources of other flakes into your flake project. In other words, any project you may need or tool required to get started is defined as a source here. The example below uses the standard nixpkgs and a tool called <a href="https://github.com/numtide/flake-utils">flake-utils</a>, which provides a set of functions that make flake packages simpler to set up without external dependencies.</p>

<pre><code class="language-nix"># flake.nix, ignoring description and output
{
  inputs = {
    # using unstable branch for the latest packages of nixpkgs
    nixpkgs = { url = &#34;github:NixOS/nixpkgs/nixpkgs-unstable&#34;; }; 
    flake-utils = { url = &#34;github:numtide/flake-utils&#34;; };
  };
}
</code></pre>

<h3 id="outputs-key"><code>outputs</code> key</h3>

<p>The <code>outputs</code> key is where the bulk of your logic for what you build with flakes lives. It has quite <a href="https://nixos.wiki/wiki/Flakes#Output_schema">a number of options</a>, but in this use case we only care about the <code>devShell</code> key, which is what will be used to populate the development environment. While the following may look like a lot for the output, it&#39;s also everything that is needed either to build out a nix flake package or to use it as a development environment:</p>

<pre><code class="language-nix"># flake.nix, ignoring description and input
{
  outputs = { self, nixpkgs, flake-utils }:
   flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs { inherit system; };

        elixir = pkgs.beam.packages.erlang.elixir;
        elixir-ls = pkgs.beam.packages.erlang.elixir_ls;
        locales = pkgs.glibcLocales;
      in
      {
        devShell = pkgs.mkShell {
          buildInputs = [
            elixir
            locales
          ];
        };
      });
}
</code></pre>

<p>The <code>outputs</code> key is actually a function which takes the flake&#39;s inputs as arguments, hence the set <code>{ self, nixpkgs, flake-utils }</code>. All the arguments on the function were defined under <code>inputs</code>, with <code>self</code> being the <code>flake.nix</code> file itself. The next portion uses a simple but powerful flake-utils function called <code>eachDefaultSystem</code>. What the function does is build the development environment for all the platforms nix currently supports by default. You can see the list by running <code>nix flake show</code> (after the file is fully written), and you will get output like the following:</p>

<pre><code>└───devShell
    ├───aarch64-darwin: development environment &#39;nix-shell&#39;
    ├───aarch64-linux: development environment &#39;nix-shell&#39;
    ├───i686-linux: development environment &#39;nix-shell&#39;
    ├───x86_64-darwin: development environment &#39;nix-shell&#39;
    └───x86_64-linux: development environment &#39;nix-shell&#39;
</code></pre>

<p>Meaning, you do not have to worry about what OS you are using, as long as it&#39;s Linux or macOS. You write your nix flake once, with the ability to use it on all the supported platforms from the start. In other words: write once, run on all the machines. No more of that &#39;it runs on my machine&#39; debacle.</p>

<p>The last section is a <code>let .. in</code> pair that declares both what you will use in the system you are building and the <code>devShell</code> itself. The packages used here are simply elixir, <a href="https://github.com/elixir-lsp/elixir-ls">elixir-ls</a> for sanity, and locales to make sure elixir is able to use the proper locale settings of the shell environment produced by the nix flake.</p>

<p>Finally, the <code>devShell</code> in this case is just the <code>buildInputs</code> wanted for the shell environment and nothing more:</p>

<pre><code class="language-nix">      {
        devShell = pkgs.mkShell {
          buildInputs = [
            elixir
            locales
          ];
        };
      }
</code></pre>

<p>Notice that while the <code>elixir-ls</code> package isn&#39;t directly declared in the mkShell <code>buildInputs</code>, it is still bound in the <code>let</code> block, which still allows linked access to its packages.</p>
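<p>If you do want <code>elixir-ls</code> available directly on the shell&#39;s <code>PATH</code>, one option (my own variant, not what the flake above does) is to simply add it to the <code>buildInputs</code> list as well:</p>

```nix
# variant devShell that also exposes elixir-ls inside the shell,
# using the same let-bound names as the flake above
devShell = pkgs.mkShell {
  buildInputs = [
    elixir
    elixir-ls  # now on PATH in the development shell
    locales
  ];
};
```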

<h2 id="the-full-nix-flake-development-environment">The full nix flake development environment</h2>

<p>Finally, putting it all together: getting a nix flake project started for your development environment all comes down to what packages you need on the <code>devShell</code>. With the <code>flake.nix</code> file at the root of your project, you get the packages you need, ready for you to work with.</p>

<pre><code class="language-nix">{
  description = &#34;Development environment&#34;;

  inputs = {
    nixpkgs = { url = &#34;github:NixOS/nixpkgs/nixpkgs-unstable&#34;; };
    flake-utils = { url = &#34;github:numtide/flake-utils&#34;; };
  };

  outputs = { self, nixpkgs, flake-utils }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        inherit (nixpkgs.lib) optional;
        pkgs = import nixpkgs { inherit system; };

        elixir = pkgs.beam.packages.erlang.elixir;
        elixir-ls = pkgs.beam.packages.erlang.elixir_ls;
        locales = pkgs.glibcLocales;
      in
      {
        devShell = pkgs.mkShell {
          buildInputs = [
            elixir
            locales
          ];
        };
      }
    );
}
</code></pre>
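<p>To actually drop into the environment, run <code>nix develop</code> from the directory that holds <code>flake.nix</code>; nix fetches the inputs and opens a shell with the declared packages available:</p>

```
# from the project root containing flake.nix
nix develop
# inside the new shell, the flake's elixir is on PATH
elixir --version
```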

<p><a href="https://baez.link/tag:elixir" class="hashtag"><span>#</span><span class="p-category">elixir</span></a> <a href="https://baez.link/tag:development" class="hashtag"><span>#</span><span class="p-category">development</span></a> <a href="https://baez.link/tag:nix" class="hashtag"><span>#</span><span class="p-category">nix</span></a> <a href="https://baez.link/tag:flakes" class="hashtag"><span>#</span><span class="p-category">flakes</span></a></p>

<h2 id="bonus-limiting-by-platform-of-choice-your-nix-flake">Bonus: Limiting your nix flake to your platform of choice</h2>

<p>Glad you made it this far. Well friend, besides the default system listing on <code>flake-utils</code>, there is another route you can go in setting up your development environment. You may have noticed that the run of <code>nix flake show</code> showed multiple platforms and architectures available by default. However, it may be that you don&#39;t need all those platforms and architectures. You may want to target only the set of architectures or platforms you use and that&#39;s it, speeding up your build process and saving some storage by skipping the extra platforms.</p>

<p><a href="https://github.com/numtide/flake-utils">flake-utils</a> has an option for this: the <code>flake-utils.lib.eachSystem</code> function. The function takes an array of the systems you want the flake to build for. To use it, you define the array in a <code>let .. in</code> expression and then apply it. For my use, I only target <code>aarch64-linux</code> and <code>x86_64-linux</code>, since those are the only platforms I work with. So I define them in an array:</p>

<pre><code class="language-nix">supportedSystems = [ &#34;x86_64-linux&#34; &#34;aarch64-linux&#34; ];
</code></pre>

<p>Then I apply that defined array to the function <code>flake-utils.lib.eachSystem</code>:</p>

<pre><code>flake-utils.lib.eachSystem supportedSystems (system: 
  # same as blocks before with `flake-utils.lib.eachDefaultSystem`
)
</code></pre>

<p>With it all put together below, you can see how it&#39;s used with the <code>let .. in</code> expression:</p>

<pre><code class="language-nix">{
  description = &#34;Development environment&#34;;

  inputs = {
    nixpkgs = { url = &#34;github:NixOS/nixpkgs/nixpkgs-unstable&#34;; };
    flake-utils = { url = &#34;github:numtide/flake-utils&#34;; };
  };

  outputs = { self, nixpkgs, flake-utils }:
    let supportedSystems = [ &#34;x86_64-linux&#34; &#34;aarch64-linux&#34; ];
    in
    flake-utils.lib.eachSystem supportedSystems (system:
      let
        inherit (nixpkgs.lib) optional;
        pkgs = import nixpkgs { inherit system; };

        elixir = pkgs.beam.packages.erlang.elixir;
        elixir-ls = pkgs.beam.packages.erlang.elixir_ls;
        locales = pkgs.glibcLocales;
      in
      {
        devShell = pkgs.mkShell
          {
            buildInputs = [
              elixir
              locales
            ];
          };
      }
    );
}
</code></pre>

<p>Now when you run <code>nix flake show</code> the output should be only the platforms you defined:</p>

<pre><code>    ├───aarch64-linux: development environment &#39;nix-shell&#39;
    └───x86_64-linux: development environment &#39;nix-shell&#39;
</code></pre>
]]></content:encoded>
      <guid>https://baez.link/getting-started-using-nix-flakes-as-an-elixir-development-environment</guid>
      <pubDate>Sun, 09 Jan 2022 18:12:28 +0000</pubDate>
    </item>
    <item>
      <title>A quick converter from bitwarden to 1password using SQLite</title>
      <link>https://baez.link/a-quick-converter-from-bitwarden-to-1password-using-sqlite?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I been recently experimenting in using a different tool besides bitwarden for my password management needs. Nothing wrong with bitwarden, but it is always a good idea to look at other products to get a feel of what could be improved or what you can do to find ways around certain features. &#xA;&#xA;!--more--&#xA;&#xA;I been using 1password for a while at work and with the 1Password Linux having some nice integrations, thought why not try it out as an every day driver. Not to mention that a business plan gives a free family plan for every account on its plan. So I started to venture forth. And of course, immediately I hit a snag. Bitwarden isn&#39;t supported as a 1password direct conversion. Although 1password do offer a way for you to use csv import, you still need to format the csv correctly.&#xA;&#xA;Bitwarden graciously offers the ability to export as a csv and json (and encrypted json if that&#39;s your thing). The exported raw csv doesn&#39;t exactly format as is to 1password. Looked around online and found some tools that could do the job, but they kind of suck in setting it up how I would like. So of course I want to make my own. Thing is, I wanted to do this as quick as I could. &#xA;&#xA;The tool would have to take the exported csv file and truncate to only have login type rows and only columns related to such. One of the endless interesting features sqlite has is the ability to import and export a csv. You also get a tool to mutate a relational data file with a file based relation database. Thus, sqlite was my preferred choice to get the conversion out of the way.&#xA;&#xA;So first&#39;s thing&#39;s first: figure out what is the converted format to use on 1password. 
When you export from bitwarden as an csv, the export column outputs as follows:&#xA;&#xA;folder,favorite,type,name,notes,fields,loginuri,loginusername,loginpassword,logintotp&#xA;&#xA;I only care for the conversion to get a few columns set up:&#xA;&#xA;name,notes,url,username,password,one-time password&#xA;&#xA;Knowing the format, go right ahead and make a database on sqlite:&#xA;&#xA;sqlite3 convert.db&#xA;&#xA;Next, set the import mode to csv and import the exported csv to a raw table&#xA;&#xA;sqlite  .mode csv&#xA;sqlite  .import raw.csv raw&#xA;&#xA;Due to not giving the schema here nor needing to, all the columns will be imported as TEXT type. The imported type works to the need here. So let&#39;s make a new table called &#39;output&#39; with the columns desired. The table created will be the same as raw but with the selected columns only and their names. To do so, simply run a select for said columns:&#xA;&#xA;CREATE TABLE output AS SELECT type, name, notes, loginuri as url, loginusername as username, loginpassword as password, logintotp as &#39;one-time password&#39; from raw;&#xA;&#xA;Next, remember the ask here was only to have login types from 1password for this conversion. Running a delete for everything but the login type:&#xA;&#xA;delete from output where type != &#39;login&#39;;&#xA;&#xA;Then drop the column for &#39;type&#39; as it&#39;s not needed:&#xA;&#xA;ALTER TABLE output DROP COLUMN type; &#xA;&#xA;Now the output table should be ready to exporting to import in 1password. Let&#39;s go ahead and set sqlite headers on and set output file:&#xA;&#xA;.headers on&#xA;.output converted.csv&#xA;&#xA;Lastly, get that output table exported. Since we know we want all the fields let&#39;s go ahead and export with a select all:&#xA;&#xA;select * from output;&#xA;&#xA;You&#39;re done with the file. It should now work in importing on 1password. Now before you go and say, &#34;hey fool, you could do this much simpler&#34;. You are right, I could. 
So here&#39;s the lines for everything above using relation database correctly:&#xA;&#xA;.mode csv&#xA;.import raw.csv raw&#xA;.headers on&#xA;.output converted.csv&#xA;&#xA;SELECT &#xA;  name, notes, loginuri as url, loginusername as username, loginpassword as password, logintotp as &#39;one-time password&#39; &#xA;FROM raw&#xA;WHERE type = &#39;login&#39;;&#xA;&#xA;The output is exactly what is desired here and it also gives a baseline of what I want my csv output to be. You can go ahead and import the csv to 1password now. Just make sure to remove the first row and correlate the column names with those that would be for 1password. The one last piece is that one time password isn&#39;t available as a type you can choose. So you need to make a custom field. Then you can apply by editing the field after import. I know, annoying.&#xA;&#xA;[1]: https://blog.1password.com/welcoming-linux-to-the-1password-family/&#xA;[2]: https://www.sqlite.org/cli.html#exporttocsv&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>I been recently experimenting in using a different tool besides bitwarden for my password management needs. Nothing wrong with bitwarden, but it is always a good idea to look at other products to get a feel of what could be improved or what you can do to find ways around certain features.</p>



<p>I&#39;ve been using 1password for a while at work, and with <a href="https://blog.1password.com/welcoming-linux-to-the-1password-family/">1Password for Linux having some nice integrations</a>, I thought, why not try it out as an everyday driver? Not to mention that a business plan gives a free family plan for every account on it. So I started to venture forth. And of course, I immediately hit a snag. Bitwarden isn&#39;t supported as a direct conversion in 1password. Although 1password does offer a way for you to use csv import, you still need to format the csv correctly.</p>

<p>Bitwarden graciously offers the ability to export as csv and json (and encrypted json if that&#39;s your thing). The exported raw csv doesn&#39;t format as-is for 1password. I looked around online and found some tools that could do the job, but they kind of suck at being set up how I would like. So of course I wanted to make my own. Thing is, I wanted to do this as quickly as I could.</p>

<p>The tool would have to take the exported csv file and truncate it to only login-type rows and only the columns related to them. One of the endless interesting features sqlite has is the <a href="https://www.sqlite.org/cli.html#export_to_csv">ability to import and export csv</a>. You also get a tool to mutate a relational data file with a file-based relational database. Thus, sqlite was my preferred choice to get the conversion out of the way.</p>

<p>So first things first: figure out the converted format to use for 1password. When you export from bitwarden as a csv, the columns output as follows:</p>

<pre><code>folder,favorite,type,name,notes,fields,login_uri,login_username,login_password,login_totp
</code></pre>

<p>I only care for the conversion to get a few columns set up:</p>

<pre><code>name,notes,url,username,password,one-time password
</code></pre>

<p>Knowing the format, go right ahead and make a database on sqlite:</p>

<pre><code>sqlite3 convert.db
</code></pre>

<p>Next, set the import mode to csv and import the exported csv to a <code>raw</code> table</p>

<pre><code>sqlite&gt; .mode csv
sqlite&gt; .import raw.csv raw
</code></pre>

<p>Since we are not giving a schema here, nor do we need to, all the columns will be imported as the TEXT type, which works for our needs. So let&#39;s make a new table called &#39;output&#39; with the desired columns. The table created will be the same as raw, but with only the selected columns under their new names. To do so, simply run a select for said columns:</p>

<pre><code>CREATE TABLE output AS SELECT type, name, notes, login_uri as url, login_username as username, login_password as password, login_totp as &#39;one-time password&#39; from raw;
</code></pre>

<p>Next, remember the ask here was to keep only login-type rows for this conversion. Run a delete for everything but the login type:</p>

<pre><code>delete from output where type != &#39;login&#39;;
</code></pre>

<p>Then drop the &#39;type&#39; column, as it&#39;s no longer needed:</p>

<pre><code>ALTER TABLE output DROP COLUMN type; 
</code></pre>

<p>Now the output table should be ready to export for importing into 1password. Let&#39;s go ahead and turn sqlite headers on and set the output file:</p>

<pre><code>.headers on
.output converted.csv
</code></pre>

<p>Lastly, get that output table exported. Since we know we want all the fields, let&#39;s go ahead and export with a select all:</p>

<pre><code>select * from output;
</code></pre>

<p>You&#39;re done with the file. It should now work for importing into 1password. Now, before you go and say, “hey fool, you could do this much simpler”: you are right, I could. So here are the lines for everything above, using a relational database correctly:</p>

<pre><code>.mode csv
.import raw.csv raw
.headers on
.output converted.csv

SELECT 
  name, notes, login_uri as url, login_username as username, login_password as password, login_totp as &#39;one-time password&#39; 
FROM raw
WHERE type = &#39;login&#39;;
</code></pre>
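<p>For what it&#39;s worth, the same transformation can also be sketched without sqlite at all. Here is a minimal, hypothetical version using Python&#39;s standard <code>csv</code> module; the function names are mine, and the columns assume the bitwarden export header shown above:</p>

```python
import csv

# Column mapping: bitwarden export header (left) -> 1password-style header (right)
RENAME = {
    "name": "name",
    "notes": "notes",
    "login_uri": "url",
    "login_username": "username",
    "login_password": "password",
    "login_totp": "one-time password",
}

def convert(rows):
    """Keep only login-type rows and select/rename the wanted columns."""
    return [
        {new: row[old] for old, new in RENAME.items()}
        for row in rows
        if row.get("type") == "login"
    ]

def convert_file(src, dst):
    """Read the raw bitwarden csv and write the converted csv."""
    with open(src, newline="") as f:
        rows = convert(csv.DictReader(f))
    with open(dst, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(RENAME.values()))
        writer.writeheader()
        writer.writerows(rows)
```

<p>Calling <code>convert_file(&#34;raw.csv&#34;, &#34;converted.csv&#34;)</code> gives the same headers as the sqlite route.</p>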

<p>The output is exactly what is desired here, and it also gives a baseline of what I want my csv output to be. You can go ahead and import the csv into 1password now. Just make sure to remove the first row and correlate the column names with those 1password expects. The one last piece is that one-time password isn&#39;t available as a type you can choose, so you need to make a custom field. <em>Then</em> you can apply it by editing the field after import. I know, annoying.</p>
]]></content:encoded>
      <guid>https://baez.link/a-quick-converter-from-bitwarden-to-1password-using-sqlite</guid>
      <pubDate>Sat, 26 Jun 2021 18:20:39 +0000</pubDate>
    </item>
    <item>
      <title>Forty Four Keys of Joy</title>
      <link>https://baez.link/forty-four-keys-of-joy?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[A keyboard for software developers, system administrators, code junkies, writers, human being, and possibly cats is utmost important in this age. Do yourself a favor and get yourself a decent keyboard. Possibly if you read on, that may just be Keyboardio&#39;s Atreus.&#xA;&#xA;!--more--&#xA;&#xA;Originally I thought the keyboard was a neat idea. I had a new need for a new keyboard and it&#39;s been a awful long while (read two years) since I had something decent. Not to mention, I suffer from what I am calling the Linux terminal syndrome. It is a deadly disease in which you tell everyone you use Linux and happen to also almost exclusively use a keyboard due to some bizarre hatred of mouses. Because of this lifetime choice, I am constantly typing and try as much as possible to take care of my hands. It is also to lessen the suffering from something very real like carpal tunnel or tendinitis. So I gave the team the good ol&#39; funding on kickstarter. &#xA;&#xA;It&#39;s been one month since my life has changed with Keyboardio&#39;s Atreus keyboard. The keyboard itself is forty four hardware keys in total. Take some time to leave that number to sink in... I&#39;ll wait.... &#xA;&#xA;Forty four keys.&#xA;&#xA;FORTY FOUR KEYS. &#xA;&#xA;OK. Why is the magic number such a big deal?  So a normal keyboard has somewhere around 104 keys. What the team at Keyboardio did was look at a normal keyboard and say, &#34;lol, NO.&#34; These people went and cut down by more than half all the keys, yet figured a way to make you have more functionality on your existing keys than you can possibly ever need. Keyboardio went ahead like mad men, saw the potential of shifting to a different layer on a keyboard, and took it up a notch. You see, the keyboard has an ability to shift to different layers of which all the keys can be different to focus on what you need. So the 104 keys can be stuffed in just 3 layers on the keyboard, with spacing to spare. 
&#xA;&#xA;Afterwards, Keyboardio went and then said, &#34;You know what? Let&#39;s just add more layers, cause why not?&#34; Thus, Atreus keyboard has the ability of holding nine layers of key definitions you can fully customize to whatever your needs or itch running at any time. The crazy people over at Keyboardio also made it so you can import and export the key definitions in structured JSON. Technically giving you endless formatting to your heart&#39;s content. So in case you somehow wanted to share your keys (I do and did, click here), you can with minimal effort.&#xA;&#xA;Along with the customization, and there is definite more you can do with this, the Atreus design is what really makes sense for balanced sane typing. Since all the keys are layered on the forty four keys, your hand never reaches to press any key on the keyboard. This means none of that extended stretching to press the ESC key three meters away from the next key. It also features an ergonomic design that just makes sense. So much so that it feels very awkward touching another keyboard, which doesn&#39;t use the structure.&#xA;&#xA;Lastly, and certainly not least, you can use BOX switches to make as much sound as mechanically possible. For me, this means using BOX red switches. Allowing me to keep my tradition of using my keyboard as if I&#39;m playing a piano. Bundle all this in and you got yourself something almost healing to the touch. I dare say, you will probably be typing faster and smoother as time goes on. &#xA;&#xA;The biggest piece to get and learn is the layout of the layering you set up. There was a lot of thought put into the original formatting. In other words, do fight the urge of customizing Keyboardio&#39;s choosing for Atreus layering layout. It will make the curve just that much easier to adapt towards. 
Then once you get a feel, start making it to your needs.&#xA;&#xA;I&#39;m still learning this little powerhouse, but I can safely say my typing has only gotten better without sacrificing my hands in the process.&#xA;&#xA;Disclaimer: this whole article was written with the wondrous forty four keys of Atreus. &#xA;&#xA;#keyboard #tooling #hardware&#xA;&#xA;[1]: https://shop.keyboard.io/products/keyboardio-atreus?variant=31382379823177&#xA;[2]: https://fosstodon.org/@zeab/104984743773623883&#xA;[3]: https://hg.sr.ht/~ab/keyboardio-atreus/&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>A keyboard for software developers, system administrators, code junkies, writers, human being, and possibly cats is utmost important in this age. Do yourself a favor and get yourself a decent keyboard. Possibly if you read on, that may just be Keyboardio&#39;s Atreus.</p>



<p>Originally I thought the keyboard was a neat idea. I had a need for a new keyboard, and it&#39;s been an awful long while (read: two years) since I had something decent. Not to mention, I suffer from what I am calling Linux terminal syndrome. It is a deadly disease in which you tell everyone you use Linux and happen to also almost exclusively use a keyboard due to some bizarre hatred of mouses. Because of this lifestyle choice, I am constantly typing and try as much as possible to take care of my hands. It is also to lessen the suffering from something very real like carpal tunnel or tendinitis. So I gave the team the good ol&#39; funding on kickstarter.</p>

<p>It&#39;s been <a href="https://fosstodon.org/@zeab/104984743773623883">one month</a> since my life changed with the <a href="https://shop.keyboard.io/products/keyboardio-atreus?variant=31382379823177">Keyboardio Atreus</a> keyboard. The keyboard itself has forty four hardware keys in total. Take some time to let that number sink in... I&#39;ll wait....</p>

<p>Forty four keys.</p>

<p>FORTY FOUR KEYS.</p>

<p>OK. Why is the magic number such a big deal? A normal keyboard has somewhere around 104 keys. What the team at Keyboardio did was look at a normal keyboard and say, “lol, NO.” These people went and cut down all the keys by more than half, yet figured out a way to give you more functionality on your existing keys than you could possibly ever need. Keyboardio went ahead like mad men, saw the potential of shifting to a different layer on a keyboard, and took it up a notch. You see, the keyboard has the ability to shift to different layers, on which all the keys can be different to focus on what you need. So the 104 keys can be stuffed into just 3 layers on the keyboard, with space to spare.</p>

<p>Afterwards, Keyboardio went and said, “You know what? Let&#39;s just add more layers, cause why not?” Thus, the Atreus keyboard can hold nine layers of key definitions, which you can fully customize to whatever need or itch you have running at any time. The crazy people over at Keyboardio also made it so you can import and export the key definitions as structured JSON. Technically giving you endless formatting to your heart&#39;s content. So in case you somehow wanted to share your keys (I do and did, <a href="https://hg.sr.ht/~ab/keyboardio-atreus/">click here</a>), you can with minimal effort.</p>

<p>Along with the customization, and there is definitely more you can do with this, the Atreus design is what really makes sense for balanced, sane typing. Since all the keys are layered onto the forty four keys, your hand never has to reach to press any key on the keyboard. This means none of that extended stretching to press the ESC key three meters away from the next key. It also features an ergonomic design that just makes sense. So much so that it feels very awkward touching another keyboard that doesn&#39;t use this structure.</p>

<p>Lastly, and certainly not least, you can use BOX switches to make as much sound as mechanically possible. For me, this means using BOX red switches, allowing me to keep my tradition of using my keyboard as if I&#39;m playing a piano. Bundle all this in and you&#39;ve got yourself something almost healing to the touch. I dare say, you will probably be typing faster and smoother as time goes on.</p>

<p>The biggest piece to get and learn is the layout of the layering you set up. There was a lot of thought put into the original formatting. In other words, do fight the urge to customize Keyboardio&#39;s chosen layering layout for the Atreus. It will make the learning curve just that much easier. Then, once you get a feel for it, start molding it to your needs.</p>

<p>I&#39;m still learning this little powerhouse, but I can safely say my typing has only gotten better without sacrificing my hands in the process.</p>

<p>Disclaimer: this whole article was written with the wondrous forty four keys of Atreus.</p>

<p><a href="https://baez.link/tag:keyboard" class="hashtag"><span>#</span><span class="p-category">keyboard</span></a> <a href="https://baez.link/tag:tooling" class="hashtag"><span>#</span><span class="p-category">tooling</span></a> <a href="https://baez.link/tag:hardware" class="hashtag"><span>#</span><span class="p-category">hardware</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/forty-four-keys-of-joy</guid>
      <pubDate>Wed, 11 Nov 2020 02:19:13 +0000</pubDate>
    </item>
    <item>
      <title>Write Once for Web Assembly, Run On Everything</title>
      <link>https://baez.link/write-once-for-web-assembly-run-on-everything?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[If you have ever heard the phrase &#39;write once, run everywhere,&#39; then you know there are real difficulties in writing for multiple architectures. WebAssembly may be the technology that actually delivers on running everywhere with a single target.&#xA;&#xA;!--more--&#xA;&#xA;Anyone who has written software for multiple architectures knows it is never as simple as, &#34;compile for your target and it will just work.&#34; The reality is, there are always exceptions you must account for. It makes sense why: any difference in architecture means an entirely different ISA, or even a different revision of one, which translates to different rules for memory, available instructions, operational cost analysis, and much more. So taking your C code and just compiling it for AArch64 wouldn&#39;t cut it.&#xA;&#xA;The same can be said for the process virtual machine used by Java. In theory, using the virtual machine meant you would target only the virtual machine and not have to worry about the architecture the virtual machine runs on. In practice, that has not been the case. Almost always, you still have to be aware of the architecture you are targeting due to operational differences like those listed above.&#xA;&#xA;This is where WebAssembly (wasm) starts to shine. Wasm began as a way to run bytecode faster in the browser, allowing heavier logic than is practical with ECMAScript. To accomplish this, wasm uses a process virtual machine like the JVM, but one that targets a single, precisely specified ISA. The VM may execute that ISA however it likes, but the bytecode must run without modification. This means a wasm binary has exactly the same definition regardless of the physical architecture, and the VM can execute the bytecode however it is best optimized for the host. You focus on writing and optimizing for a single target, leaving the VM that runs the wasm binary to take care of the heavy lifting, as it should.&#xA;&#xA;Right now, the prospect of wasm running on everything is not quite here yet. The wasm specification only became a W3C recommendation in December 2019. Yet you can see the capability starting to creep up, with projects like WASI from Mozilla for portability and security, Krustlet, a kubelet that runs wasm instead of containers on Kubernetes, and Cloudflare&#39;s Workers for running geo-distributed code on the edge with wasm.&#xA;&#xA;I&#39;m excited to see WebAssembly&#39;s potential. Its progress may result in a saner stack, maybe even one requiring a single runtime and nothing else: WebAssembly.&#xA;&#xA;#Day14 #100DaysToOffload #Wasm #WebAssembly&#xA;&#xA;[1]: https://en.wikipedia.org/wiki/Instruction_set_architecture&#xA;[2]: https://en.wikipedia.org/wiki/Virtual_machine#Process_virtual_machines&#xA;[4]: https://en.wikichip.org/wiki/arm/aarch64&#xA;[5]: https://webassembly.org/&#xA;[6]: https://en.wikipedia.org/wiki/ECMAScript&#xA;[7]: https://www.w3.org/TR/wasm-core-1/&#xA;[8]: https://wasi.dev/&#xA;[9]: https://github.com/deislabs/krustlet]]&gt;</description>
      <content:encoded><![CDATA[<p>If you have ever heard the phrase &#39;write once, run everywhere,&#39; then you know there are real difficulties in writing for multiple architectures. WebAssembly may be the technology that actually delivers on running everywhere with a single target.</p>



<p>Anyone who has written software for multiple architectures knows it is <strong>never</strong> as simple as, “compile for your target and it will just work.” The reality is, there are always exceptions you must account for. It makes sense why: any difference in architecture means an entirely different <a href="https://en.wikipedia.org/wiki/Instruction_set_architecture">ISA</a>, or even a different revision of one, which translates to different rules for memory, available instructions, operational cost analysis, and much more. So taking your C code and just compiling it for <a href="https://en.wikichip.org/wiki/arm/aarch64">AArch64</a> wouldn&#39;t cut it.</p>

<p>The same can be said for the <a href="https://en.wikipedia.org/wiki/Virtual_machine#Process_virtual_machines">process virtual machine</a> used by Java. In theory, using the virtual machine meant you would target only the virtual machine and not have to worry about the architecture the virtual machine runs on. In practice, that has not been the case. Almost always, you still have to be aware of the architecture you are targeting due to operational differences like those listed above.</p>

<p>This is where <a href="https://webassembly.org/">WebAssembly</a> (wasm) starts to shine. Wasm began as a way to run bytecode faster in the browser, allowing heavier logic than is practical with <a href="https://en.wikipedia.org/wiki/ECMAScript">ECMAScript</a>. To accomplish this, wasm uses a process virtual machine like the JVM, but one that targets a single, precisely specified ISA. The VM may execute that ISA however it likes, but the bytecode must run without modification. This means a wasm binary has exactly the same definition regardless of the physical architecture, and the VM can execute the bytecode however it is best optimized for the host. You focus on writing and optimizing for a single target, leaving the VM that runs the wasm binary to take care of the heavy lifting, as it should.</p>
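<p>As a loose analogy, a process VM is just an interpreter over a fixed instruction encoding: the bytecode never changes, and only the interpreter is ported per architecture. Here is a minimal toy sketch of that idea (the opcodes are invented for illustration, not the real wasm instruction set):</p>

```python
# Toy illustration (not real wasm): a process VM executes the same
# bytecode on any host architecture; only the interpreter is host-specific.
# The opcode names and encoding here are invented for the example.

def run(bytecode):
    """Interpret a list of (op, arg) tuples on a simple stack machine."""
    stack = []
    for op, arg in bytecode:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack[-1]

# The same "binary" runs unchanged wherever an interpreter exists:
program = [("push", 2), ("push", 3), ("add", None), ("push", 4), ("mul", None)]
print(run(program))  # 20
```

<p>Wasm's core is exactly this kind of stack machine, specified down to the byte, which is what lets a real wasm VM compile the same bytecode to optimal native code per host.</p>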

<p>Right now, the prospect of wasm running on everything is not quite here yet. The <a href="https://www.w3.org/TR/wasm-core-1/">wasm specification</a> only became a W3C recommendation in December 2019. Yet you can see the capability starting to creep up, with projects like <a href="https://wasi.dev/">WASI</a> from Mozilla for portability and security, <a href="https://github.com/deislabs/krustlet">Krustlet</a>, a kubelet that runs wasm instead of containers on Kubernetes, and <a href="https://workers.cloudflare.com/">Cloudflare&#39;s Workers</a> for running geo-distributed code on the edge with wasm.</p>

<p>I&#39;m excited to see WebAssembly&#39;s potential. Its progress may result in a saner stack, maybe even one requiring a single runtime and nothing else: WebAssembly.</p>

<p><a href="https://baez.link/tag:Day14" class="hashtag"><span>#</span><span class="p-category">Day14</span></a> <a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:Wasm" class="hashtag"><span>#</span><span class="p-category">Wasm</span></a> <a href="https://baez.link/tag:WebAssembly" class="hashtag"><span>#</span><span class="p-category">WebAssembly</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/write-once-for-web-assembly-run-on-everything</guid>
      <pubDate>Sun, 17 May 2020 03:56:40 +0000</pubDate>
    </item>
    <item>
      <title>Potential of Infrastructure as Code Without Boilerplate</title>
      <link>https://baez.link/potential-of-infrastructure-as-code-without-boilerplate?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Ever had a case where you look at your infrastructure as code work and think, &#34;why was I crazy enough to try to automate with this junk?&#34;&#xA;&#xA;!--more--&#xA;&#xA;If you haven&#39;t, then you haven&#39;t had a codebase with thousands of lines of boilerplate. You naturally end up writing so many logic gates just to get around the limitations of the tool you are using. This is true in software development generally, but it is especially true in a Domain Specific Language (DSL) for Infrastructure as Code (IaC). My setup has always been to use the declarative nature of Hashicorp&#39;s Terraform and AWS CloudFormation for their respective jobs. The IaC tools have worked great, and I&#39;ve been a heavy user of both. But it can get pretty ridiculous how much you have to do to get one single resource up and running (correctly, anyway).&#xA;&#xA;Moreover, if you try to do the same with a provisioner or orchestrator, then you are definitely in for multiple levels of hell. Not to say you can&#39;t go the route of using Kubernetes for your IaC implementation, or Ansible in a declarative fashion. The problem is, you end up writing too much boilerplate before you ever get to what you wanted to do. Your goal ends up forever further from your intentions.&#xA;&#xA;What Terraform and CloudFormation get right is providing a full set of primitives for your infrastructure. They make the declared state the goal: you describe what you want rather than how to get to that state. If you try to use cloud provider libraries directly, like boto or godo, with a programming language of your choice, you end up essentially building an entire in-house IaC provisioner, with no reduction in the boilerplate you write. Probably more, as you now need code to define basic primitives in a declarative fashion before you can create what you want.&#xA;&#xA;However, using a general-purpose programming language for IaC can have some strong upsides. You get precise logic and structures that are well suited to what you need the infrastructure to be, and you can do classical test driven development to better define your logic. So in the past few months, I have been thinking about how to resolve the problems of constantly writing so much boilerplate, make maintenance more manageable, and build better abstractions for the core primitives I want to create on a cloud provider. Instead of trying to do this again with CloudFormation and Terraform, I have begun working with Pulumi and AWS CDK toward that ambitious goal.&#xA;&#xA;I&#39;m still early in my venture, but so far I have found that both provide much simpler resource definitions. The boilerplate is minimal, as the tools are designed for you to create modules or packages that you extend for your needs. With both, my codebases have shrunk considerably, making maintenance actually feasible. I&#39;m still discovering their capabilities, but I&#39;m really liking the ability to use a full general-purpose programming language to do everything I need in my IaC.&#xA;&#xA;#100DaysToOffload #Day13 #Infrastructure #IaC #Declarative&#xA;&#xA;[1]: https://www.terraform.io/&#xA;[2]: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html&#xA;[3]: https://en.wikipedia.org/wiki/Domain-specific_language&#xA;[4]: https://github.com/digitalocean/godo&#xA;[5]: https://github.com/boto/boto3&#xA;[6]: https://www.pulumi.com/&#xA;[7]: https://docs.aws.amazon.com/cdk/latest/guide/home.html&#xA;[8]: https://kops.sigs.k8s.io/&#xA;[9]: https://www.ansible.com/]]&gt;</description>
      <content:encoded><![CDATA[<p>Ever had a case where you look at your infrastructure as code work and think, “why was I crazy enough to try to automate with this junk?”</p>



<p>If you haven&#39;t, then you haven&#39;t had a codebase with thousands of lines of boilerplate. You naturally end up writing so many logic gates just to get around the limitations of the tool you are using. This is true in software development generally, but it is especially true in a <a href="https://en.wikipedia.org/wiki/Domain-specific_language">Domain Specific Language</a> (DSL) for Infrastructure as Code (IaC). My setup has always been to use the declarative nature of <a href="https://www.terraform.io/">Hashicorp&#39;s Terraform</a> and <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html">AWS CloudFormation</a> for their respective jobs. The IaC tools have worked great, and I&#39;ve been a heavy user of both. But it can get pretty ridiculous how much you have to do to get one single resource up and running (correctly, anyway).</p>

<p>Moreover, if you try to do the same with a provisioner or orchestrator, then you are definitely in for multiple levels of hell. Not to say you can&#39;t go the route of using <a href="https://kops.sigs.k8s.io/">Kubernetes for your IaC</a> implementation, or <a href="https://www.ansible.com/">Ansible</a> in a declarative fashion. The problem is, you end up writing too much boilerplate before you ever get to what you wanted to do. Your goal ends up forever further from your intentions.</p>

<p>What Terraform and CloudFormation get right is providing a full set of primitives for your infrastructure. They make the declared state the goal: you describe what you want rather than how to get to that state. If you try to use cloud provider libraries directly, like <a href="https://github.com/boto/boto3">boto</a> or <a href="https://github.com/digitalocean/godo">godo</a>, with a programming language of your choice, you end up essentially building an entire in-house IaC provisioner, with no reduction in the boilerplate you write. Probably more, as you now need code to define basic primitives in a declarative fashion before you can create what you want.</p>
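<p>To make the declarative idea concrete, here is a minimal sketch of reconciliation: diff the desired state against the actual state and emit only the actions needed to converge. All the resource names and the action vocabulary are invented for illustration; this is not Terraform&#39;s or CloudFormation&#39;s actual engine.</p>

```python
# Minimal sketch of declarative reconciliation: compare desired vs. actual
# state and compute the actions needed to converge. Resource names and the
# create/update/delete vocabulary are invented for this example.

def plan(desired, actual):
    """Return (action, name) steps that turn `actual` into `desired`.

    Both arguments map resource name -> configuration dict.
    """
    steps = []
    for name, config in desired.items():
        if name not in actual:
            steps.append(("create", name))
        elif actual[name] != config:
            steps.append(("update", name))
    for name in actual:
        if name not in desired:
            steps.append(("delete", name))
    return steps

desired = {"web": {"size": "small"}, "db": {"size": "large"}}
actual = {"web": {"size": "tiny"}, "cache": {"size": "small"}}
print(plan(desired, actual))
# [('update', 'web'), ('create', 'db'), ('delete', 'cache')]
```

<p>The point of a real IaC tool is that you never write the loop above yourself; you only ever edit the <code>desired</code> side.</p>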

<p>However, using a general-purpose programming language for IaC can have some strong upsides. You get precise logic and structures that are well suited to what you need the infrastructure to be, and you can do classical <a href="https://en.wikipedia.org/wiki/Test-driven_development">test driven development</a> to better define your logic. So in the past few months, I have been thinking about how to resolve the problems of constantly writing so much boilerplate, make maintenance more manageable, and build better abstractions for the core primitives I want to create on a cloud provider. Instead of trying to do this again with CloudFormation and Terraform, I have begun working with <a href="https://www.pulumi.com/">Pulumi</a> and <a href="https://docs.aws.amazon.com/cdk/latest/guide/home.html">AWS CDK</a> toward that ambitious goal.</p>

<p>I&#39;m still early in my venture, but so far I have found that both provide much simpler resource definitions. The boilerplate is minimal, as the tools are designed for you to create modules or packages that you extend for your needs. With both, my codebases have shrunk considerably, making maintenance actually feasible. I&#39;m still discovering their capabilities, but I&#39;m really liking the ability to use a full general-purpose programming language to do everything I need in my IaC.</p>
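<p>The component pattern both tools encourage can be sketched in plain Python. The classes below are hypothetical, not Pulumi&#39;s or the CDK&#39;s actual API: the idea is simply that one reusable class bundles several low-level primitives, so each call site stays one line instead of a page of boilerplate.</p>

```python
# Plain-Python sketch of the component pattern Pulumi and the AWS CDK
# encourage (hypothetical classes, not either tool's actual API): one
# reusable component wraps several low-level resources.

class Resource:
    """A stand-in for a single cloud primitive."""
    def __init__(self, kind, name, **props):
        self.kind, self.name, self.props = kind, name, props

class StaticSite:
    """One constructor call replaces the usual pile of per-resource boilerplate."""
    def __init__(self, name, domain):
        self.resources = [
            Resource("bucket", f"{name}-content", website=True),
            Resource("cdn", f"{name}-cdn", origin=f"{name}-content"),
            Resource("dns_record", f"{name}-dns", target=domain),
        ]

site = StaticSite("blog", "example.com")
print([r.kind for r in site.resources])  # ['bucket', 'cdn', 'dns_record']
```

<p>Because the component is ordinary code, you can subclass it, parameterize it, and unit test it like anything else in the codebase.</p>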

<p><a href="https://baez.link/tag:100DaysToOffload" class="hashtag"><span>#</span><span class="p-category">100DaysToOffload</span></a> <a href="https://baez.link/tag:Day13" class="hashtag"><span>#</span><span class="p-category">Day13</span></a> <a href="https://baez.link/tag:Infrastructure" class="hashtag"><span>#</span><span class="p-category">Infrastructure</span></a> <a href="https://baez.link/tag:IaC" class="hashtag"><span>#</span><span class="p-category">IaC</span></a> <a href="https://baez.link/tag:Declarative" class="hashtag"><span>#</span><span class="p-category">Declarative</span></a></p>
]]></content:encoded>
      <guid>https://baez.link/potential-of-infrastructure-as-code-without-boilerplate</guid>
      <pubDate>Wed, 13 May 2020 03:45:45 +0000</pubDate>
    </item>
  </channel>
</rss>