{
  "version": "https://jsonfeed.org/version/1",
  "title": "Ian's Digital Garden",
  "home_page_url": "https://ianwwagner.com/",
  "feed_url": "https://ianwwagner.com//archive-2026.json",
  "description": "",
  "items": [
    {
      "id": "https://ianwwagner.com//setting-up-a-wireguard-tunnel-on-freebsd-15.html",
      "url": "https://ianwwagner.com//setting-up-a-wireguard-tunnel-on-freebsd-15.html",
      "title": "Setting up a WireGuard Tunnel on FreeBSD 15",
      "content_html": "<p>It's not like the world needs yet another WireGuard tutorial,\nbut I thought I'd write one since one of the top SEO-ranked ones I stumbled upon was pretty low quality,\nwith several obvious errors and omissions.</p>\n<p>In this post, I'll focus on how you can set up a VPN tunnel\nin the sense that such things were used before shady companies hijacked the term.\nIt's just a way to tunnel traffic between networks.\nFor example, to connect non-internet-facing servers behind a firewall\nto a public host that firewalls and selectively routes traffic over the tunnel.</p>\n<p>I'll assume a pair of FreeBSD servers for the rest of the post,\none that's presumably more accessible (the &quot;server&quot;),\nand a client which is not necessarily routable over the internet.</p>\n<h1><a href=\"#server-setup\" aria-hidden=\"true\" class=\"anchor\" id=\"server-setup\"></a>&quot;Server&quot; setup</h1>\n<p>We'll start with the server setup.\nThis is where your client(s) will connect.\nAt a high level, we'll generate a keypair for the server,\na keypair for the client,\nand generate configuration files for both.\nAnd finally we'll do some basic firewall configuration.</p>\n<h2><a href=\"#wireguard-config\" aria-hidden=\"true\" class=\"anchor\" id=\"wireguard-config\"></a>WireGuard config</h2>\n<p>The following can be run,\neither in a script or line-by-line in a POSIX shell as root.</p>\n<pre><code class=\"language-sh\"># Set this to your server's public IP\nSERVER_PUBLIC_IP=&quot;192.0.2.42&quot;\n\n# We'll be setting up some config files here that we only want to be readable by root.\n# The umask saves us the effort of having to chmod these later.\numask 077\n\n# Wireguard kernel-level support is available in FreeBSD 14+,\n# but this port has a nice service wrapper\npkg install wireguard-tools\n\n# Set up WireGuard config directory\nchmod 770 /usr/local/etc/wireguard\ncd /usr/local/etc/wireguard\n\n# Create a keypair for the server\nSERVER_PRIV_KEY=$(wg 
genkey)\nSERVER_PUB_KEY=$(echo &quot;$SERVER_PRIV_KEY&quot; | wg pubkey)\n\n# Generate the first section of our WireGuard server config.\n# We'll use 172.16.0.1/24 (no real reason for the choice;\n# it's just somewhat convenient as it doesn't collide with the more common\n# Class A and Class C private networks).\ncat &gt; wg0.conf &lt;&lt;EOF\n[Interface]\nAddress = 172.16.0.1/24\nSaveConfig = true\nListenPort = 51820\nPrivateKey = ${SERVER_PRIV_KEY}\nEOF\n\n# Similarly, we need a client keypair\nCLIENT_PRIV_KEY=$(wg genkey)\nCLIENT_PUB_KEY=$(echo &quot;$CLIENT_PRIV_KEY&quot; | wg pubkey)\n\n# Add peer to the server config.\n# This is what lets your client connect later.\n# The server only stores the client's public key\n# and the private IP that it will connect as.\nCLIENT_IP=&quot;172.16.0.2&quot;\ncat &gt;&gt; wg0.conf &lt;&lt;EOF\n# bsdcube\n[Peer]\nPublicKey = ${CLIENT_PUB_KEY}\nAllowedIPs = ${CLIENT_IP}/32\nEOF\n\numask 022 # Revert to normal umask\n\n# Enable the wireguard service\nsysrc wireguard_interfaces=&quot;wg0&quot;\nsysrc wireguard_enable=&quot;YES&quot;\nservice wireguard start\n</code></pre>\n<p><strong>Don't ditch this shell session yet!</strong>\nWe'll come back to the client config later and will need the vars defined above.\nBut first, a brief interlude for packet filtering.</p>\n<h2><a href=\"#pf-setup\" aria-hidden=\"true\" class=\"anchor\" id=\"pf-setup\"></a><code>pf</code> setup</h2>\n<p>We'll use <code>pf</code>, the robust packet filtering (colloquially &quot;firewall&quot;) system\nported from OpenBSD.</p>\n<p>I'm using <code>vtnet0</code> for the external interface,\nsince that's the interface name with my VPS vendor.\nYou may need to change this based on what your main network interface is\n(check <code>ifconfig</code>).</p>\n<p><strong>DISCLAIMER</strong>: This is <em>not</em> necessarily everything you need to launch a production system.\nI've distilled just the parts that are relevant to a minimal WireGuard setup.\nThat said, here's a minimal 
<code>/etc/pf.conf</code>.</p>\n<pre><code class=\"language-pf\">ext_if = &quot;vtnet0&quot;\nwg_if = &quot;wg0&quot;\n\n# Pass all traffic on the loopback interface\nset skip on lo\n\n# Basic packet cleanup\nscrub in on $ext_if all fragment reassemble\n\n# Allows WireGuard clients to reach the internet.\n# I do not need this in my config, but noting it here\n# in case your use case is *that* sort of VPN.\n# nat on $ext_if from $wg_if:network to any -&gt; ($ext_if)\n\n# Allow all outbound connections\npass out keep state\n\n# SSH (there's a good chance you need this)\npass in on $ext_if proto tcp from any to ($ext_if) port 22\n\n# Allow inbound WireGuard traffic\npass in on $ext_if proto udp from any to ($ext_if) port 51820\n\n# TODO: Forwarding for the services that YOU need\n# Here's one example demonstrating how you would allow traffic\n# to route directly to one of the WireGuard network IPs (i.e. within 172.16.0.0/24 in this example)\n# over port 8080.\n# pass in on $wg_if proto tcp from $wg_if:network to ($wg_if) port 8080\n\n# Allow ICMP\npass in inet proto icmp all\npass in inet6 proto icmp6 all\n</code></pre>\n<p>Next, we enable the service and start it.\nIf you're already running <code>pf</code>, then at least part of this isn't necessary.</p>\n<pre><code class=\"language-sh\"># Allow forwarding of traffic from WireGuard clients\nsysctl net.inet.ip.forwarding=1\n# Make the forwarding setting persist across reboots\nsysrc gateway_enable=&quot;YES&quot;\n\n# Enable pf\nsysrc pf_enable=&quot;YES&quot;\nservice pf start\n</code></pre>\n<h1><a href=\"#client-configuration\" aria-hidden=\"true\" class=\"anchor\" id=\"client-configuration\"></a>Client configuration</h1>\n<p>And now we come back to the client configuration.\nThe &quot;client&quot; in this case does not necessarily have to be routable over the internet;\nit just needs to be able to connect to the server.\nYou've still got the same shell session with those variables, right?</p>\n<pre><code class=\"language-sh\">cat &lt;&lt;EOF\n[Interface]\nPrivateKey = ${CLIENT_PRIV_KEY}\nAddress = 
${CLIENT_IP}/24\n\n[Peer]\nPublicKey = ${SERVER_PUB_KEY}\nAllowedIPs = 172.16.0.0/24  # Only route private subnet traffic over the tunnel\nEndpoint = ${SERVER_PUBLIC_IP}:51820\nPersistentKeepalive = 30\nEOF\n</code></pre>\n<p>That's it; that's the client config.\nRun through the same initial setup steps for adding the <code>wireguard-tools</code> package\nand creating the directory with the right permissions.\nThen put this config in <code>/usr/local/etc/wireguard/wg0.conf</code>.</p>\n<p>The client will also need a similar <code>pf</code> configuration,\nbut rather than blanket allowing traffic in over <code>$wg_if</code>,\nyou probably want something a bit more granular.\nFor example, allowing traffic in over a specific port (e.g. <code>8080</code>).\nI'll leave that as an exercise to the reader based on the specific scenario.</p>\n",
      "summary": "",
      "date_published": "2026-04-07T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "networking",
        "FreeBSD"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//two-weeks-of-emacs.html",
      "url": "https://ianwwagner.com//two-weeks-of-emacs.html",
      "title": "Two Weeks of Emacs",
      "content_html": "<p>I'm approximately 2 weeks into using emacs as my daily editor and, well, I haven't opened JetBrains since.\nI honestly didn't expect that, but here we are.</p>\n<h1><a href=\"#papercuts-i-said-i-would-solve-later\" aria-hidden=\"true\" class=\"anchor\" id=\"papercuts-i-said-i-would-solve-later\"></a>Papercuts I said I would solve later</h1>\n<p>Here's the list of things I noted in my last post that I said I'd come back to.\nThe list has changed a bit since the last post:</p>\n<p>Solved:</p>\n<ul>\n<li>Issues with automatic indentation</li>\n<li>Files not reloading automatically when changed externally (fixed with <code>global-auto-revert-mode</code>)</li>\n<li>Highlighting mutable variables</li>\n</ul>\n<p>Haven't bothered to try resolving (infrequently used):</p>\n<ul>\n<li>Macro expansion</li>\n<li>Code completion and jump to definition within rustdoc comments</li>\n</ul>\n<p>The highlighting one is worth a bit of explanation.\nHere's what I had to do to get it working:</p>\n<pre><code class=\"language-lisp\">;; Highlight mutable variables (like RustRover/JetBrains).\n;; NB: Requires eglot 1.20+\n(defface eglot-semantic-mutable\n  '((t :underline t))\n  &quot;Face for mutable variables via semantic tokens.&quot;)\n\n(with-eval-after-load 'eglot\n  (add-to-list 'eglot-semantic-token-modifiers &quot;mutable&quot;))\n</code></pre>\n<p>Apparently this requires a fairly recent version of eglot to work,\nand it isn't necessarily supported by every LSP,\nbut it works for me with rust-analyzer.\nI spent way too much time on this because for some reason running <code>M-x eglot-reconnect</code>\nor <code>M-x eglot</code> and accepting a restart didn't reset the buffer settings or something.\nIf this doesn't work, try killing the buffer and then find the file again.</p>\n<h1><a href=\"#other-new-papercuts\" aria-hidden=\"true\" class=\"anchor\" id=\"other-new-papercuts\"></a>Other (new) papercuts!</h1>\n<p>Here's a similarly categorized list of 
things that I found over the past week or so.</p>\n<p>Solved:</p>\n<ul>\n<li>&quot;Project&quot; views: I got even more than I bargained for with <code>(tab-bar-mode 1)</code>! It's great.\nIt's even better than I expected TBH since every tab can contain an arbitrary configuration of buffers.\nThis is a weird way of thinking at first, but it's really nice since stuff doesn't need to follow the traditional bounds\nthat I was used to in IDEs (e.g. a tab can be entirely terminal buffers, or cross &quot;projects&quot;, which is useful to me).</li>\n<li><code>xref-matches-in-files</code> was SLOW. Turned out to be an issue in my <code>fish</code> configuration (which isn't even my &quot;preferred&quot; shell,\nbut it's still my login shell due to being more supported than nushell, which I use for most things).\nRemoving pyenv fixed that.\nAlso, you can set it to use ripgrep with <code>(setq xref-search-program 'ripgrep)</code></li>\n<li>Fuzzy finding files by name within a project was something I quickly missed.\nTurns out the built-in project.el already has a reasonable hotkey for this: <code>C-x p f</code> (mnemonic: project find).</li>\n<li>Searching the project by <em>symbol</em> (variable, struct, trait, etc.) works well with the <code>consult-eglot</code> package.\nSpecifically, it includes a <code>consult-eglot-symbols</code> command.</li>\n</ul>\n<p>Not solved yet:</p>\n<ul>\n<li>It was really nice to just fold sections of code by clicking something in the margin (&quot;fringe&quot; in Emacs parlance; gutter in JetBrains).\nIt looks like there are ways to do this; I just haven't had time to mess with it.</li>\n<li>The language server can get confused if you do a big operation like a git branch switch. 
Restarting eglot fixes this.\nI'm sure this happened occasionally with JetBrains but it seems worse here.</li>\n<li>The lovely <code>diff-hl</code> package doesn't get the hint when files reload for some reason.</li>\n</ul>\n<p>I'll also add a quick note that it's (still) surprisingly easy to screw up your own config.\nEmacs as a system is super flexible but that also makes it somewhat fragile.\nEverything is programmable, in a single-threaded, garbage-collected language.</p>\n<p>One snag I hit was that after some period, the environment got super slow,\naffecting things like unit test runtimes in terminal buffers,\nand making input noticeably laggy.\nThe issue turned out to be my <code>global-auto-revert-mode</code> config.\nApparently if you do it wrong, it turns into a whole stack of polling operations for every buffer.\nThis was a consequence of Claude suggesting something dumb and me not researching it :P\nThe normal configuration will use filesystem notifications like kqueue or inotify.</p>\n<h1><a href=\"#whats-next\" aria-hidden=\"true\" class=\"anchor\" id=\"whats-next\"></a>What's next?</h1>\n<p>I'm pretty happy with the new setup overall.\nObviously some room for tweaks, but it's pretty great overall,\nand I'm really enjoying the tab bar approach for organizing things.\nI'm also frankly shocked at how little CPU I'm using relative to previous norms on my MacBook.</p>\n<p>Next up I'll probably try (in no particular order):</p>\n<ul>\n<li>Magit / Majitsu; I actually love Sublime Merge, but wouldn't mind one less context switch.\nEspecially if I can get a view of the current project easily based on context.\nSublime's search interface is terrible when you have hundreds of repos.</li>\n<li>Chezmoi for dotfile sync + see what breaks on my desktop (FreeBSD).</li>\n<li>More adventures with TRAMP. 
I used this extensively in the early '00s but have mostly been doing local dev this time around.\nBut I see emacs having a lot of potential for remote dev with TRAMP so I'll give that a shot for some stuff over the next few weeks.</li>\n</ul>\n",
      "summary": "",
      "date_published": "2026-03-28T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "software-engineering"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//returning-to-emacs.html",
      "url": "https://ianwwagner.com//returning-to-emacs.html",
      "title": "Returning to Emacs",
      "content_html": "<h1><a href=\"#jetbrains-woes\" aria-hidden=\"true\" class=\"anchor\" id=\"jetbrains-woes\"></a>JetBrains woes</h1>\n<p>I have been a fan of JetBrains products for over a decade by now,\nand an unapologetic lover of IDEs generally.\nI've used PyCharm since shortly after it launched,\nand over the years I've used IntelliJ IDEA,\nWebStorm, DataGrip, RustRover, and more.\nI literally have the all products pack (and have for many years).</p>\n<p>I truly believe that a good IDE can be a productivity multiplier.\nYou get refactoring, jump-to-definition, symbol-aware search,\nsaved build/run configurations, a nice and consistent interface\nto otherwise terrible tooling (looking at you CMake and the half dozen Python package managers\nof the last decade and change).</p>\n<p>But something has changed over the past few years.\nThe quality of the product has generally deteriorated in several ways.\nWith the advent of LSP, the massive lead JetBrains had in &quot;code intelligence&quot;\nhas eroded, and in many cases no longer exists.\nThe resource requirements of the IDE have also ballooned massively,\neven occasionally causing memory pressure on my amply equipped MacBook Pro with 32GB of RAM.</p>\n<p>(Side note: I regularly have 3 JetBrains IDEs open at once because I need to work in many languages,\nand for some reason they refuse to ship a single product that does that.\nI would have paid for such a product.)</p>\n<p>And as if that weren't enough, it seems like I have to restart to install some urgent nagging update\nseveral times/week, usually related to one of their confusing mess of AI plugins\n(is AI Chat what we're supposed to use? Or Junie? Or... 
what?).\nTo top it all off, stability has gone out the window.\nAt least once/week, I will open my laptop from sleep,\nonly to find out that one or more of my JetBrains IDEs has crashed.\nUsually RustRover.\nWhich also eats up like 30GB of extra disk space for things like macro expansions\nand other code analysis.\nThe taxes are high and increasing on every front.</p>\n<h1><a href=\"#my-philosophy-of-editors\" aria-hidden=\"true\" class=\"anchor\" id=\"my-philosophy-of-editors\"></a>My philosophy of editors</h1>\n<p>So, I decided the time was right to give Emacs another shot.</p>\n<p>If you know me personally, you may recall that I made some strong statements in the past\nto the effect that spending weeks writing thousands of lines of Lua to get the ultimate Neovim config was silly.\nAnd my strongly worded statements of the past were partially based on my own experiences with such editors,\nincluding Emacs.\nBasically, I appreciate that you <em>can</em> &quot;build your own lightsaber&quot;,\nbut I did not consider that to be a good use of my time.\nOne of the reasons I like(d) JetBrains is that I <em>didn't</em> ever need to think about tweaking configs!</p>\n<p>But things have gotten so bad that I figured I'd give it a shot with a few stipulations.</p>\n<ol>\n<li>I would try it for a week, but if it seriously hampered my productivity after a few days, I'd switch back.</li>\n<li>I was only going to spend a few hours configuring it.</li>\n</ol>\n<p>With these constraints, I set off to see if I needed to revise my philosophy of editors.</p>\n<h1><a href=\"#why-emacs\" aria-hidden=\"true\" class=\"anchor\" id=\"why-emacs\"></a>Why Emacs?</h1>\n<p>Aside: why not (Helix|Neovim|Zed|something else)?\nA few reasons, in no particular order:</p>\n<ul>\n<li>I sorta know Emacs. I used it as one of my primary editors for a year or two in the early 2010s.</li>\n<li>I tried Helix for a week last year. 
It didn't stick; something about &quot;modal editing&quot; just does not fit with my brain.</li>\n<li>I don't mind a terminal per se, but we invented windowing systems decades before I was born and I don't understand the fascination\nwith running <em>everything</em> in a terminal (or a web browser, for that matter :P).</li>\n<li>If I'm going to go through the pain of switching, I want to be confident it'll be around and thriving in another 10 years.\nAnd it should work everywhere, including lesser known platforms like FreeBSD.</li>\n<li>If your movement keys require a QWERTY layout, I will be very annoyed.</li>\n</ul>\n<h1><a href=\"#first-impressions-3-days-in\" aria-hidden=\"true\" class=\"anchor\" id=\"first-impressions-3-days-in\"></a>First impressions (3 days in)</h1>\n<p>So, how's it going so far?\nHere are a few of the highlights.</p>\n<h2><a href=\"#lsps-have-improved-a-lot\" aria-hidden=\"true\" class=\"anchor\" id=\"lsps-have-improved-a-lot\"></a>LSPs have improved a lot!</h2>\n<p>It used to be the case that JetBrains had a dominant position in code analysis.\nThis isn't the case anymore, and most of the languages I use that would benefit from an LSP\nhave a great one available.\nThings have improved a lot, particularly in terms of Emacs integrations,\nover the past decade!\n<a href=\"https://www.gnu.org/software/emacs/manual/html_node/eglot/Eglot-Features.html\"><code>eglot</code></a> is now bundled with Emacs,\nso you don't even need to go out of your way to get some funky packages hooked up\n(like I had to with some flycheck plugin for Haskell back in the day).</p>\n<h3><a href=\"#refactoring-tools-have-also-improved\" aria-hidden=\"true\" class=\"anchor\" id=\"refactoring-tools-have-also-improved\"></a>Refactoring tools have also improved</h3>\n<p>The LSP-guided tools for refactoring have also improved a lot.\nIt used to be that only a &quot;real IDE&quot; had much better than grep and replace.\nI was happy to find that <code>eglot-rename</code> 
&quot;just worked&quot;.</p>\n<h3><a href=\"#docs\" aria-hidden=\"true\" class=\"anchor\" id=\"docs\"></a>Docs</h3>\n<p>I'm used to hovering my mouse over any bit of code, waiting a few seconds,\nand being greeted by a docs popover.\nThis is now possible in Emacs too with <code>eldoc</code> + your LSP.\nI added the <a href=\"https://github.com/casouri/eldoc-box\"><code>eldoc-box</code></a> plugin and configured it to my liking.</p>\n<h3><a href=\"#quick-fix-actions-work-too\" aria-hidden=\"true\" class=\"anchor\" id=\"quick-fix-actions-work-too\"></a>Quick fix actions work too!</h3>\n<p>So far, every single quick-fix action that I'm used to in RustRover\nseems to be there in the eglot integration with rust-analyzer.\nIt took me a few minutes to realize that this was called <code>eglot-code-actions</code>,\nbut once I figured that out, I was rolling.</p>\n<h2><a href=\"#jump-to-definition-works-great-but-navigation-has-caveats\" aria-hidden=\"true\" class=\"anchor\" id=\"jump-to-definition-works-great-but-navigation-has-caveats\"></a>Jump to definition works great, but navigation has caveats</h2>\n<p>I frequently use the jump-to-definition feature in IDEs.\nUsually by command+clicking.\nYou can do the same in Emacs with <code>M-.</code>, which is a bit weird, but okay.\nI picked up the muscle memory after less than an hour.\nThe weird thing though is what happens next.\nI'm used to JetBrains and most other well-designed software (<em>glares in the general direction of Apple</em>)\n&quot;just working&quot; with the forward+back buttons that many input devices have.\nEmacs did not out of the box.</p>\n<p>One thing JetBrains did fairly well was bookmarking where you were in a file, and even letting you jump back after\nnavigating to the definition or to another file.\nThis had some annoying side effects with multiple tabs, which I won't get into, but it worked overall.\nIn Emacs, you can return from a definition jump with <code>M-,</code>, but there is no general 
navigate forward/backward concept.\nThis is where the build-your-own-lightsaber philosophy comes in, I guess.\nI knew I'd hit it eventually.</p>\n<p>I tried out a package called <code>better-jumper</code> but it didn't <em>immediately</em> do what I wanted,\nso I abandoned it.\nI opted instead for simple backward and forward navigation.\nIt works alright.</p>\n<pre><code class=\"language-lisp\">(global-set-key (kbd &quot;&lt;mouse-3&gt;&quot;) #'previous-buffer)\n(global-set-key (kbd &quot;&lt;mouse-4&gt;&quot;) #'next-buffer)\n</code></pre>\n<p>Aside: I had to use <code>C-h k</code> (<code>describe-key</code>) to figure out what the mouse buttons were.\nAdvice I saw online apparently isn't universally applicable,\nand Xorg, macOS, etc. may number the buttons differently!</p>\n<h2><a href=\"#terminal-emulation-within-emacs\" aria-hidden=\"true\" class=\"anchor\" id=\"terminal-emulation-within-emacs\"></a>Terminal emulation within Emacs</h2>\n<p>The Emacs <code>shell</code> mode is terrible.\nIt's particularly unusable if you're running any sort of TUI application.\nA friend recommended <a href=\"https://codeberg.org/akib/emacs-eat\"><code>eat</code></a> as an alternative.\nThis worked pretty well out of the box with most things,\nbut when I ran <code>cargo nextest</code> for the first time,\nI was shocked at how slow it was.\nMy test suite, which normally runs in under a second, took over 30!\nYikes.\nI believe the slowness is because it's implemented in elisp,\nwhich is still pretty slow even when native compilation is enabled.</p>\n<p>Another Emacs user recommended I try out <a href=\"https://github.com/akermu/emacs-libvterm\"><code>vterm</code></a>, so I did.\nHallelujah!\nIt's no iTerm 2, and it does have a few quirks,\nbut it's quite usable and MUCH faster.\nIt also works better with full-screen TUI apps like Claude Code.</p>\n<h2><a href=\"#claude-code-cli-is-actually-great\" aria-hidden=\"true\" class=\"anchor\" 
id=\"claude-code-cli-is-actually-great\"></a>Claude Code CLI is actually great</h2>\n<p>I'm not going to get into the pros and cons of LLMs in this post.\nBut if you use these tools in your work,\nI think you'll be surprised by how good the experience is with <code>vterm</code> and the <code>claude</code> CLI.\nI have been evaluating JetBrains' disjoint attempts at integrations with Junie,\nand more recently Claude Code and Codex.</p>\n<p>Junie is alright for some things.\nThe only really good thing I have to say about the product is that at least it let me select a GPT model.\nAnthropic models have been severely hampered in their ability to do anything useful in most codebases I work in,\ndue to tiny context windows.\nThat recently changed when Anthropic rolled out a 1 million token context window to certain users.</p>\n<p>JetBrains confusingly refers to Claude Code as &quot;Claude Agent&quot;, and team subscriptions automatically include some monthly credits.\nEvery single JetBrains IDE will install its own separate copy of Claude Code (yay).\nBut it <em>is</em> really just shelling out to Claude Code, it seems\n(it asks for your permission to download the binary.\nCodex is the same.)</p>\n<p>Given this, I assumed the experience and overall quality would be similar.\nWell, I was VERY wrong there.\nClaude Code in the terminal is far superior for a number of reasons.\nNot just access to the new model, though that helps.\nYou can also configure &quot;effort&quot; (lol), and the &quot;plan&quot; mode seems to be far more sophisticated than what you get in the JetBrains IDEs.</p>\n<p>So yeah, if you're going to use these tools, just use the official app.\nIt makes sense; they have an incentive to push people to buy direct.\nAnd it so happens that Claude Code fits comfortably in my Emacs environment.</p>\n<p>More directly relevant to this post,\nLLMs (any of them really) are excellent at recommending Emacs packages and config tweaks.\nSo it's never been easier to give it 
a try.\nI've spent something like 2-3x longer writing this post than I did configuring Emacs.\n(And yes, before you ask, this post is 100% hand-written.)\nMy basic flow was to work, get annoyed (that's pretty easy for me),\nand describe my problem to ChatGPT or Claude.\nI am nowhere near the hours I budgeted for config fiddling.\nThat surprised me!</p>\n<h2><a href=\"#vcs-integration\" aria-hidden=\"true\" class=\"anchor\" id=\"vcs-integration\"></a>VCS integration</h2>\n<p>While I'm no stranger to hacking around with nothing more than a console,\nI really don't like the git CLI.\nI've heard jj is better, but honestly I think GUIs are pretty great most of the time.\nI will probably try magit at some point,\nbut for now I'm very happy with Sublime Merge.</p>\n<p>But one thing I MUST have in my editor is a &quot;gutter&quot; view of lines that are new/changed,\nand a way to get a quick inline diff.\nJetBrains had a great UX for this which I used daily.\nAnd for Emacs, I found something just as great: <a href=\"https://github.com/dgutov/diff-hl\"><code>diff-hl</code></a>.</p>\n<p>My config for this is very simple:</p>\n<pre><code class=\"language-lisp\">(unless (package-installed-p 'diff-hl)\n  (package-install 'diff-hl))\n(use-package diff-hl\n  :config\n  (global-diff-hl-mode))\n</code></pre>\n<p>To get a quick diff of a section that's changed,\nI use <code>diff-hl-show-hunk</code>.\nI might even like the hunk review experience here better than in JetBrains!</p>\n<h2><a href=\"#project-wide-search\" aria-hidden=\"true\" class=\"anchor\" id=\"project-wide-search\"></a>Project-wide search</h2>\n<p>I think JetBrains has the best search around with their double-shift, cmd+shift+o, and cmd-shift-f views.\nI have not yet gotten my Emacs configured to be as good.\nBut <code>C-x p g</code> (<code>project-find-regexp</code>) is pretty close.\nI'll look into other plugins later for fuzzy filename/symbol search.\nI <em>do</em> miss that.</p>\n<h2><a 
href=\"#run-configurations\" aria-hidden=\"true\" class=\"anchor\" id=\"run-configurations\"></a>Run configurations</h2>\n<p>The final pleasant surprise is that I don't miss JetBrains run configurations as much as I expected.\nI've instead switched to putting a <a href=\"https://just.systems/man/en/introduction.html\"><code>justfile</code></a> in my repo and populating that with my run configurations\n(much of the software I work on has half a dozen switches which vary by environment).\nThis also has the side effect of cleaning up some of my CI configuration (<code>just</code> run the same thing!)\nand also serves as useful documentation to LLMs.</p>\n<h2><a href=\"#spell-checking\" aria-hidden=\"true\" class=\"anchor\" id=\"spell-checking\"></a>Spell checking</h2>\n<p>I have <a href=\"https://github.com/crate-ci/typos\"><code>typos</code></a> configured for most of my projects in CI,\nbut it drives me nuts when an editor doesn't flag typos for me.\nJetBrains did this well.\nEmacs has nothing out of the box (Zed also annoyingly doesn't ship with anything, which is really confusing to me).\nBut it's easy to add.</p>\n<p>I went with Jinx.\nThere are other options, but this one seemed pretty modern and worked without any fuss, so I stuck with it.</p>\n<h1><a href=\"#papercuts-to-solve-later\" aria-hidden=\"true\" class=\"anchor\" id=\"papercuts-to-solve-later\"></a>Papercuts to solve later</h1>\n<p>This is all a lot more positive than I was expecting, to be honest!\nI am not going to cancel my JetBrains subscription tomorrow;\nthey still <em>do</em> make the best database tool I know of.\nBut I've moved all my daily editing to Emacs.</p>\n<p>That said, there are still some papercuts I need to address:</p>\n<ul>\n<li>Macro expansion. I liked that in RustRover. There's apparently a way to get this with <code>eglot-x</code> which I'll look into later.</li>\n<li>Automatic indentation doesn't work out of the box for all modes to my liking. 
I think I've fixed most of these but found the process confusing.</li>\n<li>Files don't reload in buffers automatically with disk changes (e.g. <code>cargo fmt</code>)!</li>\n<li>Code completion and jump to definition don't work inside rustdoc comments.</li>\n<li>RustRover used to highlight all of my <code>mut</code> variables. I would love to get that back in Emacs.</li>\n</ul>\n",
      "summary": "",
      "date_published": "2026-03-18T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "software-engineering",
        "shell"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//typing-hanja-on-macos.html",
      "url": "https://ianwwagner.com//typing-hanja-on-macos.html",
      "title": "Typing Hanja on macOS",
      "content_html": "<p>이제 漢字 쓰는 方法을 알게 되었다!</p>\n<p>I was today years old when I finally figured out how to type Hanja (characters from China that were historically used to write Korean).\nIt struck me as very strange that this didn't seem possible in any of the obvious input methods.\nIn Japanese, for example, you get search-as-you-type style suggestions popping up as you type,\nwhether in Kana or Romaji mode.\nIn fact, until now, I mostly relied on my prior study of Japanese,\nswitched to that layout, and typed in a Japanese reading.\nThis was quite clunky though as I am now learning the Korean readings.</p>\n<p>I even asked several Koreans if they knew how,\nand none did (at least for macOS), since it's relatively uncommon to use them these days,\nparticularly for younger people.\nWindows keyboards have a dedicated Hanja mode key,\nbut I've never seen an Apple keyboard with this,\nand I'm not even totally sure if macOS understands the key code\n(if anyone knows, let me know on Mastodon).</p>\n<p>It turns out this IS in fact possible; it's just uncharacteristically buried.\nThe trick is to press option+return.\nThen you'll get a menu where you can select Hanja matches\nfor the previous &quot;word&quot; (it seems to rely on spacing, which is not always completely consistent in colloquial writing,\nbut it's not too hard to get used to.)\nI found tip on Apple's <a href=\"https://support.apple.com/en-gb/guide/korean-input-method/welcome/mac\">website</a>\nvia a search.</p>\n<p>This is probably only relevant to like 2 other people on the internet, but I thought I'd spread the word\nsince it was relatively hard to find!</p>\n",
      "summary": "",
      "date_published": "2026-03-07T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "macos",
        "i18n",
        "korean",
        "languages"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//reqwest-0-13-upgrade-and-webpki.html",
      "url": "https://ianwwagner.com//reqwest-0-13-upgrade-and-webpki.html",
      "title": "reqwest 0.13 Upgrade and WebPKI",
      "content_html": "<p>In case you missed the <a href=\"https://seanmonstar.com/blog/reqwest-v013-rustls-default/\">announcement</a>,\nthe <code>reqwest</code> crate has a new and very important release out!\n<code>reqwest</code> is an opinionated, high-level HTTP client for Rust,\nand the main feature of this release is that <a href=\"https://rustls.dev/\"><code>rustls</code></a>\nis now the default TLS backend.\nRead the excellent blog posts from Sean and others on why <code>rustls</code>\nis safer and often faster than native TLS.\nIt's also a lot more convenient most of the time!</p>\n<h1><a href=\"#changes-to-certificate-verification\" aria-hidden=\"true\" class=\"anchor\" id=\"changes-to-certificate-verification\"></a>Changes to certificate verification</h1>\n<p>This post is about one of the more mundane parts of the release.\nPreviously there were a lot of somewhat confusing features related to certificate verification.\nThese have been condensed down to a smaller number of feature flags.\nThe summary of these changes took a bit to &quot;click&quot; for me, so here's a rephrasing in my own words.</p>\n<ul>\n<li>By default, it uses the <a href=\"https://docs.rs/rustls-platform-verifier/latest/rustls_platform_verifier/\">native platform verifier</a>,\nwhich looks for root certificates in your system store, and inherits systemwide revocations and explicit trust settings\nin addition to the &quot;baseline&quot; root CAs trusted by your OS.</li>\n<li>The feature flag to enable WebPKI bundling of roots is gone.\nWebPKI is a bundle of CA root certificates trusted and curated by Mozilla.\nIt's a reasonably standard set, and most other trust stores look pretty similar.</li>\n<li>You can merge in your own <em>additionally</em> trusted root certificates using <a href=\"https://docs.rs/reqwest/latest/reqwest/struct.ClientBuilder.html#method.tls_certs_merge\"><code>tls_certs_merge</code></a>.</li>\n<li>You can be extra exclusive and use <a 
href=\"https://docs.rs/reqwest/latest/reqwest/struct.ClientBuilder.html#method.tls_certs_only\"><code>tls_certs_only</code></a>\nto limit verification to only the certificates you specify.</li>\n</ul>\n<p>The documentation and release notes also mention that <code>tls_certs_merge</code> is not always supported.\nI frankly have no idea what conditions cause this to be supported or not.\nBut <code>tls_certs_only</code> apparently can't fail. ¯\\_(ツ)_/¯</p>\n<h1><a href=\"#what-this-means-for-containerized-applications\" aria-hidden=\"true\" class=\"anchor\" id=\"what-this-means-for-containerized-applications\"></a>What this means for containerized applications</h1>\n<p>The reason I'm interested in this is mostly because at <code>$DAYJOB</code>, just about everything is deployed in containers.\nFor reasons that I don't fully understand (something about image size maybe??),\nthe popular container images like <code>debian:trixie-slim</code> <strong>do not include any root CAs</strong>.\nYou have to <code>apt-get install</code> them yourself.\nThis is to say that most TLS applications will straight up break in the out-of-the-box config.</p>\n<p>Previously I had seen this solved in two ways.\nThe first is to install the certs from your distribution's package manager like so:</p>\n<pre><code class=\"language-dockerfile\">RUN apt-get update \\\n &amp;&amp; apt-get install -y --no-install-recommends ca-certificates \\\n &amp;&amp; rm -rf /var/lib/apt/lists/*\n</code></pre>\n<p>The second is to add the WebPKI roots to your cargo dependencies.\nThis actually requires some manual work; adding the crate isn't enough.\nYou then have to add all of the roots (e.g. 
via <code>tls_certs_merge</code> or <code>tls_certs_only</code>).</p>\n<h1><a href=\"#which-approach-is-better\" aria-hidden=\"true\" class=\"anchor\" id=\"which-approach-is-better\"></a>Which approach is better?</h1>\n<p>The net result is <em>approximately</em> the same, but not entirely.\nThe system-level approach is more flexible.\nPresumably you would get updates in some cases without having to rebuild your application\n(though you do <em>not</em> get these automatically; the certs are only loaded once on app startup\nby <code>rustls_platform_verifier</code>!).\nPresumably you would also get any, say, enterprise-level trust, distrust, CRLs, etc.\nthat are dictated by your corporate IT department.</p>\n<p>The WebPKI approach on the other hand is baked at build time.\nThe <a href=\"https://docs.rs/webpki-root-certs/latest/webpki_root_certs/\">crate</a>\nhas a pretty strong, if slightly obtuse warning about this:</p>\n<blockquote>\n<p>This library is suitable for use in applications that can always be recompiled and instantly deployed. For applications that are deployed to end-users and cannot be recompiled, or which need certification before deployment, consider a library that uses the platform native certificate verifier such as <code>rustls-platform-verifier</code>. This has the additional benefit of supporting OS provided CA constraints and revocation data.</p>\n</blockquote>\n<p>Attempting to read between the lines, past that &quot;instantly deployed&quot; jargon,\nI think they are really just saying &quot;if you use this, certs are baked at compile time and you <em>never</em> get automatic updates. 
Be careful with that.&quot;</p>\n<p>So it's clear to me you shouldn't ship, say, a static binary to users with certs baked like this.\nBut I'm building server-side software.\nAnd as of February 2026, people look at you funny if you don't deploy using containers.\nI <em>can</em> deploy sufficiently instantly,\nthough to be honest I would have no idea <em>when</em> I should.\nMost apps get deployed frequently enough that I would assume this just doesn't matter,\nand so I'm not sure the warning as-written does much to help a lot of the Rust devs I know.</p>\n<h1><a href=\"#conclusion\" aria-hidden=\"true\" class=\"anchor\" id=\"conclusion\"></a>Conclusion</h1>\n<p>My conclusion is that if you're deploying containerized apps, there is approximately no functional difference.\nYour container is a static image anyways.\nIt doesn't typically run background tasks of any sort.\nAnd even if it did, the library won't reload the trust store while the application is running.\nSo it's functionally the same (delta any minor differences between WebPKI and Debian, which should be minimal).\nSimilarly, unless you work for a large enterprise or government,\nyou probably don't have a mandated, hand-picked set of CAs and CRLs.\nSo again, there really is no difference here as far as I can tell.</p>\n<p>In spite of that, I decided to switch away from using WebPKI in one of our containers that I upgraded.\nThe reason is that structuring it this way\n(provided that the sources are copied from a previous layer!)\nensures that every image build always has the latest certs from Debian.\n<code>cargo build</code> is a lot more deterministic,\nand will use whatever you have in the lockfile unless you explicitly run <code>cargo update</code>.</p>\n<p>And even though I'm fortunate to not have an IT apparatus dictating cert policy today,\nyou never know... 
this approach seems to be both more flexible and creates a &quot;pit of success&quot;\nrather than a landmine where the trust store may not see an update for a year\ndespite regular rebuilds.</p>\n<p>In other words, I think Sean made the right choice, and you should <em>probably</em> delegate to the system,\nunless you have a particular reason to do otherwise.</p>\n<p>Hope this helps; I wrote this because I didn't understand the tradeoffs initially,\nand had some trouble parsing the existing writing on the subject.</p>\n",
      "summary": "",
      "date_published": "2026-02-13T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "rust",
        "cryptography"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//even-safer-rust-with-miri.html",
      "url": "https://ianwwagner.com//even-safer-rust-with-miri.html",
      "title": "Even Safer Rust with Miri",
      "content_html": "<p>Recently some of the Miri contributors published a <a href=\"https://plf.inf.ethz.ch/research/popl26-miri.html\">paper that was accepted to POPL</a>.\nI've been using Rust professionally for about 7 years now,\nand while I'd <em>heard of</em> Miri several times over the years,\nI think there's a widespread lack of knowledge about what it does, and why anyone should care.\nI only recently started using it myself, so I'm writing this post to share\nwhat Miri is, why you should care, and how you can get started easily.</p>\n<h1><a href=\"#what-is-miri\" aria-hidden=\"true\" class=\"anchor\" id=\"what-is-miri\"></a>What is Miri?</h1>\n<p>Miri is an interpreter for Rust's mid-level intermediate representation (MIR; hence the acronym).\nThat's how I first remember seeing it described years ago,\nand that's what the GitHub project description still says.</p>\n<p>The latest README is a bit more helpful though: it's a tool for detecting <em>undefined behavior</em> (UB) in Rust code.\nIn other words, it helps you identify code that's unsafe or unsound.\nWhile it would be a bug to hit such behaviors in safe Rust,\nif you're using <code>unsafe</code> (or anything in your dependency chain does!),\nthen this is a real concern!\nMiri has in fact even found soundness bugs in the Rust standard library,\nso even a transitive sort of <code>#![forbid(unsafe_code)]</code> won't help you.</p>\n<h1><a href=\"#what-is-ub-and-why-is-it-bad\" aria-hidden=\"true\" class=\"anchor\" id=\"what-is-ub-and-why-is-it-bad\"></a>What is UB (and why is it bad)?</h1>\n<p>I think to understand why Miri matters,\nwe first need to understand why UB is bad.\nThis is not something that most professional programmers have a great understanding of (myself included).</p>\n<p>In the abstract, UB can mean &quot;anything that isn't specified&quot;, or something like that...\nBut that's not very helpful!\nAnd it doesn't really explain the stakes if we don't avoid it.\nThe Rust Reference has a <a 
href=\"https://doc.rust-lang.org/reference/behavior-considered-undefined.html\">list</a>\nof behaviors that are considered to be undefined in Rust,\nbut they note that this list is not exhaustive.</p>\n<p>When searching for a better understanding,\nI've seen people online make statements like\n&quot;UB means your program can do literally anything at this point, like launch nuclear missiles.&quot;\nWhile this is technically true, this isn't particularly helpful to most readers.\nI want something more concrete...</p>\n<p>The authors of the paper put UB's consequences in terms which really &quot;clicked&quot; for me\nusing a logical equivalence, which I'll quote here:</p>\n<blockquote>\n<p>Furthermore, Undefined Behavior is a massive security problem. Around 70% of critical security vulnerabilities are caused by memory safety violations [38, 18, 32], and all of these memory safety violations are instances of Undefined Behavior. After all, if the attacker overflows a buffer to eventually execute their own code, this is not something that the program does because the C or C++ specification says so—the specification just says that doing out-of-bounds writes (or overwriting the vtable, or calling a function pointer that does not actually point to a function, or doing any of the other typical first steps of an exploit chain) is Undefined Behavior, and executing the attacker’s code is just how Undefined Behavior happens to play out in this particular case.</p>\n</blockquote>\n<p>I never made this connection on my own.\nI equate UB most often with things like data races between threads,\nwhere you can have unexpected update visibility without atomics or locks.\nOr maybe torn reads of shared memory that's not properly synchronized.\nBut this is a new way of looking at it that makes the stakes more clear,\nespecially if you're doing anything with pointers.</p>\n<p>Another connection I never made previously is that UB is relative to a very specific context.\nHere's another 
quote from the paper:</p>\n<blockquote>\n<p>The standard random number crate used across the Rust ecosystem performed an unaligned memory access. Interestingly, the programmers seemed to have been aware that alignment is a problem in this case: there were dedicated code paths for x86 and for other architectures. Other architectures used read_unaligned, but the x86 code path had a comment saying that x86 allows unaligned reads, so we do not need to use this (potentially slower) operation. Unfortunately, this is a misconception: even though x86 allows unaligned accesses, Rust does not, no matter the target architecture—and this can be relevant for optimizations.</p>\n</blockquote>\n<p>This is REALLY interesting to me!\nIt makes sense in retrospect, but it's not exactly obvious.\nLanguages are free to define their own semantics in addition to or independently of hardware.\nI suspect Rust's specification here is somehow related to its concept of allocations\n(which the paper goes into more detail about).</p>\n<p>It is obviously not &quot;undefined&quot; what the hardware will do when given a sequence of instructions.\nBut it <em>is</em> undefined in Rust, which controls how those instructions are generated.\nAnd here the Rust Reference is explicit in calling this UB.\n(NOTE: I don't actually know what the &quot;failure modes&quot; are here, but you can imagine they could be very bad\nsince it could enable the compiler to make a bad assumption that leads to a program correctness or memory safety vulnerability.)</p>\n<p>I actually encountered the same confusion re: what the CPU guarantees vs what Rust guarantees for unaligned reads in <a href=\"https://github.com/stadiamaps/valinor/blob/5e75b2b8267cee2a57d4f22fcc5605728e0cf76e/valhalla-graphtile/src/graph_tile.rs#L857\">one of my own projects</a>,\nas a previous version of this function didn't account for alignment.\nI addressed the issue by using the native zerocopy <a 
href=\"https://docs.rs/zerocopy/latest/zerocopy/byteorder/struct.U32.html\"><code>U32</code></a> type,\nwhich is something I'd have needed to do anyways to ensure correctness regardless of CPU endianness.\n(If you need to do something like this at a lower level for some reason, there's a <a href=\"https://doc.rust-lang.org/std/ptr/fn.read_unaligned.html\"><code>read_unaligned</code> function in <code>std::ptr</code></a>).</p>\n<p>TL;DR - UB is both a correctness and a security issue, so it's really bad!</p>\n<h1><a href=\"#using-miri-for-great-good\" aria-hidden=\"true\" class=\"anchor\" id=\"using-miri-for-great-good\"></a>Using Miri for great good</h1>\n<p>One of the reasons I write pretty much everything that I can in Rust is because\nit naturally results in more correct and maintainable software.\nThis is a result of the language guarantees of safe Rust,\nthe powerful type system,\nand the whole ecosystem of excellent tooling.\nIt's a real <a href=\"https://blog.codinghorror.com/falling-into-the-pit-of-success/\">pit of success</a> situation.</p>\n<p>While you can run a program under Miri as a one-shot test,\nthis isn't a practical approach to ensuring correctness long-term.\nMiri is a <em>complementary</em> tool to existing things that you should be doing already.\nAutomated testing is the most obvious one,\nbut fuzzing and other strategies may also be relevant for you.</p>\n<p>If you're already running automated tests in CI, adding Miri is easy.\nHere's an example of how I use it in GitHub actions:</p>\n<pre><code class=\"language-yaml\">steps:\n    - uses: actions/checkout@v4\n    - uses: taiki-e/install-action@nextest\n\n    - name: Build workspace\n      run: cargo build --verbose\n\n    - name: Run tests\n      run: cargo nextest run --no-fail-fast\n\n    - name: Run doc tests (not currently supported by nextest https://github.com/nextest-rs/nextest/issues/16)\n      run: cargo test --doc\n\n    - name: Install big-endian toolchain (s390x)\n      run: 
rustup target add s390x-unknown-linux-gnu\n\n    - name: Install s390x cross toolchain and QEMU (Ubuntu only)\n      run: sudo apt-get update &amp;&amp; sudo apt-get install -y gcc-s390x-linux-gnu g++-s390x-linux-gnu libc6-dev-s390x-cross qemu-user-static\n\n    - name: Run tests (big-endian s390x)\n      run: cargo nextest run --no-fail-fast --target s390x-unknown-linux-gnu\n\n    - name: Install Miri\n      run: rustup +nightly component add miri\n\n    - name: Run tests in Miri\n      run: cargo +nightly miri nextest run --no-fail-fast\n      env:\n        RUST_BACKTRACE: 1\n        MIRIFLAGS: -Zmiri-disable-isolation\n\n    - name: Run doc tests in Miri\n      run: cargo +nightly miri test --doc\n      env:\n        RUST_BACKTRACE: 1\n        MIRIFLAGS: -Zmiri-disable-isolation\n\n    - name: Install nightly big-endian toolchain (s390x)\n      run: rustup +nightly target add s390x-unknown-linux-gnu\n\n    - name: Run tests in Miri (big-endian s390x)\n      run: cargo +nightly miri nextest run --no-fail-fast --target s390x-unknown-linux-gnu\n      env:\n        RUST_BACKTRACE: 1\n        MIRIFLAGS: -Zmiri-disable-isolation\n</code></pre>\n<p>I know that's a bit longer than what you'll find in the README,\nbut I wanted to highlight my usage in a more complex codebase\nsince these examples are less common.\n(NOTE: I assume an Ubuntu runner here, since Linux has the best support for Miri right now.)\nSome things to highlight:</p>\n<ul>\n<li>I use <a href=\"https://nexte.st/\">nextest</a>, which is significantly faster for large suites. (NOTE: It <a href=\"https://github.com/nextest-rs/nextest/issues/16\">does not support doc tests</a> at the time of this writing).</li>\n<li>I pass some <code>MIRIFLAGS</code> to disable host isolation for my tests, since they require direct filesystem access. You may not need this for your project, but I do for mine.</li>\n<li>Partly because I can, and partly because big-endian CPUs do still exist, I run tests under two targets. 
Miri is capable of doing this with target flags, which is REALLY cool, and the <code>s390x-unknown-linux-gnu</code> target is the &quot;big-endian target of choice&quot; from the Miri authors. This requires a few dependencies and flags.</li>\n<li>Note that cargo doc tests <a href=\"https://github.com/rust-lang/cargo/issues/6460\">do not support building for alternate targets</a>.</li>\n</ul>\n<p>Hopefully you learned something from this post.\nI'm pretty sure I wrote my first line of unsafe Rust less than a year ago\n(after using it professionally for over 6 years prior),\nso even if you don't need this today, file it away for later.\nAs I said at the start, I'm still not an expert,\nso if you spot any errors, please reach out to me on Mastodon!</p>\n",
      "summary": "",
      "date_published": "2026-01-07T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "rust",
        "software-reliability"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//2025-in-review.html",
      "url": "https://ianwwagner.com//2025-in-review.html",
      "title": "2025 in Review",
      "content_html": "<p>I have never done one of these kinds of public posts, but saw a few from friends so I thought it might be useful!</p>\n<p>This year I was simultaneously more focused on my technical craft than ever,\nbut also had more of a &quot;life&quot; than ever.\nI took more random days off to go chill with friends, go skiing, etc.,\nand had more time with family.</p>\n<p>It is probably also one of the darkest years in world history as a whole.\nThe worst humanitarian abuses in a century continue,\nencouraged and perpetrated by what is supposed to be the &quot;free world.&quot;\nBut enough ink has already been spilled on that and you don't need to hear it from me.\nAnd South Korean politics show that you DO have a voice.\nSo make it heard, and let's focus on the good stuff.</p>\n<h1><a href=\"#travel\" aria-hidden=\"true\" class=\"anchor\" id=\"travel\"></a>Travel</h1>\n<p>I also traveled more than any year since COVID.\nIn addition to my annual pilgrimage to <a href=\"https://latitude59.ee/\">Latitude59</a> in Tallinn,\nother highlights were going to Hong Kong for Rust Asia\nand London for Anjunadeep Open Air.</p>\n<p>Surprisingly, this was my first time to visit the UK,\nand I have to say London is one of the few other cities I could actually see myself living in.\nDespite its flaws, London had a charming atmosphere,\namazing public spaces, loads of greenery,\ngreat food and drink (I don't really get the hate... 
I thoroughly enjoyed all of my meals),\nand well-functioning public transportation.\nOverall it was a very &quot;livable&quot; city to me,\nand joins Tallinn and Seoul as one of the few places I'd really enjoy living.</p>\n<h1><a href=\"#music\" aria-hidden=\"true\" class=\"anchor\" id=\"music\"></a>Music</h1>\n<p>2025 was a great year for musical experiences.\nHere are a few of my highlights of the year (in no particular order)\nwhich get regularly stuck in my head:</p>\n<ul>\n<li>Kasablanca - Higher Resolution (Side B)</li>\n<li>Monolink - The Beauty of it All</li>\n<li><a href=\"https://www.youtube.com/watch?v=S5UNox0G3xY\">Der Bahn Song</a> (niche bit of parody that I found <em>hilarious</em>)</li>\n<li>Estiva - Little Love (Icarus Remix)</li>\n<li>Perfume - Nebula Romance: Part II</li>\n<li>James Grant pres. Movement Vol. 3 (Live from Mount Agung, Bali)</li>\n</ul>\n<p>Besides all the great albums and mixes,\nI enjoyed more live shows than I have in a very long time (probably since 2015 or so).\nThe club nights and live bands in Tallinn were as amazing as ever.\nI got extremely lucky with tickets to a sold-out Fred Again tour show just 15 mins from home.\nThat was probably the best live show I've ever seen; absolutely incredible production and musical talent!\nAnd Anjunadeep Open Air was great.</p>\n<p>2025 also saw me get back into <em>creating</em> music for fun.\nI hadn't made much time for this in the past decade,\nbut the time felt right.\nI bought myself an Ableton Push,\nand will probably upload something on SoundCloud at some point.\nOr not.\nI'm making music for me, for fun.\nI wish I could do house parties where I'm just jamming,\nbut that probably won't happen in a Korean apartment anytime soon.</p>\n<h1><a href=\"#community\" aria-hidden=\"true\" class=\"anchor\" id=\"community\"></a>Community</h1>\n<p>I initially used &quot;conferences&quot; as a section heading,\nbut it struck me that the reason I go to conferences,\nmeetups, coworking, and 
online forums is the same: community.</p>\n<p>As I do basically every year, I went to Latitude59 in May\nfor the community gatherings.\nIt was a great time, and I got an early look at how AI agents were being adopted.</p>\n<p>The other international conference I attended was Rust Asia in Hong Kong.\nWhat a cool and diverse group of people!\nIt was also great to be back in Hong Kong again for the first time in quite a few years.\nI really hope they do the conference again in 2026.</p>\n<p>I also got to attend two local conferences late in the year: FOSS for All, and FOSS4G Korea.\nNeither conference would have been on my radar if not for some friends being involved in organizing them.</p>\n<p>FOSS for All is a new conference, and the first edition was a huge success.\nIt was far more international than I expected for a Korean conference,\nand a model for running a properly international, bilingual conference.\nI was somewhat surprised that I gave the <em>only</em> talk with a heavy focus on Rust.\nAnd I was pleasantly surprised to see how much of the Korean FOSS community is active on Mastodon.\nI think I tripled the number of Koreans I follow in an afternoon.</p>\n<p>It was also a surprisingly good value for my company as a sponsor.\nI had something like 20 serious conversations with people at our table,\nwhich was something I didn't really expect (the conference was maybe 200 attendees)!\nI'll definitely be back next year.</p>\n<p>FOSS4G Korea was also surprisingly great!\nI think I was the only non-Asian there; a few dedicated people flew in from Japan, which was awesome!\nAI was definitely a theme, and it wasn't the slop-generating 10x &quot;productivity&quot; sort of narrative.\nThe talks were overall even more interesting than I had expected; better than the last international FOSS4G I attended!\nThis was also the first time I fully participated in a conference conducted in another language.\nI'm setting a goal to give a talk in Korean next 
time.</p>\n<p>And speaking of international FOSS4G, it seems the next edition will also be close by\nin Hiroshima!\nI'm very excited to go, after several years of them being quite far away.\nGuess I need to start working on my talk proposals ;)</p>\n<p>Meetup-wise, I took over hosting the Seoul Rust meetup this year, and we ran more events than in any year since COVID.\nWe've had some great talks, and even started a <a href=\"https://www.youtube.com/@RustSeoul\">YouTube channel</a>,\nwhere we'll post recordings of talks in the future (provided that the speaker is OK with it).\nI also gave two talks at the Seoul iOS Meetup: one on Ferrostar, and another on Apple's new Foundation models.\nThe iOS meetup also spawned a new, more general meetup called Dev Korea,\nwhich is growing really fast and has a great community on Discord!</p>\n<h1><a href=\"#reading\" aria-hidden=\"true\" class=\"anchor\" id=\"reading\"></a>Reading</h1>\n<p>I read a lot last year!\nI finally <a href=\"finishing-dragonball-in-korean.html\">finished reading Dragonball in Korean</a>.\nI had never read / watched the series before (because I grew up in relatively rural America without cable TV),\nbut it came highly recommended.\nYou can read about that in my other post.</p>\n<p>Here are some other things that I read + highly recommend:</p>\n<ul>\n<li>Sarah Wynn-Williams - Careless People</li>\n<li>John Carreyrou - Bad Blood</li>\n<li>David Graeber - Debt: The First 5000 Years</li>\n<li>Joseph Cox - Dark Wire</li>\n<li>Geoff White - The Lazarus Heist</li>\n<li>John Bloom - Eccentric Orbits</li>\n<li>Sarah Goodyear, Doug Gordon, and Aaron Naparstek - Life After Cars</li>\n<li>Karl Popper - The Open Society and its Enemies</li>\n</ul>\n<h1><a href=\"#work\" aria-hidden=\"true\" class=\"anchor\" id=\"work\"></a>Work</h1>\n<p>I probably talk about this enough elsewhere, but it was a really fun year work-wise, and we grew a lot too!</p>\n<p>Ferrostar started as one of those audacious ideas which I just 
couldn't resist trying.\nIt's now a healthy open-source project with weekly meetings of the core contributors,\nover 300 stars on GitHub, and 56(!!) forks.\nI think it's pretty safe to say that it's now regarded as the first choice\nunless you want to pay Google millions of dollars, or have an <em>extremely</em> simple use case.\nIt's being adopted by large companies in the space,\nwe're benefiting from contributions back upstream,\nand we're getting new business as a result.</p>\n<p>I'm pretty proud of this as I think it's an example of how open source can balance\ncommunity, collaboration, and sustainability.\nThose last two points are worth emphasizing.\nAll of the core contributors are working in a professional capacity,\nand find it valuable to work together on a shared foundation.</p>\n<p>The other big achievement that I haven't written as much about is rewriting our geocoder,\nmore or less from scratch, in a matter of months.\nYou've probably heard of the <a href=\"https://en.wikipedia.org/wiki/Second-system_effect\">second-system syndrome</a>.\nThe popular trope these days is for engineers to take something that works but is clunky / limited,\nand decide to rewrite it (maybe in Rust, like me 🤣), and never ship, or ship VERY late due to feature creep\nand wanting to get everything perfect.\nI'm definitely guilty of being a perfectionist, but I also believe you can get there gradually while shipping something valuable quickly.</p>\n<p>I approached this rewrite with a clear set of things that I wanted to change,\nand focused almost all of the time initially on getting the foundations right,\nwhich would let me replace the higher layers in a more &quot;agile&quot; way\n(in the sense of the normal use of the word, not a specific methodology).\nIt worked.\nWithin a few months, I had replaced the existing API layer with a new one,\nwhich was serving 99% of our traffic.\nWe didn't have any downtime, and I'm only aware of one accidental breaking change.\nThis is a 
result of careful testing, including snapshot testing at several levels (using the <code>insta</code> crate),\nand oracle testing (simple Python scripts in this case which hit the current and next gen APIs and flagged any differences).</p>\n<p>There will always be more improvements to make, but what's important is that we shipped,\nand we have a solid foundation to build from here.\nAnd not just that, we also have a v2 API with a bunch of improvements.\nAnd since the new API system is serving all the traffic,\nwe even get to backport a lot of the improvements to v1!\nIn fact, we have zero plans of deprecating our v1 API, since the internals are shared,\nand we can continue improving it within the limits of that API contract.\nThis is an engineering achievement I'm really proud of.</p>\n<h1><a href=\"#the-year-ahead\" aria-hidden=\"true\" class=\"anchor\" id=\"the-year-ahead\"></a>The Year Ahead</h1>\n<p>I don't do New Year's resolutions per se,\nbut I expect to work at a slightly less crazy pace,\nand make more time for side projects like music and non-work-related tech.\nI've also decided on my next Korean reading series: Neon Genesis Evangelion.\nI'm currently on volume 5, and expect to finish that this year.</p>\n",
      "summary": "",
      "date_published": "2026-01-03T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "reflections"
      ],
      "language": "en"
    }
  ]
}