LLBBL Blog

  • Security and Reliability in AI-Assisted Development

You may not realize it, but AI code generation is fundamentally non-deterministic. It’s probabilistic at its core: it predicts code rather than computing it.

    And while there’s a lot of orchestration happening between the raw model output and what actually lands in your editor, you can still get wildly different results depending on how you use the tools.
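
To see why, here’s a toy sketch of what “predicting code” means in practice. The probabilities are made up, but the mechanism is real: the model weighs candidate next tokens and samples one.

import random

# Hypothetical next-token distribution for a line of code being generated.
next_token_probs = {"return": 0.6, "yield": 0.3, "raise": 0.1}
tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Two runs of the exact same prompt can legitimately diverge.
print(random.choices(tokens, weights=weights)[0])
print(random.choices(tokens, weights=weights)[0])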

    This matters more than most people realize.

    Garbage In, Garbage Out (Still True)

The old programming adage applies here with renewed importance. You need to be explicit with these tools. Adding predictability to how you build is crucial.

    Some interesting patterns:

    • Specialized agents set up for specific tasks
    • Skills and templates for common operations
    • Orchestrator conversations that plan but don’t implement directly
    • Multiple conversation threads working on the same codebase via Git workspaces

    The more structure you provide, the more consistent your output becomes.

    The Security Problem

    This topic doesn’t get talked about enough. All of our common bugs have snuck into the training data. SQL injection patterns, XSS vulnerabilities, insecure defaults… they’re all in there.
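
To pick the classic example, here’s the injectable pattern the training data is full of, next to the parameterized one you have to explicitly ask for (a self-contained sketch using Python’s built-in sqlite3):

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "x' OR '1'='1"

# The injectable pattern: user input spliced straight into SQL.
leaky = db.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print(leaky)  # [('alice', 'hunter2')] because the OR clause matched everything

# The parameterized version: input is treated as data, not SQL.
safe = db.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # []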

    The model can’t always be relied upon to build it correctly the first time. Then there’s the question of trust.

    Do you trust your LLM provider?

    Is their primary focus on quality and reliable, consistent output? What guardrails exist before the code reaches you? Is the model specialized for coding, or is it a general-purpose model that happens to write code?

    These are important engineering questions.

    Deterministic Wrappers Around Probabilistic Cores

    The more we can put deterministic wrappers around these probabilistic cores, the more consistent the output will be.

    So, what does this look like in practice?

    Testing is no longer optional. We used to joke that we’d get to testing when we had time. That’s not how it works anymore. Testing is required because it provides feedback to the models. It’s your mechanism for catching problems before they compound.

    Testing is your last line of defense against garbage sneaking into the system.
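
Here’s the kind of deterministic check I mean: a minimal pytest sketch where both the function and the tests are hypothetical, with slugify standing in for whatever the model just generated.

import re

def slugify(text: str) -> str:
    # Pretend this body came out of the model.
    return re.sub(r"[^a-zA-Z0-9]+", "-", text.lower()).strip("-")

# The deterministic wrapper: tests that must pass no matter who (or what) wrote the code.
def test_slugify_basics():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_strips_markup_characters():
    assert "<" not in slugify("<script>alert(1)</script>")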

    AI-assisted review is essential. The amount of code you can now create has increased dramatically. You need better tools to help you understand all that code. The review step, typically done during a pull request, is now crucial for product development. Not optional. Crucial.

The model needs to review its own output, or you need a separate review process that catches what the generating step missed.

    The Takeaway

We’re at an interesting point in time. These tools can dramatically increase your output, but we should only trust the result if we build the right guardrails around them.

    Structure your prompts. Test everything. Review systematically. Trust but verify.

The developers who figure out how to add predictability to unpredictable processes are the ones who will be shipping features instead of shitting out code.

    Jan 22, 2026 / DevOps / AI / Programming

  • Learning to Program in 2026

If I had to start over as a programmer in 2026, what would I do differently? This question comes up more and more, and with people actively building software using AI, it’s as relevant as ever.

    Some people will tell you to pick a project and learn whatever language fits that project best. Others will push JavaScript because it’s everywhere and you can build just about anything with it. Both are reasonable takes, but I do think there’s a best first language.

However, don’t take my word for it. Listen to Brian Kernighan. If you’re not familiar with the name, he co-authored The C Programming Language back in 1978 and worked at Bell Labs alongside the creators of Unix. Oh, and he’s a computer science professor at Princeton. This man TAUGHT programming to generations of computer scientists.

    There’s an excellent interview on Computerphile with Kernighan where he makes a compelling case for Python as the first language.

    Why Python?

    Kernighan makes three points that you should listen to.

    First, the “no limitations” argument. Tools like Scratch are great for kids or early learners, but you hit a wall pretty quickly. Python sits in a sweet spot—it’s readable and approachable, but the ecosystem is deep enough that you won’t outgrow it.

    Second, the skills transfer. Once you learn the fundamentals—loops, variables, data structures—they apply everywhere. As Kernighan puts it: “If you’ve done N programming languages, the N+1 language is usually not very hard to get off the ground.”

    Learning to think in code matters more than any specific syntax.

    Third, Python works great for prototyping. You can build something to figure out your algorithms and data structures, then move to another language depending on your needs.

    Why Not JavaScript?

    JavaScript is incredibly versatile, but it throws a lot at beginners. Asynchronous behavior, event loops, this binding, the DOM… and that’s a lot of cognitive overhead when you’re just trying to grasp what a variable is.

    Python’s readable syntax lets you focus on learning how to think like a programmer. Fewer cognitive hurdles means faster progress on the fundamentals that actually matter.

    There’s also the type system. JavaScript’s loose equality comparisons (== vs ===) and automatic type coercion trip people up constantly.

    Python is more predictable. When you’re learning, predictable is good.
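
A quick illustration you can run yourself:

# JavaScript says "1" == 1 is true thanks to type coercion.
# Python refuses to guess:
print("1" == 1)      # False: different types are simply not equal
print("1" + str(1))  # "11": you convert explicitly, or you get a TypeError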

    The Path Forward

    So here’s how I’d approach it: start with Python and focus on the basics. Loops, variables, data structures.

    Get comfortable reading and writing code. Once you’ve got that foundation, you can either go deeper with Python or branch out to whatever language suits the projects you want to build.
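
If you’re wondering what those basics look like in practice, it’s roughly this (a week-one sort of program):

# Variables, a data structure, and a loop: the fundamentals in a few lines.
scores = {"alice": 91, "bob": 78, "carol": 85}

total = 0
for name, score in scores.items():
    print(f"{name}: {score}")
    total += score

print(f"average: {total / len(scores):.1f}")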

    The goal isn’t to master Python, it’s to learn how to think about problems and express solutions in code.

    That skill transfers everywhere, including reviewing AI-generated code in whatever language you end up working with.

There are a ton of great resources online to help you learn Python, but one I see recommended consistently is Python for Everybody by Dr. Chuck.

    Happy coding!

    Jan 21, 2026 / Programming / Python / learning

  • So, Zero Trust is this idea that you never trust and you always verify.

    Cloudflare Tunnels, TailScale, and Ngrok are three different approaches to Zero Trust networking.

Cloudflare Tunnel is a reverse proxy. TailScale is more of a mesh-based VPN. Ngrok is ingress as a service, which means it makes it really easy to spin up public URLs.

    What you need depends on your use case.

    Jan 20, 2026 / Networking / security / Homelab

  • Transhumanism (as a fictional genre, not as a philosophy) is about the idea that we can use technology to overcome the problems inherent to human nature, while cyberpunk is about the idea that we can’t.

I propose a new form of philosophy called “Cyberpunk Luddism.” The idea comes from reading a blog post on Kagi Small Web, Molly’s Guide to Cyberpunk Gardening, which mentions the interesting quote above. I tracked down its original source: a 2009 post by Stephen Lea Sheppard on RPG.net.

    Jan 20, 2026 / Philosophy / Technology

Everyone is crashing out over the OpenCode situation. Why not just use Claude Code (2.1+)? Or, you know, there’s AMP, which looks equally interesting to me.

    Jan 19, 2026 / AI / Tools / Development

  • Claude Code has been working great for me. OpenCode looks interesting, but uh, Opus 4.5 access is necessary for real work. I’m not doing any sketchy workarounds to get it running, and API pricing isn’t appealing either. So for now, OpenCode stays firmly in the “interesting” category.

    Jan 19, 2026 / AI / Tools

  • Two Arguments Against AI in Programming (And Why I'm Not Convinced)

    I’ve been thinking about the programmers who are against AI tools, and I think their arguments generally fall into two camps.

    Of course, these are just my observations, so take them with a grain of salt, or you know, tell me I’m a dumbass in the comments.

    The Learning Argument

    The first position is that AI prevents you from learning good software engineering concepts because it does the hard work for you.

All those battle scars that industry veterans have accumulated over the years won’t be felt by the new breed. And to be fair, the painful lessons about why you do something this way and not that way are worth preserving.

Maybe we’re already seeing anti-patterns slip back into how we build code? I don’t know for sure; it’s going to require some PhD-level research to figure it out.

To this argument I say: if we haven’t codified the good patterns by now, what the hell have we all been doing? I think there are more good patterns in public code than bad ones.

    So just RELAX! The cream will rise to the top. The glass is half full. We’ll be fine… Which brings me to the next argument.

    The Non-Determinism Argument

    The second position comes from people who’ve dug into how large language models actually work.

    They see that it’s predicting the next token, and they start thinking of it as this fundamentally non-deterministic thing.

    How can we trust software built on predictions? How do we know what’s actually going to happen when everything is based on weights created during training?

Here’s the thing though: when you’re using a model from a provider, you’re not getting raw output. There’s a whole orchestration layer: guardrails, hallucination filters, mixture-of-experts approaches, and thinking features that all work together to let the model double-check its work before responding.

    It’s way more sophisticated than “predict the next word and hope for the best.”

    That said, I understand the discomfort. We’re used to deterministic systems where the same input reliably produces the same output.

We are now moving from those kinds of systems to ones that are probabilistic.

Let me remind you: math doesn’t care about the difference between a deterministic and a probabilistic system. It just works, and so will we.1

    The Third Argument I’m Skipping

There’s obviously a third component: the ethical argument about training data, labor displacement, and whether these tools should exist at all.

    I will say this though, it’s too early to make definitive ethical judgments on a tool while we’re still building it, while we’re still discovering what it’s actually useful for.

    Will it all be worth it in the end? We won’t know until the end.


1. By “we” I mean us, the human race, but also the software we build. ↩︎

    Jan 18, 2026 / AI / Programming / Software-engineering

  • Claude Cowork: First Impressions (From the Sidelines)

    Claude Cowork released this week, and the concept seems genuinely useful. I think a lot of people are going to love it once they get their hands on it.

    Unfortunately, I haven’t been able to get it working yet. Something’s off with my local environment, and I’m not entirely sure what. Claude Desktop sometimes throws up a warning asking if I want to download Node and I usually say no, but this time I said yes. Whether that’s related to my issues, I honestly don’t know. I did submit a bug report though, so hopefully that helps.

    Here’s the thing that really impresses me: Anthropic noticed a trend and shipped a major beta feature in about 10 days.

    That’s remarkable turnaround for something this substantial. Even if it’s not working perfectly for everyone yet (hi, that’s me), seeing that kind of responsiveness from a company is genuinely exciting.

    I’m confident they’ll get it sorted before it leaves beta. These things take time, and beta means beta.

I have explored using the CLI agents outside of pure coding workflows, and I think there’s a lot more flexibility there than you might expect.

    For now, I’m watching from the sidelines, waiting for my environment issues to sort themselves out.

    Jan 17, 2026 / AI / Tools / Claude

  • When do you think everyone will finally agree that Python is Python 3 and not 2? I know we aren’t going to get a major version bump anytime soon, if ever again, but we really should consider putting uv in core… Python needs modern package management baked in.

    Jan 16, 2026 / Programming / Python

My api2spec plan for this weekend is to continue working on the fixtures. We have all the fixtures in place, but the real work is outputting specs to files and validating them against the tree-sitter parser code for each framework. That validation step is where things get interesting: making sure the generated specs actually match what each framework expects.

    Jan 16, 2026 / Api2spec / Coding

  • After two days with Beads, my agent-based workflows feel supercharged. I’ve been experimenting with agents for about a month now, and something clicked. Could be the new Claude Code 2.1 upgrade helping too, but the combination is 🚀🚀🚀.

    Jan 15, 2026 / AI / Claude / Workflow

  • Spent some time with Google Antigravity today and I think I’m starting to get it. Built a few agents to test it out. The agent manager stuff is genuinely interesting and seems useful. The planning features (spec-driven development) though? Not sold on those yet.

    Jan 15, 2026 / AI / Google / Agents

  • Beads: Git-Native Issue Tracking for AI-Assisted Development

    If you’re working with AI coding agents like Claude Code, you’ve probably noticed a friction point: context.

    Every time you start a new session, you’re rebuilding mental state. What was I working on? What’s blocked? What’s next?

    I’ve been using Beads, and it’s changed how I manage work across multiple AI sessions.

    What Makes Beads Different?

    Beads takes a fundamentally different approach. Issues live in your repo as a .beads/issues.jsonl file, syncing like any other code. This means:

    • No context switching: Your AI agent can read and update issues without leaving the terminal
    • Always in sync: Issues travel with your branch and merge with your code
    • Works offline: No internet required, just git
    • Branch-aware: Issues can follow your branch workflow naturally

    The CLI-first design is what makes it click with AI coding agents. When I’m working with Claude Code, I can say “check what’s ready to work on” and it runs bd ready to find unblocked issues. No copying and pasting from a browser tab.
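
And because the state is plain JSON Lines, you don’t even need the CLI to peek at it. A minimal sketch (the field names are my guesses for illustration, not Beads’ documented schema):

import json

# Read .beads/issues.jsonl the same way a script or agent might.
with open(".beads/issues.jsonl") as f:
    issues = [json.loads(line) for line in f if line.strip()]

for issue in issues:
    print(issue.get("id"), issue.get("status"), issue.get("title"))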

    Getting Started

    Getting up and running takes about 30 seconds:

    # Install Beads
    curl -sSL https://raw.githubusercontent.com/steveyegge/beads/main/scripts/install.sh | bash
    
    # Initialize in your repo
    bd init
    
    # Create your first issue
    bd create --title="Try out Beads" --type=task
    

    From there, the workflow is straightforward:

    • bd ready shows issues with no blockers
    • bd update <id> --status=in_progress to claim work
    • bd close <id> when you’re done
    • bd sync to commit beads changes

    Why This Matters for AI Workflows

    The real power shows up when you’re juggling multiple tasks across sessions. Your AI agent can:

    1. Pick up exactly where you left off by reading the issue state
    2. Track dependencies between tasks (this issue blocks that one)
    3. Create new issues for discovered work without breaking flow
    4. Close completed work and update status in real-time

    I’ve found this especially useful for longer projects where I’m bouncing between features, bugs, and cleanup tasks. The AI doesn’t lose track because the state is right there in the repo.

    Is It Right for You?

    Beads isn’t trying to replace GitHub Issues for team collaboration or complex project management.

    It’s designed for a specific workflow: developers using AI coding agents who want persistent, agent-friendly task tracking.

    If you’re already working with Claude Code, Aider, or similar tools, give it a try. The setup cost is minimal, and you might find it solves a problem you didn’t realize you had.

    Jan 14, 2026 / Productivity / AI / Developer-tools / Git

  • 36 Framework Fixtures in One Session: How Beads + Claude Code Changed Our Testing Game

    We built test fixtures for 36 web frameworks in a single session. Not days. Not a week of grinding through documentation. Hours.

    Here’s what happened and why it matters.

    The Problem

    api2spec is a CLI tool that parses source code to generate OpenAPI specifications. To test it properly, we needed real, working API projects for every supported framework—consistent endpoints, predictable responses, the whole deal.

    We started with 5 frameworks: Laravel, Axum, Flask, Gin, and Express. The goal was to cover all 36 supported frameworks with fixture projects we could use to validate our parsers.

    What We Actually Built

    36 fixture repositories across 15 programming languages. Each one includes:

    • Health check endpoints (GET /health, GET /health/ready)
    • Full User CRUD (GET/POST /users, GET/PUT/DELETE /users/:id)
    • Nested resources (GET /users/:id/posts)
    • Post endpoints with pagination (GET /posts?limit=&offset=)
    • Consistent JSON response structures

    The language coverage tells the story:

    • Go: Chi, Echo, Fiber, Gin
    • Rust: Actix, Axum, Rocket
    • TypeScript/JS: Elysia, Express, Fastify, Hono, Koa, NestJS
    • Python: Django REST Framework, FastAPI, Flask
    • Java: Micronaut, Spring
    • Kotlin: Ktor
    • Scala: Play, Tapir
    • PHP: Laravel, Slim, Symfony
    • Ruby: Rails, Sinatra
    • C#/.NET: ASP.NET, FastEndpoints, Nancy
    • C++: Crow, Drogon, Oat++
    • Swift: Vapor
    • Haskell: Servant
    • Elixir: Phoenix
• Gleam: Wisp

    For languages without local runtimes on my machine—Haskell, Elixir, Gleam, Scala, Java, Kotlin—we created Docker Compose configurations with both an app service and a dev service for interactive development.

    How Beads Made This Possible

    We used beads (a lightweight git-native issue tracker) to manage the work. The structure was simple:

    • 40 total issues created
    • 36 closed in one session
    • 5 Docker setup tasks marked as P2 priority (these blocked dependent fixtures)
    • 31 fixture tasks at P3 priority
    • 4 remaining for future work

    The dependency tracking was key. Docker environments had to be ready before their fixtures could be worked on, and beads handled that automatically.

    When I’d finish a Docker setup task, the blocked fixture tasks became available.

    Claude Code agents worked through the fixture implementations in parallel where possible.

    The combination of clear task definitions, dependency management, and AI-assisted coding meant we weren’t context-switching between “what do I need to do next?” and “how do I implement this?”

    The Numbers

Metric               Value
Total Issues         40
Closed               36
Avg Lead Time        0.9 hours
New GitHub Repos     31
Languages Covered    15

    That average lead time of under an hour per framework includes everything: creating the repo, implementing the endpoints, testing, and pushing.

    What’s Left

    Four tasks queued for follow-up sessions:

    1. Drift detection - Compare generated specs against expected output
    2. Configurable report formats - JSON, HTML, and log output options
    3. CLAUDE.md files - Development instructions for each fixture
    4. Claude agents - Framework-specific coding assistants

    The Takeaway

Doing this today felt like having a superpower. Going from “I need to test across 36 frameworks” to actually having those test fixtures ready, with agents, Opus 4.5, and beads: BAM, done!

    Beads gave us the structure to track dependencies and progress.

    Claude Code agents handled the repetitive-but-different implementation work across languages and frameworks.

    The combination let us focus on the interesting problems instead of the mechanical ones.

    All 36 repos are live at github.com/api2spec with the api2spec-fixture-* naming convention.

    Have you tried this approach yet?

    Jan 13, 2026 / DevOps / Open-source / Testing / Ai-tools / Claude-code

  • vibe is the bait, code is the switch.

    Vibe coding gets people in the door. We all know the hooks. Once you’re actually building something real, you still need to understand what the code is doing. And that’s not a bad thing.

    Jan 12, 2026 / AI / Programming / Vibe-coding

  • Why Airtable Killed Google Sheets for Me

It’s 2026: you should probably stop using Google Sheets or Microsoft Excel for most things. What follows is my attempt at explaining why Airtable (and similar products) is better.

    Data Integrity Actually Exists

    In Google Sheets, a cell can be anything. A string, a date, a number, whatever you accidentally paste into it. There’s no real way to enforce data types beyond formatting tricks or functions that break when someone inevitably ignores them.

    In Airtable, you define what a column is. It’s a date field. It’s a number. It’s a single select. And it stays that way. No more spreadsheets slowly devolving into data hellscapes where row 47 has a phone number stored as text and row 48 has it as a number.

    Relationships Without the Visual Basic Nightmare

    Want to connect data across multiple sheets in Google? Get ready to write formulas. You’re essentially writing code, Visual Basic or Apps Script, just to link your tables together.

    In Airtable, relationships are native. You click a few buttons, link two tables, and boom, you’ve got the equivalent of a SQL join without writing a single line of code. The data stays connected, and you can traverse those relationships naturally.
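
For the curious, here’s roughly what that click-to-link is sparing you, sketched with Python’s built-in sqlite3 and made-up table names:

import sqlite3

# Two "tables" and the join you'd otherwise write yourself.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE projects (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT,
                        project_id INTEGER REFERENCES projects(id));
    INSERT INTO projects VALUES (1, 'Website');
    INSERT INTO tasks VALUES (1, 'Fix nav', 1);
""")

# Airtable's linked-record field is, conceptually, this query with a UI on top.
for row in db.execute("""
    SELECT projects.name, tasks.title
    FROM tasks JOIN projects ON tasks.project_id = projects.id
"""):
    print(row)  # ('Website', 'Fix nav')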

    Attachments That Don’t Suck

    Google Sheets handles attachments by letting you paste a link to a file somewhere else. Maybe it’s in Google Drive, maybe it’s on some random server. Either way, it’s just a URL sitting in a cell.

    Airtable lets you drag files directly into a cell. They get thumbnails. They’re actually attached to the record. Sure, on the backend it’s probably still stored in a bucket somewhere, but the interface abstracts all that away. It just feels like the file belongs there. Good file, that’s a good boy, go sit in your cell.

    Views That Make Sense

    In Sheets, you get a grid. Maybe you can set up a filter view if you know how. That’s about it. Airtable gives you options out of the box: grid view, Kanban board, calendar, gallery, timeline.

    You can look at the same data in completely different ways depending on what you’re trying to accomplish. Want to see your projects as cards on a board? Done. Want to see them as events on a calendar? Also done. No formulas, no hacks. Notion has entered the chat

    Automation Without Writing Production Code in a Spreadsheet

    This one really gets me. Want to automate something in Google Sheets? Time to open Apps Script and write JavaScript. I hope you’re comfortable debugging code that has no unit tests, no version control, and will silently break at 2 AM when you’re asleep.

    Airtable’s automation is point-and-click. Set a trigger, define actions, connect to integrations. It’s like Zapier or n8n built right into your database. When this field changes, send that notification. When a record is created, add it to another table. No code required.

    The Row Editing Experience

    This might seem minor, but it’s not. When you want to focus on a single row in Google Sheets—really dig into one record and edit multiple fields—you’re clicking into cells and scrolling horizontally to find column 30 while trying not to lose your place.

    Airtable has an expanded record view. Click on a row and it opens up vertically, showing you every field in a clean, organized layout. You can actually see what you’re editing. Nice.


Look, Google Sheets isn’t going anywhere, and we should all probably be happy it still exists… but yeah, the spreadsheet paradigm had a good run and it’s time to move on.

    For anything that resembles actual data management, where you need integrity, relationships, and a UI that doesn’t fight you, Airtable (or Notion, or similar tools) should be your first choice.

    What do you think? Have I missed anything?

    Jan 11, 2026 / Productivity / Tools / Airtable

  • The Markdown Mode Manifesto

    Google Docs is for grandma. Markdown is for actual work.

    I know that sounds harsh, but hear me out.

    Developers love Markdown because it’s extremely portable. It’s just a text file with some agreed-upon formatting symbols.

    No proprietary binary format, no vendor lock-in, no mysterious corruption when you open it in a different app. If you want your writing to survive the digital apocalypse, Markdown is your best bet.

This isn’t a post explaining what Markdown is; the point of this post is simpler:

    Google Docs sucks for anyone who wants real Markdown support, and I wish Google would actually fix this.

    Their current “Markdown support” is a joke. When Google Docs detects something that looks like a heading, it helpfully deletes your Markdown syntax and converts it to rich text. Thanks, I hate it.

    It’s basically impossible to write in Markdown in Google Docs because the app fights you every step of the way. The only real support they offer is exporting to Markdown format. That’s not what I want. I want to write in Markdown, not just export to it after the fact.

    In a real Markdown editor like Obsidian or 30 other options, you can toggle between source view and preview. You see the raw text with all its formatting symbols, then flip to see the rendered result.

    It’s clean, it’s simple, it works. In Google Docs? There’s no source view. It’s only rich text, forever and ever, amen. They built the whole thing to mimic Microsoft Word, and that’s all you get.

I don’t need another Word clone. If I want Word, I’ve already got Word, now brought to you by Copilot.

I would gladly give up all the bloated features I don’t care about in exchange for real Markdown support.

    What “Real Support” Would Look Like

    • A source/preview toggle (like literally every other Markdown editor)
    • Stop auto-converting my syntax into rich text formatting
    • Let me paste Markdown without it being “helpfully” transformed
    • Native .md file handling, not just export

    Is that so much to ask?

    All this is to say: we probably need a new product entirely. Google’s not going to rebuild Docs from the ground up, and Microsoft’s not going to make Word understand that some of us don’t want 47 ribbon tabs and a formatting pane that takes up half the screen.

    But, you know, whatever. I guess the best we get these days is another VS Code clone.

If you’re a fellow Markdown Apostle stuck in a rich-text world, I feel your pain. Until that new product shows up, I’ll be over here editing in Obsidian for some things and Cursor for others.

    Jan 10, 2026 / Writing / Tools / Markdown / Google-docs

  • Twenty Years of DevOps: What's Changed and What Hasn't

I’ve been thinking about how much our industry has transformed over the past two decades. It’s wild to realize that 20 years ago, DevOps as we know it didn’t even exist. We were deploying to production using FTP. Yes, FTP. You use the best tool available to you, and that’s what we had.

    So what’s actually changed, and what’s stayed stubbornly the same?

    The Constants

    JavaScript is still king. Although to be fair, the JavaScript of 2005 and the JavaScript of today are almost unrecognizable. We’ve gone from jQuery spaghetti to sophisticated module systems, TypeScript, and frameworks that would have seemed like science fiction back then.

    And yet, we’re still centering that div.

HTML5 and semantic tags have genuinely helped, and I’m certainly grateful we’re not building everything out of tables and spans anymore.

    What’s Different

The list of things we didn’t have 20 years ago is endless, but here are some of the big ones:

    • WebSockets
    • HTTP/2
    • SSL certificates as a default (most sites were running plain HTTP)
    • Git and GitOps
    • Containers and Kubernetes
    • CI/CD pipelines as we know them
• Jenkins (it didn’t exist yet)
• Docker (not even a concept)

The framework landscape is unrecognizable. You might call it a proliferation… we went from a handful of options to, well, a new JavaScript framework every week, as the joke goes.

Git adoption has been one of the best things to happen to our industry. (RIP SVN.) Although I hear rumors that some industries are still clinging to some truly Bazaar version control systems. Mercurial, anyone?

    The Bigger Picture

    Here’s the thing that really gets me: our entire discipline didn’t exist. DevOps, SRE, platform engineering… these weren’t job titles. They weren’t even concepts people were discussing.

We had developers in their holes and operations in their walled gardens. Now we have infrastructure as code, GitOps workflows, observability platforms, and the expectation that you can deploy to production multiple times a day without breaking a sweat.

    The cultural shift from “ops handles production” to “you build it, you run it” fundamentally changed how we think about software.

    What Stays the Same

    Despite all the tooling changes, some things remain constant. We’re still trying to ship reliable software faster. We’re still balancing speed with stability.

    Twenty years from now, I wonder what we’ll be reminiscing about. Remember when we used to actually write software ourselves and complain about testing?

    What seems cutting-edge is the new legacy before you know it.

    Jan 9, 2026 / DevOps / Web-development / Career / Retrospective

  • Just discovered my theme’s category links were broken. If you tried clicking one and got nowhere—sorry about that! Should be fixed now. One of those little bugs that slips through until you actually use your own site like a visitor would. Testing in production… 😅

    Jan 8, 2026 / blogging / Themes

api2spec work will continue this weekend. We will add support for more frameworks and languages. We keep finding gaps in the implementation as we build fixtures for each framework. If you have a framework you’d like to see supported, please let me know.

    Jan 7, 2026 / Open-source / Api2spec / Development

  • It’s always some weird networking thing. Switched an internal Astro site from PNPM to Bun last night, didn’t test it before bed. Woke up to port 4321 not binding; dev server wouldn’t start. Turns out it randomly decided to only bind on IPv6 overnight. Had to explicitly tell it to bind on IPv4. Why is it always networking?

    Jan 7, 2026 / Dev / Astro / Bun / Networking

  • Just discovered DB Pro, a new desktop app for SQLite and LibSQL databases. Looks pretty promising. Meanwhile, I’m still waiting for DataGrip to get proper LibSQL support. Come on JetBrains, just give me Turso already!

    Jan 6, 2026 / Sqlite / Databases / Tools

  • Yay, I got the link to the conversation working on my replies page.

    Jan 6, 2026 / blogging / Webdev

  • Introducing api2spec: Generate OpenAPI Specs from Source Code

You’ve written a beautiful REST API. Routes are clean, handlers are tested, and the types are solid. But where’s your OpenAPI spec? It’s probably outdated, incomplete, or doesn’t exist at all.

If you’re “lucky”, you’ve been maintaining one by hand. The alternatives aren’t great either: runtime generation requires starting your app and hitting every endpoint, and annotation-heavy approaches clutter your code. And by now we should all know that a hand-maintained spec inevitably drifts from reality.

    What if you could just point a tool at your source code and get an OpenAPI spec?

    Enter api2spec

    # Install
    go install github.com/api2spec/api2spec@latest
    
    # Initialize config (auto-detects your framework)
    api2spec init
    
    # Generate your spec
    api2spec generate
    

    That’s it. No decorators to add. No server to start. No endpoints to crawl.

    What We Support

    Here’s where it gets interesting. We didn’t build this for one framework—we built a plugin architecture that supports 30+ frameworks across 16 programming languages:

    • Go: Chi, Gin, Echo, Fiber, Gorilla Mux, stdlib
    • TypeScript/JavaScript: Express, Fastify, Koa, Hono, Elysia, NestJS
    • Python: FastAPI, Flask, Django REST Framework
    • Rust: Axum, Actix, Rocket
    • PHP: Laravel, Symfony, Slim
    • Ruby: Rails, Sinatra
    • JVM: Spring Boot, Ktor, Micronaut, Play
    • And more: Elixir Phoenix, ASP.NET Core, Gleam, Vapor, Servant…

    How It Works

    The secret sauce is tree-sitter, an incremental parsing library that can parse source code into concrete syntax trees.

    Why tree-sitter instead of language-specific AST libraries?

    1. One approach, many languages. We use the same pattern-matching approach whether we’re parsing Go, Rust, TypeScript, or PHP.
    2. Speed. Tree-sitter is designed for real-time parsing in editors. It’s fast enough to parse entire codebases in seconds.
    3. Robustness. It handles malformed or incomplete code gracefully, which is important when you’re analyzing real codebases.
    4. No runtime required. Your code never runs. We can analyze code even if dependencies aren’t installed or the project won’t compile.

    For each framework, we have a plugin that knows how to detect if the framework is in use, find route definitions using tree-sitter queries, and extract schemas from type definitions.
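
Here’s the flavor of that with the Python bindings (a rough sketch: it assumes the py-tree-sitter and tree-sitter-python packages are installed, and the exact API varies a bit between versions):

import tree_sitter_python as tspython
from tree_sitter import Language, Parser

# Parse a tiny Flask-style snippet into a concrete syntax tree.
parser = Parser(Language(tspython.language()))
source = b'@app.get("/users")\ndef list_users():\n    return []\n'
tree = parser.parse(source)

# Walk the tree. A real plugin would run tree-sitter queries against
# nodes like decorator/call to pull out HTTP methods and route paths.
def walk(node, depth=0):
    print("  " * depth + node.type)
    for child in node.children:
        walk(child, depth + 1)

walk(tree.root_node)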


    Let’s Be Honest: Limitations

    Here’s where I need to be upfront. Static analysis has fundamental limitations.

    When you generate OpenAPI specs at runtime (like FastAPI does natively), you have perfect information. The actual response types. The real validation rules. The middleware that transforms requests.

    We’re working with source code. We can see structure, but not behavior.

    What this means in practice:

    • Route detection isn’t perfect. Dynamic routing or routes defined in unusual patterns might be missed.
    • Schema extraction varies by language. Go structs with JSON tags? Great. TypeScript interfaces? We can’t extract literal union types as enums yet.
    • We can’t follow runtime logic. If your route path comes from a database, we won’t find it.
    • Response types are inferred, not proven.

This is not a replacement for runtime-generated specs (though maybe one day it will be). For many teams, it’s a massive improvement over having no spec at all.

    Built in a Weekend

    The core of this tool was built in three days.

    • Day one: Plugin architecture, Go framework support, CLI scaffolding
    • Day two: TypeScript/JavaScript parsers, schema extraction from Zod
    • Day three: Python, Rust, PHP support, fixture testing, edge case fixes

    Is it production-ready? Maybe?

    Is it useful? Absolutely.

For the fixture repositories, we’ve created realistic APIs in Express, Gin, Flask, Axum, and Laravel. api2spec correctly extracts 20-30 routes and generates meaningful schemas. Not perfect. But genuinely useful.

    How You Can Help

    This project improves through real-world testing. Every fixture we create exposes edge cases. Every framework has idioms we haven’t seen yet.

    • Create a fixture repository. Build a small API in your framework of choice. Run api2spec against it. File issues for what doesn’t work.
    • Contribute plugins. The plugin interface is straightforward. If you know a framework well, you can make api2spec better at parsing it.
    • Documentation. Found an edge case? Document it. Figured out a workaround? Share it.

    The goal is usefulness, and useful tools get better when people use them.


    Getting Started

    go install github.com/api2spec/api2spec@latest
    cd your-api-project
    api2spec init
    api2spec generate
    cat openapi.yaml
    

    If it works well, great! If it doesn’t, file an issue. Either way, you’ve helped.

    api2spec is open source under the FSL-1.1-MIT license. Star us on GitHub if you find it useful.

    Built with love, tree-sitter, and too much tea. ☕

    Jan 5, 2026 / DevOps / Programming / Openapi / Golang / Open-source

  • My 2026 Blogging Goal: Showing Up More Consistently

    I’ve been thinking … I want to blog and post more regularly.

    So what does this look like in practice? I’m aiming to:

    • Write shorter posts when I don’t have time for longer ones
• Share more projects and works-in-progress
    • Post about the small stuff
    • Actually hit publish

    I think there’s something powerful about building a habit of showing up. Even when it’s imperfect. Especially when it’s imperfect.

    If you’re reading this, I hope you’ll join me on this journey.

    Here’s to more writing and less overthinking.

    Jan 4, 2026 / Writing / blogging / Goals

    Enjoyed this?