-
Semantic Docs Spring Update: Astro 6, Auto-Releases, npm
The last two months on Semantic Docs have mostly been maintenance work, but a few things I wanted to talk about. I pushed through a major framework upgrade, swapped out a vendored library for a real published package, and finally automated the release pipeline. Five tagged releases later, here’s where we are.
The Headlines
- Upgraded to Astro 6
- Switched from a vendored logger to the published `logan-logger` npm package
- Shipped an auto-release workflow driven by Conventional Commits
- Three rounds of dependency updates plus a security-focused sweep
- Five tagged releases, `v1.3.3` through `v1.5.0`
Astro 6
The Astro 6 upgrade was easy. Semantic Docs runs a hybrid setup, static article pages plus a server-rendered search endpoint, and that part barely needed any attention. Most of the work was in the dependency layout, not the application code.
One note if you’re forking or syncing this theme: if you’re upgrading from `v1.3.5` or earlier (anything pre-Astro-6, which landed in `v1.4.0`), delete your `node_modules` and your lockfile and do a clean install. Skip that step and you’ll get weird errors that look like your code is broken when it’s really just leftover state.

A Real npm Package Instead of a Vendored Logger
For a while, the project was using a logger I wrote to experiment with publishing to both npm and JSR. It was a useful exercise. I wanted to see what a clean foundational package looked like across both registries, and I think it turned out well.
But for this repo, I wanted consistency over experimentation. So I swapped the vendored copy for the published `logan-logger` npm package. Behavior is the same, the surface area is the same; it’s just back on the npm registry.

Automated Releases
I’ve liked using Conventional Commits to drive automated releases. When a PR merges to main, the workflow figures out the next version from the commit messages, tags it, and publishes a GitHub release with a generated changelog.
The commit type determines the version bump: `feat:` bumps the minor, `fix:` bumps the patch, and breaking changes bump the major. The changelog falls out of the same metadata. The more automation here, the better.

If you’ve been on the fence about Conventional Commits, this is the use case that sold me.
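As a rough sketch of that mapping (not the actual workflow code, which lives in the GitHub Actions pipeline), the type-to-bump logic looks something like this; it deliberately ignores `BREAKING CHANGE:` footers for brevity:

```shell
# Illustrative only: map a Conventional Commit subject line to the semver
# component it bumps. A "!" before the colon marks a breaking change.
bump_for() {
  case "$1" in
    *"!"*": "*)           echo major ;;  # e.g. "feat!: drop Node 18"
    feat:*|"feat("*"):"*) echo minor ;;  # with or without a scope
    fix:*|"fix("*"):"*)   echo patch ;;
    *)                    echo none  ;;
  esac
}

bump_for "feat: add tag filtering"   # minor
bump_for "fix: handle empty query"   # patch
bump_for "feat!: drop Node 18"       # major
```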
What’s Next: Embedding Quality
The reference implementation uses TEI for search embeddings, and that’s been fine. But “fine” is not the same as “good,” and I want to actually compare quality across providers before I commit to anything long term.
Two I want to test:
- Jina (now owned by Elastic)
- Mistral, which has been putting out genuinely strong embedding models
The goal is to run the same corpus through each, evaluate the search results, and figure out which one earns a highlight. Whatever I learn from that work will get folded back into the open source Semantic Docs repo so anyone running their own instance can make an informed choice instead of just trusting my defaults.
I’d appreciate a follow. You can subscribe with your email below. The emails go out once a week, or you can find me on Mastodon at @[email protected].
/ DevOps / Open-source / Astro / Semantic-docs
-
npmx.dev Is the NPM Frontend We've Been Asking For
If you’ve spent any time on npmjs.com, you know the drill. You land on a package page, eyeball the tarball size, squint at the dependency list, then bounce out to bundlephobia, then to Are The Types Wrong, then to Socket.dev, just to figure out if this thing is safe to install. That’s not a workflow. That’s a scavenger hunt.
For years, the JavaScript community has been filing requests on the official npm tracker asking for this stuff to live in one place. As Andrew Nesbitt notes, native dark mode was the single most upvoted request on the tracker for something like five years before it shipped. Real install sizes (transitive, not just the tarball). UI warnings about compromised packages and hidden postinstall hooks. Concrete version resolution instead of squinting at semver ranges. The list is long, and it mostly went nowhere.
So the community built npmx.dev instead.
What It Actually Is
It’s important to note: npmx.dev is not a separate registry. It doesn’t mirror packages, and `npm install` still pulls from the official Microsoft/GitHub-owned registry. What npmx is, strictly, is an alternative frontend. It hits the official npm APIs in real time, caches the results at the edge, and renders a much better page.

The stack is Nuxt on top of the full VoidZero toolchain with Vite, Vitest, Rolldown, oxlint, oxfmt, and the other usual packages you’d expect. It also leans heavily on SSR and ISR for near-instant page loads. VoidZero (the company behind Vite) sponsors the project, which makes sense, since it’s open-source MIT and the operational costs are basically web traffic and compute, not package storage.
What You Actually Get
I really like the dependency view. Instead of a flat list of what the package declares, you get an expandable tree with recursive vulnerability tracking via OSV. You can drill into transitive dependencies and see which ones are flagged. That’s the thing every security-conscious developer has been doing manually for years.
A few other wins worth calling out:
- Real install size. Not the tarball. The actual disk footprint with all dependencies resolved.
- `postinstall` scripts surfaced. If a package is going to run arbitrary code on your machine at install time, npmx puts that on the page where you can see it. Not buried in the manifest.
- Concrete resolved versions displayed alongside the semver range, so you know what you’re actually getting.
- Banners suggesting lighter alternatives for bloated legacy packages. Opinionated, and I’m here for it.
A Quick Note on OSV
The vulnerability data comes from the OSV ecosystem, and it’s worth understanding what that is, because it’s one of the more important pieces of open-source infrastructure right now.
OSV is a centralized aggregator that pulls from GitHub Security Advisories, PyPA, RustSec, the Global Security Database, and many ecosystem-specific feeds. The schema is standardized JSON, machine-readable, and maps vulnerabilities to exact versions or commit hashes. That matters because it’s what lets automated scanners avoid the false-positive flood that makes most security tooling annoying to use.
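As a concrete sketch, here is roughly what a lookup against the public OSV API looks like. The endpoint and request shape come from the OSV docs; the package and version are arbitrary examples, and the network call itself is left commented out:

```shell
# Build an OSV query for one exact package version (illustrative example).
cd "$(mktemp -d)"   # work in a scratch directory
cat > query.json <<'EOF'
{
  "package": {"name": "lodash", "ecosystem": "npm"},
  "version": "4.17.20"
}
EOF

# Uncomment to run the actual lookup (network required); the response maps
# known vulnerabilities to the exact version queried:
# curl -s -X POST https://api.osv.dev/v1/query -d @query.json

cat query.json
```

Exact-version matching is the point: scanners ask about the version you actually have, instead of pattern-matching advisories against fuzzy ranges.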
It’s an OpenSSF project under the Linux Foundation. Google maintains the infrastructure; GitHub, the Rust Foundation, the PyPA, Red Hat, and a long list of ecosystem maintainers contribute data. Vendor-neutral, free, and the boring kind of governance you want for security data.
Making It Your Default
Here’s how to make npmx the actual default npm frontend. Since npmx.dev maintains URL parity with npmjs.com, you can redirect every npm link you click without thinking about it. A few options, depending on your setup:
Kagi users can add a global URL redirect rule: `^https://www.npmjs.com|https://npmx.dev`. Click an npm link in your search results, land on npmx instead.
Browser extensions work for everyone else. There’s a dedicated `npmx-redirect` extension for Chrome and Firefox that does only this one thing. Or use Redirector with a regex rule:

- Include: `^(?:https?://)?(?:www\.)?npmjs\.com/(.*)`
- Redirect to: `https://npmx.dev/$1`
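If you want to sanity-check the rule before trusting it, the same rewrite can be exercised with `sed` (using capturing groups instead of the non-capturing `(?:…)` ones, which sed's ERE doesn't support):

```shell
# Same rewrite as the Redirector rule, expressed with capturing groups
# because sed -E has no non-capturing (?:...) syntax.
echo 'https://www.npmjs.com/package/zod' \
  | sed -E 's#^(https?://)?(www\.)?npmjs\.com/(.*)#https://npmx.dev/\3#'
# prints: https://npmx.dev/package/zod
```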
Custom site search. In Chrome, Brave, Arc, or Safari, add a new site search with shortcut `nx` (or `@npm`) pointing at `https://npmx.dev/search?q=%s`. Now you type `nx`, space, `zod`, enter. Done. You never see npmjs.com again unless you want to.

Why This Matters
The npm registry is critical infrastructure for, conservatively, a huge chunk of the software running on the internet. The fact that the community had to build its own frontend to surface basic security and observability data tells you something about where the priorities have been.
I’ll be using npmx as my default going forward. I’m not going back.
/ open source / security / javascript / Tooling / Npm
-
Time-Series PostgreSQL at Petabyte Scale
From the creators of TimescaleDB — the PostgreSQL platform trusted by enterprises processing trillions of metrics daily. Start a free trial or get a demo.
/ links / platform / open source / database / postgresql
-
Lisette is a New Rust-to-Go Language, So I Built It a Test Library
This morning I dove into a new programming language called Lisette. I saw it from @lmika and had to take a look. It gives you Rust-like syntax but compiles down to Go, and you can import from the Go standard library directly.
It’s early in development, so a lot of things don’t exist yet. They have a roadmap posted with plans for third-party package support, a test runner, bitwise operators, and configurable diagnostics.
So Naturally, I Built a Test Library
Anyone who reads my blog knows I care a lot about testing. So when I saw “implement a test runner” sitting on the roadmap, I did what any reasonable person would do on a Monday morning. I built a testing library for Lisette called LisUnit.
I wanted something that felt familiar if you’ve used Jest or PHPUnit. Test cases are closures that return a result, and assertions work the same way. Here’s what it looks like:
```
lisunit.Suite.new("math")
    .case("add produces sum", || {
        lisunit.assert_eq_int(add(2, 3), 5)?
        Ok(())
    })
    .case("add is commutative", || {
        lisunit.assert_eq_int(add(2, 3), add(3, 2))?
        Ok(())
    })
    .run()
```

Define a suite, chain your test cases, run it.
Why Bother?
I don’t know exactly what direction the Lisette team is headed with their own test runner, so this is just a prototype. Building a test library turns out to be a fun way to try out a new language, because you end up touching a lot of language constructs.
I’ll probably keep poking at it as Lisette evolves. Happy Monday.
/ Programming / Golang / Open-source / Testing / Rust
-
LangChain: Observe, Evaluate, and Deploy Reliable AI Agents
LangChain provides the engineering platform and open source frameworks developers use to build, test, and deploy reliable AI agents.
/ Tools / links / agent / open source / software engineering
-
I published an Agentic Maturity Model on GitHub, a mental framework for thinking about and categorizing AI tools. It’s open to contributions and I’m looking for coauthors.
/ AI / Open-source / Agentic
-
Laravel - The PHP Framework For Web Artisans
Laravel is a PHP web application framework with expressive, elegant syntax. We’ve already laid the foundation — freeing you to create without sweating the small things.
/ Programming / Webdev / links / open source / php / laravel / framework
-
Don’t sleep on OpenClaw. There are a ton of people building with it right now who aren’t talking about it yet. The potential is real, and when those projects start surfacing, it’s going to turn heads. Sometimes the most exciting stuff happens quietly before it hits the mainstream.
/ AI / Open-source / Openclaw
-
kagisearch/smallweb: Kagi Small Web
Kagi Small Web. Contribute to kagisearch/smallweb development by creating an account on GitHub.
/ links / open source / kagi
-
OpenCode | The open source AI coding agent
OpenCode - The open source coding agent.
/ AI / Programming / Tools / links / agent / open source / code
-
36 Framework Fixtures in One Session: How Beads + Claude Code Changed Our Testing Game
We built test fixtures for 36 web frameworks in a single session. Not days. Not a week of grinding through documentation. Hours.
Here’s what happened and why it matters.
The Problem
api2spec is a CLI tool that parses source code to generate OpenAPI specifications. To test it properly, we needed real, working API projects for every supported framework—consistent endpoints, predictable responses, the whole deal.
We started with 5 frameworks: Laravel, Axum, Flask, Gin, and Express. The goal was to cover all 36 supported frameworks with fixture projects we could use to validate our parsers.
What We Actually Built
36 fixture repositories across 15 programming languages. Each one includes:
- Health check endpoints (`GET /health`, `GET /health/ready`)
- Full User CRUD (`GET/POST /users`, `GET/PUT/DELETE /users/:id`)
- Nested resources (`GET /users/:id/posts`)
- Post endpoints with pagination (`GET /posts?limit=&offset=`)
- Consistent JSON response structures
The language coverage tells the story:
- Go: Chi, Echo, Fiber, Gin
- Rust: Actix, Axum, Rocket
- TypeScript/JS: Elysia, Express, Fastify, Hono, Koa, NestJS
- Python: Django REST Framework, FastAPI, Flask
- Java: Micronaut, Spring
- Kotlin: Ktor
- Scala: Play, Tapir
- PHP: Laravel, Slim, Symfony
- Ruby: Rails, Sinatra
- C#/.NET: ASP.NET, FastEndpoints, Nancy
- C++: Crow, Drogon, Oat++
- Swift: Vapor
- Haskell: Servant
- Elixir: Phoenix
- Gleam: Gleam (Wisp)
For languages without local runtimes on my machine—Haskell, Elixir, Gleam, Scala, Java, Kotlin—we created Docker Compose configurations with both an app service and a dev service for interactive development.
How Beads Made This Possible
We used beads (a lightweight git-native issue tracker) to manage the work. The structure was simple:
- 40 total issues created
- 36 closed in one session
- 5 Docker setup tasks marked as P2 priority (these blocked dependent fixtures)
- 31 fixture tasks at P3 priority
- 4 remaining for future work
The dependency tracking was key. Docker environments had to be ready before their fixtures could be worked on, and beads handled that automatically.
When I’d finish a Docker setup task, the blocked fixture tasks became available.
Claude Code agents worked through the fixture implementations in parallel where possible.
The combination of clear task definitions, dependency management, and AI-assisted coding meant we weren’t context-switching between “what do I need to do next?” and “how do I implement this?”
The Numbers
| Metric | Value |
| --- | --- |
| Total Issues | 40 |
| Closed | 36 |
| Avg Lead Time | 0.9 hours |
| New GitHub Repos | 31 |
| Languages Covered | 15 |

That average lead time of under an hour per framework includes everything: creating the repo, implementing the endpoints, testing, and pushing.
What’s Left
Four tasks queued for follow-up sessions:
- Drift detection - Compare generated specs against expected output
- Configurable report formats - JSON, HTML, and log output options
- CLAUDE.md files - Development instructions for each fixture
- Claude agents - Framework-specific coding assistants
The Takeaway
Doing this today was like having a superpower. Going from “I need to test across 36 frameworks” to actually having those test fixtures ready used to take days. With agents, Opus 4.5, and beads: BAM, done.
Beads gave us the structure to track dependencies and progress.
Claude Code agents handled the repetitive-but-different implementation work across languages and frameworks.
The combination let us focus on the interesting problems instead of the mechanical ones.
All 36 repos are live at github.com/api2spec with the `api2spec-fixture-*` naming convention.

Have you tried this approach yet?
/ DevOps / Open-source / Testing / Ai-tools / Claude-code
-
The API management platform built for self-hosted control, AI and agentic workloads, and distributed, low-latency scale. Tyk gives you full control to secure, govern, and scale the APIs that move your data, models, and decisions, across any cloud, hybrid, or on-prem environment.
/ AI / links / platform / self-hosted / open source / api / gateway
-
api2spec work will continue this weekend. We will add support for more frameworks and languages, and we’re finding gaps in the implementation as we build fixtures for each framework. If you have a framework you’d like to see supported, please let me know.
/ Open-source / Api2spec / Development
-
Introducing api2spec: Generate OpenAPI Specs from Source Code
You’ve written a beautiful REST API. Routes are clean, handlers are tested and the types are solid. But where’s your OpenAPI spec? It’s probably outdated, incomplete, or doesn’t exist at all.
If you’re “lucky”, you’ve been maintaining one by hand. The alternatives aren’t great either: runtime generation requires starting your app and hitting every endpoint, and annotation-heavy approaches clutter your code. We should all know by now that a manually maintained spec inevitably drifts from reality.
What if you could just point a tool at your source code and get an OpenAPI spec?
Enter api2spec
```shell
# Install
go install github.com/api2spec/api2spec@latest

# Initialize config (auto-detects your framework)
api2spec init

# Generate your spec
api2spec generate
```

That’s it. No decorators to add. No server to start. No endpoints to crawl.
What We Support
Here’s where it gets interesting. We didn’t build this for one framework—we built a plugin architecture that supports 30+ frameworks across 16 programming languages:
- Go: Chi, Gin, Echo, Fiber, Gorilla Mux, stdlib
- TypeScript/JavaScript: Express, Fastify, Koa, Hono, Elysia, NestJS
- Python: FastAPI, Flask, Django REST Framework
- Rust: Axum, Actix, Rocket
- PHP: Laravel, Symfony, Slim
- Ruby: Rails, Sinatra
- JVM: Spring Boot, Ktor, Micronaut, Play
- And more: Elixir Phoenix, ASP.NET Core, Gleam, Vapor, Servant…
How It Works
The secret sauce is tree-sitter, an incremental parsing library that can parse source code into concrete syntax trees.
Why tree-sitter instead of language-specific AST libraries?
- One approach, many languages. We use the same pattern-matching approach whether we’re parsing Go, Rust, TypeScript, or PHP.
- Speed. Tree-sitter is designed for real-time parsing in editors. It’s fast enough to parse entire codebases in seconds.
- Robustness. It handles malformed or incomplete code gracefully, which is important when you’re analyzing real codebases.
- No runtime required. Your code never runs. We can analyze code even if dependencies aren’t installed or the project won’t compile.
For each framework, we have a plugin that knows how to detect if the framework is in use, find route definitions using tree-sitter queries, and extract schemas from type definitions.
Let’s Be Honest: Limitations
Here’s where I need to be upfront. Static analysis has fundamental limitations.
When you generate OpenAPI specs at runtime (like FastAPI does natively), you have perfect information. The actual response types. The real validation rules. The middleware that transforms requests.
We’re working with source code. We can see structure, but not behavior.
What this means in practice:
- Route detection isn’t perfect. Dynamic routing or routes defined in unusual patterns might be missed.
- Schema extraction varies by language. Go structs with JSON tags? Great. TypeScript interfaces? We can’t extract literal union types as enums yet.
- We can’t follow runtime logic. If your route path comes from a database, we won’t find it.
- Response types are inferred, not proven.
This is not a replacement for runtime-generated specs, though it may get closer in the future. For many teams, it’s a massive improvement over having no spec at all.
Built in a Weekend
The core of this tool was built in three days.
- Day one: Plugin architecture, Go framework support, CLI scaffolding
- Day two: TypeScript/JavaScript parsers, schema extraction from Zod
- Day three: Python, Rust, PHP support, fixture testing, edge case fixes
Is it production-ready? Maybe?
Is it useful? Absolutely.
For the fixture repositories we’ve created—realistic APIs in Express, Gin, Flask, Axum, and Laravel—api2spec correctly extracts 20-30 routes and generates meaningful schemas. Not perfect. But genuinely useful.
How You Can Help
This project improves through real-world testing. Every fixture we create exposes edge cases. Every framework has idioms we haven’t seen yet.
- Create a fixture repository. Build a small API in your framework of choice. Run api2spec against it. File issues for what doesn’t work.
- Contribute plugins. The plugin interface is straightforward. If you know a framework well, you can make api2spec better at parsing it.
- Documentation. Found an edge case? Document it. Figured out a workaround? Share it.
The goal is usefulness, and useful tools get better when people use them.
Getting Started
```shell
go install github.com/api2spec/api2spec@latest
cd your-api-project
api2spec init
api2spec generate
cat openapi.yaml
```

If it works well, great! If it doesn’t, file an issue. Either way, you’ve helped.
api2spec is open source under the FSL-1.1-MIT license. Star us on GitHub if you find it useful.
Built with love, tree-sitter, and too much tea. ☕
/ DevOps / Programming / Openapi / Golang / Open-source