Testing
-
Why Testing Matters
There is a fundamental misunderstanding about testing in software development. The dirty, not-so-secret secret is that TESTING is more often than not seen as something we do after the fact, despite the best efforts of the TDD and BDD crowds.
So why is that the case, and why does TESTING matter?
All questions about software decisions lead to a maintainability answer.
If you write software that is intended to be used (and I don’t care how you write it, what language, what framework, or what your background is), it should be tested or it should be deleted/archived.
That sounds harsh but it’s the truth.
If you intend to run the software beyond the moment you built it, then it needs to be maintained. It could be used by someone else, or even by you at a later date; it doesn’t matter. Test it.
```
if software.intended_to_run > once: testing = required
```

That’s just the reality of the craft. Here is why.
Testing Is Showing Your Work
Remember proofs in math class? Testing is the software equivalent. It’s how you show your work. It’s how you demonstrate that the thing you built actually does what you say it does, and will keep doing it tomorrow.
Chances are your project has dependencies. What happens to those dependencies a month from now? Five years from now? A decade?
Code gets updated. Libraries evolve. APIs change. Testing makes sure that those future dependency updates aren’t going to cause regression issues in your application.
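One way to make this concrete is a pinning test: assert the exact behavior your code relies on, so an update that silently changes it fails loudly. A minimal sketch using Python’s standard library (the function under test, `render_user`, is hypothetical):

```python
import json


def render_user(name: str, age: int) -> str:
    """Hypothetical app code that relies on json.dumps formatting."""
    return json.dumps({"name": name, "age": age}, sort_keys=True)


def test_output_is_stable():
    # Pin the exact serialized form. If a future library or runtime
    # update changes key ordering or spacing, this fails in CI,
    # before a customer finds it.
    assert render_user("Ada", 36) == '{"age": 36, "name": "Ada"}'


test_output_is_stable()
```

The same pattern scales up: snapshot tests are pinning tests with bigger payloads.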
It’s a bet against future problems. If I write tests now, I reduce the time I spend debugging later. That’s not idealism, it’s just math.
```
T = Σ(B · D) - C
```

Where B is the probability of a bug, D is the debug time, C is the cost of writing tests, and T is the time saved.
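Plugging in some numbers makes the tradeoff visible. A quick sketch (all figures invented for illustration):

```python
def time_saved(bugs, test_cost_hours):
    """T = sum(B * D) - C: expected debug hours avoided minus test cost.

    `bugs` is a list of (probability, debug_hours) pairs.
    """
    return sum(b * d for b, d in bugs) - test_cost_hours


# Hypothetical: three likely failure modes vs. 4 hours spent on tests.
bugs = [(0.5, 8.0), (0.3, 16.0), (0.2, 24.0)]
print(time_saved(bugs, 4.0))  # 0.5*8 + 0.3*16 + 0.2*24 - 4 = 9.6 hours saved
```

As long as the expected debug time exceeds the cost of the tests, T stays positive.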
Protecting Your Team’s Work
If you’re working on a team at the ole' day job, you want to make sure that the code other people are adding isn’t breaking the stuff you’re working on, or the stuff you worked on six months ago. Add tests.
Tests give you that safety net. They’re the contract that says “this thing works, and if someone changes it in a way that breaks it, we’ll know immediately.”
Without tests, you’re essentially hoping for the best, and hope isn’t a good bet when it comes to the future of a software-based business.
Your Customers Are Not Your QA Team
Auto-deploying to production without any testing or verification process? That’s just crazy. You shouldn’t be implicitly or explicitly asking your customers to test your software. It’s not their responsibility. It’s yours.
Your job is to produce software that’s as bug-free as possible. Software that people can rely on. Reliable, maintainable software, that’s what you owe the people using what you build.
Bringing Testing to the Table
Look, I get it. Writing tests isn’t the most fun part of the job. However, a lot has changed in the past couple of years. You might have heard about this whole AI thing? With the agents we all have available to us, we can add tests with as few as 5 words.
“Write tests on new code.”
Looking back at that formula, we can see that C, the cost of writing tests, is quickly approaching zero. It just takes a bit of time for the tests to be written, and a bit of time to verify that the tests the agent added are useful.
Don’t worry about doing everything at the start and setting up a full CI pipeline to run the tests. Just start with the 5 words and add the complicated bits later.
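When you do get around to the CI bit, the starting point can be tiny. Here is a minimal GitHub Actions sketch for a Node project (workflow name, Node version, and commands are assumptions; swap in whatever your stack uses):

```yaml
# .github/workflows/test.yml
name: test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci
      - run: npm test
```

A dozen lines is enough to get every push and pull request running your suite.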
No excuses, just testing.
-
Ditch Jest for Vitest: A Ready-Made Migration Prompt
If you’ve ever sat there watching Jest crawl through your TypeScript test suite, you know pain. I mean, I know your pain.
When you switch to Vitest, the speed difference is genuinely dramatic. The reasons Jest is slow are easy to figure out; there are plenty of explanations out there, so I’ll leave it to you to look them up.
I put together a prompt you can hand to Claude (or any AI assistant) that will handle the migration for you. Let me know how it goes!
The Migration Prompt
```
Convert all Jest tests in this project to Vitest. Here's what to do:

## Setup

1. Remove Jest dependencies (`jest`, `ts-jest`, `@types/jest`, `babel-jest`, any jest presets)
2. Install Vitest: `pnpm add -D vitest`
3. Remove `jest.config.*` files
4. Add a `test` section to `vite.config.ts` (or create `vitest.config.ts` if no Vite config exists):

   import { defineConfig } from 'vitest/config'

   export default defineConfig({
     test: {
       globals: true,
     },
   })

5. Update the `test` script in `package.json` to `vitest`

## Test File Migration

For every test file:

1. Replace imports — Remove any `import ... from '@jest/globals'`. If `globals: true` is set, no imports are needed. Otherwise add:

   import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'

2. Replace `jest` with `vi` everywhere:
   - jest.fn() → vi.fn()
   - jest.mock() → vi.mock()
   - jest.spyOn() → vi.spyOn()
   - jest.useFakeTimers() → vi.useFakeTimers()
   - jest.useRealTimers() → vi.useRealTimers()
   - jest.advanceTimersByTime() → vi.advanceTimersByTime()
   - jest.clearAllMocks() → vi.clearAllMocks()
   - jest.resetAllMocks() → vi.resetAllMocks()
   - jest.restoreAllMocks() → vi.restoreAllMocks()
   - jest.requireActual() → vi.importActual() (note: async in Vitest)
3. Fix mock hoisting — vi.mock() is hoisted automatically, but variables used in the mock factory must be prefixed with `vi` or declared inside the factory.
4. Fix jest.requireActual — This becomes vi.importActual() and returns a Promise:

   // Jest
   jest.mock('./utils', () => ({
     ...jest.requireActual('./utils'),
     fetchData: jest.fn(),
   }))

   // Vitest
   vi.mock('./utils', async () => ({
     ...(await vi.importActual('./utils')),
     fetchData: vi.fn(),
   }))

5. Snapshot tests work the same way. No changes needed.
6. Timer mocks — same API after the jest → vi rename.
7. Module mocks with __mocks__ directories work identically.

## TypeScript Config

Add Vitest types to tsconfig.json:

   {
     "compilerOptions": {
       "types": ["vitest/globals"]
     }
   }

## After Migration

1. Delete leftover Jest config files
2. Update CI config to use `vitest run`
3. Run tests and fix any remaining failures
```
If you’re still on Jest and your TypeScript test suite is dragging, give Vitest a shot. The migration is low-risk, the speed improvements are real, and with the prompt above, the hardest parts are handled. ☕
/ Development / Testing / Typescript / Vitest / Jest
-
36 Framework Fixtures in One Session: How Beads + Claude Code Changed Our Testing Game
We built test fixtures for 36 web frameworks in a single session. Not days. Not a week of grinding through documentation. Hours.
Here’s what happened and why it matters.
The Problem
api2spec is a CLI tool that parses source code to generate OpenAPI specifications. To test it properly, we needed real, working API projects for every supported framework—consistent endpoints, predictable responses, the whole deal.
We started with 5 frameworks: Laravel, Axum, Flask, Gin, and Express. The goal was to cover all 36 supported frameworks with fixture projects we could use to validate our parsers.
What We Actually Built
36 fixture repositories across 15 programming languages. Each one includes:
- Health check endpoints (`GET /health`, `GET /health/ready`)
- Full User CRUD (`GET/POST /users`, `GET/PUT/DELETE /users/:id`)
- Nested resources (`GET /users/:id/posts`)
- Post endpoints with pagination (`GET /posts?limit=&offset=`)
- Consistent JSON response structures
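To give a feel for the “consistent JSON response structures” part, here is a sketch of a pagination envelope in Python (the field names are my assumption for illustration, not necessarily what the fixtures use):

```python
def paginate(items, limit=10, offset=0):
    """Build a consistent envelope for GET /posts?limit=&offset=."""
    page = items[offset:offset + limit]
    return {
        "data": page,       # the requested slice
        "limit": limit,     # echo the paging params back
        "offset": offset,
        "total": len(items),
    }


posts = [{"id": i, "title": f"Post {i}"} for i in range(1, 26)]
result = paginate(posts, limit=5, offset=10)
print(result["total"], [p["id"] for p in result["data"]])  # 25 [11, 12, 13, 14, 15]
```

Keeping the same envelope across every fixture is what lets one parser test suite validate 36 frameworks.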
The language coverage tells the story:
- Go: Chi, Echo, Fiber, Gin
- Rust: Actix, Axum, Rocket
- TypeScript/JS: Elysia, Express, Fastify, Hono, Koa, NestJS
- Python: Django REST Framework, FastAPI, Flask
- Java: Micronaut, Spring
- Kotlin: Ktor
- Scala: Play, Tapir
- PHP: Laravel, Slim, Symfony
- Ruby: Rails, Sinatra
- C#/.NET: ASP.NET, FastEndpoints, Nancy
- C++: Crow, Drogon, Oat++
- Swift: Vapor
- Haskell: Servant
- Elixir: Phoenix
- Gleam: Gleam (Wisp)
For languages without local runtimes on my machine—Haskell, Elixir, Gleam, Scala, Java, Kotlin—we created Docker Compose configurations with both an app service and a dev service for interactive development.
How Beads Made This Possible
We used beads (a lightweight git-native issue tracker) to manage the work. The structure was simple:
- 40 total issues created
- 36 closed in one session
- 5 Docker setup tasks marked as P2 priority (these blocked dependent fixtures)
- 31 fixture tasks at P3 priority
- 4 remaining for future work
The dependency tracking was key. Docker environments had to be ready before their fixtures could be worked on, and beads handled that automatically.
When I’d finish a Docker setup task, the blocked fixture tasks became available.
Claude Code agents worked through the fixture implementations in parallel where possible.
The combination of clear task definitions, dependency management, and AI-assisted coding meant we weren’t context-switching between “what do I need to do next?” and “how do I implement this?”
The Numbers
| Metric | Value |
| --- | --- |
| Total Issues | 40 |
| Closed | 36 |
| Avg Lead Time | 0.9 hours |
| New GitHub Repos | 31 |
| Languages Covered | 15 |

That average lead time of under an hour per framework includes everything: creating the repo, implementing the endpoints, testing, and pushing.
What’s Left
Four tasks queued for follow-up sessions:
- Drift detection - Compare generated specs against expected output
- Configurable report formats - JSON, HTML, and log output options
- CLAUDE.md files - Development instructions for each fixture
- Claude agents - Framework-specific coding assistants
The Takeaway
Doing this today was like having a superpower. Going from “I need to test across 36 frameworks” to actually having those test fixtures ready used to be a huge lift, but with agents, Opus 4.5, and beads: BAM, done!
Beads gave us the structure to track dependencies and progress.
Claude Code agents handled the repetitive-but-different implementation work across languages and frameworks.
The combination let us focus on the interesting problems instead of the mechanical ones.
All 36 repos are live at github.com/api2spec with the `api2spec-fixture-*` naming convention. Have you tried this approach yet?
/ DevOps / Open-source / Testing / Ai-tools / Claude-code