Development
-
Ditch Jest for Vitest: A Ready-Made Migration Prompt
If you’ve ever sat there watching Jest crawl through your TypeScript test suite, you know pain. I mean, I know your pain.
When you switch to Vitest, the speed difference is genuinely dramatic. The reasons Jest is slow are easy to figure out; there are plenty of explanations out there, so I’ll leave it to you to go look them up.
I put together a prompt you can hand to Claude (or any AI assistant) that will handle the migration for you. Let me know how it goes!
The Migration Prompt
Convert all Jest tests in this project to Vitest. Here's what to do:

## Setup

1. Remove Jest dependencies (`jest`, `ts-jest`, `@types/jest`, `babel-jest`, any jest presets)
2. Install Vitest: `pnpm add -D vitest`
3. Remove `jest.config.*` files
4. Add a `test` section to `vite.config.ts` (or create `vitest.config.ts` if no Vite config exists):

   import { defineConfig } from 'vitest/config'

   export default defineConfig({
     test: {
       globals: true,
     },
   })

5. Update the `test` script in `package.json` to `vitest`

## Test File Migration

For every test file:

1. Replace imports — Remove any `import ... from '@jest/globals'`. If `globals: true` is set, no imports are needed. Otherwise add:

   import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'

2. Replace `jest` with `vi` everywhere:
   - jest.fn() → vi.fn()
   - jest.mock() → vi.mock()
   - jest.spyOn() → vi.spyOn()
   - jest.useFakeTimers() → vi.useFakeTimers()
   - jest.useRealTimers() → vi.useRealTimers()
   - jest.advanceTimersByTime() → vi.advanceTimersByTime()
   - jest.clearAllMocks() → vi.clearAllMocks()
   - jest.resetAllMocks() → vi.resetAllMocks()
   - jest.restoreAllMocks() → vi.restoreAllMocks()
   - jest.requireActual() → vi.importActual() (note: async in Vitest)
3. Fix mock hoisting — vi.mock() is hoisted automatically, so variables used in the mock factory must be declared inside the factory or created with vi.hoisted().
4. Fix jest.requireActual — This becomes vi.importActual() and returns a Promise:

   // Jest
   jest.mock('./utils', () => ({
     ...jest.requireActual('./utils'),
     fetchData: jest.fn(),
   }))

   // Vitest
   vi.mock('./utils', async () => ({
     ...(await vi.importActual('./utils')),
     fetchData: vi.fn(),
   }))

5. Snapshot tests work the same way. No changes needed.
6. Timer mocks — same API after the jest → vi rename.
7. Module mocks with __mocks__ directories work identically.

## TypeScript Config

Add Vitest types to tsconfig.json:

   {
     "compilerOptions": {
       "types": ["vitest/globals"]
     }
   }

## After Migration

1. Delete leftover Jest config files
2. Update CI config to use `vitest run`
3. Run tests and fix any remaining failures
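For a concrete before-and-after feel, here’s a minimal sketch of what a migrated test might look like. The `./user` and `./api` modules and the `getUserName`/`fetchUser` names are hypothetical; the point is the `jest` → `vi` renames and the now-async `importActual`.

// user.test.ts: sketch of a migrated test running under Vitest
import { describe, it, expect, vi } from 'vitest'
import { getUserName } from './user' // hypothetical module under test

// vi.mock is hoisted to the top of the file; the factory must not reference
// variables declared outside it (use vi.hoisted() if it needs to).
vi.mock('./api', async () => ({
  ...(await vi.importActual('./api')), // was: jest.requireActual (synchronous)
  fetchUser: vi.fn().mockResolvedValue({ name: 'Ada' }), // was: jest.fn()
}))

describe('getUserName', () => {
  it('returns the name from the mocked API', async () => {
    await expect(getUserName(1)).resolves.toBe('Ada')
  })
})

With `globals: true` in the config, you could even drop the `vitest` import line, as the prompt notes.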
If you’re still on Jest and your TypeScript test suite is dragging, give Vitest a shot. The migration is low-risk, the speed improvements are real, and with the prompt above, the hardest parts are handled. ☕
/ Development / Testing / Typescript / Vitest / Jest
-
JavaScript Package Managers: NPM, Yarn, PNPM, and Bun Compared
If you’ve been writing JavaScript for any length of time, you’ve probably had opinions about package managers. Everyone has used npm because it’s the default. Maybe you switched to Yarn back in 2016 and haven’t looked back. These days, there are better options.
That may seem like a bold statement, but bear with me. This article is a mix of opinions and facts. The package manager landscape has changed quite a bit in the last decade, and it’s worth exploring. Let’s break it down.
NPM: The Default Everyone Knows
npm is the package manager that ships with Node. It works. Everyone knows it. Node modules are straightforward to reason about, and security has been improving over the years.
But npm has historically struggled with performance. That’s partly a design problem: it was so widely adopted that making fundamental speed improvements meant risking breakage for the massive ecosystem already depending on it. When you’re supporting millions of packages, you have to be careful about breaking backward compatibility, which makes optimization a lot harder.
This performance gap is exactly what opened the door for alternatives.
Yarn: The Pioneer That Over-Optimized
Yarn showed up in 2016, created by Facebook, and it genuinely pushed the ecosystem forward. It parallelized downloads, introduced offline caching, and most notably, introduced lock files to JavaScript. npm eventually adopted lock files too, so Yarn’s influence on the broader ecosystem is undeniable.
Lock files did exist in other language ecosystems before 2016, such as Ruby and PHP, but Yarn was the first JavaScript package manager to include them.
The problem came with Yarn 2. It’s a classic case of over-optimization.
Yarn 2 introduced Plug’n’Play mode, which replaces your `node_modules` folder with zip files. We’re on Yarn 4 now, and while you can swap between modes, if you’re in the zip mode it becomes genuinely painful to inspect the actual JavaScript code you’re installing. You have to unzip things, dig through archives, and it just adds friction where there shouldn’t be any.

If you enjoy making JavaScript development harder than it needs to be, Yarn’s PnP mode has you covered. It’s your toxic coworker’s favorite tool.
PNPM: The Clear Upgrade
If you look at the benchmarks, pnpm wins in almost every common scenario. Running install with a warm cache, lock file, and existing node modules? Faster than npm. A clean install with nothing cached? 7 seconds versus 30 seconds. That’s not a marginal improvement.
Speed isn’t even the best part. pnpm uses hard links from a centralized store instead of copying packages into every project’s `node_modules`. Depending on the size of your projects, you can save up to 70% of your disk space compared to npm. If you’re working on multiple JavaScript projects (and who isn’t?), that adds up fast.

pnpm also handles the `node_modules` structure in a way that’s strict by default, which means your code can’t accidentally import packages you haven’t explicitly declared as dependencies. It catches bugs that npm would let slide.
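To make that strictness concrete, here’s a rough sketch (the package names are just an example): imagine a project whose package.json declares only `express`, which itself depends on `qs`.

// app.ts: hypothetical project whose package.json declares only `express`
import express from 'express' // declared dependency, resolves under any package manager

// `qs` is a transitive dependency of express that this project never declared.
// npm's flattened node_modules usually lets this import work by accident;
// pnpm's strict layout refuses to resolve it, so the missing dependency
// shows up at build time instead of in production.
import qs from 'qs'

const app = express()
app.get('/search', (req, res) => res.json(qs.parse(req.url.split('?')[1] ?? '')))
app.listen(3000)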
So pnpm is the clear winner, right? Well, there’s one more contender we haven’t talked about yet.
Bun: The Speed Demon
Bun was released in 2023, and it’s noticeably faster than pnpm.
The reason comes down to architecture. pnpm is written in TypeScript and runs on Node, which means every time you run `pnpm install`, your computer has to start the V8 engine, load all the JavaScript, compile it, and then ask the operating system to do the actual work. That’s a lot of overhead.

Bun is a compiled binary written in Zig. It talks directly to your kernel, no middleman, no V8 engine slowing down every tiny decision. On top of that, Bun is hyper-focused on optimizing system calls. Instead of doing file operations one at a time (open file A, write file A, close file A, repeat a thousand times), it aggressively batches them together. The result is speed improvements not just in disk operations but in everything it does.
Earlier versions of Bun had an annoying quirk similar to Yarn: they used a binary lock file that was difficult to audit manually. That’s been fixed. Bun now uses a readable lock file, which removes the biggest objection people had.
Which begs the question…
So Why Isn’t Everyone Using Bun?
The short answer: it’s complicated. Bun isn’t just a package manager, it also replaces Node as your runtime. If you’re using Bun as your runtime, using it as your package manager makes total sense. Everything fits together.
But most teams are still on Node. And when you’re on Node, pnpm is the clearer choice for everyone involved. A new developer joining your team sees pnpm and immediately knows, “Oh, this is a JavaScript project, I know how this works.” Bun as a package manager on top of Node adds a layer of “wait, why are we using this?” that you have to explain.
Maybe that changes in the future as Bun’s runtime adoption grows. I’m sure the Bun team is working hard to make that transition as smooth as possible. But the reality right now is that most JavaScript projects are running on Node.
My Recommendation
If you’re starting a new project or looking to switch:
- Using Bun as your runtime? Use Bun for package management too. It’s the fastest option and everything integrates cleanly.
- On Node (most of us)? Use pnpm. It’s faster than npm, saves disk space, and is strict in ways that catch real bugs. Your team will thank you.
- Still on npm? You’re not doing anything wrong, but you’re leaving performance and disk space on the table for no real benefit.
- On Yarn PnP? I have questions, but I respect your commitment.
The JavaScript ecosystem moves fast, and if you haven’t revisited your package manager choice in a while, it might be worth running a quick benchmark on your own project. The numbers might surprise you.
/ Tools / Development / javascript
-
REPL-Driven Development Is Back (Thanks to AI)
So you’ve heard of TDD. Maybe BDD. But have you heard of RDD?
REPL-driven development. I think most programmers these days don’t work this way. The closest equivalent most people are familiar with is something like Python notebooks—Jupyter or Colab.
But RDD is actually pretty old. Back in the 70s and 80s, Lisp and Smalltalk were basically built around the REPL. You’d write code, run it immediately, see the result, and iterate. The feedback loop was instant.
Then the modern era of software happened. We moved to a file-based workflow, probably stemming from Unix, C, and Java. You write source code in files. There’s often a compilation step. You run the whole thing.
The feedback loop got slower and more disconnected. Some languages we use today, like Python, Ruby, JavaScript, and PHP, include a REPL, but that’s not usually how we develop. We write files, run tests, refresh browsers.
Here’s what’s interesting: AI coding assistants are making these interactive loops relevant again.
The new RDD is natural language as a REPL.
Think about it. The traditional REPL loop was:
- Type code
- System evaluates it
- See the result
- Iterate
The AI-assisted loop is almost identical:
- Type (or speak) your intent in natural language
- AI interprets and generates code
- AI runs it and shows you the result
- Iterate
You describe what you want. The AI writes the code. It executes. You see what happened. If it’s not right, you clarify, and the loop continues.
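To make the analogy concrete, here’s a toy sketch of that loop; `askModelForCode` is a hypothetical stand-in for whatever AI call you’d actually make, and evaluating generated code with node:vm is purely for illustration.

// nl-repl.ts: toy "natural language as a REPL" loop.
// askModelForCode is a hypothetical placeholder, not a real API.
import * as readline from 'node:readline/promises'
import { stdin, stdout } from 'node:process'
import { runInNewContext } from 'node:vm'

// Hypothetical: a real tool would call your AI assistant of choice here.
async function askModelForCode(intent: string): Promise<string> {
  return JSON.stringify(`(pretend this is code generated for: ${intent})`)
}

async function main() {
  const rl = readline.createInterface({ input: stdin, output: stdout })
  while (true) {
    const intent = await rl.question('what do you want? > ') // 1. state your intent
    if (intent === 'exit') break
    const code = await askModelForCode(intent)                // 2. AI writes the code
    try {
      console.log('result:', runInNewContext(code))           // 3. run it, see the result
    } catch (err) {
      console.error('not quite, clarify and go again:', err)  // 4. iterate
    }
  }
  rl.close()
}

main()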
This feels fundamentally different from the file-based workflow most of us grew up with. You’re not thinking about which file to open; you’re thinking about what you want to happen, and you’re having a conversation until it does.
Of course, this isn’t a perfect analogy. With a traditional REPL, you have more control: you understand exactly what’s being evaluated because you wrote it.
>>> while True:
...     history.repeat()

/ AI / Programming / Development
-
I usually brainstorm spec docs using Gemini or Claude, so if you’re like me, this prompt gives some interesting insight into your own software decisions.
Based off our previous chats and the previous documents you've helped me with, provide a detailed summary of all my software decisions and preferences when it comes to building different types of applications.

/ AI / Development
-
Switching to mise for Local Dev Tool Management
I’ve been making some changes to how I configure my local development environment, and I wanted to share what I’ve decided on.
Let me introduce you to mise (pronounced “meez”), a tool for managing your programming language versions.
Why Not Just Use Homebrew?
Homebrew is great for installing most things, but I don’t like using it for programming language version management. It’s too brittle. How many times has `brew upgrade` decided to switch your Python or Node version on you, breaking projects in the process? Too many, in my experience.

mise solves this elegantly. It doesn’t replace Homebrew entirely; you’ll still use that for general stuff. But for managing your system programming language versions, mise is the perfect tool.
mise the Great, mise the Mighty
mise has all the features you’d expect from a version manager, plus some nice extras:
Shims support: If you want shims in your bash or zsh, mise has you covered. You’ll need to update your RC file to get them working, but once you do, you’re off to the races.
Per-project configuration: mise can work at the application directory level. You set up a `mise.toml` file that defines its behavior for that specific project.

Environment management: You can set up environment variables directly in the toml file, auto-configure your package manager, and even have it auto-create a virtual environment.
It can also load environment variables from a separate file if you’d rather not put them in the toml (which you probably want if you’re checking the file in).
It’s not a package manager: This is important. You still need poetry or uv for Python package management. As a reminder: don’t ever use pip. Just don’t.
A Quick Example
Here’s what a `.mise.toml` file looks like for a Python project:

[tools]
python = "3.12.1"
"aqua:astral-sh/uv" = "latest"

[env]
# uv respects this for venv location
UV_PROJECT_ENVIRONMENT = ".venv"
_.python.venv = { path = ".venv", create = true }

Pretty clean, right? This tells mise to use Python 3.12.1, install the latest version of uv, and automatically create a virtual environment in `.venv`.

Note on Poetry Support
I had to install Python from source using mise to get Poetry working; you’ll want to leave that setting set to true, because there’s some problem with the precompiled binaries it uses.
You can install global python packages, like poetry, with the following command:
mise use --global poetry@latest

Yes, It’s Written in Rust
The programming veterans among you may have noticed the toml configuration format and thought, “Ah, must be a Rust project.” And you’d be right. mise is written in Rust, which means it’s fast! The project is stable, has a ton of GitHub stars, and is actively maintained.
Task Runner Built-In
One feature I wasn’t expecting: mise has a built-in task runner. You can define tasks right in your `mise.toml`:

[tasks."venv:info"]
description = "Show Poetry virtualenv info"
run = "poetry env info"

[tasks.test]
description = "Run tests"
run = "poetry run pytest"

Then run them with `mise run test` or `mise r venv:info`.

If you’ve been putting off setting up Make for a project, this is a compelling alternative. The syntax is cleaner and you get descriptions for free.
I’ll probably keep using Just for more complex build and release workflows, but for simple project tasks, mise handles it nicely. One less tool to install.
My Experience So Far
I literally just switched everything over today, and it was a smooth process. Nothing too major so far. I’ll report back if anything breaks, but the migration from my previous setup was straightforward.

Now I need to get the other languages I use, like Go, Rust, and PHP, set up and moved to mise. Having everything consolidated into one tool is going to be so nice.
If you’re tired of Homebrew breaking your language versions or juggling multiple version managers for different languages, give mise a try.
The documentation is solid, and the learning curve is minimal.
/ DevOps / Tools / Development / Python
-
The Rise of Spec-Driven Development: A Guide to Building with AI
Spec-driven development isn’t new. It has its own Wikipedia page and has been around longer than you might realize.
With the explosion of AI coding assistants, this approach has found new life and we now have a growing ecosystem of tools to support it.
The core idea is simple: instead of telling an AI “hey, build me a thing that does the boops and the beeps” then hoping it reads your mind, you front-load the thinking.
It’s kinda obvious, with it being in the name, but in case you are wondering, here is how it works.
The Spec-Driven Workflow
Here’s how it typically works:
- Specify: Start with requirements. What do you want? How should it behave? What are the constraints?
- Plan: Map out the technical approach. What’s the architecture? What “stack” will you use?
- Task: Break the plan into atomic, actionable pieces. Create a dependency tree (this must happen before that) and define the order of operations. This is often done by the tool.
- Implement: You work with whatever tool to build the software from your task list. The human is (or should be) responsible for deciding when a task is completed.
You are still a part of the process. It’s up to you to make the decisions at the beginning. It’s up to you to define the approach. And it’s up to you to decide you’re done.
So how do you get started?
The Tool Landscape
The problem we have now is that there’s no unified standard. The tool makers are too busy building their moats to take the time to agree on one.
Standalone Frameworks:
- Spec-Kit - GitHub’s own toolkit that makes “specifications executable.” It supports multiple AI agents through slash commands and emphasizes intent-driven development.
- BMAD Method - Positions AI agents as “expert collaborators” rather than autonomous workers. Includes 21+ specialized agents for different roles like product management and architecture.
- GSD (Get Shit Done) - A lightweight system that solves “context rot” by giving each task a fresh context window. Designed for Claude Code and similar tools.
- OpenSpec - Adds a spec layer where humans and AI agree on requirements before coding. Each feature gets its own folder with proposals, specs, designs, and task lists.
- Autospec - A CLI tool that outputs YAML instead of markdown, enabling programmatic validation between stages. Claims up to 80% reduction in API costs through session isolation.
Built Into Your IDE:
The major AI coding tools have adopted this pattern too:
- Kiro - Amazon’s new IDE with native spec support
- Cursor - Has a dedicated plan mode
- Claude Code - Plan mode for safe code analysis
- VSCode Copilot - Chat planning features
- OpenCode - Multiple modes including planning
- JetBrains Junie - JetBrains' AI assistant
- Google Antigravity - Implementation planning docs
- Gemini Conductor - Orchestration for Gemini CLI
Memory Tools
- Beads - Use it to manage your tasks. Works very well with your Agents in Claude Code.
Why This Matters
When you first start building with AI, you might dive right in and just say “go build the thing,” then keep iterating on a task until it falls apart the moment you try to do anything substantial.

You end up playing a game of whack-a-mole, where you fix one thing and break another. This probably sounds familiar to a lot of you from the olden times of two years ago, when us puny humans did all the work. The point being: even the robots make mistakes.
Another thing that you come to realize is it’s not a mind reader. It’s a prediction engine. So be predictable.
What did we learn? With spec-driven development, you’re in charge. You are the architect. You decide. The AI handles the details and the execution, but it needs structure, and these methods are how we provide it.
/ AI / Programming / Tools / Development
-
Everyone is crashing out over the OpenCode situation. Why not just use Claude Code (2.1+)? Or, you know, there’s AMP, which looks equally interesting to me.
/ AI / Tools / Development
-
Gitea - Git with a cup of tea! Painless self-hosted all-in-one software development service, including Git hosting, code review, team collaboration, package registry and CI/CD
/ DevOps / Development / links / platform / self-hosted / code
-
How AI Tools Help Founders Code Again: My Experience with Claude Code
From intimidation to empowerment: how AI tools made modern web development accessible again for a founder who hadn’t coded in 15 years
/ Development / Claude / links / code / coding lessons
-
Claude Code Learning Hub - Master AI-Powered Development
Learn Claude Code, VS Code, Git/GitHub, Python, and R with hands-on tutorials. Build real-world projects with AI assistance.
/ Programming / Development / links / tutorials / code / coding lessons / learning
-
api2spec work will continue this weekend. We will add support for more frameworks and languages. Finding gaps in the implementation as we build fixtures for each framework. If you have a framework you’d like to see supported, please let me know.
/ Open-source / Api2spec / Development