-
Managing Your Context Window in Claude Code
If you’re using Claude Code, there’s a feature you should know about that gives you visibility into how your context window is being used. The /context command breaks everything down so you can see exactly where your tokens are going. Here’s what it shows you:
- System prompt – the base instructions Claude Code operates with
- System tools – the built-in tool definitions
- Custom agents – any specialized agents you’ve configured
- Memory files – your CLAUDE.md files and auto-memory
- Skills – any skills loaded into the session
- Messages – your entire conversation history
Messages is where you have the most control, and it’s also what grows the fastest. Every prompt you send, every response you get back, every file read, every tool output; it all shows up in your message history.
Then there’s the free space, which is what’s left for actual work before a compaction occurs. This is the breathing room Claude Code has to think, generate responses, and use tools.
You’ll also see a buffer amount that’s reserved for auto-compaction. You can’t use this space directly; it’s set aside so Claude Code has enough room to summarize the conversation and hand things off cleanly.
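To make the arithmetic concrete, here’s a minimal sketch with made-up token counts — the real numbers for each category come from running /context in your own session:

```python
# Hypothetical token counts for a 200k-token context window.
# The real values come from the /context breakdown in your session.
CONTEXT_WINDOW = 200_000

usage = {
    "system_prompt": 3_000,
    "system_tools": 12_000,
    "custom_agents": 1_500,
    "memory_files": 2_000,
    "skills": 1_000,
    "messages": 60_000,  # grows fastest: prompts, responses, file reads, tool output
}
AUTOCOMPACT_BUFFER = 30_000  # reserved so compaction has room to summarize

used = sum(usage.values())
free = CONTEXT_WINDOW - used - AUTOCOMPACT_BUFFER
print(f"used={used:,} free={free:,}")  # used=79,500 free=90,500
```

The point of the sketch: everything except "messages" is roughly fixed per session, so the messages line is the one that eats your free space.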
Why This Matters
Understanding your context usage helps you work more efficiently. A few ways to keep your context lean:
- Start fresh sessions for new tasks instead of reusing a long-running one
- Be intentional about file reads — only read what you need, not entire directories
- Use sub-agents — when you delegate work to a sub-agent, it runs in its own context window instead of yours. All those file reads, tool calls, and intermediate reasoning happen over there, and you just get the result back. It’s one of the best ways to preserve your primary context for the work that actually needs it.
- Trim your CLAUDE.md — everything in your memory files loads every session, so keep it tight
I’ll dig into sub-agents more in a future post. For now, don’t forget about /context.
/ AI / Claude-code / Developer-tools
-
Built a thing: Chord Compass — an interactive circle of fifths that lights up chord relationships when you click a key. Load preset progressions like I–V–vi–IV, hit play, hear them back. It looks like a brass compass from an old music theory textbook and I love it.
-
I built Scale Lab — pick a root note, choose from 14 scales and modes, and watch the notes light up on a piano keyboard. Hit play to hear them ascend. Just a bit of music theory fun. Enjoy!
-
I published Agentic Maturity Model to GitHub, a mental framework for thinking about and categorizing AI tools. It’s open to contributions and I’m looking for coauthors.
/ AI / Open-source / Agentic
-
Dear Microsoft, Here Is How I Would Fix Xbox
Microsoft Gaming has a profit margin problem. Their marching orders are to hit 30%, and right now (we think) they’re sitting somewhere in the 12% to 15% range. That means they need to roughly double their margins, a massive ask for a division with 20,000 employees across a dozen studios.
For context, even the most successful studios only hit 40% margins on a great year. CD Projekt Red, a publicly traded company, can clear that mark when they ship a massive IP like The Witcher or Cyberpunk. But CD Projekt Red is around 1,200 employees. Microsoft Gaming is much, much larger, so comparing one focused studio to an entire publishing empire isn’t exactly fair.
So how do you close that gap? You either increase revenue or cut spending. I think the answer starts with fixing Game Pass.
Simplify Game Pass
Here’s my pitch: kill Game Pass Core and the middle tier. Get rid of the confusing lineup of plans and just have one. The current Game Pass Ultimate becomes “Game Pass” at a reduced $25/month.
Right now, Game Pass has too many tiers doing too little to differentiate themselves. Core exists to charge people for online multiplayer, which feels increasingly absurd when I’m only playing free-to-play games like Minecraft, Marvel Rivals, or Fortnite. I shouldn’t have to pay $10/month just to play online in games that are free everywhere else.
One plan. One price. Simple.
Fix the New Release Problem
Game Pass’s biggest issue is that day-one releases cannibalize game sales. Every copy someone plays through Game Pass instead of buying is lost revenue, and for a division trying to double its margins, that’s a problem.
Here’s what I’d do: new releases still come to Game Pass on day one, but they rotate out after a couple of months. Want to keep playing? Buy the game. Then, after a couple of years, those titles can come back to the Game Pass catalog permanently as part of the back catalog.
This gives subscribers the chance to try new games, which is the whole appeal of the service. But it stops Game Pass from being a reason to never buy anything. Let me discover games through the service, but if I want to keep playing after a title rotates out, make me buy it.
You get the best of both worlds: Game Pass stays exciting with new releases, and you stop bleeding game sales.
Replace xCloud with GeForce Now
Now I don’t know a ton about xCloud, but what I do know is that it’s not as good as GeForce Now. So why not just build GeForce Now into the Xbox ecosystem?
Microsoft doesn’t need to own every piece of the stack. If NVIDIA already has the best cloud gaming solution, partner with them. Integrate it into the Xbox app and console experience. The user doesn’t care whose servers are running the game, they care that it works well.
Make Games Portable Across Platforms
This is the big one. If I buy a game, I should be able to play it on any device I want: on my Xbox, on PC through the Xbox app, or streamed through GeForce Now. One purchase, play anywhere.
Right now, Steam lets me play my library on basically anything. That’s a huge reason people buy games there instead of the Microsoft Store. If Xbox matched that with true cross-platform ownership, it would actually give people a reason to buy from Microsoft.
Maybe this waits for the next Xbox hardware revision, but tell people now that it’s coming. Set the expectation. Give people a reason to start building their library in the Xbox ecosystem.
The Takeaway
The path to 30% margins isn’t about cutting studios or raising prices. It’s about making the Xbox ecosystem so compelling that people actually want to spend money in it. Simplify Game Pass, stop cannibalizing sales, partner where it makes sense, and make purchases feel valuable by letting people play anywhere.
Microsoft has the studios. They have the IP. They have the infrastructure. They just need to stop overcomplicating things and give gamers a reason to choose Xbox, not just subscribe to it.
-
I built a frustration game disguised as a broken SaaS dashboard. Cookie banners, modals and upgrade prompts. Close one and two more spawn. The modern internet as a playable joke.
-
I built a hacker terminal typing game. Type Python symbols to “decrypt” corrupted code, hex becomes assembly, then pseudocode, then real source. Mess up and the screen glitches. A typing tutor for programmers that doesn’t feel like one.
/ Gaming / Programming / Python
-
Claude Code Prompts for Taming Your GitHub Repository Sprawl
Some useful Claude Code prompts for GitHub repository management.
1. Archive stale repositories
Using the GitHub CLI (gh), find all of my repositories that haven't been pushed to in over 5 years and archive them. List them first and ask for my confirmation before archiving. Use gh repo list <user> --limit 1000 --json name,pushedAt to get the data, then filter by date, and archive with gh repo archive <user>/<repo> --yes.

2. Add missing descriptions

Using the GitHub CLI, find all of my repositories that have an empty or missing description. Use gh repo list <user> --limit 1000 --json name,description,url to get the data. For each repo missing a description, look at the repo's README and any other context to suggest an appropriate description. Present your suggestions to me for approval, then apply them using gh repo edit <user>/<repo> --description "<description>".

3. Add missing topics/tags

Using the GitHub CLI, find all of my repositories that have no topics. Use gh repo list <user> --limit 1000 --json name,repositoryTopics,description,primaryLanguage to get the data. For each repo with no topics, analyze the repo name, description, and primary language to suggest relevant topics. Present your suggestions for approval, then apply them using gh api -X PUT repos/<user>/<repo>/topics -f "names[]=tag1" -f "names[]=tag2".

To make #1 easier, repjan is a TUI tool that pulls all your repos into an interactive dashboard. It flags archive candidates based on inactivity and engagement, lets you filter and sort through everything, and batch archive in one sweep. If you’ve got hundreds of repos piling up, it’s way faster than doing it one by one.
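The filtering step behind prompt #1 is simple enough to sketch. This is a hedged illustration of the logic Claude would run, with a sample payload inlined in the shape `gh repo list <user> --json name,pushedAt` actually emits (repo names here are made up):

```python
import json
from datetime import datetime, timedelta, timezone

# Sample payload in the shape `gh repo list <user> --json name,pushedAt` returns.
# In the real workflow this comes from the gh CLI, not a literal string.
payload = json.loads("""[
  {"name": "old-experiment", "pushedAt": "2018-03-01T12:00:00Z"},
  {"name": "active-project", "pushedAt": "2026-01-15T09:30:00Z"}
]""")

# Anything not pushed to in the last 5 years is an archive candidate.
cutoff = datetime.now(timezone.utc) - timedelta(days=5 * 365)

stale = [
    repo["name"]
    for repo in payload
    if datetime.fromisoformat(repo["pushedAt"].replace("Z", "+00:00")) < cutoff
]
print(stale)  # confirm this list, then: gh repo archive <user>/<repo> --yes
```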
/ Productivity / Claude-code / Developer-tools / Github
-
First in a series of posts on vibe coded projects.
SRE Commander — a keyboard-only RTS where you manage servers through traffic spikes, memory leaks, and cascading failures. Over-provision? Bleed cash. Under-provision? Everything buckles.
/ Gaming / Vibe-coding / Sre
-
Viral — you play as a virus, not a single ship, but a living swarm of particles that flock using boid behavior. Consume cells, grow your numbers, dodge antibodies. Simple loop: infect, grow, survive. Looks like a neon petri dish under a blacklight.
/ Gaming / Vibe-coding
-
What I Actually Mean When I Say "Vibe Coding"
Have you thought about what you actually mean when you tell someone you vibe code? The term gets thrown around a lot, but I think there’s a meaningful distinction worth spelling out. Here’s how I think about it.
Vibe coding, to me, doesn’t just mean you’re using an LLM; at this point, it’s hard to avoid using one no matter what type of work you’re doing. The distinction isn’t about the tools, it’s about the intent and the stakes.
Vibe Coding vs. Engineering Work
To me, vibe coding is when I have a simple idea and I want to build a self-contained app that expresses that idea. There’s no spec, no requirements doc, no code review. Just an idea and the energy to make it real.
Engineering work, whether that’s context engineering, spec-driven development, or the old-fashioned way, is when I’m working on open source or paid work. There’s structure, there are standards, and the code needs to hold up over time.
Both are fun. But I take vibe coding less seriously than engineering work.
Vibe coding is the creative playground. Engineering is the craft.
My Setup
All my vibe coded apps live in a single repository. The code isn’t public, but the results are. You can find all my ideas at logan.center.
I have it set up so that every time I add a new folder, it automatically gets its own subdomain.
I’m using Caddy for routing and deploying to Railway. I have an idea, create the app, and boom — it’s live.
I would like to open source a template version of this setup so other people could deploy to something like Railway easily, but I haven’t gotten around to building that yet. One day.
For my repository, I decided it would be fun to build everything in Svelte instead of React.
That’s why you may have seen a bunch of posts from me lately about learning how Svelte works. It’s been fun because the framework stays out of your way and lets you move fast, which is exactly what you want when you’re chasing the vibe.
So, for me, vibe coding is a specific thing: low stakes, high creativity, self-contained apps, and the freedom to just build without overthinking it.
I mean, I’m not crazy; I still have testing set up…
/ Programming / Vibe-coding / svelte
-
Stop Using pip. Seriously.
If you’re writing Python in 2026, I need you to pretend that pip doesn’t exist. Use Poetry or uv instead.
Hopefully you’ve read my previous post on why testing matters. If you haven’t, go read that first. Back? Hopefully you are convinced.
If you’re writing Python, you should be writing tests, and you can’t do that properly with pip. It’s an unfortunate but true state of Python right now.
In order to write tests, you need dependencies, which is how we get to the root of the issue.
The Lock File Problem
The closest thing pip has to a lock file is pip freeze > requirements.txt, and it just doesn’t cut the mustard. It’s a flat list of pinned versions.

A proper lock file captures the resolution graph: the full picture of how your dependencies relate to each other. It distinguishes between direct dependencies (the packages you asked for) and transitive dependencies (the packages they pulled in). A requirements.txt doesn’t do any of that.

“Ok, so?” you might be asking yourself.

It means you can’t guarantee that running pip install -r requirements.txt six months or six minutes from now will give you the same copy of all your dependencies. It’s not repeatable. It’s not deterministic. It’s not reliable.

The one constant in code is that it changes. Without a lock file, you’re rolling the dice every time.
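Here’s a small sketch of the difference, with a hypothetical dependency set. The package names are real, but the lock structure is a simplified illustration, not any particular tool’s format:

```python
# A flat `pip freeze` gives you pins with no structure — you can't tell
# which packages you asked for and which ones came along for the ride.
freeze_output = ["requests==2.31.0", "urllib3==2.0.7", "idna==3.4"]

# A lock file captures the resolution graph (simplified, illustrative shape).
lock = {
    "direct": {"requests": "2.31.0"},
    "transitive": {
        "urllib3": {"version": "2.0.7", "required_by": ["requests"]},
        "idna": {"version": "3.4", "required_by": ["requests"]},
    },
}

# From the graph you can answer "why is urllib3 here?" — from freeze, you can't.
why = lock["transitive"]["urllib3"]["required_by"]
print(why)
```

That `required_by` edge is also what lets a real lock file safely drop urllib3 the day requests stops needing it, something a flat pin list can never do.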
Everyone Else Figured This Out
Every other modern language ecosystem “solved” this problem years ago:
- JavaScript has package-lock.json (npm) and pnpm-lock.yaml (pnpm)
- Rust has Cargo.lock
- Go has go.sum
- Ruby has Gemfile.lock
- PHP has composer.lock
Python’s built-in package manager just… doesn’t have this.
That’s a real problem when you’re trying to build reproducible environments, run tests in CI, or deploy with any confidence that what you tested locally is what’s running in production.
What to Use Instead
Both Poetry and uv solve the lock file problem and give you reproducible environments. They’re more alike than different — here’s what they share:
- Lock files with full dependency resolution graphs
- Separation of dev and production dependencies
- Virtual environment management
- pyproject.toml as the single config file
- Package building and publishing to PyPI
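For reference, a minimal pyproject.toml in the standard `[project]` format looks like this. The project name and dependencies are hypothetical; the `[dependency-groups]` table is the PEP 735 style uv uses for dev dependencies, while Poetry has its own group syntax:

```toml
[project]
name = "my-app"                     # hypothetical project
version = "0.1.0"
requires-python = ">=3.10"
dependencies = ["requests>=2.31"]   # direct runtime dependencies

[dependency-groups]
dev = ["pytest>=8.0"]               # dev-only, kept out of production installs
```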
Poetry is the more established option. It’s at version 2.3 (released January 2026), supports Python 3.10–3.14, and has been the go-to alternative to pip for years. It’s stable, well-documented, and has a large ecosystem of plugins.
uv is the newer option from Astral (the team behind Ruff). It’s written in Rust and is 10–100x faster than pip at dependency resolution. It can also manage Python versions directly, similar to mise or pyenv. It’s currently at version 0.10, so it hasn’t hit 1.0 yet, but it’s gaining adoption fast.
You can’t go wrong with either. Pick one, use it, and stop using pip.
/ DevOps / Programming / Python
-
Why Testing Matters
There is a fundamental misunderstanding about testing in software development. The dirty, not-so-secret secret is that testing is more often than not seen as something we do after the fact, despite the best efforts of the TDD and BDD crowds.
So why is that the case, and why does testing matter?
All questions about software decisions lead to a maintainability answer.
If you write software that is intended to be used — and I don’t care how you write it, what language, what framework, or what your background is — it should be tested or it should be deleted/archived.
That sounds harsh but it’s the truth.
If you intended to run the software beyond the moment you built it, then it needs to be maintained. It could be used by someone else, or even by you at a later date, it doesn’t matter. Test it.
if software.intended_to_run > once: testing = required

That’s just the reality of the craft. Here is why.
Testing Is Showing Your Work
Remember proofs in math class? Testing is the software equivalent. It’s how you show your work. It’s how you demonstrate that the thing you built actually does what you say it does, and will keep doing it tomorrow.
Chances are your project has dependencies. What happens to those dependencies a month from now? Five years from now? A decade?
Code gets updated. Libraries evolve. APIs change. Testing makes sure that those future dependency updates aren’t going to cause regression issues in your application.
It’s a bet against future problems. If I write tests now, I reduce the time I spend debugging later. That’s not idealism, it’s just math.
T = Σ(B · D) − C

Where B = probability of a bug, D = debug time, C = cost of writing tests, and T = time saved.
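Plugging hypothetical numbers into that formula makes the bet concrete (the bug probabilities and hours below are made up for illustration):

```python
def time_saved(bugs, cost_of_tests):
    """T = Σ(B · D) − C: expected debug time avoided, minus test-writing cost.

    `bugs` is a list of (probability, debug_hours) pairs, one per bug class.
    """
    return sum(p_bug * debug_hours for p_bug, debug_hours in bugs) - cost_of_tests

# Hypothetical: three likely bug classes versus 2 hours writing tests up front.
bugs = [(0.6, 4.0), (0.3, 8.0), (0.1, 16.0)]
print(time_saved(bugs, cost_of_tests=2.0))  # 0.6*4 + 0.3*8 + 0.1*16 - 2 = 4.4
```

A positive T means the tests pay for themselves in expectation; drive C down (more on that below) and the bet only gets better.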
Protecting Your Team’s Work
If you’re working on a team at the ole' day job and you want to make sure the code other people are adding isn’t breaking the stuff you’re working on, or the stuff you worked on six months ago: add tests.
Tests give you that safety net. They’re the contract that says “this thing works, and if someone changes it in a way that breaks it, we’ll know immediately.”
Without tests, you’re essentially hoping for the best, and hope isn’t a good bet when it comes to the future of a software-based business.
Your Customers Are Not Your QA Team
Auto-deploying to production without any testing or verification process? That’s just crazy. You shouldn’t be implicitly or explicitly asking your customers to test your software. It’s not their responsibility. It’s yours.
Your job is to produce software that’s as bug-free as possible. Software that people can rely on. Reliable, maintainable software, that’s what you owe the people using what you build.
Bringing Testing to the Table
Look, I get it. Writing tests isn’t the most fun part of the job. However, a lot has changed in the past couple of years. You might have heard about this whole AI thing? With the agents we all have available to us, we can add tests with as few as 5 words.
“Write tests on new code.”
Looking back at that formula, we can see that C, the cost of writing tests, is quickly approaching zero. It just takes a bit of time for the tests to be written, and a bit of time to verify that the tests the agent added are useful.
Don’t worry about doing everything at the start, like setting up a full CI pipeline to run the tests. Just start with the 5 words and add the complicated bits later.
No excuses, just testing.
-
Serverless and Edge Computing: A Practical Guide
Serverless and edge computing have transformed how we deploy and scale web applications. Instead of managing servers, you write functions that automatically scale from zero to millions of users.
Edge computing takes this further by running code geographically close to users for minimal latency. Let’s break down how these technologies work and when you’d actually want to use them.
What is Serverless?
Serverless doesn’t mean “no servers”; it means you don’t manage them. The provider handles infrastructure, scaling, and maintenance. You just write “functions”.
The functions are stateless, auto-scaling, and you only pay for execution time. So what’s the tradeoff? Well, there are several, but the first one is cold starts: the first request after idle time is slower because the container needs to spin up.
The serverless Platforms as a Service are also sticky, preventing you from easily moving to another platform.
They are stateless, meaning each invocation is independent and doesn’t retain any state between invocations.
In some cases, they don’t run Node, so they behave much differently when building locally, which complicates development and testing.
Each request is handled by a new function instance, so you can imagine that if you have a site that gets a lot of traffic and makes a lot of requests, this will lead to expensive hosting bills, or to you playing the pauper on social media.
Traditional vs. Serverless vs. Edge
Think of it this way:
- Traditional servers are always running, always costing you money, and you handle all the scaling yourself. Great for predictable, high-traffic workloads. Lots of different options for hosting and scaling.
- Serverless (AWS Lambda, Vercel Functions, GCP Functions) spins up containers on demand and kills them when idle. Auto-scales from zero to infinity. Cold starts around 100-500ms.
- Edge (Cloudflare Workers, Vercel Edge) uses V8 Isolates instead of containers, running your code in 200+ locations worldwide. Cold starts under 1ms.
Cost Projections
Here’s how the costs break down at different scales:
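Numbers like the Lambda column can be sanity-checked with a back-of-envelope script. This is a sketch under assumed prices and workload parameters — roughly AWS’s published on-demand rates (~$0.20 per million requests plus a per-GB-second compute charge, with a 1M-request / 400k GB-s free tier) and an assumed 200ms at 512MB per request. It won’t match any table exactly, but it lands in the right ballpark:

```python
def lambda_monthly_cost(requests, avg_ms=200, mem_gb=0.512,
                        price_per_m_req=0.20, price_gb_s=0.0000166667,
                        free_m_req=1.0, free_gb_s=400_000):
    """Back-of-envelope AWS Lambda bill. All prices and workload numbers are
    assumptions that roughly track published rates; check the real pricing page."""
    m_req = requests / 1_000_000
    gb_s = requests * (avg_ms / 1000) * mem_gb          # total compute consumed
    req_cost = max(m_req - free_m_req, 0) * price_per_m_req
    compute_cost = max(gb_s - free_gb_s, 0) * price_gb_s
    return req_cost + compute_cost

for n in (1_000_000, 10_000_000, 100_000_000):
    print(f"{n:>11,} req/mo ~ ${lambda_monthly_cost(n):.2f}")
```

At 1M requests the free tier swallows everything; past ~100M/month the compute charge dominates, which is exactly where a flat-rate VPS starts winning.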
| Requests / Month | ~RPS (Avg) | AWS Lambda | Cloudflare Workers | VPS / K8s Cluster | Winner |
| --- | --- | --- | --- | --- | --- |
| 1 Million | 0.4 | $0.00 (Free Tier) | $0.00 (Free Tier) | $40–$60 (Min HA Setup) | Serverless |
| 10 Million | 4.0 | ~$12 | ~$5 | $40–$100 | Serverless |
| 100 Million | 40 | ~$120 | ~$35 | $80–$150 | Tie / Workers |
| 500 Million | 200 | ~$600 | ~$155 | $150–$300 | VPS / Workers |
| 1 Billion | 400 | ~$1,200+ | ~$305 | $200–$400 | VPS / EC2 |

The Hub and Spoke Pattern
Also called the citadel pattern, this is where serverless and traditional infrastructure stop competing and start complementing each other. The idea is simple: keep a central hub (your main application running on containers or a VPS) and offload specific tasks to serverless “spokes” at the edge.
Your core API, database connections, and stateful logic stay on traditional infrastructure where they belong. But image resizing, auth token validation, A/B testing, geo-routing and rate limiting all move to edge functions that run close to the user.
When to Use Serverless
- Unpredictable or spiky traffic — APIs that go from 0 to 10,000 requests in minutes (webhooks, event-driven workflows)
- Lightweight, stateless tasks — image processing, PDF generation, sending emails, data transformation
- Low-traffic side projects — anything that sits idle most of the time and you don’t want to pay for an always-on server… and you don’t know how to set up a Coolify server.
- Edge logic — geolocation routing, header manipulation, request validation before it hits your origin
When to Use Containers / VPS
- Sustained high traffic — once you’re consistently above ~100M requests/month, a VPS is cheaper (see the table above)
- Stateful workloads — WebSocket connections, long-running processes, anything that needs to hold state between requests
- Database-heavy applications — connection pooling and persistent connections don’t play well with serverless cold starts
- Complex applications — monoliths or microservices that need shared memory, background workers, or cron jobs
The Hybrid Approach
The best architectures often use both. It depends on your specific use case and requirements. It depends on the team, the budget, and the complexity of your application.
Knowing the tradeoffs is the difference between a seasoned developer and a junior. It’s important that you make the right decisions based on your needs and constraints.
Good luck and godspeed!
/ DevOps / Development / Serverless / Cloud
-
Defending Your Node Modules: Security Tools and When to Rewrite Dependencies
This week I’ve been on a bit of a JavaScript kick: writing about why Vitest beats Jest, comparing package managers, diving into Svelte 5. But there’s one topic we shouldn’t forget: security.

node_modules is the black hole directory that we all joke about and pretend is fine. Let’s talk about how to actually defend against the problems lurking in those deep depths, and when a rewrite might make sense.
The Security Toolkit You Actually Need
You’re going to need a mix of tools to both detect bad code and prevent it from running. No single tool covers everything (or should), so here are some options to consider:
Socket does behavioral analysis on packages. It looks at what the code is actually doing. Is it accessing the network? Reading environment variables? Running install scripts? These are the sketchy behaviors that signal a compromised or malicious package. Socket is great at catching supply chain attacks that traditional vulnerability scanners miss entirely.
Snyk handles vulnerability scanning. It checks your entire dependency tree against a massive database of known vulnerabilities and is really good at finding transitive problems, those vulnerabilities buried three or four levels deep in your dependency chain that you’d never find manually.
LavaMoat takes a different approach. It creates a runtime policy that prevents libraries from doing things they shouldn’t be doing, like making network requests when they’re supposed to be a string formatting utility. Think of it as a permissions system for your dependencies.
And then there’s Dependabot from GitHub, which automatically opens pull requests to update vulnerable dependencies. This is honestly the minimum of what you should be doing. If you’re not running Dependabot or a similar service, start now.
Each of these tools catches different things. Socket finds malicious behavior, Snyk finds known vulnerabilities, LavaMoat enforces runtime boundaries, and Dependabot keeps things updated. Together, they give you solid coverage.
When to Vendor or Rewrite a Dependency
Now let’s talk about something I think more developers should be doing: auditing your dependencies and asking when a rewrite makes sense.
With AI tools available now, this has become incredibly practical. Here’s when I think you should seriously consider replacing a dependency with your own code:
- You’re using 1% of the library. If you imported a massive package just to use one function, you don’t need the whole thing. Have your AI tool write a custom function that does exactly what you need. You shouldn’t be importing a huge library for a single utility. It’s … ahhh, well, stupid.
- It’s a simple helper. Things like isEven, leftPad, or a basic string formatter. AI can write these in seconds, and you eliminate an entire dependency from your tree. Fewer dependencies means a smaller attack surface.
- The package is abandoned. The last update was years ago, there’s a pile of open issues, and nobody’s home. You’re better off asking your LLM to rewrite the functionality for your specific project. Own the code yourself instead of depending on something that’s collecting dust.
When You Should Absolutely NOT Rewrite
This is just as important. Some things should stay as battle-tested community libraries, no matter how good your AI tools are:
- Cryptography, authentication, and authorization. It would be incredibly foolish to try to rewrite bcrypt or roll your own JWT validation. These libraries have been audited, attacked, and hardened over years. Use them.
- Complex parsers with extensive rule sets. A markdown parser, for example, has a ton of edge cases and rules that need to be exactly right. You don’t want to accidentally ship your own flavor of markdown. Same goes for HTML sanitizers: getting sanitization wrong means introducing XSS vulnerabilities. Trust the community libraries here.
- Date and time math. Time zones are a deceptively hard problem in programming. Don’t rewrite date-fns or dayjs. Just don’t.
- Libraries that wrap external APIs. If something integrates with Stripe, AWS, or any API that changes frequently, you do not want to maintain that yourself. The official SDK maintainers track API changes so you don’t have to. Just, no and thank you.
The pattern is pretty clear: if getting it wrong has security implications or if the domain is genuinely complex with lots of edge cases, use the established library. If it’s simple utility code or you’re barely using the package, consider a rewrite.
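To show how little code a vendored helper really is, here’s the same idea sketched in Python rather than JavaScript — local ports of the classic is-even / left-pad examples, owned outright instead of imported:

```python
# Instead of pulling in a package for trivialities, own ~10 lines yourself.
# Python sketches of the classic JS micro-dependencies (is-even, left-pad):

def is_even(n: int) -> bool:
    """True when n is divisible by 2."""
    return n % 2 == 0

def left_pad(s: str, width: int, fill: str = " ") -> str:
    """Pad `s` on the left with `fill` until it is at least `width` chars long."""
    if len(fill) != 1:
        raise ValueError("fill must be a single character")
    return s if len(s) >= width else fill * (width - len(s)) + s

print(left_pad("42", 5, "0"))  # 00042
```

Ten lines, no install scripts, no transitive tree, nothing for a supply-chain attack to hide in.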
A Fun Side Project Idea
If you’re looking for yet another side project (YASP), here’s one that would be a super useful CLI tool: I’d probably reach for Go and build a TUI that scans your node_modules and generates a list of rewrite recommendations.

I think that’d be a really fun build, and honestly something the JavaScript ecosystem could use.
/ security / javascript / Node / Dependencies / Devtools
-
I don’t have a problem with .NET Core, but .NET Framework (ASP.NET Web Forms) is another story. UX anti-patterns abound. Can’t go directly to a link. Terrible SEO. Broken back button all over the place… The __VIEWSTATE in particular needs to die in a fire.
-
Ditch Jest for Vitest: A Ready-Made Migration Prompt
If you’ve ever sat there watching Jest crawl through your TypeScript test suite, you know pain. I mean, I know your pain.
Switch to Vitest and the speed difference is genuinely dramatic. The reasons Jest is slow are easy to figure out; there are plenty of explanations out there, so I’ll leave it to you to go look them up.
I put together a prompt you can hand to Claude (or any AI assistant) that will handle the migration for you. Let me know how it goes!
The Migration Prompt
Convert all Jest tests in this project to Vitest. Here's what to do:

## Setup

1. Remove Jest dependencies (`jest`, `ts-jest`, `@types/jest`, `babel-jest`, any jest presets)
2. Install Vitest: `pnpm add -D vitest`
3. Remove `jest.config.*` files
4. Add a `test` section to `vite.config.ts` (or create `vitest.config.ts` if no Vite config exists):

   import { defineConfig } from 'vitest/config'

   export default defineConfig({
     test: {
       globals: true,
     },
   })

5. Update the `test` script in `package.json` to `vitest`

## Test File Migration

For every test file:

1. Replace imports — Remove any `import ... from '@jest/globals'`. If `globals: true` is set, no imports needed. Otherwise add:

   import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'

2. Replace `jest` with `vi` everywhere:
   - jest.fn() → vi.fn()
   - jest.mock() → vi.mock()
   - jest.spyOn() → vi.spyOn()
   - jest.useFakeTimers() → vi.useFakeTimers()
   - jest.useRealTimers() → vi.useRealTimers()
   - jest.advanceTimersByTime() → vi.advanceTimersByTime()
   - jest.clearAllMocks() → vi.clearAllMocks()
   - jest.resetAllMocks() → vi.resetAllMocks()
   - jest.restoreAllMocks() → vi.restoreAllMocks()
   - jest.requireActual() → vi.importActual() (note: async in Vitest)
3. Fix mock hoisting — vi.mock() is hoisted automatically, but variables used in the mock factory must be prefixed with `vi` or declared inside the factory.
4. Fix jest.requireActual — This becomes vi.importActual() and returns a Promise:

   // Jest
   jest.mock('./utils', () => ({
     ...jest.requireActual('./utils'),
     fetchData: jest.fn(),
   }))

   // Vitest
   vi.mock('./utils', async () => ({
     ...(await vi.importActual('./utils')),
     fetchData: vi.fn(),
   }))

5. Snapshot tests work the same way. No changes needed.
6. Timer mocks — same API after the jest → vi rename.
7. Module mocks with __mocks__ directories work identically.

## TypeScript Config

Add Vitest types to tsconfig.json:

   {
     "compilerOptions": {
       "types": ["vitest/globals"]
     }
   }

## After Migration

1. Delete leftover Jest config files
2. Update CI config to use `vitest run`
3. Run tests and fix any remaining failures
If you’re still on Jest and your TypeScript test suite is dragging, give Vitest a shot. The migration is low-risk, the speed improvements are real, and with the prompt above, the hardest parts are handled. ☕
/ Development / Testing / Typescript / Vitest / Jest
-
JavaScript Package Managers: NPM, Yarn, PNPM, and Bun Compared
If you’ve been writing JavaScript for any length of time, you’ve probably had opinions about package managers. Everyone has used npm because it’s the default. Maybe you switched to Yarn back in 2016 and haven’t looked back. These days, there are better options.
That may seem like a bold statement, but bear with me. This article is a mix of opinions and facts. The package manager landscape has changed quite a bit in the last decade, and it’s worth exploring. Let’s break it down.
NPM: The Default Everyone Knows
npm is the package manager that ships with Node. It works. Everyone knows it. Node modules are straightforward to reason about, and security has been improving over the years.
But npm has historically struggled with performance. That’s partly a design problem: it was so widely adopted that making fundamental speed improvements meant risking breakage for the massive ecosystem already depending on it. When you’re supporting millions of packages, you have to be careful about backward-compatibility breaks, which makes optimization a lot harder.
This performance gap is exactly what opened the door for alternatives.
Yarn: The Pioneer That Over-Optimized
Yarn showed up in 2016, created by Facebook, and it genuinely pushed the ecosystem forward. It parallelized downloads, introduced offline caching, and most notably, introduced lock files to JavaScript. npm eventually adopted lock files too, so Yarn’s influence on the broader ecosystem is undeniable.
Lock files did exist for other languages before 2016, such as Ruby and PHP, but Yarn was the first JavaScript package manager to include them.
The problem came with Yarn 2. It’s a classic case of over-optimization.
Yarn 2 introduced Plug'n'Play mode, which replaces your `node_modules` folder with zip files. We're on Yarn 4 now, and while you can swap between modes, if you're in zip mode it becomes genuinely painful to inspect the actual JavaScript code you're installing. You have to unzip things, dig through archives, and it adds friction where there shouldn't be any.

If you enjoy making JavaScript development harder than it needs to be, Yarn's PnP mode has you covered. It's your toxic coworker's favorite tool.
PNPM: The Clear Upgrade
If you look at the benchmarks, pnpm wins in almost every common scenario. Running install with a warm cache, lock file, and existing node modules? Faster than npm. A clean install with nothing cached? 7 seconds versus 30 seconds. That’s not a marginal improvement.
Speed isn't even the best part. pnpm uses hard links from a centralized store instead of copying packages into every project's `node_modules`. Depending on the size of your projects, you can save up to 70% of your disk space compared to npm. If you're working on multiple JavaScript projects (and who isn't?), that adds up fast.

pnpm also handles the `node_modules` structure in a way that's strict by default, which means your code can't accidentally import packages you haven't explicitly declared as dependencies. It catches bugs that npm would let slide.
So pnpm is the clear winner, right? Well, there’s one more contender we haven’t talked about yet.
Bun: The Speed Demon
Bun was released in 2023, and it's noticeably faster than pnpm.
The reason comes down to architecture. pnpm is written in TypeScript and runs on Node, which means every time you run `pnpm install`, your computer has to start the V8 engine, load all the JavaScript, compile it, and then ask the operating system to do the actual work. That's a lot of overhead.

Bun is a compiled binary written in Zig. It talks directly to your kernel: no middleman, no V8 engine slowing down every tiny decision. On top of that, Bun is hyper-focused on optimizing system calls. Instead of doing file operations one at a time (open file A, write file A, close file A, repeat a thousand times), it aggressively batches them. The result is speed improvements not just in disk operations but in everything it does.
Earlier versions of Bun had an annoying quirk similar to Yarn's: a binary lock file that was difficult to audit by hand. That's been fixed. Bun now uses a readable lock file, which removes the biggest objection people had.
Which raises the question…
So Why Isn’t Everyone Using Bun?
The short answer: it's complicated. Bun isn't just a package manager; it also replaces Node as your runtime. If you're using Bun as your runtime, using it as your package manager makes total sense. Everything fits together.
But most teams are still on Node. And when you’re on Node, pnpm is the clearer choice for everyone involved. A new developer joining your team sees pnpm and immediately knows, “Oh, this is a JavaScript project, I know how this works.” Bun as a package manager on top of Node adds a layer of “wait, why are we using this?” that you have to explain.
Maybe that changes in the future as Bun’s runtime adoption grows. I’m sure the Bun team is working hard to make that transition as smooth as possible. But the reality right now is that most JavaScript projects are running on Node.
My Recommendation
If you’re starting a new project or looking to switch:
- Using Bun as your runtime? Use Bun for package management too. It’s the fastest option and everything integrates cleanly.
- On Node (most of us)? Use pnpm. It’s faster than npm, saves disk space, and is strict in ways that catch real bugs. Your team will thank you.
- Still on npm? You’re not doing anything wrong, but you’re leaving performance and disk space on the table for no real benefit.
- On Yarn PnP? I have questions, but I respect your commitment.
The JavaScript ecosystem moves fast, and if you haven’t revisited your package manager choice in a while, it might be worth running a quick benchmark on your own project. The numbers might surprise you.
/ Tools / Development / javascript
-
Why Svelte 5 Wants `let` Instead of `const` (And Why Your Linter Is Confused)
If you've been working with Svelte 5 and a linter like Biome, you might have run into this situation: the linter tells you to use `const` instead of `let`, but Svelte actually needs that `let`.

Have you ever wondered why? Here's an attempt to explain it.
Svelte 5’s reactivity model is different from the rest of the TypeScript world, and some of our tooling hasn’t quite caught up yet.
The `const` Rule Everyone Knows

In standard TypeScript and React, the `prefer-const` rule is a solid best practice. If you declare a variable and never reassign it, use `const`. It communicates intent clearly: this binding won't change. (There's some confusion when it comes to objects declared with `const`, since their properties can still be mutated, but the binding itself never changes.)

Let's take a look at a React example:

```ts
// React — const makes perfect sense here
const [count, setCount] = useState(0);
const handleClick = () => setCount(count + 1);
```

`count` is a number (primitive). `setCount` is a function. Neither binding is ever reassigned, so it makes sense to use `const`.

Svelte 5 Plays by Different Rules
Svelte 5 introduced runes — reactive primitives like `$state()`, `$derived()`, and `$props()` — that bring fine-grained reactivity directly into JavaScript. The Svelte compiler transforms these declarations into getters and setters behind the scenes.
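Roughly, you can picture that transformation like this. It's a hypothetical plain-JavaScript sketch of the idea only (`createSignal` is my name, not real compiler output): the rune becomes a signal, and reads and writes of the `let` variable are rewritten into `get`/`set` calls.

```javascript
// Hypothetical sketch of the idea (not real Svelte compiler output).
function createSignal(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get() { return value; },
    set(next) { value = next; subscribers.forEach((fn) => fn(value)); },
    subscribe(fn) { subscribers.add(fn); },
  };
}

// What you write:   let count = $state(0); count++;
// Roughly what runs after compilation:
const count = createSignal(0);
count.subscribe((v) => console.log('update the DOM with', v));
count.set(count.get() + 1); // the reassignment a script-only linter never sees
```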
The value does get reassigned, even if your code looks like a simple variable.
```svelte
<script lang="ts">
  let count = $state(0);
</script>

<button onclick={() => count++}>
  Clicked {count} times
</button>
```

Biome Gets This Wrong
But they are trying to make it right. Biome added experimental Svelte support in v2.3.0, but it has a significant limitation: it only analyzes the `<script>` block in isolation. It doesn't see what happens in the template. So when Biome looks at this:

```svelte
<script lang="ts">
  let isOpen = $state(false);
</script>

<button onclick={() => isOpen = !isOpen}>
  Toggle
</button>
```

it only sees `let isOpen = $state(false)` and thinks: "this variable is never reassigned, use `const`." It completely misses the `isOpen = !isOpen` happening in the template markup.

If you run `biome check --write`, it will automatically change `let` to `const` and break your app. The Biome team has acknowledged this as an explicit limitation of their partial Svelte support.

For now, the workaround is to either disable the `useConst` rule for Svelte files or add `biome-ignore` comments where needed.

What About ESLint?

The Svelte ESLint plugin community has proposed two new rules to handle this properly: `svelte/rune-prefer-let` (which recommends `let` for rune declarations) and a Svelte-aware `svelte/prefer-const` that understands reactive declarations. These would give you proper linting without the false positives.

My Take
Svelte 5’s runes are special and deserve their own way of handling variable declarations.
React hooks are different.
Svelte’s compiler rewrites your variable.
I like Svelte.
Please fix Biome.
Don’t make me use eslint.
/ Programming / svelte / Typescript / Biome / Linting
-
What can you change in the next 42 days?
- career
- health
- growth
The secret isn’t motivation. It’s just doing the thing, even when you don’t feel like it.
/ Habits
-
My current blog streak
🔥 Current streak: 42 days (started Jan 4)
- 🏆 Longest streak: 42 days
- 📝 Total posts: 42 (unique publishing days)
- 🎯 Year goal: 11.5% (42/365)
I don’t think this means anything though. 🤔
Maybe people will care at some point?
If I care, that’s all that matters.
-
The BEST Ghostty version.
Can we just like make a new semver so the 1.2.3 never changes.
Our versions don’t make sense.
Welcome to prompting 101.
Make a new semver.
Email me your book reports.
Or post in the comments.
-
Don’t sleep on OpenClaw. There are a ton of people building with it right now who aren’t talking about it yet. The potential is real, and when those projects start surfacing, it’s going to turn heads. Sometimes the most exciting stuff happens quietly before it hits the mainstream.
/ AI / Open-source / Openclaw
-
REPL-Driven Development Is Back (Thanks to AI)
So you’ve heard of TDD. Maybe BDD. But have you heard of RDD?
REPL-driven development. I think most programmers these days don’t work this way. The closest equivalent most people are familiar with is something like Python notebooks—Jupyter or Colab.
But RDD is actually pretty old. Back in the 70s and 80s, Lisp and Smalltalk were basically built around the REPL. You’d write code, run it immediately, see the result, and iterate. The feedback loop was instant.
Then the modern era of software happened. We moved to a file-based workflow, probably stemming from Unix, C, and Java. You write source code in files. There’s often a compilation step. You run the whole thing.
The feedback loop got slower and more disconnected. Some of the languages we use today (Python, Ruby, JavaScript, PHP) include a REPL, but that's not usually how we develop. We write files, run tests, refresh browsers.
Here’s what’s interesting: AI coding assistants are making these interactive loops relevant again.
The new RDD is natural language as a REPL.
Think about it. The traditional REPL loop was:
- Type code
- System evaluates it
- See the result
- Iterate
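That classic loop is small enough to write down as a toy program. A rough sketch, using a fixed list of inputs instead of stdin, with `eval` standing in for the "evaluate" step (a real REPL parses and sandboxes properly):

```javascript
// Toy read-eval-print loop over a fixed script instead of stdin.
const inputs = ['1 + 1', '[1, 2, 3].map((n) => n * 2).join(",")'];
const results = [];
for (const line of inputs) {
  const result = eval(line); // "evaluate" — a real REPL uses a proper parser
  results.push(result);
  console.log(`> ${line}`);  // echo what was read
  console.log(result);       // print the result
}
```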
The AI-assisted loop is almost identical:
- Type (or speak) your intent in natural language
- AI interprets and generates code
- AI runs it and shows you the result
- Iterate
You describe what you want. The AI writes the code. It executes. You see what happened. If it’s not right, you clarify, and the loop continues.
This feels fundamentally different from the file-based workflow most of us grew up with. You're not thinking about which file to open. You're thinking about what you want to happen, and you're having a conversation until it does.
Of course, this isn't a perfect analogy. With a traditional REPL, you have more control: you understand exactly what's being evaluated because you wrote it.
```python
>>> while True:
...     history.repeat()
```

/ AI / Programming / Development
-
I usually brainstorm spec docs with Gemini or Claude, so if you're like me, this prompt offers interesting insight into your own software decisions.
Based on our previous chats and the previous documents you've helped me with, provide a detailed summary of all my software decisions and preferences when it comes to building different types of applications.

/ AI / Development