javascript
-
I Wrote Multiple CUE Parsers and Benchmarked Them Against JSON
It started yesterday because I wanted to fix JSON.
Well, not fix exactly. More like figure out if there was something better for client-side parsing in the browser. I’d been looking at comparing JSON versus MessagePack when I stumbled into CUE, a configuration language that embeds schema definitions directly in the data.
I posted about it and then thought, well, what if I just actually did it? There was just one problem: the reference CUE implementation is written in Go, which obviously doesn’t help when you need something running in React.
So I wrote a TypeScript parser for CUE. You can find it at cue-ts.
Then I built a benchmark app with 11 different parsing and deserialization approaches, all running client-side in a Vite + React app. Here’s how that turned out.
The Contenders
I tested across three ecosystems:
- JSON: Native `JSON.parse`, JSON + Zod validation, and JSON + Ajv (both pre-compiled and per-call schema compilation).
- CUE: Full AST parsing, a TypeScript deserializer, a fused single-pass scanner, pre-compiled CUE schemas, and even a Rust-to-WASM CUE parser.
- Binary: MessagePack decoding, with and without Zod validation.
Payloads ranged from ~1KB to ~460KB across CUE, JSON, and MessagePack formats.
Understanding the Schema Problem
Before we look at numbers, we need to talk about schema validation.
With Zod, your schema is TypeScript code. It’s set at compile time, so there’s no runtime schema parsing overhead. Ajv in compiled mode takes a JSON Schema document and generates an optimized JavaScript validation function. You do this once, and the runtime cost is essentially zero.
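To see why compile-once wins, here’s a toy sketch of the difference between interpreting a schema object on every call and compiling it into a specialized function up front. This is an illustration of the idea, not Ajv’s actual code generation, and the schema shape is made up for the example:

```typescript
// A minimal made-up schema shape for illustration only -- not JSON Schema.
type FieldType = "string" | "number";
type Schema = Record<string, FieldType>;

// Interpreted: walks the schema object on every single call.
function validateInterpreted(schema: Schema, data: Record<string, unknown>): boolean {
  for (const [key, type] of Object.entries(schema)) {
    if (typeof data[key] !== type) return false;
  }
  return true;
}

// Compiled: pays the schema walk once, then returns a specialized closure.
function compileValidator(schema: Schema): (data: Record<string, unknown>) => boolean {
  const checks = Object.entries(schema); // resolved once, up front
  return (data) => checks.every(([key, type]) => typeof data[key] === type);
}

const schema: Schema = { email: "string", age: "number" };
const validate = compileValidator(schema); // one-time cost

console.log(validate({ email: "a@b.c", age: 30 }));   // true
console.log(validate({ email: "a@b.c", age: "30" })); // false
```

The compiled closure does strictly less work per call, which is the whole trick behind Ajv’s compiled mode.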
But CUE does something different. It embeds schema definitions directly in the data file:
```cue
#User: {
	email: string & =~"^[^@]+@[^@]+$"
	role:  "admin" | "editor" | "viewer"
	age:   int & >=0
}

user: #User & {
	email: "[email protected]"
	role:  "admin"
	age:   30
}
```

Schema and data together, processed in one pass. Sounds elegant, right? The question is whether that elegance costs you speed. 😅
The Results
Here’s the 10KB payload benchmark on Chromium/V8:
| Strategy | Median | vs JSON.parse |
| --- | --- | --- |
| JSON + Ajv (compiled) | 18 μs | ~equivalent |
| JSON.parse | 18 μs | baseline |
| JSON + Zod | 28 μs | 1.6x slower |
| MsgPack Decode | 30 μs | 1.7x slower |
| CUE (compiled schema) | 112 μs | 6.2x slower |
| CUE Fast Deserialize | 114 μs | 6.3x slower |
| CUE Deserialize (WASM) | 755 μs | 41.9x slower |

JSON + Ajv compiled is equivalent to bare `JSON.parse`. The pre-compiled validator adds essentially zero overhead. Case closed? Not quite.
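The medians in these tables come from repeated timed runs. A minimal harness of that general shape, as an illustration rather than the benchmark app’s actual code, might look like:

```typescript
// Time a function over many iterations and report the median, in microseconds.
function medianMicros(fn: () => void, iterations = 100): number {
  const samples: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    fn();
    samples.push((performance.now() - start) * 1000); // ms -> us
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(samples.length / 2)];
}

// A small synthetic payload, stand-in for the real benchmark fixtures.
const payload = JSON.stringify({
  users: Array.from({ length: 100 }, (_, i) => ({ id: i, name: `user${i}` })),
});

const median = medianMicros(() => JSON.parse(payload));
console.log(`JSON.parse median: ${median.toFixed(1)} us`);
```

Medians rather than means because JIT warmup and GC pauses skew the tail badly.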
The Fair Fight
Those benchmarks aren’t comparing the same thing. Zod and Ajv compiled both pre-process their schemas before the benchmark runs. CUE processes its schema on every call. So I ran the fair comparison, schema processing included:
| Strategy | Median |
| --- | --- |
| CUE Fast Deserialize | 114 μs |
| CUE (compiled schema) | 112 μs |
| JSON + Ajv (interpret) | 4,840 μs |

When JSON has to compile its schema on every call, CUE is 43x faster. Cool? All that work really paid off?
Well, the bottleneck for CUE isn’t schema processing. It’s parsing the text format itself. I tried basically every optimization I could think of, and the deserializer still spends nearly all its time scanning characters and building strings and numbers. Without the native C++ code that `JSON.parse` gets for free from the browser engine, a TypeScript parser just can’t close that gap.

MessagePack: Not Worth It
I included MessagePack benchmarks to see if smaller binary payloads would translate to faster parsing. They don’t. MessagePack is consistently slower than `JSON.parse`, even with smaller payloads. The overhead of maintaining a separate binary serialization format on the client side just isn’t worth it for most use cases.

Cross-Browser Highlights
I ran a few tests across engines. Some notable findings:
- Safari (JSC): Zod validation is essentially free; JSON + Zod matches bare `JSON.parse`.
- Chrome (V8): CUE’s fast deserializer benefits from `charCodeAt` optimizations.
- Firefox (SpiderMonkey): the most consistent results across the board.
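The kind of `charCodeAt`-level scanning in play here means hot loops that compare numeric character codes instead of allocating substrings. A sketch of the idea, not the actual cue-ts internals:

```typescript
// Scan an unsigned integer starting at `pos` using charCodeAt,
// avoiding per-character string allocation in the hot loop.
function scanUInt(src: string, pos: number): { value: number; end: number } {
  const ZERO = 48; // char code for '0'
  const NINE = 57; // char code for '9'
  let value = 0;
  let i = pos;
  while (i < src.length) {
    const c = src.charCodeAt(i);
    if (c < ZERO || c > NINE) break;
    value = value * 10 + (c - ZERO); // accumulate digit arithmetically
    i++;
  }
  return { value, end: i };
}

console.log(scanUInt("age: 30}", 5)); // { value: 30, end: 7 }
```

V8 optimizes this pattern well; engines that don’t will show different numbers, which is part of why the cross-browser results diverge.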
So What’s the Answer?
If you can compile your schema ahead of time with Ajv or Zod, do that and use JSON.
It’s not even close. Seriously.
`JSON.parse` benefits from decades of native browser optimization that no TypeScript parser can match, and pre-compiled validation adds essentially zero overhead. CUE is faster in one specific scenario: when you need to process schemas dynamically on every call. 43x faster than per-call Ajv compilation is real, but how often does that actually come up in production? It’s hard to say.
| Use Case | Pick This | Why |
| --- | --- | --- |
| Hot-path APIs | JSON + Ajv (compiled) | Fastest possible |
| TypeScript projects | JSON + Zod | Best DX |
| Dynamic schemas | CUE | 43x faster than per-call Ajv |
| Config files | CUE | Single source of truth |
| Bandwidth-constrained | MsgPack | 30-60% smaller payloads |

One thing I considered but didn’t build for this post: a conversion layer that stores everything as CUE on the server, then compiles it down to JSON with Zod or Ajv schemas for the browser. You’d get CUE’s authoring experience with JSON’s runtime speed. That feels like an interesting project, and maybe I’ll get to it next.
Both the cue-ts parser and the benchmark app are open source. Try them yourself and let me know what you think.
/ Programming / Webdev / javascript / Benchmarks
-
From SUnit to Vitest: A Brief History of JavaScript Testing
I care a lot about testing. I don’t know if that’s obvious yet, but hopefully it’s obvious. I wanted to trace the lineage of the testing tools we use today, because I think understanding where they came from helps you appreciate why things work the way they do.
Where It All Started
Automated testing as we know it really started with SUnit for the Smalltalk language. Kent Beck created it back in 1989, and it established the patterns that every test framework still borrows from today.
In 1997, Kent Beck and Erich Gamma ported those ideas to Java and created JUnit. JUnit was, and still is, incredibly influential on pretty much every unit testing framework you’ve ever used. The test runner, assertions, setup and teardown, all of that traces back to JUnit.
But I’m going to focus on the JavaScript side of things here.
Jest: The Facebook Era
Jest was originally created at Facebook in 2011 as part of a major platform rewrite. It became the dominant testing framework for React and Node.js codebases, and in 2022, Facebook released it to the OpenJS Foundation.
Jest works well, but it carries some baggage. It requires a transpilation pipeline, something that was common a decade ago but feels burdensome now. If you want to use ES modules, there’s an extra step involved. It adds friction.
So what else is there?
Vitest: The Modern Alternative
Vitest is a modern alternative, built on top of Vite. It supports ES modules and TypeScript out of the box, so there’s no transpilation step needed. And because Vite has HMR (hot module replacement), the watch mode for rerunning tests is very fast.
Vitest was initially created by Anthony Fu and the company behind Vue.js. The initial commit was in December 2021, so it’s a relatively recent project. They’ve made incredible progress since then.
Vitest uses Vite under the hood. So let’s look at that briefly.
Why Vite Is So Fast
Vite’s speed comes from esbuild, which is written in Go. It compiles directly to native machine code, so it bypasses the JavaScript engine’s overhead entirely. It can transform TypeScript significantly faster because it doesn’t need to go through the JS engine. And because it’s Go, it’s multithreaded.
But things are changing. In Vite 8, the bundler is moving from esbuild to Rolldown, a new tool written in Rust that combines the best of esbuild and Rollup.
Why? Currently, Vite uses esbuild during development but switches to Rollup for production builds. Two different tools for two different use cases. Rolldown unifies both into a single tool that handles dev and production.
What Did We Learn?
Hopefully something! How about a mini review to drive it home:
- 1989: SUnit (Smalltalk) — Kent Beck starts it all
- 1997: JUnit (Java) — the template everything else follows
- 2011: Jest — Facebook’s testing framework, now under OpenJS Foundation
- 2021: Vitest — modern, fast, ESM-native testing built on Vite
- Coming soon: Rolldown replaces esbuild + Rollup in Vite 8
That’s all for now!
/ Programming / Testing / javascript
-
Garbage Collection: How Python, JavaScript, and Go Clean Up After Themselves
It’s Garbage day for me IRL and I wanted to learn more about garbage collection in programming. So guess what? Now you get to learn more about it too.
We’re going to focus on the three languages I mostly work with: Python, JavaScript, and Go. We’ll skip the rest for now.
Python: Reference Counting
Python’s approach is the most straightforward of the three. Every object keeps track of how many things are pointing to it. When that count drops to zero, the object gets cleaned up immediately. Simple.
There’s a catch, though. If two objects reference each other but nothing else references either of them, the count never hits zero. That’s a reference cycle, and Python handles it with a secondary cycle detector that periodically scans for these orphaned clusters. But for the vast majority of objects, reference counting does the job without any fancy algorithms.
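Here’s a toy simulation of reference counting and the cycle problem, written in TypeScript since that’s this blog’s usual language (CPython does the real thing in C):

```typescript
// Toy model of reference counting and why cycles defeat it.
type Obj = { name: string; refCount: number; refs: Obj[] };

function makeObj(name: string): Obj {
  return { name, refCount: 0, refs: [] };
}

function addRef(from: Obj | null, to: Obj): void {
  if (from) from.refs.push(to); // record the edge (null = an external reference)
  to.refCount++;
}

function dropRef(to: Obj, freed: string[]): void {
  to.refCount--;
  if (to.refCount === 0) {
    freed.push(to.name); // freed immediately -- Python's common fast path
    for (const child of to.refs) dropRef(child, freed);
  }
}

const freed: string[] = [];
const a = makeObj("a");
addRef(null, a);   // one external reference to a
dropRef(a, freed); // count hits zero -> freed right away
console.log(freed); // ["a"]

// The cycle case: x <-> y with no external references.
// Both counts sit at 1 forever, so pure refcounting never frees them.
// This is what Python's separate cycle detector exists to catch.
const x = makeObj("x");
const y = makeObj("y");
addRef(x, y);
addRef(y, x);
console.log(x.refCount, y.refCount); // 1 1
```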
JavaScript (V8): Generational Garbage Collection
Most JavaScript you encounter is running on V8: Chromium-based browsers, Node, and Deno. V8 uses a generational strategy based on a simple observation: most objects die young.
V8 splits memory into two main areas:
- The Nursery (Young Generation): New objects land here. This space is split into two halves and uses a scavenger algorithm. It’s fast because it only deals with short-lived variables — and most variables are short-lived.
- Old Space (Old Generation): No, not the deodorant company. Objects that survive a couple of scavenger rounds get promoted here. Old space uses a mark-and-sweep algorithm, which is slower but handles long-lived objects more efficiently.
First it asks, “How long has this object been around?” New stuff gets the quick treatment, and anything that sticks around gets put out to pasture, where time is less of a premium. It’s a smart tradeoff between speed and memory efficiency.
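The promotion rule can be sketched as a toy. Real V8 copies live objects between semispaces; this just models the aging logic, with the survival threshold of two rounds taken from the description above:

```typescript
// Toy generational collector: young objects are scavenged cheaply;
// survivors of 2 rounds get promoted to the old generation.
type HeapObj = { name: string; survivals: number };

const nursery: HeapObj[] = [];
const oldSpace: HeapObj[] = [];

function alloc(name: string): HeapObj {
  const obj = { name, survivals: 0 };
  nursery.push(obj); // new objects always land in the nursery
  return obj;
}

// `live` is the set of objects still reachable at scavenge time.
function scavenge(live: Set<HeapObj>): void {
  for (const obj of nursery.splice(0)) {
    if (!live.has(obj)) continue;                // dead young object: dropped for free
    obj.survivals++;
    if (obj.survivals >= 2) oldSpace.push(obj);  // promoted to old space
    else nursery.push(obj);                      // survives, but stays young
  }
}

const keeper = alloc("config");
alloc("temp1");
alloc("temp2");

scavenge(new Set([keeper])); // temps die young, keeper survives round 1
scavenge(new Set([keeper])); // keeper survives round 2 -> promoted
console.log(oldSpace.map((o) => o.name)); // ["config"]
console.log(nursery.length);              // 0
```

The cheap part is exactly the `continue`: dead young objects cost nothing, and most objects are dead young objects.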
Go: Tricolor Mark-and-Sweep
Go’s garbage collector also uses mark-and-sweep, but with a twist called tricolor marking. Here’s how it works:
- White objects: These might be garbage but haven’t been checked yet. Everything starts as white.
- Gray objects: The collector has reached these, but it hasn’t scanned their children (the things they reference) yet.
- Black objects: These are confirmed alive — the collector has scanned them and all their references.
The collector starts from known root objects, marks them gray, then works through the gray set — scanning each object’s references and marking them gray too, while the scanned object itself turns black. When there are no more gray objects, anything still white is unreachable and gets cleaned up.
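A sequential toy version of that loop, just to show the coloring logic (Go’s real collector is concurrent and far more sophisticated):

```typescript
// Toy tricolor mark-and-sweep over a simple object graph.
type Color = "white" | "gray" | "black";
type HeapObj = { name: string; color: Color; refs: HeapObj[] };

function obj(name: string, refs: HeapObj[] = []): HeapObj {
  return { name, color: "white", refs }; // everything starts white
}

function markAndSweep(roots: HeapObj[], heap: HeapObj[]): string[] {
  // Roots go gray: reached, but children not yet scanned.
  const gray: HeapObj[] = [];
  for (const r of roots) {
    r.color = "gray";
    gray.push(r);
  }

  // Work through the gray set: scan children, then blacken.
  while (gray.length > 0) {
    const o = gray.pop()!;
    for (const child of o.refs) {
      if (child.color === "white") {
        child.color = "gray";
        gray.push(child);
      }
    }
    o.color = "black"; // confirmed alive, fully scanned
  }

  // Sweep: anything still white is unreachable.
  return heap.filter((o) => o.color === "white").map((o) => o.name);
}

const leaf = obj("leaf");
const root = obj("root", [leaf]);
const orphan = obj("orphan");       // nothing references this
const cycleA = obj("cycleA");
const cycleB = obj("cycleB", [cycleA]);
cycleA.refs.push(cycleB);           // an unreachable cycle -- still collected

const unreachable = markAndSweep([root], [root, leaf, orphan, cycleA, cycleB]);
console.log(unreachable); // ["orphan", "cycleA", "cycleB"]
```

Note that the unreachable cycle is collected with no special handling, unlike in the reference-counting model.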
Go’s approach is notable because it runs concurrently with your program, which keeps pause times short while the GC is running.
Garbage, Collected
Each approach reflects the language’s priorities:
- Python optimizes for simplicity and predictability. Objects get cleaned up correctly when they’re no longer needed
- JavaScript optimizes for speed in interactive applications. Quick cleanup for short-lived objects, thorough cleanup for the rest
- Go optimizes for low latency. Concurrent collection is great for server side processes
References
- Design of CPython’s Garbage Collector — Python Developer’s Guide deep dive into reference counting and cycle detection
- Orinoco: Young Generation Garbage Collection — V8’s parallel scavenger and generational GC design
- A Guide to the Go Garbage Collector — Official Go documentation on the concurrent tricolor collector
/ Programming / Golang / Python / javascript / Garbage-collection
-
I compared npm, Yarn, pnpm, and Bun. TLDR version: pnpm wins for most teams, Bun wins if you’re already on the runtime.
Has anyone switched their whole team to Bun yet? How’d that go?
/ Programming / Tools / javascript
-
I wrote about securing node_modules. Socket, Snyk, Dependabot — each catches different things. I also try to answer when it makes sense to use AI to rewrite simple deps you barely use.
Anyone want to build that CLI?
/ Programming / security / javascript
-
Defending Your Node Modules: Security Tools and When to Rewrite Dependencies
This week I’ve been on a bit of a JavaScript kick; writing about why Vitest beats Jest, comparing package managers, diving into Svelte 5. But there’s one topic that we shouldn’t forget: security.
`node_modules` is the black-hole directory that we all joke about and pretend is fine. Let’s talk about how to actually defend against problems lurking in those depths, and when a rewrite might make sense.
The Security Toolkit You Actually Need
You’re going to need a mix of tools to both detect bad code and prevent it from running. No single tool covers everything (nor should it), so here are some options to consider:
Socket does behavioral analysis on packages. It looks at what the code is actually doing. Is it accessing the network? Reading environment variables? Running install scripts? These are the sketchy behaviors that signal a compromised or malicious package. Socket is great at catching supply chain attacks that traditional vulnerability scanners miss entirely.
Snyk handles vulnerability scanning. It checks your entire dependency tree against a massive database of known vulnerabilities and is really good at finding transitive problems, those vulnerabilities buried three or four levels deep in your dependency chain that you’d never find manually.
LavaMoat takes a different approach. It creates a runtime policy that prevents libraries from doing things they shouldn’t be doing, like making network requests when they’re supposed to be a string formatting utility. Think of it as a permissions system for your dependencies.
And then there’s Dependabot from GitHub, which automatically opens pull requests to update vulnerable dependencies. This is honestly the minimum of what you should be doing. If you’re not running Dependabot or a similar service, start now.
Each of these tools catches different things. Socket finds malicious behavior, Snyk finds known vulnerabilities, LavaMoat enforces runtime boundaries, and Dependabot keeps things updated. Together, they give you solid coverage.
When to Vendor or Rewrite a Dependency
Now let’s talk about something I think more developers should be doing: auditing your dependencies and asking when a rewrite makes sense.
With AI tools available now, this has become incredibly practical. Here’s when I think you should seriously consider replacing a dependency with your own code:
- You’re using 1% of the library. If you imported a massive package just to use one function, you don’t need the whole thing. Have your AI tool write a custom function that does exactly what you need. Importing a huge library for a single utility is… ahhh, well, stupid.
- It’s a simple helper. Things like `isEven`, `leftPad`, or a basic string formatter. AI can write these in seconds, and you eliminate an entire dependency from your tree. Fewer dependencies means a smaller attack surface.
- The package is abandoned. The last update was years ago, there’s a pile of open issues, and nobody’s home. You’re better off asking your LLM to rewrite the functionality for your specific project. Own the code yourself instead of depending on something that’s collecting dust.
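As an example of the “simple helper” case, replacements like these are small enough to own outright, and the `leftPad` case is literally built into the platform now as `String.prototype.padStart`:

```typescript
// Hand-rolled replacements for trivial helper packages.

// isEven: the entire package, in one line.
const isEven = (n: number): boolean => n % 2 === 0;

// leftPad: just delegate to the built-in padStart.
const leftPad = (s: string, length: number, pad = " "): string =>
  s.padStart(length, pad);

console.log(isEven(4));            // true
console.log(leftPad("7", 3, "0")); // "007"
```

That’s two dependencies gone from your tree, and two fewer packages that can ship a malicious update.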
When You Should Absolutely NOT Rewrite
This is just as important. Some things should stay as battle-tested community libraries, no matter how good your AI tools are:
- Cryptography, authentication, and authorization. It would be incredibly foolish to try to rewrite bcrypt or roll your own JWT validation. These libraries have been audited, attacked, and hardened over years. Use them.
- Complex parsers with extensive rule sets. A markdown parser, for example, has a ton of edge cases and rules that need to be exactly right. You don’t want to accidentally ship your own flavor of markdown. Same goes for HTML sanitizers: getting sanitization wrong means introducing XSS vulnerabilities. Trust the community libraries here.
- Date and time math. Time zones are a deceptively hard problem in programming. Don’t rewrite `date-fns` or `dayjs`. Just don’t.
- Libraries that wrap external APIs. If something integrates with Stripe, AWS, or any API that changes frequently, you do not want to maintain that yourself. The official SDK maintainers track API changes so you don’t have to. Just… no, thank you.
The pattern is pretty clear: if getting it wrong has security implications or if the domain is genuinely complex with lots of edge cases, use the established library. If it’s simple utility code or you’re barely using the package, consider a rewrite.
A Fun Side Project Idea
If you’re looking for yet another side project (YASP), here’s one that would make a super useful CLI tool: I’d probably reach for Go and build a TUI that scans your `node_modules` and generates a list of rewrite recommendations. I think that’d be a really fun build, and honestly something the JavaScript ecosystem could use.
/ security / javascript / Node / Dependencies / Devtools
-
Choosing the Right Package Manager: npm vs Yarn vs pnpm vs Bun
npm, Yarn, pnpm, and Bun each take a different approach to managing JavaScript dependencies. Compare their speed, disk usage, and deployment impact to find the right fit for your project.
/ Development / links / javascript / Performance
-
JavaScript Package Managers: NPM, Yarn, PNPM, and Bun Compared
If you’ve been writing JavaScript for any length of time, you’ve probably had opinions about package managers. Everyone has used npm because it’s the default. Maybe you switched to Yarn back in 2016 and haven’t looked back. These days, there are better options.
That may seem like a bold statement, but bear with me. This article is a mix of opinions and facts. The package manager landscape has changed quite a bit in the last decade, and it’s worth exploring. Let’s break it down.
NPM: The Default Everyone Knows
npm is the package manager that ships with Node. It works. Everyone knows it. Node modules are straightforward to reason about, and security has been improving over the years.
But npm has historically struggled with performance. That’s partly a legacy problem: it was so widely adopted that making fundamental speed improvements meant risking breakage for the massive ecosystem already depending on it. When you’re supporting millions of packages, you have to be careful about backward compatibility, which makes optimization a lot harder.
This performance gap is exactly what opened the door for alternatives.
Yarn: The Pioneer That Over-Optimized
Yarn showed up in 2016, created by Facebook, and it genuinely pushed the ecosystem forward. It parallelized downloads, introduced offline caching, and most notably, introduced lock files to JavaScript. npm eventually adopted lock files too, so Yarn’s influence on the broader ecosystem is undeniable.
Lock files did exist for other languages before 2016, such as Ruby and PHP, but Yarn was the first JavaScript package manager to include them.
The problem came with Yarn 2. It’s a classic case of over-optimization.
Yarn 2 introduced Plug’n’Play mode, which replaces your `node_modules` folder with zip files. We’re on Yarn 4 now, and while you can swap between modes, if you’re in zip mode it becomes genuinely painful to inspect the actual JavaScript code you’re installing. You have to unzip things and dig through archives, and it adds friction where there shouldn’t be any.

If you enjoy making JavaScript development harder than it needs to be, Yarn’s PnP mode has you covered. It’s your toxic coworker’s favorite tool.
PNPM: The Clear Upgrade
If you look at the benchmarks, pnpm wins in almost every common scenario. Running install with a warm cache, lock file, and existing node modules? Faster than npm. A clean install with nothing cached? 7 seconds versus 30 seconds. That’s not a marginal improvement.
Speed isn’t even the best part. pnpm uses hard links from a centralized store instead of copying packages into every project’s `node_modules`. Depending on the size of your projects, you can save up to 70% of your disk space compared to npm. If you’re working on multiple JavaScript projects (and who isn’t?), that adds up fast.

pnpm also structures `node_modules` in a way that’s strict by default, which means your code can’t accidentally import packages you haven’t explicitly declared as dependencies. It catches bugs that npm would let slide.
So pnpm is the clear winner, right? Well, there’s one more contender we haven’t talked about yet.
Bun: The Speed Demon
Bun was released in 2023, and it’s noticeably faster than pnpm.
The reason comes down to architecture. pnpm is written in TypeScript and runs on Node, which means every time you run `pnpm install`, your computer has to start the V8 engine, load all the JavaScript, compile it, and then ask the operating system to do the actual work. That’s a lot of overhead.

Bun is a compiled binary written in Zig. It talks directly to your kernel: no middleman, no V8 engine slowing down every tiny decision. On top of that, Bun is hyper-focused on optimizing system calls. Instead of doing file operations one at a time (open file A, write file A, close file A, repeat a thousand times), it aggressively batches them together. The result is speed improvements not just in disk operations but in everything it does.
Earlier versions of Bun had an annoying quirk similar to Yarn’s: a binary lock file that was difficult to audit manually. That’s been fixed. Bun now uses a readable lock file, which removes the biggest objection people had.
Which raises the question…
So Why Isn’t Everyone Using Bun?
The short answer: it’s complicated. Bun isn’t just a package manager, it also replaces Node as your runtime. If you’re using Bun as your runtime, using it as your package manager makes total sense. Everything fits together.
But most teams are still on Node. And when you’re on Node, pnpm is the clearer choice for everyone involved. A new developer joining your team sees pnpm and immediately knows, “Oh, this is a JavaScript project, I know how this works.” Bun as a package manager on top of Node adds a layer of “wait, why are we using this?” that you have to explain.
Maybe that changes in the future as Bun’s runtime adoption grows. I’m sure the Bun team is working hard to make that transition as smooth as possible. But the reality right now is that most JavaScript projects are running on Node.
My Recommendation
If you’re starting a new project or looking to switch:
- Using Bun as your runtime? Use Bun for package management too. It’s the fastest option and everything integrates cleanly.
- On Node (most of us)? Use pnpm. It’s faster than npm, saves disk space, and is strict in ways that catch real bugs. Your team will thank you.
- Still on npm? You’re not doing anything wrong, but you’re leaving performance and disk space on the table for no real benefit.
- On Yarn PnP? I have questions, but I respect your commitment.
The JavaScript ecosystem moves fast, and if you haven’t revisited your package manager choice in a while, it might be worth running a quick benchmark on your own project. The numbers might surprise you.
/ Tools / Development / javascript
-
JavaScript Still Doesn't Have Types (And That's Probably Fine)
Here’s the thing about JavaScript and types: it doesn’t have them, and it probably won’t any time soon.
Back in 2022, there was a proposal to add TypeScript-like type syntax directly to JavaScript. The idea was being able to write type annotations without needing a separate compilation step. But the proposal stalled because the JavaScript community couldn’t reach consensus on implementation details.
The core concern? Performance. JavaScript is designed to be lightweight and fast, running everywhere from browsers to servers to IoT devices. Adding a type system directly into the language could slow things down, and that’s a tradeoff many aren’t willing to make.
So the industry has essentially accepted that if you want types in JavaScript, you use TypeScript. And honestly? That’s fine.
TypeScript: JavaScript’s Type System
TypeScript has become the de facto standard for typed JavaScript development. Here’s what it looks like:
```typescript
// TypeScript Example
let name: string = "John";
let age: number = 30;
let isStudent: boolean = false;

// Function with type annotations
function greet(name: string): string {
  return `Hello, ${name}!`;
}

// Array with type annotation
let numbers: number[] = [1, 2, 3];

// Object with type annotation
let person: { name: string; age: number } = { name: "Alice", age: 25 };
```

TypeScript compiles down to plain JavaScript, so you get the benefits of static type checking during development without any runtime overhead. The types literally disappear when your code runs.
The Python Parallel
You might be interested to know that the closest parallel to this JavaScript/TypeScript situation is actually Python.
Modern Python has types, but they’re not enforced by the language itself. Instead, you use third-party tools like mypy for static analysis and pydantic for runtime validation. There’s actually a whole ecosystem of libraries supporting types in Python in various ways, which can get a bit confusing.
Here’s how Python’s type annotations look:
```python
# Python Example
name: str = "John"
age: int = 30
is_student: bool = False

# Function with type annotations
def greet(name: str) -> str:
    return f"Hello, {name}!"

# List with type annotation
numbers: list[int] = [1, 2, 3]

# Dictionary with mixed value types
person: dict[str, str | int] = {"name": "Alice", "age": 25}
```

Look familiar? The syntax is surprisingly similar to TypeScript. Both languages treat types as annotations that help developers and tools understand the code, and neither strictly enforces them at runtime (unless you add additional tooling).
What This Means for You
If you’re writing plain JavaScript, stop and use TypeScript. It’s mature and widely adopted, and you can now run TypeScript directly in some runtimes like Bun and Deno.
Type systems were originally omitted from many of these languages because the creators wanted to establish a low barrier to entry, making it significantly easier for people to adopt the language.
Additionally, computers at the time were much slower, and compiling code with rigorous type systems took a long time, so creators prioritized the speed of the development loop over strict safety.
However, with the power of modern computers, compilation speed is no longer a concern. Furthermore, the type systems themselves have improved significantly in efficiency and design.
Since performance is no longer an issue, the industry has shifted back toward using types to gain better structure and safety without the historical downsides.
/ Programming / Python / javascript / Typescript
-
jQuery 4.0 was released on the 17th and they removed IE10 support. IE10 was first deprecated by Microsoft in January 2016 and fully retired in 2020. You might wonder, “What are they doing still supporting Internet Explorer?” They did say they were going to fully remove support in version 5.
-
Headless components for Svelte - flexible, unstyled, and accessible primitives that provide the foundation for building your own high-quality component library.
/ links / UIkit / javascript / svelte