-
What I Learned Building My First Chrome Extension
I built a Chrome extension to navigate Letterboxd movie lists with keyboard shortcuts. Rate, like, watch, next. Here’s what I learned.
The Idea
I was going through Letterboxd lists and wanted a better way to do it. “Top 250 Films,” curated genre lists, friends' recommendations. The flow becomes tedious: click a movie, rate it, go back to the list, find where you were, click the next one. I wanted to load a list into a queue and step through movies one by one with keyboard shortcuts.
So I did what any reasonable person would do and built a Chrome extension for it.
The Framework Graveyard
The Chrome extension ecosystem has a framework problem. CRXJS, the most popular Vite plugin for extensions, was being archived. Its successor, vite-plugin-web-extension, was deprecated in favor of WXT. WXT is solid but it’s another abstraction layer that could go the same way.
I went with plain Vite and manual configuration. Four separate Vite configs, one per entry point (content script, background worker, popup, manager page). A simple build script that runs them sequentially and copies the manifest. No framework dependency that could die on me.
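As a rough sketch of what that build script can look like (the config file names here are my assumptions, not the actual ones), the core is just a list of configs and one `vite build` invocation per entry point:

```javascript
// Hypothetical sketch of the sequential multi-config build.
// One Vite config per entry point; file names are illustrative.
const configs = [
  'vite.content.config.ts',
  'vite.background.config.ts',
  'vite.popup.config.ts',
  'vite.manager.config.ts',
];

// Build the argv for each `vite build` invocation.
function buildCommands(configFiles) {
  return configFiles.map((config) => ['vite', 'build', '--config', config]);
}

// The real script would then run each command sequentially, e.g.:
//   execFileSync('npx', command, { stdio: 'inherit' });
// ...and finally copy manifest.json into dist/.
```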
For the UI I used React and TypeScript. Not because the extension needed React (most of the work is content scripts and background messaging), but the popup and settings page benefit from component structure.
Four Separate Worlds
One thing I learned early: a Chrome extension isn’t one app. It’s four separate JavaScript contexts that can’t directly share state:
- Content scripts run on the webpage (letterboxd.com). They can read and modify the DOM but can’t access chrome.tabs or other extension APIs.
- Background service worker runs independently. It handles messaging, storage, and tab navigation. It can die at any time and restart.
- Popup is a tiny React app that opens and closes with the extension icon. It loses all state when closed.
- Extension page (the manager) is a full tab running your own HTML. It persists as long as the tab is open.
They communicate through `chrome.runtime.sendMessage` and `chrome.storage.local`. This is an important architectural constraint to be aware of, and if you’ve never built an extension before, it will trip you up.
Letterboxd’s DOM Is a Moving Target
The existing open-source Letterboxd Shortcuts extension uses selectors like `.ajax-click-action.-like` to click the like button. Those selectors don’t exist anymore. Letterboxd has migrated to React components, and the sidebar buttons (watch, like, watchlist, rate) are loaded asynchronously via CSI (Client Side Includes). They’re not in the initial HTML at all.
I had to inspect the actual loaded DOM to find the current selectors: `.watch-link`, `.like-link`, `a.action.-watchlist`. The rating widget still uses the old `.rateit` pattern with a `data-rate-action` attribute and a CSRF token POST.
If you’re building an extension that interacts with a third-party site’s DOM, expect the selectors to break. Build your DOM interaction layer as a thin, isolated module so you can update selectors without touching the rest of the codebase.
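A thin, isolated selector module might look something like this. The selector strings come from the post; the module structure and function names are my own sketch:

```javascript
// Hypothetical thin DOM layer: every Letterboxd selector lives in one place,
// so when the site's markup changes, only this module needs updating.
const SELECTORS = {
  watch: '.watch-link',
  like: '.like-link',
  watchlist: 'a.action.-watchlist',
};

// Look up the selector for an action, failing loudly on unknown actions.
function selectorFor(action) {
  const selector = SELECTORS[action];
  if (!selector) throw new Error(`Unknown action: ${action}`);
  return selector;
}

// In a content script you'd then do something like:
//   document.querySelector(selectorFor('like'))?.click();
```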
Service Workers Can’t Use DOMParser
My list scraper used `DOMParser` to parse HTML responses. Works fine in tests (jsdom), works fine in content scripts (browser context), fails completely in the background service worker. Service workers don’t have access to DOM APIs.
I rewrote the parser to use regex. Less elegant, but it works everywhere. If I were doing it again, I’d run the parsing in a content script and message the results back to the background worker.
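A regex-based parser in this style might look like the sketch below. The `data-film-slug` attribute is an assumption for illustration, not necessarily what Letterboxd ships:

```javascript
// DOM-free list parser sketch: service workers lack DOMParser,
// so pull film slugs out of raw HTML with a regex instead.
// The data-film-slug attribute name is a hypothetical example.
function parseFilmSlugs(html) {
  const slugs = [];
  const re = /data-film-slug="([^"]+)"/g;
  let match;
  while ((match = re.exec(html)) !== null) {
    slugs.push(match[1]); // capture group 1 is the slug itself
  }
  return slugs;
}
```

Because it's pure string work, it runs identically in the service worker, a content script, and a plain Node test.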
The Build System Is Simpler Than You Think
I expected the multi-entry-point build to be painful. It wasn’t. Each Vite config is about 20 lines. The content script and background worker build as IIFEs (single file, no imports). The popup and manager build as standard React apps. The build script is 30 lines of `execFileSync` calls.
One gotcha: asset paths. Vite defaults to absolute paths (`/assets/index.js`), but extension popups and pages need relative paths (`./assets/index.js`). Adding `base: './'` to the popup and manager configs fixed it.
TDD Was Worth It (For the Right Parts)
The extension has four pure logic modules: rating double-tap behavior, auto-advance detection, queue state operations, and keyboard shortcut matching. These are the core of the extension and they’re completely testable without a browser.
Writing tests first caught edge cases I wouldn’t have thought of. What happens when you press the same rating key on a movie that was already rated in a previous session? What if the queue is empty and someone hits “next”? The tests document these decisions.
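As a rough illustration (the function names are mine, not from the actual extension), the queue and double-tap decisions can be captured as tiny pure functions:

```javascript
// Pure queue logic sketch: advance() returns the next index,
// or null when there's nothing to advance to.
function advance(queue, index) {
  if (queue.length === 0) return null;        // empty queue: "next" is a no-op
  if (index + 1 >= queue.length) return null; // already at the end
  return index + 1;
}

// Double-tap rating: pressing the same rating key again clears the rating,
// even if the first rating happened in a previous session.
function nextRating(currentRating, pressedRating) {
  return currentRating === pressedRating ? null : pressedRating;
}
```

Functions like these are trivial to test first, and the tests end up documenting the edge-case decisions.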
For the DOM interaction code (the Letterboxd API layer, overlays, CSI-loaded content), unit testing isn’t practical. I tested those manually.
What I’d Do Differently (Or Might Change)
Start with the DOM. I built the pure logic first and the DOM interaction last. This meant I didn’t discover the CSI loading issue, the changed selectors, or the DOMParser problem until the end. Next time I’d build a minimal content script first, verify it can interact with the target site, then build the logic on top.
Use fewer Vite configs. Four config files with duplicated path aliases is annoying. A single config with a build mode flag, or a shared config factory function, would be cleaner.
Consider the popup lifecycle earlier. Popups close when you click outside them. Any state they hold is gone. I designed around this (the popup is stateless, it queries the background on every open), but it’s easy to get wrong if you don’t plan for it.
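To make the stateless-popup idea concrete, here’s a hedged sketch. The message names and state shape are invented for illustration; the pattern is simply that the background worker owns all state and the popup re-queries on every open:

```javascript
// Pure message handler for the background worker, testable without a browser.
// Message types and state shape are hypothetical.
function handleMessage(state, message) {
  switch (message.type) {
    case 'GET_STATE':
      return state; // the popup asks for this every time it opens
    case 'ADVANCE':
      return { ...state, index: state.index + 1 };
    default:
      return state;
  }
}

// In the real background worker this would be wired up roughly like:
//   chrome.runtime.onMessage.addListener((msg, _sender, sendResponse) => {
//     state = handleMessage(state, msg);
//     sendResponse(state);
//   });
```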
The Result
The extension loads any Letterboxd list into a queue, navigates through movies one by one, and lets me rate/like/watch/watchlist with single keystrokes. Auto-advance moves to the next movie when I’ve completed my actions. A dark-themed manager page shows the full queue and lets me customize every shortcut.
It’s a personal tool right now, so it’s not published to the Chrome Web Store. But it has made going through movie lists a lot more fun. Sometimes the best software is the kind you build for yourself!
If you’re a developer, I’d appreciate a follow. You can subscribe with your email below. The emails go out once a week. Or you can find me on Mastodon at @[email protected].
-
Why javascript:void(0) Needs to Stay in the Past
A few months ago, I ran across a `javascript:void(0)` in the wild. I’m not going to get into the specific context because it doesn’t really matter, and it also gets a bit too personal for this blog post. But I took a screenshot, thought “huh, that’s weird,” and promptly forgot about it.
Then I was scrolling back through my screenshots and found it again. So here we are. Let’s talk about `javascript:void(0)` and why you shouldn’t be using it in 2026.
Here’s the screenshot that started this:

What Does It Actually Do?
If you’ve never encountered this pattern before, here’s the quick version. `void` is a JavaScript operator (not a function), and its only job is to evaluate the expression next to it, throw away the result, and return `undefined`. That’s it. That’s the whole job.
When a browser receives `undefined` from clicking a link, it does nothing. No navigation, no page refresh. It’s a way to override the browser’s default behavior for an anchor tag.
In practice, it looked like this:
```html
<a href="javascript:void(0);" onclick="openModal()">Click Me</a>
```
The `href` prevents the browser from doing anything, and the `onclick` fires whatever JavaScript you actually wanted to run. Clever? Sure. A good idea today? No.
Why Did We Use It?
Back in the day, we wanted to stop the browser from doing its default thing (following a link) so we could trigger events and make web pages more interactive. This was typically done on anchor tags because, well, that’s what we had. JavaScript didn’t give us a better way to handle it at the time, so `javascript:void(0)` became the go-to pattern.
It worked. But “it works” and “it’s a good idea” are two very different things.
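The operator itself is easy to demo in plain JavaScript, which makes the “returns `undefined`, browser does nothing” behavior concrete:

```javascript
// void evaluates its operand, discards the result, and yields undefined.
const a = void 0;       // undefined
const b = void (1 + 1); // still undefined, even though 2 was computed

// This is exactly why javascript:void(0) in an href "does nothing":
// the browser gets undefined back and skips navigation.
console.log(a === undefined, b === undefined); // true true
```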
Three Reasons to Stop Using It
1. It Breaks the Anchor Tag’s Purpose
The biggest issue is that `javascript:void(0)` completely overrides what an anchor tag is supposed to do. An `<a>` tag exists to link to things. When you stuff JavaScript into the `href`, you’re hijacking the element’s entire reason for existing.
We’ve moved on from needing to do this. If you want something clickable that triggers behavior, use a `<button>`. If you want a link that also has JavaScript behavior, give it a real URL as a fallback.
2. Separation of Concerns
Modern best practices tell us that HTML should define the structure of the page, and JavaScript should define the behavior. When you’ve got JavaScript living inside an `href` attribute or relying on inline `onclick` handlers, you’re mixing the two in ways that make code harder to maintain and reason about.
The better approach? Use `event.preventDefault()` in your JavaScript:
```html
<a href="/fallback-page" id="myLink">Click Me</a>
```
```javascript
document.getElementById('myLink').addEventListener('click', function (event) {
  event.preventDefault();
  openModal();
});
```
This way, if JavaScript is disabled or fails to load, the link still works. There’s a fallback behavior, which matters for accessibility and backwards compatibility. The HTML stays clean, and the behavior lives where it belongs: in your JavaScript files.
Now, I will say that plenty of modern front-end frameworks add their own semantic patterns and play pretty loosey-goosey with this separation-of-concerns rule. But even React’s `onClick` handlers and Vue’s `@click` directives are compiled and managed in a way that’s fundamentally different from jamming raw JavaScript into an HTML attribute.
3. Content Security Policy Will Block It
I’d like to believe security still matters in 2026, so let’s talk about Content Security Policy (CSP).
CSP is a set of rules that a web server sends to the browser via HTTP headers, telling the browser what resources the page is allowed to load or execute. Before CSP, browsers just assumed that if code was in the HTML document, it was meant to be there. Web pages were incredibly vulnerable to cross-site scripting (XSS) attacks.
With CSP, the server tells the browser: “Only execute JavaScript if it comes from my own domain. Do not execute any code written directly inside the HTML file.”
A proper CSP header looks something like this:
```
Content-Security-Policy: default-src 'self'; script-src 'self';
```
This is great for security. But guess what `javascript:void(0)` is? Inline JavaScript. A strict CSP will block it.
So if you see a site still using `javascript:void(0)`, check the response headers. Chances are you’ll find something like:
```
Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-inline';
```
See that `'unsafe-inline'` addition? That’s the security risk. By adding `unsafe-inline`, the developer is telling the browser to trust all inline scripts. Every single one. So if an attacker manages to inject JavaScript onto the page, the browser will execute it without hesitation.
You’re weakening your entire site’s security posture just to keep a legacy pattern alive. That’s not a tradeoff worth making.
You Probably Don’t Even Need to Think About This
If you’re working with any modern JavaScript framework, this problem is already solved for you.
React, Svelte, Vue, Solid, whatever you’re using: they all ship components that handle default browser behavior the right way. Take forms as an example. The raw HTML `<form>` element will, by default, submit and trigger a full page navigation. That’s why developers used to manually call `event.preventDefault()` everywhere. But now, frameworks like Next.js, Remix, and SvelteKit give you a `<Form>` component (or equivalent) that overrides that default behavior for you. No page reload. No manual prevention.
That’s the real reason `javascript:void(0)` feels so out of place in 2026. It’s not just that we have better patterns available; the tooling has abstracted the problem away entirely. The history is still worth knowing, though, because understanding why things work the way they do makes you a better developer!
-
Svelte 5 Runes: A React Developer's Guide to Reactivity
Continuing my series on Svelte topics, today we’re talking about runes. If you’re coming from React, this is going to be a different way to work with reactivity in modern JavaScript.
These posts might cover what’s considered the basics, but writing about the important things every developer needs to know helps me learn and think through the topics.
What Are Runes?
In Svelte 5, runes are special symbols that start with a dollar sign (`$`). They look like regular JavaScript functions, but they’re actually compiler directives: reserved keywords that tell the Svelte compiler how to wire up reactivity during the build step.
If you’ve used decorators in Python or macros in other languages, runes fill a similar role. They look like standard JavaScript, but the compiler transforms them into something more powerful behind the scenes.
Let’s walk through the four runes you’ll use most.
`$state` — The Engine of Reactivity
`$state` is the foundation. It declares reactive state in your component.
```svelte
<script>
  let count = $state(0);
</script>

<button onclick={() => count++}>{count}</button>
```
In React, `useState` returns an immutable value and a setter function, so you always need that `setCount` call. In Svelte, `$state` returns a deeply reactive proxy. You just mutate the value directly, and the compiler handles the rest. No setter function, no spread operators for nested objects. It just works.
`$derived` — Computed Values Without Dependency Arrays
In React, you’d reach for `useMemo` here, and you’d need to explicitly declare a dependency array so React knows when to recalculate.
```svelte
<script>
  let count = $state(0);
  let doubled = $derived(count * 2);
</script>
```
Dependency arrays are prone to human error. We forget what depends on what, and that leads to stale data or unnecessary recalculations. `$derived` automatically tracks whatever state variables are used inside of it. No dependency array needed. It just reads what it reads, and recalculates when those values change.
`$effect` — Side Effects That Actually Make Sense
This is the equivalent of `useEffect` in React, which is notoriously tricky. Missing dependencies, stale closures, infinite loops… all the big gotchas live in `useEffect` calls.
In Svelte, `$effect` is used to synchronize state with external systems, like writing to local storage or updating a canvas:
```svelte
<script>
  let theme = $state('dark');

  $effect(() => {
    localStorage.setItem('theme', theme);
  });
</script>
```
Just like `$derived`, it automatically tracks its dependencies and only runs when the state it reads actually changes. No dependency array, no cleanup function gotchas. It runs when it needs to run. That’s it.
`$props` — Clean Component Interfaces
Every framework needs a way to pass data into components. In Svelte 5, `$props` makes this look like standard JavaScript object destructuring:
```svelte
<script>
  let { name, age, role = 'viewer' } = $props();
</script>

<p>{name} ({age}) - {role}</p>
```
Default values, rest parameters, renaming… it all works exactly how you’d expect from ES2015 destructuring. If you know destructuring, you already know `$props`. It’s readable, predictable, and there’s nothing new to learn.
Runes Me Over
You’ve probably noticed a theme. Svelte 5 runes eliminate a whole class of bugs that come from manually managing dependencies. React makes you think about when things should update. Svelte’s goal is to figure it out for you at compile time.
/ Web-development / javascript / svelte / React
-
Svelte vs React: State Management Without the Ceremony
Continuing my Svelte deep-dive series, let’s talk about state management and reactivity. This is where React and Svelte feel the most different.
React’s State Ceremony
In React, state requires a specific ritual. You declare state with `useState`, which gives you a getter and a setter:
```javascript
const [count, setCount] = useState(0);

function increment() {
  setCount(count + 1);
}
```
Want to update a variable? You have to call a function. You can’t just reassign `count`; React won’t know anything changed. This is fine once you internalize it, but it adds ceremony to what should be a simple operation.
Then there’s `useEffect`, which is where things get tricky. You need to understand dependency arrays, and if you get them wrong, you’re looking at infinite loops or stale data:
```javascript
useEffect(() => {
  document.title = `Count: ${count}`;
}, [count]); // forget this array and enjoy your infinite loop
```
A lot of `useEffect` usage is actually unnecessary. If you’re using it for data transformations, derived values from state or props, or responding to user events, you’re probably reaching for the wrong tool.
The React docs themselves will tell you that you might not need an effect. It’s a common source of bugs and confusion, especially for developers who are still building their mental model of React’s render cycle.
Svelte: Reactivity Through the Language Itself
Svelte takes a fundamentally different approach. Reactivity is baked into the language semantics. Want to declare state? Just declare a variable:
```svelte
<script>
  let count = $state(0);

  function increment() {
    count += 1;
  }
</script>

<button onclick={increment}>{count}</button>
```
That’s it. You assign a new value, and the DOM updates. The Svelte compiler sees your assignments and automatically generates the code to update exactly the parts of the DOM that depend on that variable. No virtual DOM diffing, no setter functions, no dependency arrays to manage.
Need a derived value? Svelte has you covered with `$derived`:
```svelte
<script>
  let count = $state(0);
  let doubled = $derived(count * 2);
</script>

<p>{count} doubled is {doubled}</p>
```
In React, you’d either compute this inline, use `useMemo` with a dependency array, or, if you didn’t know better, reach for `useEffect` and a second piece of state (please don’t do this).
Svelte’s `$effect` rune exists for side effects like updating `document.title` or logging, but you should reach for it far less often than `useEffect` in React. The compiler handles most of what `useEffect` gets used for automatically.
More Svelte comparisons coming as I keep digging in. Thanks for Svelting with me.
/ javascript / svelte / React / Web development
-
Svelte vs React: The Virtual DOM Tax You Might Not Need
I’m diving more into Svelte and SvelteKit lately, and I’m going to be writing a few posts about it as I learn. Fair warning: some of these will be general knowledge posts, but writing things out helps me internalize the details.
The Virtual DOM Question
It’s well known that React relies on a virtual DOM. The basic idea is that React maintains a copy of the DOM in memory, diffs it against the actual DOM, and then batches the changes to update the real thing. This works, but maintaining this virtual DOM can lead to complications and confusion around what triggers a render or a re-render. If you’ve ever stared at a `useEffect` dependency array wondering why your component is re-rendering, you know what I mean.
Svelte takes a completely different approach. It’s not a library you ship to the browser; it’s a compile step. You write Svelte code, and it compiles down to highly optimized vanilla JavaScript that surgically updates the DOM directly. No virtual DOM to maintain. No diffing algorithm running in the background. The framework essentially disappears at build time, and what you’re left with is just… JavaScript.
Templating That Feels Like the Web
I like how Svelte handles the relationship between HTML, CSS, and JavaScript. React forces you to write HTML inside JavaScript using JSX. You get used to it, sure, but it’s a specific way of thinking about your UI.
Svelte flips this around. Your `.svelte` files are structured more like traditional web pages — you’ve got `<script>` tags for your JavaScript, regular HTML for markup, and `<style>` tags for CSS. Everything lives in one file, but there’s a clear separation between the three concerns.
If you’ve ever worked with Django templates, Laravel Blade, or Ruby on Rails views, this will feel immediately familiar. It’s a lot closer to how the web actually works than JSX’s “everything is JavaScript” approach. For someone coming from those backgrounds, the learning curve is noticeably gentler.
More Svelte posts coming as I dig deeper. That’s all for now!
/ javascript / svelte / React / Web development
-
Turborepo is a build system optimized for JavaScript and TypeScript, written in Rust.
/ links / javascript
-
From SUnit to Vitest: A Brief History of JavaScript Testing
I care a lot about testing. I don’t know if that’s obvious yet, but hopefully it’s obvious. I wanted to trace the lineage of the testing tools we use today, because I think understanding where they came from helps you appreciate why things work the way they do.
Where It All Started
Automated testing as we know it really started with SUnit for the Smalltalk language. Kent Beck created it back in 1989, and it established the patterns that every test framework still borrows from today.
In 1997, Kent Beck and Erich Gamma ported those ideas to Java and created JUnit. JUnit was, and still is, incredibly influential on pretty much every unit testing framework you’ve ever used. The test runner, assertions, setup and teardown: all of that traces back to JUnit.
But I’m going to focus on the JavaScript side of things here.
Jest: The Facebook Era
Jest was originally created at Facebook in 2011 as part of a major platform rewrite. It became the dominant testing framework for React and Node.js codebases, and in 2022, Facebook released it to the OpenJS Foundation.
Jest works well, but it carries some baggage. It requires a transpilation pipeline, something that was common a decade ago but feels burdensome now. If you want to use ES modules, there’s an extra step involved. It adds friction.
So what else is there?
Vitest: The Modern Alternative
Vitest is a modern alternative, built on top of Vite. It supports ES modules and TypeScript out of the box, so there’s no separate transpilation step. And because Vite has HMR (hot module replacement), the watch mode for rerunning tests is very fast.
Vitest was initially created by Anthony Fu and other developers from the Vite and Vue.js ecosystem. The initial commit was in December 2021, so it’s a relatively recent project. They’ve made incredible progress since then.
Vitest uses Vite under the hood, so let’s look at that briefly.
Why Vite Is So Fast
Vite’s speed comes from esbuild, a bundler written in Go that compiles to native machine code, bypassing the JavaScript engine’s overhead entirely. It can transform TypeScript significantly faster because it doesn’t need to go through a JS engine. And because it’s Go, it’s multithreaded.
But things are changing. In Vite 8, the bundler is moving from esbuild to Rolldown, a new tool written in Rust that combines the best of esbuild and Rollup.
Why? Currently, Vite uses esbuild during development but switches to Rollup for production builds. Two different tools for two different use cases. Rolldown unifies both into a single tool that handles dev and production.
What Did We Learn?
Hopefully something! How about a mini review to drive it home:
- 1989: SUnit (Smalltalk) — Kent Beck starts it all
- 1997: JUnit (Java) — the template everything else follows
- 2011: Jest — Facebook’s testing framework, now under OpenJS Foundation
- 2021: Vitest — modern, fast, ESM-native testing built on Vite
- Coming soon: Rolldown replaces esbuild + Rollup in Vite 8
That’s all for now!
/ Programming / Testing / javascript
-
Garbage Collection: How Python, JavaScript, and Go Clean Up After Themselves
It’s Garbage day for me IRL and I wanted to learn more about garbage collection in programming. So guess what? Now you get to learn more about it too.
We’re going to focus on the three languages I mostly work with: Python, JavaScript, and Go. We’ll skip the rest for now.
Python: Reference Counting
Python’s approach is the most straightforward of the three. Every object keeps track of how many things are pointing to it. When that count drops to zero, the object gets cleaned up immediately. Simple.
There’s a catch, though. If two objects reference each other but nothing else references either of them, the count never hits zero. That’s a reference cycle, and Python handles it with a secondary cycle detector that periodically scans for these orphaned clusters. But for the vast majority of objects, reference counting does the job without any fancy algorithms.
JavaScript (V8): Generational Garbage Collection
Most JavaScript you encounter is running on V8: Chromium-based browsers, Node, and Deno. V8 uses a generational strategy based on a simple observation: most objects die young.
V8 splits memory into two main areas:
- The Nursery (Young Generation): New objects land here. This space is split into two halves and uses a scavenger algorithm. It’s fast because it only deals with short-lived variables — and most variables are short-lived.
- Old Space (Old Generation): No, not the deodorant company. Objects that survive a couple of scavenger rounds get promoted here. Old space uses a mark-and-sweep algorithm, which is slower but handles long-lived objects more efficiently.
First it asks, “How long has this object been around?” New stuff gets the quick treatment, and anything that sticks around gets put out to pasture, to be dealt with where time is less of a premium. It’s a smart tradeoff between speed and memory efficiency.
Go: Tricolor Mark-and-Sweep
Go’s garbage collector also uses mark-and-sweep, but with a twist called tricolor marking. Here’s how it works:
- White objects: These might be garbage but haven’t been checked yet. Everything starts as white.
- Gray objects: The collector has reached these, but it hasn’t scanned their children (the things they reference) yet.
- Black objects: These are confirmed alive — the collector has scanned them and all their references.
The collector starts from known root objects, marks them gray, then works through the gray set — scanning each object’s references and marking them gray too, while the scanned object itself turns black. When there are no more gray objects, anything still white is unreachable and gets cleaned up.
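The mark phase described above is easy to simulate. This is a toy sketch in JavaScript, not Go’s actual implementation; objects are plain records with a `refs` list, and the function returns whatever stays white (i.e., would be swept):

```javascript
// Toy tricolor mark phase: start from roots, mark reachable objects black,
// and report everything still white as garbage.
function mark(objects, rootIds) {
  const color = {};
  for (const id of Object.keys(objects)) color[id] = 'white'; // unchecked

  const gray = [...rootIds];              // reached, but children not scanned
  for (const id of gray) color[id] = 'gray';

  while (gray.length > 0) {
    const id = gray.pop();
    for (const ref of objects[id].refs) {
      if (color[ref] === 'white') {       // newly discovered: queue it
        color[ref] = 'gray';
        gray.push(ref);
      }
    }
    color[id] = 'black';                  // fully scanned: confirmed alive
  }

  // Anything still white is unreachable.
  return Object.keys(objects).filter((id) => color[id] === 'white');
}
```

Note that a cycle of objects that no root points to stays white and gets collected, which is exactly the case plain reference counting struggles with.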
Go’s approach is notable because it runs concurrently. This helps with latency while the GC is running.
Garbage, Collected
Each approach reflects the language’s priorities:
- Python optimizes for simplicity and predictability. Objects get cleaned up correctly when they’re no longer needed
- JavaScript optimizes for speed in interactive applications. Quick cleanup for short-lived objects, thorough cleanup for the rest
- Go optimizes for low latency. Concurrent collection is great for server side processes
References
- Design of CPython’s Garbage Collector — Python Developer’s Guide deep dive into reference counting and cycle detection
- Orinoco: Young Generation Garbage Collection — V8’s parallel scavenger and generational GC design
- A Guide to the Go Garbage Collector — Official Go documentation on the concurrent tricolor collector
/ Programming / Golang / Python / javascript / Garbage-collection
-
I compared npm, Yarn, pnpm, and Bun. TLDR version: pnpm wins for most teams, Bun wins if you’re already on the runtime.
Has anyone switched their whole team to Bun yet? How’d that go?
/ Programming / Tools / javascript
-
I wrote about securing node_modules. Socket, Snyk, Dependabot — each catches different things. Hopefully it also answers when to use AI to rewrite simple deps you barely use.
Anyone want to build that CLI?
/ Programming / security / javascript
-
Defending Your Node Modules: Security Tools and When to Rewrite Dependencies
This week I’ve been on a bit of a JavaScript kick; writing about why Vitest beats Jest, comparing package managers, diving into Svelte 5. But there’s one topic that we shouldn’t forget: security.
`node_modules` is the black hole directory that we all joke about and pretend is fine.
Let’s talk about how to actually defend against the problems lurking in those depths, and when a rewrite might make sense.
The Security Toolkit You Actually Need
You’re going to need a mix of tools to both detect bad code and prevent it from running. No single tool covers everything (or should), so here are some options to consider:
Socket does behavioral analysis on packages. It looks at what the code is actually doing. Is it accessing the network? Reading environment variables? Running install scripts? These are the sketchy behaviors that signal a compromised or malicious package. Socket is great at catching supply chain attacks that traditional vulnerability scanners miss entirely.
Snyk handles vulnerability scanning. It checks your entire dependency tree against a massive database of known vulnerabilities and is really good at finding transitive problems, those vulnerabilities buried three or four levels deep in your dependency chain that you’d never find manually.
LavaMoat takes a different approach. It creates a runtime policy that prevents libraries from doing things they shouldn’t be doing, like making network requests when they’re supposed to be a string formatting utility. Think of it as a permissions system for your dependencies.
And then there’s Dependabot from GitHub, which automatically opens pull requests to update vulnerable dependencies. This is honestly the minimum of what you should be doing. If you’re not running Dependabot or a similar service, start now.
Each of these tools catches different things. Socket finds malicious behavior, Snyk finds known vulnerabilities, LavaMoat enforces runtime boundaries, and Dependabot keeps things updated. Together, they give you solid coverage.
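To give a sense of how low the bar is: enabling Dependabot version updates is roughly a small YAML file checked into the repo at `.github/dependabot.yml` (this is a minimal sketch; see GitHub’s docs for the full set of options):

```yaml
# .github/dependabot.yml - minimal npm configuration
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"        # location of package.json
    schedule:
      interval: "weekly"
```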
When to Vendor or Rewrite a Dependency
Now let’s talk about something I think more developers should be doing: auditing your dependencies and asking when a rewrite makes sense.
With AI tools available now, this has become incredibly practical. Here’s when I think you should seriously consider replacing a dependency with your own code:
-
You’re using 1% of the library. If you imported a massive package just to use one function, you don’t need the whole thing. Have your AI tool write a custom function that does exactly what you need. You shouldn’t be importing a huge library for a single utility. It’s… ahhh, well, stupid.
-
It’s a simple helper. Things like `isEven`, `leftPad`, or a basic string formatter. AI can write these in seconds, and you eliminate an entire dependency from your tree. Fewer dependencies means a smaller attack surface.
-
The package is abandoned. The last update was years ago, there’s a pile of open issues, and nobody’s home. You’re better off asking your LLM to rewrite the functionality for your specific project. Own the code yourself instead of depending on something that’s collecting dust.
When You Should Absolutely NOT Rewrite
This is just as important. Some things should stay as battle-tested community libraries, no matter how good your AI tools are:
-
Cryptography, authentication, and authorization. It would be incredibly foolish to try to rewrite bcrypt or roll your own JWT validation. These libraries have been audited, attacked, and hardened over years. Use them.
-
Complex parsers with extensive rule sets. A markdown parser, for example, has a ton of edge cases and rules that need to be exactly right. You don’t want to accidentally ship your own flavor of markdown. Same goes for HTML sanitizers, getting sanitization wrong means introducing XSS vulnerabilities. Trust the community libraries here.
-
Date and time math. Time zones are a deceptively hard problem in programming. Don’t rewrite `date-fns` or `dayjs`. Just don’t.
-
Libraries that wrap external APIs. If something integrates with Stripe, AWS, or any API that changes frequently, you do not want to maintain that yourself. The official SDK maintainers track API changes so you don’t have to. Just no, thank you.
The pattern is pretty clear: if getting it wrong has security implications or if the domain is genuinely complex with lots of edge cases, use the established library. If it’s simple utility code or you’re barely using the package, consider a rewrite.
A Fun Side Project Idea
If you’re looking for yet another side project (YASP), here’s one that would make a super useful CLI tool. I’d probably reach for Go and build a TUI that scans your node_modules and generates a list of rewrite recommendations. I think that’d be a really fun build, and honestly something the JavaScript ecosystem could use.
/ security / javascript / Node / Dependencies / Devtools
-
-
Choosing the Right Package Manager: npm vs Yarn vs pnpm vs Bun
npm, Yarn, pnpm, and Bun each take a different approach to managing JavaScript dependencies. Compare their speed, disk usage, and deployment impact to find the right fit for your project.
/ Development / links / javascript / Performance
-
JavaScript Package Managers: NPM, Yarn, PNPM, and Bun Compared
If you’ve been writing JavaScript for any length of time, you’ve probably had opinions about package managers. Everyone has used npm because it’s the default. Maybe you switched to Yarn back in 2016 and haven’t looked back. These days, there are better options.
That may seem like a bold statement, but bear with me. This article is a mix of opinions and facts. The package manager landscape has changed quite a bit in the last decade, and it’s worth exploring. Let’s break it down.
NPM: The Default Everyone Knows
npm is the package manager that ships with Node. It works. Everyone knows it. Node modules are straightforward to reason about, and security has been improving over the years.
But npm has historically struggled with performance. That’s partly a design problem: npm was so widely adopted that making fundamental speed improvements meant risking breakage for the massive ecosystem already depending on it. When you’re supporting millions of packages, you have to be careful about backward compatibility, which makes optimization a lot harder.
This performance gap is exactly what opened the door for alternatives.
Yarn: The Pioneer That Over-Optimized
Yarn showed up in 2016, created by Facebook, and it genuinely pushed the ecosystem forward. It parallelized downloads, introduced offline caching, and most notably, introduced lock files to JavaScript. npm eventually adopted lock files too, so Yarn’s influence on the broader ecosystem is undeniable.
Lock files did exist for other languages before 2016, such as Ruby and PHP, but Yarn was the first JavaScript package manager to include them.
The problem came with Yarn 2. It’s a classic case of over-optimization.
Yarn 2 introduced Plug’n’Play mode, which replaces your node_modules folder with zip files. We’re on Yarn 4 now, and while you can swap between modes, if you’re in zip mode it becomes genuinely painful to inspect the actual JavaScript code you’re installing. You have to unzip things, dig through archives, and it just adds friction where there shouldn’t be any.
If you enjoy making JavaScript development harder than it needs to be, Yarn’s PnP mode has you covered. It’s your toxic coworker’s favorite tool.
PNPM: The Clear Upgrade
If you look at the benchmarks, pnpm wins in almost every common scenario. Running install with a warm cache, lock file, and existing node modules? Faster than npm. A clean install with nothing cached? 7 seconds versus 30 seconds. That’s not a marginal improvement.
Speed isn’t even the best part. pnpm uses hard links from a centralized store instead of copying packages into every project’s node_modules. Depending on the size of your projects, you can save up to 70% of your disk space compared to npm. If you’re working on multiple JavaScript projects (and who isn’t?), that adds up fast.
pnpm also handles the node_modules structure in a way that’s strict by default, which means your code can’t accidentally import packages you haven’t explicitly declared as dependencies. It catches bugs that npm would let slide.
So pnpm is the clear winner, right? Well, there’s one more contender we haven’t talked about yet.
Bun: The Speed Demon
Bun was released in 2023, and it’s noticeably faster than pnpm.
The reason comes down to architecture. pnpm is written in TypeScript and runs on Node, which means every time you run pnpm install, your computer has to start the V8 engine, load all the JavaScript, compile it, and then ask the operating system to do the actual work. That’s a lot of overhead.
Bun is a compiled binary written in Zig. It talks directly to your kernel: no middleman, no V8 engine slowing down every tiny decision. On top of that, Bun is hyper-focused on optimizing system calls. Instead of doing file operations one at a time (open file A, write file A, close file A, repeat a thousand times), it aggressively batches them together. The result is speed improvements not just in disk operations but in everything it does.
Earlier versions of Bun had an annoying quirk similar to Yarn, it used a binary lock file that was difficult to manually audit. That’s been fixed. Bun now uses a readable lock file, which removes the biggest objection people had.
Which raises the question…
So Why Isn’t Everyone Using Bun?
The short answer: it’s complicated. Bun isn’t just a package manager, it also replaces Node as your runtime. If you’re using Bun as your runtime, using it as your package manager makes total sense. Everything fits together.
But most teams are still on Node. And when you’re on Node, pnpm is the clearer choice for everyone involved. A new developer joining your team sees pnpm and immediately knows, “Oh, this is a JavaScript project, I know how this works.” Bun as a package manager on top of Node adds a layer of “wait, why are we using this?” that you have to explain.
Maybe that changes in the future as Bun’s runtime adoption grows. I’m sure the Bun team is working hard to make that transition as smooth as possible. But the reality right now is that most JavaScript projects are running on Node.
My Recommendation
If you’re starting a new project or looking to switch:
- Using Bun as your runtime? Use Bun for package management too. It’s the fastest option and everything integrates cleanly.
- On Node (most of us)? Use pnpm. It’s faster than npm, saves disk space, and is strict in ways that catch real bugs. Your team will thank you.
- Still on npm? You’re not doing anything wrong, but you’re leaving performance and disk space on the table for no real benefit.
- On Yarn PnP? I have questions, but I respect your commitment.
The JavaScript ecosystem moves fast, and if you haven’t revisited your package manager choice in a while, it might be worth running a quick benchmark on your own project. The numbers might surprise you.
/ Tools / Development / javascript
-
JavaScript Still Doesn't Have Types (And That's Probably Fine)
Here’s the thing about JavaScript and types: it doesn’t have them, and it probably won’t any time soon.
Back in 2022, there was a proposal to add TypeScript-like type syntax directly to JavaScript. The idea was being able to write type annotations without needing a separate compilation step. But the proposal stalled because the JavaScript community couldn’t reach consensus on implementation details.
The core concern? Performance. JavaScript is designed to be lightweight and fast, running everywhere from browsers to servers to IoT devices. Adding a type system directly into the language could slow things down, and that’s a tradeoff many aren’t willing to make.
So the industry has essentially accepted that if you want types in JavaScript, you use TypeScript. And honestly? That’s fine.
TypeScript: JavaScript’s Type System
TypeScript has become the de facto standard for typed JavaScript development. Here’s what it looks like:
```typescript
// TypeScript Example
let name: string = "John";
let age: number = 30;
let isStudent: boolean = false;

// Function with type annotations
function greet(name: string): string {
  return `Hello, ${name}!`;
}

// Array with type annotation
let numbers: number[] = [1, 2, 3];

// Object with type annotation
let person: { name: string; age: number } = { name: "Alice", age: 25 };
```
TypeScript compiles down to plain JavaScript, so you get the benefits of static type checking during development without any runtime overhead. The types literally disappear when your code runs.
The Python Parallel
You might be interested to know that the closest parallel to this JavaScript/TypeScript situation is actually Python.
Modern Python has types, but they’re not enforced by the language itself. Instead, you use third-party tools like mypy for static analysis and pydantic for runtime validation. There’s actually a whole ecosystem of libraries supporting types in Python in various ways, which can get a bit confusing.
Here’s how Python’s type annotations look:
```python
# Python Example
name: str = "John"
age: int = 30
is_student: bool = False

# Function with type annotations
def greet(name: str) -> str:
    return f"Hello, {name}!"

# List with type annotation
numbers: list[int] = [1, 2, 3]

# Dictionary with mixed value types
person: dict[str, str | int] = {"name": "Alice", "age": 25}
```
Look familiar? The syntax is surprisingly similar to TypeScript. Both languages treat types as annotations that help developers and tools understand the code, but neither strictly enforces them at runtime (unless you add additional tooling).
What This Means for You
If you’re writing JavaScript, stop, and use TypeScript. It’s mature and widely adopted, and these days you can even run TypeScript directly in runtimes like Bun and Deno.
Type systems were originally omitted from many of these languages because the creators wanted to establish a low barrier to entry, making it significantly easier for people to adopt the language.
Additionally, computers at the time were much slower, and compiling code with rigorous type systems took a long time, so creators prioritized the speed of the development loop over strict safety.
However, with the power of modern computers, compilation speed is no longer a concern. Furthermore, the type systems themselves have improved significantly in efficiency and design.
Since performance is no longer an issue, the industry has shifted back toward using types to gain better structure and safety without the historical downsides.
/ Programming / Python / javascript / Typescript
-
jQuery 4.0 was released on the 17th, and it drops IE 10 support. IE 10 was first deprecated by Microsoft in January 2016 and fully retired in 2020. You might wonder, “What are they doing still supporting Internet Explorer?” They did say they’ll fully remove IE support in version 5.
-
Headless components for Svelte - flexible, unstyled, and accessible primitives that provide the foundation for building your own high-quality component library.
/ links / UIkit / javascript / svelte