Development
-
Why Ghostty Is My Terminal for Agentic Work
I love Ghostty for agentic work (mostly Claude Code). It doesn’t try to bake in its own agentic coding environment. It’s completely unopinionated about how you use it. It is exactly what I want from a terminal.
It’s open source, primarily made by one person, Mitchell Hashimoto, who doesn’t ask you for any money. No outside investment, no employees. Just a really solid (I think the best?!) terminal emulator.
Sometimes I do wish it had slightly better navigation, and that system notifications were easier to figure out, but this is minor stuff and not a blocker for me being productive or enjoying the work.
The Warp’ed De-Tour
I used to use Warp before Ghostty. I’ll still open it occasionally to see what they’re working on. Warp has some interesting ideas: they’re trying to replace your IDE and become your entire agentic development environment. The problem is they now seem to have too many features for general use. I think this approach will turn off both the IDE crowd and the Neovim crowd simultaneously. So, I keep going back to Ghostty.
We now have a new contender.
Enter Cmux
Cmux is a newer option that actually solves those two minor problems I had with Ghostty. It has better navigation with side tabs, and notifications work out of the box. It’s open source and free to use, and it’s built on Ghostty under the hood, so the core terminal experience is solid.
There’s a small AI company behind it. It looks like their Y Combinator batch was in 2024, and they’re trying to build some kind of product on top of Cmux, possibly memory-related. Though with Claude Code getting better at memory and plenty of free memory frameworks already out there, I’m not sure where that is headed. This Cmux project could be the start of a pivot.
The repo is kind of a mess; they have their website mixed in with the application code. And they offer something called a “Founder’s Edition” for $30/month… which is hard to justify when Warp is $20/month, Zed is $10/month, and Cursor is $20/month.
However, it’s optional, and the free version of Cmux is really good right now; I’m doubtful it exists in five or ten years, though. My guess is their exit strategy is to get acquired by a model provider, given that they’ve taken investment.
I am having fun with Cmux, so check it out if you haven’t yet!
/ Tools / Development / Terminal
-
Claude Code Skills vs Plugins: What's the Difference?
If you’ve been building with Claude Code, you’ve probably seen the terms “skill,” “plugin,” and “agent” thrown around. They’re related but distinct concepts, and understanding the difference will help you build better tooling. Let’s focus on skills versus plugins since those two are the most closely related.
Skills: Reusable Slash Commands
Skills are user-invocable slash commands, essentially reusable prompts that run directly in your main conversation. You trigger them with `/skill-name` and they execute inline. They can be workflows or common tasks that are done frequently.

Skills can live inside your `.claude/skills/` folder, or they can live inside a plugin (where they’re called “commands” instead). Same concept, different home.

The important frontmatter you should pay attention to is the `allowed-tools` property. This defines which tool calls the skill can access, and there are three formats you can use:

- Comma-separated names — `Bash, Read, Grep`
- Comma-separated with filters — `Bash(gh pr view:*), Bash(gh pr diff:*)`
- JSON array — `["Bash", "Glob", "Grep"]`
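For illustration, here’s what that frontmatter might look like in a skill file using the filtered format. The skill name and description are made up; check the Claude Code docs for the exact set of supported fields:

```yaml
---
name: pr-summary
description: Summarize the open pull request for the current branch
# Filtered format: only these gh subcommands are allowed
allowed-tools: Bash(gh pr view:*), Bash(gh pr diff:*), Read, Grep
---
```

The filters mean the skill can run `gh pr view` and `gh pr diff`, but any other Bash command gets blocked.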
I don’t think there’s a meaningful speed difference between them. The filtered format might take slightly longer to parse if you have a huge list, but in practice it’s negligible. Pick whichever is most readable for your use case.
The real power here is that skills can define tool calls and launch subagents. That turns a simple slash command into something that can orchestrate complex workflows.
Plugins: The Full Package
A plugin is a bigger container. It can bundle commands (skills), agents, hooks, and MCP servers together as a single distributable unit. Every plugin needs a `.claude-plugin/plugin.json` file, which is just a name, description, and author.

Plugins are a good way to bundle agents with skills. If your workflow needs a specialized agent that gets triggered by a slash command, a plugin is a good option for that.
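As a sketch, a minimal `plugin.json` could look like this. The name and author are placeholders, and the full schema may support more fields than shown here:

```json
{
  "name": "review-toolkit",
  "description": "Code review commands, a reviewer agent, and hooks",
  "author": { "name": "Your Name" }
}
```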
Pushing the Boundaries of Standalone Skills
However, I wanted to experiment with what’s actually possible using standalone skills, so I built upkeep. It turns out that you can bundle actual compiled binaries inside a skill directory and call them from the skill. That opens up a lot of possibilities.
Here’s how I did it:
- The skill has a prerequisite section that checks for a `bin/` folder containing the binary
- A workflow calls the binary, passing in the commands to run
- Each step defines what we expect back from the binary
You can see the full implementation in the SKILL.md file. It’s a pattern that lets you distribute real functionality, not just prompts, through the skill.
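As a rough sketch of the pattern, the skill’s markdown might read something like the following. The binary name and commands are illustrative, not upkeep’s actual layout; see its SKILL.md for the real thing:

```markdown
## Prerequisites

Check that `bin/upkeep` exists and is executable. If it is missing,
stop and tell the user to reinstall the skill.

## Workflow

1. Run `bin/upkeep scan` and capture its output.
2. For each finding, run `bin/upkeep fix <id>`.
3. Re-run `bin/upkeep scan` and confirm the finding is gone.
```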
Quick Summary
- Skills are slash commands. Reusable prompts with tool access that run in your conversation.
- Plugins bundle skills, agents, hooks, and MCP servers together with a `plugin.json`.
- Skills are more flexible than you might expect: you can call subagents, distribute binaries, and build real workflows.
If you’re just getting started, skills are the easier entry point. When you need to package multiple pieces together or distribute agents alongside commands, that’s when you reach for a plugin.
Have fun building!
/ AI / Development / Claude-code
-
You move fast. Cloud development cycles do not.
Mixing and matching optimization strategies won’t fix your slow development loop. LocalStack streamlines your feedback loop, bringing the cloud directly to your laptop. Same production behavior. Faster feedback. Fully under your control.
/ DevOps / Development / links / localcloud / docker
-
Serverless and Edge Computing: A Practical Guide
Serverless and edge computing have transformed how we deploy and scale web applications. Instead of managing servers, you write functions that automatically scale from zero to millions of users.
Edge computing takes this further by running code geographically close to users for minimal latency. Let’s break down how these technologies work and when you’d actually want to use them.
What is Serverless?
Serverless doesn’t mean “no servers”; it means you don’t manage them. The provider handles infrastructure, scaling, and maintenance. You just write “functions”.
The functions are stateless, auto-scaling, and you only pay for execution time. So what’s the tradeoff? Well, there are several, but the first is cold starts: the first request after idle time is slower because the container needs to spin up.
The serverless Platforms as a Service are also sticky, preventing you from easily moving to another platform.
They are stateless, meaning each invocation is independent and retains nothing from previous invocations.
In some cases, they don’t run Node, so they behave quite differently from your local environment, which complicates development and testing.
Each request is handled by a NEW function invocation, so if your site gets a lot of traffic and makes a lot of requests, you’ll end up with expensive hosting bills or playing the pauper on social media.
Traditional vs. Serverless vs. Edge
Think of it this way:
- Traditional servers are always running, always costing you money, and you handle all the scaling yourself. Great for predictable, high-traffic workloads. Lots of different options for hosting and scaling.
- Serverless (AWS Lambda, Vercel Functions, GCP Functions) spins up containers on demand and kills them when idle. Auto-scales from zero to infinity. Cold starts around 100-500ms.
- Edge (Cloudflare Workers, Vercel Edge) uses V8 Isolates instead of containers, running your code in 200+ locations worldwide. Cold starts under 1ms.
Cost Projections
Here’s how the costs break down at different scales:
| Requests / Month | ~RPS (Avg) | AWS Lambda | Cloudflare Workers | VPS / K8s Cluster | Winner |
|---|---|---|---|---|---|
| 1 Million | 0.4 | $0.00 (Free Tier) | $0.00 (Free Tier) | $40–$60 (Min HA Setup) | Serverless |
| 10 Million | 4.0 | ~$12 | ~$5 | $40–$100 | Serverless |
| 100 Million | 40 | ~$120 | ~$35 | $80–$150 | Tie / Workers |
| 500 Million | 200 | ~$600 | ~$155 | $150–$300 | VPS / Workers |
| 1 Billion | 400 | ~$1,200+ | ~$305 | $200–$400 | VPS / EC2 |

The Hub and Spoke Pattern
Also called the citadel pattern, this is where serverless and traditional infrastructure stop competing and start complementing each other. The idea is simple: keep a central hub (your main application running on containers or a VPS) and offload specific tasks to serverless “spokes” at the edge.
Your core API, database connections, and stateful logic stay on traditional infrastructure where they belong. But image resizing, auth token validation, A/B testing, geo-routing and rate limiting all move to edge functions that run close to the user.
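As a minimal sketch of the spoke side, here’s what the routing decision might look like in a hypothetical edge handler. This isn’t tied to any provider’s API; the paths and country codes are made up for illustration:

```typescript
// Decide at the edge whether a request can be handled locally
// or must be forwarded to the central hub (the "citadel").
type EdgeRequest = { path: string; country: string };

// Cheap, stateless work stays at the edge.
const EDGE_PATHS = ["/img/", "/ab-test", "/health"];
// Placeholder country code for geo-blocking.
const BLOCKED_COUNTRIES = new Set(["XX"]);

function routeAtEdge(req: EdgeRequest): "edge" | "hub" | "blocked" {
  // Geo-blocking happens before the request ever reaches your origin.
  if (BLOCKED_COUNTRIES.has(req.country)) return "blocked";
  // Stateless tasks (image resizing, A/B bucketing) run at the edge.
  if (EDGE_PATHS.some((p) => req.path.startsWith(p))) return "edge";
  // Stateful, database-backed work goes to the central app.
  return "hub";
}
```

Everything that returns `"hub"` gets proxied to the origin; the edge only ever touches the stateless paths.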
When to Use Serverless
- Unpredictable or spiky traffic — APIs that go from 0 to 10,000 requests in minutes (webhooks, event-driven workflows)
- Lightweight, stateless tasks — image processing, PDF generation, sending emails, data transformation
- Low-traffic side projects — anything that sits idle most of the time where you don’t want to pay for an always-on server… and you don’t know how to set up a Coolify server.
- Edge logic — geolocation routing, header manipulation, request validation before it hits your origin
When to Use Containers / VPS
- Sustained high traffic — once you’re consistently above ~100M requests/month, a VPS is cheaper (see the table above)
- Stateful workloads — WebSocket connections, long-running processes, anything that needs to hold state between requests
- Database-heavy applications — connection pooling and persistent connections don’t play well with serverless cold starts
- Complex applications — monoliths or microservices that need shared memory, background workers, or cron jobs
The Hybrid Approach
The best architectures often use both. The right split depends on your use case, your team, your budget, and the complexity of your application.
Knowing the tradeoffs is the difference between a seasoned developer and a junior. It’s important that you make the right decisions based on your needs and constraints.
Good luck and godspeed!
/ DevOps / Development / Serverless / Cloud
-
Ditch Jest for Vitest: A Ready-Made Migration Prompt
If you’ve ever sat there watching Jest crawl through your TypeScript test suite, you know pain. I mean, I know your pain.
Switch to Vitest, and the speed difference is genuinely dramatic. The reasons Jest is slow are easy to figure out; there are plenty of explanations out there, so I’ll leave it to you to go look them up.
I put together a prompt you can hand to Claude (or any AI assistant) that will handle the migration for you. Let me know how it goes!
The Migration Prompt
Convert all Jest tests in this project to Vitest. Here's what to do:

## Setup

1. Remove Jest dependencies (`jest`, `ts-jest`, `@types/jest`, `babel-jest`, any jest presets)
2. Install Vitest: `pnpm add -D vitest`
3. Remove `jest.config.*` files
4. Add a `test` section to `vite.config.ts` (or create `vitest.config.ts` if no Vite config exists):

```ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    globals: true,
  },
})
```

5. Update the `test` script in `package.json` to `vitest`

## Test File Migration

For every test file:

1. Replace imports — Remove any `import ... from '@jest/globals'`. If `globals: true` is set, no imports needed. Otherwise add:

```ts
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'
```

2. Replace `jest` with `vi` everywhere:
   - `jest.fn()` → `vi.fn()`
   - `jest.mock()` → `vi.mock()`
   - `jest.spyOn()` → `vi.spyOn()`
   - `jest.useFakeTimers()` → `vi.useFakeTimers()`
   - `jest.useRealTimers()` → `vi.useRealTimers()`
   - `jest.advanceTimersByTime()` → `vi.advanceTimersByTime()`
   - `jest.clearAllMocks()` → `vi.clearAllMocks()`
   - `jest.resetAllMocks()` → `vi.resetAllMocks()`
   - `jest.restoreAllMocks()` → `vi.restoreAllMocks()`
   - `jest.requireActual()` → `vi.importActual()` (note: async in Vitest)
3. Fix mock hoisting — `vi.mock()` is hoisted automatically, but variables used in the mock factory must be prefixed with `vi` or declared inside the factory.
4. Fix `jest.requireActual` — This becomes `vi.importActual()` and returns a Promise:

```ts
// Jest
jest.mock('./utils', () => ({
  ...jest.requireActual('./utils'),
  fetchData: jest.fn(),
}))

// Vitest
vi.mock('./utils', async () => ({
  ...(await vi.importActual('./utils')),
  fetchData: vi.fn(),
}))
```

5. Snapshot tests work the same way. No changes needed.
6. Timer mocks — same API after the jest → vi rename.
7. Module mocks with `__mocks__` directories work identically.

## TypeScript Config

Add Vitest types to `tsconfig.json`:

```json
{
  "compilerOptions": {
    "types": ["vitest/globals"]
  }
}
```

## After Migration

1. Delete leftover Jest config files
2. Update CI config to use `vitest run`
3. Run tests and fix any remaining failures
If you’re still on Jest and your TypeScript test suite is dragging, give Vitest a shot. The migration is low-risk, the speed improvements are real, and with the prompt above, the hardest parts are handled. ☕
/ Development / Testing / Typescript / Vitest / Jest
-
Polar — Monetize your software with ease | Polar
Monetize your software with ease
/ Tools / Development / links / platform / creators
-
Choosing the Right Package Manager: npm vs Yarn vs pnpm vs Bun
npm, Yarn, pnpm, and Bun each take a different approach to managing JavaScript dependencies. Compare their speed, disk usage, and deployment impact to find the right fit for your project.
/ Development / links / javascript / Performance
-
JavaScript Package Managers: NPM, Yarn, PNPM, and Bun Compared
If you’ve been writing JavaScript for any length of time, you’ve probably had opinions about package managers. Everyone has used npm because it’s the default. Maybe you switched to Yarn back in 2016 and haven’t looked back. These days, there are better options.
That may seem like a bold statement, but bear with me. This article is a mix of opinions and facts. The package manager landscape has changed quite a bit in the last decade, and it’s worth exploring. Let’s break it down.
NPM: The Default Everyone Knows
npm is the package manager that ships with Node. It works. Everyone knows it. Node modules are straightforward to reason about, and security has been improving over the years.
But npm has historically struggled with performance. That’s partly a design problem: it was so widely adopted that making fundamental speed improvements meant risking breakage for the massive ecosystem already depending on it. When you’re supporting millions of packages, you need to be careful about backward compatibility breaks, which makes optimization a lot harder.
This performance gap is exactly what opened the door for alternatives.
Yarn: The Pioneer That Over-Optimized
Yarn showed up in 2016, created by Facebook, and it genuinely pushed the ecosystem forward. It parallelized downloads, introduced offline caching, and most notably, introduced lock files to JavaScript. npm eventually adopted lock files too, so Yarn’s influence on the broader ecosystem is undeniable.
Lock files did exist for other languages before 2016, such as Ruby and PHP, but Yarn was the first JavaScript package manager to include them.
The problem came with Yarn 2. It’s a classic case of over-optimization.
Yarn 2 introduced Plug’n’Play mode, which replaces your `node_modules` folder with zip files. We’re on Yarn 4 now, and while you can swap between modes, if you’re in the zip mode it becomes genuinely painful to inspect the actual JavaScript code you’re installing. You have to unzip things, dig through archives, and it just adds friction where there shouldn’t be any.

If you enjoy making JavaScript development harder than it needs to be, Yarn’s PnP mode has you covered. It’s your toxic coworker’s favorite tool.
PNPM: The Clear Upgrade
If you look at the benchmarks, pnpm wins in almost every common scenario. Running install with a warm cache, lock file, and existing node modules? Faster than npm. A clean install with nothing cached? 7 seconds versus 30 seconds. That’s not a marginal improvement.
Speed isn’t even the best part. pnpm uses hard links from a centralized store instead of copying packages into every project’s `node_modules`. Depending on the size of your projects, you can save up to 70% of your disk space compared to npm. If you’re working on multiple JavaScript projects (and who isn’t?), that adds up fast.

pnpm also handles the `node_modules` structure in a way that’s strict by default, which means your code can’t accidentally import packages you haven’t explicitly declared as dependencies. It catches bugs that npm would let slide.
So pnpm is the clear winner, right? Well, there’s one more contender we haven’t talked about yet.
Bun: The Speed Demon
Bun was released in 2023, and it’s noticeably faster than pnpm.
The reason comes down to architecture. pnpm is written in TypeScript and runs on Node, which means every time you run `pnpm install`, your computer has to start the V8 engine, load all the JavaScript, compile it, and then ask the operating system to do the actual work. That’s a lot of overhead.

Bun is a compiled binary written in Zig. It talks directly to your kernel, no middleman, no V8 engine slowing down every tiny decision. On top of that, Bun is hyper-focused on optimizing system calls. Instead of doing file operations one at a time (open file A, write file A, close file A, repeat a thousand times), it aggressively batches them together. The result is speed improvements not just in disk operations but in everything it does.
Earlier versions of Bun had an annoying quirk similar to Yarn, it used a binary lock file that was difficult to manually audit. That’s been fixed. Bun now uses a readable lock file, which removes the biggest objection people had.
Which begs the question…
So Why Isn’t Everyone Using Bun?
The short answer: it’s complicated. Bun isn’t just a package manager, it also replaces Node as your runtime. If you’re using Bun as your runtime, using it as your package manager makes total sense. Everything fits together.
But most teams are still on Node. And when you’re on Node, pnpm is the clearer choice for everyone involved. A new developer joining your team sees pnpm and immediately knows, “Oh, this is a JavaScript project, I know how this works.” Bun as a package manager on top of Node adds a layer of “wait, why are we using this?” that you have to explain.
Maybe that changes in the future as Bun’s runtime adoption grows. I’m sure the Bun team is working hard to make that transition as smooth as possible. But the reality right now is that most JavaScript projects are running on Node.
My Recommendation
If you’re starting a new project or looking to switch:
- Using Bun as your runtime? Use Bun for package management too. It’s the fastest option and everything integrates cleanly.
- On Node (most of us)? Use pnpm. It’s faster than npm, saves disk space, and is strict in ways that catch real bugs. Your team will thank you.
- Still on npm? You’re not doing anything wrong, but you’re leaving performance and disk space on the table for no real benefit.
- On Yarn PnP? I have questions, but I respect your commitment.
The JavaScript ecosystem moves fast, and if you haven’t revisited your package manager choice in a while, it might be worth running a quick benchmark on your own project. The numbers might surprise you.
/ Tools / Development / javascript
-
REPL-Driven Development Is Back (Thanks to AI)
So you’ve heard of TDD. Maybe BDD. But have you heard of RDD?
REPL-driven development. I think most programmers these days don’t work this way. The closest equivalent most people are familiar with is something like Python notebooks—Jupyter or Colab.
But RDD is actually pretty old. Back in the 70s and 80s, Lisp and Smalltalk were basically built around the REPL. You’d write code, run it immediately, see the result, and iterate. The feedback loop was instant.
Then the modern era of software happened. We moved to a file-based workflow, probably stemming from Unix, C, and Java. You write source code in files. There’s often a compilation step. You run the whole thing.
The feedback loop got slower, more disconnected. Some languages we use today, like Python, Ruby, JavaScript, and PHP, include a REPL, but that’s not usually how we develop. We write files, run tests, refresh browsers.
Here’s what’s interesting: AI coding assistants are making these interactive loops relevant again.
The new RDD is natural language as a REPL.
Think about it. The traditional REPL loop was:
- Type code
- System evaluates it
- See the result
- Iterate
The AI-assisted loop is almost identical:
- Type (or speak) your intent in natural language
- AI interprets and generates code
- AI runs it and shows you the result
- Iterate
You describe what you want. The AI writes the code. It executes. You see what happened. If it’s not right, you clarify, and the loop continues.
This feels fundamentally different from the file-based workflow most of us grew up with. You’re not thinking about which file to open. You’re thinking about what you want to happen, and you’re having a conversation until it does.
Of course, this isn’t a perfect analogy. With a traditional REPL, you have more control: you understand exactly what’s being evaluated because you wrote it.
```python
>>> while True:
...     history.repeat()
```

/ AI / Programming / Development
-
I usually brainstorm spec docs using Gemini or Claude, so if you’re like me, this prompt gives interesting insight into your software decisions.
Based off our previous chats and the previous documents you've helped me with, provide a detailed summary of all my software decisions and preferences when it comes to building different types of applications.

/ AI / Development
-
Switching to mise for Local Dev Tool Management
I’ve been making some changes to how I configure my local development environment, and I wanted to share what I’ve decided on.
Let me introduce to you, mise (pronounced “meez”), a tool for managing your programming language versions.
Why Not Just Use Homebrew?
Homebrew is great for installing most things, but I don’t like using it for programming language version management. It is too brittle. How many times has `brew upgrade` decided to switch your Python or Node version on you, breaking projects in the process? Too many, in my experience.

mise solves this elegantly. It doesn’t replace Homebrew entirely; you’ll still use that for general stuff, but for managing your system programming language versions, mise is the perfect tool.
mise the Great, mise the Mighty
mise has all the features you’d expect from a version manager, plus some nice extras:
Shims support: If you want shims in your bash or zsh, mise has you covered. You’ll need to update your RC file to get them working, but once you do, you’re off to the races.
Per-project configuration: mise can work at the application directory level. You set up a `mise.toml` file that defines its behavior for that specific project.

Environment management: You can set up environment variables directly in the toml file, auto-configure your package manager, and even have it auto-create a virtual environment.
It can also load environment variables from a separate file if you’d rather not put them in the toml (which you probably want if you’re checking the file in).
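If I remember the syntax right (double-check the mise docs to be sure), pointing mise at a separate env file looks something like this:

```toml
[env]
# Load variables from a git-ignored .env file instead of inlining them
_.file = ".env"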
It’s not a package manager: This is important. You still need poetry or uv for Python package management. As a reminder: don’t ever use pip. Just don’t.
A Quick Example
Here’s what a `.mise.toml` file looks like for a Python project:

```toml
[tools]
python = "3.12.1"
"aqua:astral-sh/uv" = "latest"

[env]
# uv respects this for venv location
UV_PROJECT_ENVIRONMENT = ".venv"
_.python.venv = { path = ".venv", create = true }
```

Pretty clean, right? This tells mise to use Python 3.12.1, install the latest version of uv, and automatically create a virtual environment in `.venv`.

Note on Poetry Support
I had to have mise install Python from source to get Poetry working; there’s some problem with the precompiled binaries it uses. You’ll want to leave that compile-from-source setting enabled.
You can install global python packages, like poetry, with the following command:
```shell
mise use --global poetry@latest
```

Yes, It’s Written in Rust
The programming veterans among you may have noticed the toml configuration format and thought, “Ah, must be a Rust project.” And you’d be right. mise is written in Rust, which means it’s fast! The project is stable, has a ton of GitHub stars, and is actively maintained.
Task Runner Built-In
One feature I wasn’t expecting: mise has a built-in task runner. You can define tasks right in your `mise.toml`:

```toml
[tasks."venv:info"]
description = "Show Poetry virtualenv info"
run = "poetry env info"

[tasks.test]
description = "Run tests"
run = "poetry run pytest"
```

Then run them with `mise run test` or `mise r venv:info`.

If you’ve been putting off setting up Make for a project, this is a compelling alternative. The syntax is cleaner and you get descriptions for free.
I’ll probably keep using Just for more complex build and release workflows, but for simple project tasks, mise handles it nicely. One less tool to install.
My Experience So Far
I literally just switched everything over today, and it was a smooth process. Nothing too major so far. I’ll report back if anything breaks, but the migration from my previous setup was straightforward.
Now I need to get the other languages I use, like Go, Rust, and PHP, set up and moved over to mise. Having everything consolidated into one tool is going to be so nice.
If you’re tired of Homebrew breaking your language versions or juggling multiple version managers for different languages, give mise a try.
The documentation is solid, and the learning curve is minimal.
/ DevOps / Tools / Development / Python
-
Welcome to Kev Quirk’s little corner of the internet. Here you can find out lots of information about me, how to contact me and, of course, my blog.
/ Writing / Development / links / personal / php
-
/ Programming / Development / links / handbook / references
-
The Rise of Spec-Driven Development: A Guide to Building with AI
Spec-driven development isn’t new. It has its own Wikipedia page and has been around longer than you might realize.
With the explosion of AI coding assistants, this approach has found new life and we now have a growing ecosystem of tools to support it.
The core idea is simple: instead of telling an AI “hey, build me a thing that does the boops and the beeps” then hoping it reads your mind, you front-load the thinking.
It’s kinda obvious, with it being in the name, but in case you’re wondering, let’s walk through it.
The Spec-Driven Workflow
Here’s how it typically works:
1. Specify: Start with requirements. What do you want? How should it behave? What are the constraints?
2. Plan: Map out the technical approach. What’s the architecture? What “stack” will you use?
3. Task: Break the plan into atomic, actionable pieces. Create a dependency tree—this must happen before that. Define the order of operations. This is often done by the tool.
4. Implement: You work with whatever tool to build the software from your task list. The human is (or should be) responsible for deciding when a task is completed.
You are still a part of the process. It’s up to you to make the decisions at the beginning. It’s up to you to define the approach. And it’s up to you to decide you’re done.
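To make the workflow above concrete, a bare-bones spec might look like this. The structure, headings, and feature are my own sketch, not any particular framework’s format:

```markdown
# Spec: CSV Export

## Requirements
- Users can export their invoices as CSV from the billing page.
- Exports are capped at 10,000 rows; larger requests return an error.

## Plan
- Add an invoice-export endpoint that streams CSV.
- Reuse the existing invoice query layer; no new tables.

## Tasks
1. Endpoint skeleton with auth check (blocks: 2, 3)
2. Streaming CSV serializer
3. Row-limit guard and error response
4. Frontend download button
```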
So how do you get started?
The Tool Landscape
The problem we have now is that there’s no unified standard. The tool makers are too busy building moats to take the time to agree.
Standalone Frameworks:
- Spec-Kit - GitHub’s own toolkit that makes “specifications executable.” It supports multiple AI agents through slash commands and emphasizes intent-driven development.
- BMAD Method - Positions AI agents as “expert collaborators” rather than autonomous workers. Includes 21+ specialized agents for different roles like product management and architecture.
- GSD (Get Shit Done) - A lightweight system that solves “context rot” by giving each task a fresh context window. Designed for Claude Code and similar tools.
- OpenSpec - Adds a spec layer where humans and AI agree on requirements before coding. Each feature gets its own folder with proposals, specs, designs, and task lists.
- Autospec - A CLI tool that outputs YAML instead of markdown, enabling programmatic validation between stages. Claims up to 80% reduction in API costs through session isolation.
Built Into Your IDE:
The major AI coding tools have adopted this pattern too:
- Kiro - Amazon’s new IDE with native spec support
- Cursor - Has a dedicated plan mode
- Claude Code - Plan mode for safe code analysis
- VSCode Copilot - Chat planning features
- OpenCode - Multiple modes including planning
- JetBrains Junie - JetBrains' AI assistant
- Google Antigravity - Implementation planning docs
- Gemini Conductor - Orchestration for Gemini CLI
Memory Tools
- Beads - Use it to manage your tasks. Works very well with your Agents in Claude Code.
Why This Matters
When first getting started building with AI, you might dive right in and be like “go build thing”. You then keep iterating on a task until it falls apart once you try to do anything substantial.
You end up playing a game of whack-a-mole, where you fix one thing and break another. This probably sounds familiar to a lot of you from the olden times of 2 years ago, when we puny humans did all the work. The point being: even the robots make mistakes.
Another thing that you come to realize is it’s not a mind reader. It’s a prediction engine. So be predictable.
What did we learn? With spec-driven development, you’re in charge. You are the architect. You decide. The AI handles the details and the execution, but it needs structure, and these methods are how we provide it.
/ AI / Programming / Tools / Development
-
The personal blog of Dave Rupert, web developer and podcaster from Austin, TX.
/ Writing / Dev / Development / links / Austin
-
Everyone is crashing out over the OpenCode situation. Why not just use Claude Code (2.1+)? Or, you know, there’s AMP, which looks equally interesting to me.
/ AI / Tools / Development
-
Gitea - Git with a cup of tea! Painless self-hosted all-in-one software development service, including Git hosting, code review, team collaboration, package registry and CI/CD
/ DevOps / Development / links / platform / self-hosted / code
-
How AI Tools Help Founders Code Again: My Experience with Claude Code
From intimidation to empowerment: how AI tools made modern web development accessible again for a founder who hadn’t coded in 15 years
/ Development / Claude / links / code / coding lessons
-
Claude Code Learning Hub - Master AI-Powered Development
Learn Claude Code, VS Code, Git/GitHub, Python, and R with hands-on tutorials. Build real-world projects with AI assistance.
/ Programming / Development / links / tutorials / code / coding lessons / learning
-
api2spec work will continue this weekend. We will add support for more frameworks and languages. We’re finding gaps in the implementation as we build fixtures for each framework. If you have a framework you’d like to see supported, please let me know.
/ Open-source / Api2spec / Development