Programming
-
Wrote a guide on writing a good CLAUDE.md. Takeaway: keep it under 200 lines. Every line loads into context every session, so bloat costs real tokens.
How are you handling multiple AI context files across tools?
/ Programming / Tools / Claude-code
-
When to Use Python Over Bash
When to use Python over Bash is really a question of when to use Bash. Python is a general-purpose language that can handle just about anything you throw at it. Bash, on the other hand, has a very specific sweet spot. Once you understand that sweet spot, the decision makes itself.
What Bash Actually Is
Bash is an interactive command interpreter and scripting language, created in 1989 for the GNU project as a free software alternative to the Bourne shell. It pulled in advanced features from the Korn shell and C shell, and it’s been commonly used by Unix and Linux systems ever since.
What makes Bash unique is its approach to data flow programming. Files, directories, and system processes are treated as first-class objects. Bash is designed to take advantage of utilities that almost always exist on Unix-based systems. So think of tools like
`awk`, `sed`, `grep`, `cat`, and `curl`. To write effective Bash scripts, you also need to understand the pipeline operator and how I/O redirection works.

A good Bash script will look something like this:
```bash
#!/bin/bash
set -euo pipefail

LOG_DIR="/var/log/myapp"
DAYS_OLD=30

find "$LOG_DIR" -name "*.log" -mtime +"$DAYS_OLD" -print0 | xargs -0 gzip -9
echo "Compressed logs older than $DAYS_OLD days"
```

Simple, portable, does one thing well. That’s Bash at its best.
Where Bash Falls Short
Bash isn’t typed. There’s no real object orientation. Error handling is basically `set -e` and hoping for the best. There’s no `try`/`catch`, no structured exception handling. When things go wrong in a Bash script, they tend to go wrong quietly or spectacularly, with not much in between.

Python, by contrast, is strongly typed (with optional static type hints) and object-oriented. If you want to manipulate a file or a system process in Python, you wrap that system entity inside a Python object. That adds some overhead, sure, but in exchange you get something that’s more predictable, more secure, and scales well from simple scripts to complex logic.
Here’s that same log compression task in Python:
```python
from pathlib import Path
from datetime import datetime, timedelta
import gzip
import shutil

log_dir = Path("/var/log/myapp")
cutoff = datetime.now() - timedelta(days=30)

for log_file in log_dir.glob("*.log"):
    if datetime.fromtimestamp(log_file.stat().st_mtime) < cutoff:
        with open(log_file, "rb") as f_in:
            with gzip.open(f"{log_file}.gz", "wb") as f_out:
                shutil.copyfileobj(f_in, f_out)
        log_file.unlink()
```

More verbose? Absolutely. But also more explicit about what’s happening, easier to extend, and much easier to add error handling to.
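To make that last point concrete, here’s a hedged sketch (my illustration, not part of the original script) of the same cleanup with per-file error handling added — something Bash has no clean equivalent for:

```python
import gzip
import shutil
from datetime import datetime, timedelta
from pathlib import Path

def compress_old_logs(log_dir: Path, days: int = 30) -> list[str]:
    """Compress *.log files older than `days`; report failures instead of dying."""
    cutoff = datetime.now() - timedelta(days=days)
    errors = []
    for log_file in log_dir.glob("*.log"):
        try:
            if datetime.fromtimestamp(log_file.stat().st_mtime) >= cutoff:
                continue
            with open(log_file, "rb") as f_in, gzip.open(f"{log_file}.gz", "wb") as f_out:
                shutil.copyfileobj(f_in, f_out)
            log_file.unlink()
        except OSError as exc:
            # One unreadable file no longer kills the whole run.
            errors.append(f"{log_file}: {exc}")
    return errors
```

One bad file in the Bash version (a permissions error halfway through the `xargs`) either aborts everything or gets silently skipped; here you decide.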
The Performance Question
In some cases, performance genuinely matters. Think high-frequency trading platforms, edge devices, or massive clusters. Bash scripts excel here because there’s almost zero startup overhead. Compare that to Python, which needs to load up the interpreter before it can start executing code. You’re going from microseconds to milliseconds, and sometimes that matters.
But startup time is just one factor. When you compare the actual work being done, Python can pull ahead. String manipulation on structured data? Python wins. Parsing JSON, YAML, or any structured format? Python’s core libraries are written in C and optimized for exactly this kind of work. If you find yourself reaching for
`jq` or `yq` in a Bash script, that’s a strong signal you should be using Python instead.

The Guidelines People Throw Around
You’ll see a common guideline online: if your script exceeds 100 lines of Bash, rewrite it in Python. But a lot of veterans in the industry feel like that cutoff is way too generous. Experienced engineers often put it at 50 lines, or even 25.
Another solid indicator: nested
`if` statements. Some people say “deeply nested” `if` statements, but let’s be honest, more than one level of nesting in Bash is already getting painful. Python handles complex branching logic far more gracefully, and you’ll thank yourself when you come back to maintain it six months later.

Unit Testing Tells the Story
You can do unit testing with Bash. BATS (Bash Automated Testing System) exists, and ShellCheck is useful as a lightweight linter for catching bad practices. But despite these tools, Python’s testing ecosystem is on another level entirely. It’s fully mature with multiple frameworks, excellent mocking capabilities, and the ability to simulate network calls, external APIs, or system binaries. Complex mocking that would be difficult or impossible in Bash is straightforward in Python.
If your script needs solid testing or if it’s doing anything important, that’s a strong vote for Python.
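As a hedged sketch of what that mocking looks like in practice (the function here is a made-up example, not from the scripts above), this is the kind of test that is routine in Python and genuinely hard in BATS — swapping out a system binary entirely:

```python
import subprocess
from unittest import mock

def disk_usage_percent(path: str = "/") -> int:
    """Return the disk usage percentage reported by `df` (GNU coreutils)."""
    out = subprocess.run(
        ["df", "--output=pcent", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(out.splitlines()[-1].strip().rstrip("%"))

def test_disk_usage_percent():
    # Simulate df's output without ever touching the real filesystem.
    fake = subprocess.CompletedProcess(args=[], returncode=0, stdout="Use%\n 42%\n")
    with mock.patch("subprocess.run", return_value=fake):
        assert disk_usage_percent() == 42

test_disk_usage_percent()
```

The test runs anywhere, instantly, with no real disk involved — that’s the ecosystem gap in one example.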
Bash’s Biggest Win: Portability
So what does Bash actually win at? Portability. When you think about all the dependencies Python needs to run, Bash is the clear winner. You’re distributing a single
`.sh` file. That’s it.

With Python, you have to ask: Does Python exist on this machine? Is it the right version? You’ll need a virtual environment so you don’t pollute system Python. You need third-party libraries installed via a package manager; and please, friends don’t let friends use pip. Use Poetry or uv. Pip is so bad that I’d honestly argue that Bash not having a package manager is better than Python having pip. At least Bash doesn’t pretend to manage dependencies well.
If you want something simple, something that can run on practically any Unix-based machine without setup, Bash is your answer. Even Windows can handle it these days through WSL, though you’re jumping through a few hoops.
TLDR
The decision is actually pretty straightforward:
- Use Bash when you’re gluing together system commands, the logic is linear, it’s under 50 lines, and portability matters.
- Use Python when you’re parsing structured data, need error handling, have branching logic, want proper tests, or the script is going to grow.
If you’re reaching for
`jq`, writing nested `if` statements, or the script is getting long enough that you’re losing track of what it does… it’s time for Python.

I think in a future post we might look at when Go makes sense over Bash. There’s a lot to cover there about compiled binaries, but for now, hopefully this helps you make the call next time you’re wondering what to start your scripting with.
/ DevOps / Programming / Python / Bash / Scripting
-
Local Secrets Manager - Dotenv Encrypter
I built a thing to solve a problem. It has helped me, maybe it will help you?
It all starts with a question.
Why isn’t there a good local secrets manager that encrypts your secrets at rest? I imagine a lot of people, like me, have a number of local applications. I don’t want to pay per-seat pricing just to keep my sensitive data from sitting in plaintext on my machine.
I built an app called LSM Local Secrets Manager to solve that problem. The core idea is simple. Encrypt your
`.env` files locally and only decrypt when you need them (sometimes at runtime).

The Problem
If you’ve got a bunch of projects on your machine, each with their own
`.env` or `.env.local` file full of API keys you’re definitely not rotating every 90 days. Those files just sit there in plaintext. Any process on your system can read them. And with AI agents becoming part of our dev workflows, the attack surface for leaking secrets is only getting bigger.

ThE CLAW EnteRed ChaT
I started looking at Doppler specifically for OpenCLAW. Their main selling feature is injecting secrets into your runtime so they never touch the filesystem. I was like, cool. Also I like that Doppler stores everything remotely. The only thing was the cost did not make sense for me right now. I don’t want to pay $10-20 a month for this set of features.
So what else is there?
Well GCP Secret Manager has its own set of issues.
You can’t have duplicate names per project, so something as common as
`NODE_ENV` across multiple apps becomes more work than you want to deal with. Some wrapper script that injects prefixes? No thanks. I imagine there are a thousand and one homegrown solutions to this problem. Again, no thanks.

So what else is there?
You Find A Solution
AWS Secrets Manager
A Problem for Solution Problem
AWS IAM
🫣
I have a lot more to say here on this subject but will save this for another post. Subscribe if you want to see the next post.
The Solution
The workflow is straightforward:
- `lsm init` — Run this once from anywhere. It generates your encryption key file.
- `lsm link <app-name>` — Run this inside your project directory. It creates a config entry in `~/.lsm/config.yaml` for that application.
- `lsm import` — Takes your existing `.env` or `.env.local` and creates an encrypted version.
- `lsm clean` — Removes the plaintext `.env` files so they’re not just sitting around.
- `lsm dump` — Recreates the `.env` files if you need them back.
But wait, there’s more.
Runtime Injection with `lsm exec`

Remember that cool thing I just told you about? Instead of dumping secrets back to disk, you run:
```shell
lsm exec -- pnpm dev
```

I feel like a family man from Jersey, who don’t mess around. Aye, you got, runtime injection. I got that.
Well that’s
`lsm` anyways. It can decrypt your secrets and inject them directly into the runtime environment of whatever command follows the `--`. Your secrets exist in memory for the duration of that process and nowhere else. No plaintext files hanging around for other processes to sniff.

Credit to Doppler for the idea. The difference is that here, your encrypted files stay local.
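Conceptually, runtime injection is a small trick. Here’s a hedged Python sketch of the mechanism (not lsm’s actual source — just an illustration of the idea): decrypt into a dict in memory, merge it into the child process’s environment, and never write plaintext to disk.

```python
import os
import subprocess

def exec_with_secrets(secrets: dict[str, str], command: list[str]) -> int:
    """Run `command` with `secrets` injected into its environment."""
    # Merged copy: the child sees the secrets, the parent env is untouched.
    env = {**os.environ, **secrets}
    return subprocess.run(command, env=env).returncode
```

Something like `exec_with_secrets({"API_KEY": "..."}, ["pnpm", "dev"])` mirrors what `lsm exec -- pnpm dev` does after decryption.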
What’s Next
I’ve got some ideas for improvements to try building.
- Separate encrypt/decrypt keys — You create secrets with one key, deploy the encrypted file to a server, and use a read-only key to decrypt at runtime. The server never has write access to your secrets.
- Time-based derivative keys — Imagine keys that expire or rotate automatically.
- Secure sharing — Right now you’d have to decrypt and drop the file into a password manager to share it. There’s room to make that smoother.
I’m not sure how to do all of that yet, but we’re making progress.
Why Not Just Use Doppler?
There are genuinely compelling reasons to use Doppler or similar services. I mean, besides the remote storage, there are access controls and auditable logs. There’s a lot to love.
For local development across a bunch of personal projects? I don’t think you should need a SaaS subscription to keep your secrets encrypted.
LSM is still early, but the core workflow is there and it works.
Give it a try if you’re tired of plaintext
`.env` files scattered across your machine.
/ DevOps / Programming / Tools / security
-
Python: If your CLI tool uses `print()` and gets called as a subprocess by Claude Code, the output now gets swallowed. The parent process captures it. Structured logging will fix it.
/ Programming / Claude-code / Debugging
-
The Only Code That Defies Entropy
A developer explores the Second Law of Thermodynamics and finds that love is the only code capable of creating local order against universal entropy.
/ Programming / links / code / learning
-
How Python and Kotlin provide structured concurrency out of the box while Go achieves the same patterns explicitly using errgroup, WaitGroup, and context.
/ Programming / Golang / Python / links
-
How to Write a Good CLAUDE.md File
Every time you start a new chat session with Claude Code, it’s starting from zero knowledge about your project. It doesn’t know your tech stack, your conventions, or where anything lives. A well-written
`CLAUDE.md` file fixes that by giving Claude the context it needs before it writes a single line of code.

This is context engineering, and your
`CLAUDE.md` file is one of the most important pieces of it.

Why It Matters
Without a context file, Claude has to discover basic information about your project — what language you’re using, how the CLI works, where tests live, what your preferred patterns are. That discovery process burns tokens and time. A good
`CLAUDE.md` front-loads that knowledge so Claude can get to work immediately.

If you haven’t created one yet, you can generate a starter file with the
`/init` command. Claude will analyze your project and produce a reasonable first draft. It’s a solid starting point, but you’ll want to refine it over time.

The File Naming Problem
If you’re working on a team, people use different tools: Cursor has its own context file, OpenAI has theirs, and Google has theirs. You can easily end up with three separate context files that all contain slightly different information about the same project. That’s a maintenance headache.
It would be nice if Anthropic made the filename a configuration setting in
`settings.json`, but as of now they don’t. Some tools like Cursor do let you configure the default context file, so it’s worth checking.

My recommendation? Look at what tools people on your team are actually using and try to standardize on one file, maybe two. I’ve had good success with the symlink approach, where you pick your primary file and symlink the others to it. So if
`CLAUDE.md` is your default, you can symlink `AGENTS.md` or `GEMINI.md` to point at the same file.

It’s not perfect, but it beats maintaining three separate files with diverging information.
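In practice the setup is two commands. A sketch (with `touch` standing in for your real context file so the snippet is self-contained):

```shell
# CLAUDE.md is the single source of truth; the other tools' context
# files become symlinks pointing at it.
touch CLAUDE.md
ln -sf CLAUDE.md AGENTS.md
ln -sf CLAUDE.md GEMINI.md
```

Edit `CLAUDE.md` once and every tool sees the change.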
Keep It Short
Brevity is crucial. Your context file gets loaded into the context window every single session, so every line costs tokens. Eliminate unnecessary adjectives and adverbs. Cut the fluff.
A general rule of thumb that Anthropic recommends is to keep your
`CLAUDE.md` under 200 lines. If you’re over that, it’s time to trim.

I recently went through this exercise myself. I had a bunch of Python CLI commands documented in my context file, but most of them I rarely needed Claude to know about.
We don’t need to list every single possible command in the context file. That information is better off in a
`docs/` folder or your project’s documentation. Just add a line in your `CLAUDE.md` pointing to where that reference lives, so Claude knows where to look when it needs it.

Maintain It Regularly
A context file isn’t something you write once and forget about. Review it periodically. As your project evolves, sections become outdated or irrelevant. Remove them. If a section is only useful for a specific type of task, consider moving it out of the main file entirely.
The goal is to keep only the information that’s frequently relevant. Everything else should live somewhere Claude can find it on demand, not somewhere it has to read every single time.
Where to Put It
Something that’s easy to miss: you can put your project-level
`CLAUDE.md` in two places:

- `./CLAUDE.md` (project root)
- `./.claude/CLAUDE.md` (inside the `.claude` directory)
A common pattern is to
`.gitignore` the `.claude/` folder. So if you don’t want to check in the context file — maybe it contains personal preferences or local paths — putting it in `.claude/` is a good option.

Rules Files for Large Projects
If your context file is getting too large and you genuinely can’t cut more, you have another option: rules files. These go in the
`.claude/rules/` directory and act as supplemental context that gets loaded on demand rather than every session.
Auto Memory: The Alternative Approach
Something you might not be aware of is that Claude Code now has auto memory, where it automatically writes and maintains its own memory files. If you’re using Claude Code frequently and don’t want to manually maintain a context file, auto memory can be a good option.
The key thing to know is that you should generally use one approach or the other. If you’re relying on auto memory, delete the
`CLAUDE.md` file, and vice versa.

Auto memory is something I’ll cover in more detail in another post, but it’s worth knowing the feature exists. Just make sure you enable it in your
`settings.json` if you want to try it.

Quick Checklist
If you’re writing or revising your
`CLAUDE.md` right now, here’s what I’d focus on:

- Keep it under 200 lines — move detailed references to docs
- Include your core conventions — package manager, runtime, testing approach
- Document key architecture — how the project is structured, where things live
- Add your preferences — things Claude should always or never do
- Review monthly — cut what’s no longer relevant
- Consider symlinks — if your team uses multiple AI tools
- Use rules files — for detailed, task-specific context
That’s All For Now. 👋
/ AI / Programming / Claude-code / Developer-tools
-
aden-hive/hive: Outcome driven agent development framework that evolves
/ Programming / links / agent / platform
-
I built a hacker terminal typing game. Type Python symbols to “decrypt” corrupted code, hex becomes assembly, then pseudocode, then real source. Mess up and the screen glitches. A typing tutor for programmers that doesn’t feel like one.
/ Gaming / Programming / Python
-
What I Actually Mean When I Say "Vibe Coding"
Have you thought about what you actually mean when you tell someone you vibe code? The term gets thrown around a lot, but I think there’s a meaningful distinction worth spelling out. Here’s how I think about it.
Vibe coding, to me, doesn’t mean you’re using an LLM because at this point, it’s hard to avoid using an LLM no matter what type of work you’re doing. The distinction isn’t about the tools, it’s about the intent and the stakes.
Vibe Coding vs. Engineering Work
To me, vibe coding is when I have a simple idea and I want to build a self-contained app that expresses that idea. There’s no spec, no requirements doc, no code review. Just an idea and the energy to make it real.
Engineering work, whether that’s context engineering, spec-driven development, or the old-fashioned way, is when I’m working on open source or paid work. There’s structure, there are standards, and the code needs to hold up over time.
Both are fun. But I take vibe coding less seriously than engineering work.
Vibe coding is the creative playground. Engineering is the craft.
My Setup
All my vibe coded apps live in a single repository. The code isn’t public, but the results are. You can find all my ideas at logan.center.
I have it set up so that every time I add a new folder, it automatically gets its own subdomain.
I’m using Caddy for routing and deploying to Railway. I have an idea, create the app, and boom — it’s live.
I would like to open source a template version of this setup so other people could deploy to something like Railway easily, but I haven’t gotten around to building that yet. One day.
For my repository, I decided it would be fun to build everything in Svelte instead of React.
That’s why you may have seen a bunch of posts from me lately about learning how Svelte works. It’s been fun because the framework stays out of your way and lets you move fast, which is exactly what you want when you’re chasing the vibe.
So, for me, vibe coding is a specific thing: low stakes, high creativity, self-contained apps, and the freedom to just build without overthinking it.
I mean, I’m not crazy. I still have testing set up…
/ Programming / Vibe-coding / svelte
-
Stop Using pip. Seriously.
If you’re writing Python in 2026, I need you to pretend that pip doesn’t exist. Use Poetry or uv instead.
Hopefully you’ve read my previous post on why testing matters. If you haven’t, go read that first. Back? Hopefully you are convinced.
If you’re writing Python, you should be writing tests, and you can’t do that properly with pip. It’s an unfortunate but true state of Python right now.
In order to write tests, you need dependencies, which is how we get to the root of the issue.
The Lock File Problem
The closest thing pip has to a lock file is
`pip freeze > requirements.txt`. But it just doesn’t cut the mustard. It’s just a flat list of pinned versions.
`requirements.txt` doesn’t do any of that.

“Ok, so?” you might be asking yourself.
It means that you can’t guarantee that running
`pip install -r requirements.txt` six months or six minutes from now will give you the same copy of all your dependencies.

It’s not repeatable. It’s not deterministic. It’s not reliable.
The one constant in code is that it changes. Without a lock file, you’re rolling the dice every time.
Everyone Else Figured This Out
Every other modern language ecosystem “solved” this problem years ago:
- JavaScript has `package-lock.json` (npm) and `pnpm-lock.yaml` (pnpm)
- Rust has `Cargo.lock`
- Go has `go.sum`
- Ruby has `Gemfile.lock`
- PHP has `composer.lock`
Python’s built-in package manager just… doesn’t have this.
That’s a real problem when you’re trying to build reproducible environments, run tests in CI, or deploy with any confidence that what you tested locally is what’s running in production.
What to Use Instead
Both Poetry and uv solve the lock file problem and give you reproducible environments. They’re more alike than different — here’s what they share:
- Lock files with full dependency resolution graphs
- Separation of dev and production dependencies
- Virtual environment management
- `pyproject.toml` as the single config file
- Package building and publishing to PyPI
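That single config file looks roughly like this — a hedged sketch with illustrative names and versions (this is standard PEP 621 metadata, which uv reads directly; Poetry 2.x can read `[project]` too, though it also supports its own `[tool.poetry]` tables, and the `[dependency-groups]` table for dev dependencies comes from PEP 735):

```toml
[project]
name = "myapp"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = ["requests>=2.31"]

[dependency-groups]
dev = ["pytest>=8.0"]
```

Run the tool’s lock/sync commands against this and you get a lock file capturing the full resolution graph, not a flat pin list.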
Poetry is the more established option. It’s at version 2.3 (released January 2026), supports Python 3.10–3.14, and has been the go-to alternative to pip for years. It’s stable, well-documented, and has a large ecosystem of plugins.
uv is the newer option from Astral (the team behind Ruff). It’s written in Rust and is 10–100x faster than pip at dependency resolution. It can also manage Python versions directly, similar to mise or pyenv. It’s currently at version 0.10, so it hasn’t hit 1.0 yet, but it’s gaining adoption fast.
You can’t go wrong with either. Pick one, use it, and stop using pip.
/ DevOps / Programming / Python
-
Why Testing Matters
There is a fundamental misunderstanding about testing in software development. The dirty and not-so-secret secret of software development is that TESTING is more often than not seen as something we do after the fact, despite the best efforts of the TDD and BDD crowd.
So why is that the case and why does TESTING matter?
All questions about software decisions lead to a maintainability answer.
If you write software that is intended to be used, and I don’t care how you write it, what language, what framework or what your background is; it should be tested or it should be deleted/archived.
That sounds harsh but it’s the truth.
If you intend to run the software beyond the moment you built it, then it needs to be maintained. It could be used by someone else, or even by you at a later date; it doesn’t matter. Test it.
```python
if software.intended_to_run > once:
    testing = required
```

That’s just the reality of the craft. Here is why.
Testing Is Showing Your Work
Remember proofs in math class? Testing is the software equivalent. It’s how you show your work. It’s how you demonstrate that the thing you built actually does what you say it does, and will keep doing it tomorrow.
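As a toy illustration (the function and test here are hypothetical, not from any real project), “showing your work” looks like this:

```python
# A tiny function and the test that proves it does what it claims.
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

def test_slugify() -> None:
    assert slugify("Why Testing Matters") == "why-testing-matters"

test_slugify()  # passes silently today; fails loudly if the behavior ever drifts
```

The test is the proof: anyone (including future you) can re-run it and verify the claim still holds.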
Chances are your project has dependencies. What happens to those dependencies a month from now? Five years from now? A decade?
Code gets updated. Libraries evolve. APIs change. Testing makes sure that those future dependency updates aren’t going to cause regression issues in your application.
It’s a bet against future problems. If I write tests now, I reduce the time I spend debugging later. That’s not idealism, it’s just math.
T = Σ(B · D) - C

where B is the probability of a bug, D is the debug time, C is the cost of writing tests, and T is the time saved.
Protecting Your Team’s Work
If you’re working on a team at the ole' day job, you want to make sure that the code other people are adding isn’t breaking the stuff you’re working on, or the stuff you worked on six months ago. Add tests.
Tests give you that safety net. They’re the contract that says “this thing works, and if someone changes it in a way that breaks it, we’ll know immediately.”
Without tests, you’re essentially hoping for the best, and hope isn’t a good bet when it comes to the future of a software-based business.
Your Customers Are Not Your QA Team
Auto-deploying to production without any testing or verification process? That’s just crazy. You shouldn’t be implicitly or explicitly asking your customers to test your software. It’s not their responsibility. It’s yours.
Your job is to produce software that’s as bug-free as possible. Software that people can rely on. Reliable, maintainable software, that’s what you owe the people using what you build.
Bringing Testing to the Table
Look, I get it. Writing tests isn’t the most fun part of the job. However, a lot has changed in the past couple of years. You might have heard about this whole AI thing? With the Agents we all have available to us, we can add tests with as little as 5 words.
“Write tests on new code.”
Looking back at that formula, we can see that C, the cost of writing tests, is quickly approaching zero. It just takes a bit of time for the tests to be written, and a bit more to verify that the tests the Agent added are useful.
Don’t worry about doing everything at the start and setting up a full CI pipeline to run the tests. Just start with the five words and add the complicated bits later.
No excuses, just testing.
-
Laravel - The PHP Framework For Web Artisans
Laravel is a PHP web application framework with expressive, elegant syntax. We’ve already laid the foundation — freeing you to create without sweating the small things.
/ Programming / Webdev / links / open source / php / laravel / framework
-
Why Svelte 5 Wants `let` Instead of `const` (And Why Your Linter Is Confused)
If you’ve been working with Svelte 5 and a linter like Biome, you might have run into this situation:

use `const` instead of `let`

However, Svelte actually needs that `let`.

Have you ever wondered why this is?
Here is an attempt to explain it to you.
Svelte 5’s reactivity model is different from the rest of the TypeScript world, and some of our tooling hasn’t quite caught up yet.
The `const` Rule Everyone Knows

In standard TypeScript and React, the `prefer-const` rule is a solid best practice. If you declare a variable and never reassign it, use `const`. It communicates intent clearly: this binding won’t change. You would think the rule applies everywhere, but there is confusion when it comes to objects that are defined as `const`.

Let’s take a look at a React example:
```ts
// React — const makes perfect sense here
const [count, setCount] = useState(0);
const handleClick = () => setCount(count + 1);
```

`count` is a number (primitive). `setCount` is a function. Neither can modify itself, so it makes sense to use `const`.

Svelte 5 Plays by Different Rules
Svelte 5 introduced runes — reactive primitives like `$state()`, `$derived()`, and `$props()` that bring fine-grained reactivity directly into JavaScript. The Svelte compiler transforms these declarations into getters and setters behind the scenes.

The value does get reassigned, even if your code looks like a simple variable.
```svelte
<script lang="ts">
  let count = $state(0);
</script>

<button onclick={() => count++}>
  Clicked {count} times
</button>
```

Biome Gets This Wrong

But they are trying to make it right. Biome added experimental Svelte support in v2.3.0, but it has a significant limitation: it only analyzes the `<script>` block in isolation. It doesn’t see what happens in the template. So when Biome looks at this:
```svelte
<script lang="ts">
  let isOpen = $state(false);
</script>

<button onclick={() => isOpen = !isOpen}>
  Toggle
</button>
```

It only sees `let isOpen = $state(false)` and thinks: “this variable is never reassigned, use `const`.”

It completely misses the `isOpen = !isOpen` happening in the template markup.

If you run `biome check --write`, it will automatically change `let` to `const` and break your app.

The Biome team has acknowledged this as an explicit limitation of their partial Svelte support.

For now, the workaround is to either disable the `useConst` rule for Svelte files or add `biome-ignore` comments where needed.

What About ESLint?
The Svelte ESLint plugin community has proposed two new rules to handle this properly: `svelte/rune-prefer-let` (which recommends `let` for rune declarations) and a Svelte-aware `svelte/prefer-const` that understands reactive declarations. These would give you proper linting without the false positives.

My Take
Svelte 5’s runes are special and deserve their own way of handling variable declarations.
React hooks are different.
Svelte’s compiler rewrites your variable.
I like Svelte.
Please fix Biome.
Don’t make me use eslint.
/ Programming / svelte / Typescript / Biome / Linting
-
REPL-Driven Development Is Back (Thanks to AI)
So you’ve heard of TDD. Maybe BDD. But have you heard of RDD?
REPL-driven development. I think most programmers these days don’t work this way. The closest equivalent most people are familiar with is something like Python notebooks—Jupyter or Colab.
But RDD is actually pretty old. Back in the 70s and 80s, Lisp and Smalltalk were basically built around the REPL. You’d write code, run it immediately, see the result, and iterate. The feedback loop was instant.
Then the modern era of software happened. We moved to a file-based workflow, probably stemming from Unix, C, and Java. You write source code in files. There’s often a compilation step. You run the whole thing.
The feedback loop got slower, more disconnected. Some languages we use today like Python, Ruby, JavaScript, PHP include a REPL, but that’s not usually how we develop. We write files, run tests, refresh browsers.
Here’s what’s interesting: AI coding assistants are making these interactive loops relevant again.
The new RDD is natural language as a REPL.
Think about it. The traditional REPL loop was:
- Type code
- System evaluates it
- See the result
- Iterate
The AI-assisted loop is almost identical:
- Type (or speak) your intent in natural language
- AI interprets and generates code
- AI runs it and shows you the result
- Iterate
You describe what you want. The AI writes the code. It executes. You see what happened. If it’s not right, you clarify, and the loop continues.
This feels fundamentally different from the file-based workflow most of us grew up with. You’re not thinking about which file to open; you’re thinking about what you want to happen, and you’re having a conversation until it does.
Of course, this isn’t a perfect analogy. With a traditional REPL, you have more control. You understand exactly what is being evaluated because you wrote it.
```python
>>> while True:
...     history.repeat()
```

/ AI / Programming / Development
-
I kinda want to build more git/GitHub tools. I guess I have been trying to get to 20 repos that I am proud of. I certainly don’t keep all of them public. I am taking most of mine private or archiving them. I should probably focus on like 5, or 1. OpenCLAW happens when you become hyper-fixated on just one thing.
/ Programming / Github
-
Why Data Modeling Matters When Building with AI
If you’ve started building software recently, especially if you’re leaning heavily on AI tools to help you code—here’s something that might not be obvious: data modeling matters more now than ever.
AI is remarkably good at getting the local stuff right. Functions work. Logic flows. Tests pass. But when it comes to understanding the global architecture of your application? That’s where things get shaky.
Without a clear data model guiding the process, you’re essentially letting the AI do whatever it thinks is best. And what the AI thinks is best isn’t always what’s best for your codebase six months from now.
The Flag Problem
When you don't nail down your data structure upfront, AI tools tend to reach for flags to represent state. You end up with columns like `is_draft`, `is_published`, and `is_deleted`, all stored as separate boolean fields.
This seems fine at first. But add a few more flags, and suddenly you've got rows where `is_draft = true` AND `is_published = true` AND `is_deleted = true`. That's an impossible state. Your code can't handle it because it shouldn't exist.
Instead of multiple flags, use an enum:
`status: draft | published | deleted`. One field. Clear states. No contradictions.
This is just one example of why modeling your data early can save you from drowning in technical debt later.
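Here's a minimal sketch of the enum-over-flags idea in TypeScript. The `Post` record type is made up for illustration; the point is that a string-literal union makes the impossible states unrepresentable.

```typescript
// One status field with three mutually exclusive states.
type Status = "draft" | "published" | "deleted";

// A hypothetical record type using the union instead of boolean flags.
interface Post {
  title: string;
  status: Status;
}

// Impossible combinations like is_draft && is_published can no longer
// exist: the compiler rejects any value outside the union.
function isVisible(post: Post): boolean {
  return post.status === "published";
}

const post: Post = { title: "Hello", status: "draft" };
```

With the boolean-flag version, the type system allows 2^3 = 8 combinations, most of them nonsense; with the union, exactly 3.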
Representation, Storage, and Retrieval
If data modeling is about the shape of your data, data structures determine how efficiently you represent, store, and retrieve it.
This matters because once you've got a lot of data, migrating from one structure to another (or switching database engines) becomes genuinely painful.
When you’re designing a system, think about its lifetime.
- How much data will you store monthly? Yearly?
- How often do you need to retrieve it?
- Does recent data need to be prioritized over historical data?
- Will you use caches or queues for intermediate storage?
Where AI Takes Shortcuts
AI agents inherit our bad habits. Lists and arrays are everywhere in their training data, so they default to using them even when a set, hash map, or dictionary would perform dramatically better.
In TypeScript, I see another pattern constantly: when the AI hits type errors, it makes everything optional.
Problem solved, right? Except now your code is riddled with null checks and edge cases that shouldn’t exist.
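The optional-everything antipattern looks like this (types and helpers here are illustrative, not from any real codebase):

```typescript
// The antipattern: silence type errors by making every field optional.
interface LooseUser {
  id?: number;
  email?: string;
}

// Now every call site needs a null check that shouldn't exist.
function domainLoose(u: LooseUser): string {
  return u.email ? u.email.split("@")[1] : "unknown";
}

// Better: keep required fields required; model absence explicitly
// only where it is genuinely possible.
interface User {
  id: number;
  email: string;
}

function domain(u: User): string {
  return u.email.split("@")[1];
}
```

The loose version compiles everywhere, which is exactly the problem: the type system has stopped telling you anything.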
Then there are the object-oriented problems. When building software that should use proper OOP patterns, AI often takes shortcuts in how it represents data. Those shortcuts feel fine in the moment but create maintenance nightmares down the road.
The Prop Drilling Epidemic
LLM providers have optimized their agents to be nimble, managing context windows so they can stay productive. That’s a good thing. But that nimbleness means the agents don’t always understand the full structure of your code.
In TypeScript projects, this leads to prop drilling: passing the entire global application object down through nested components.
Everything becomes tightly coupled. When you need to change the structure of an object, it’s like dropping a pebble in a pond. The ripples spread everywhere.
You change one thing, and suddenly you’re fixing a hundred other places that all expected the old structure.
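A framework-free sketch of the coupling problem. The `AppState` shape and the component functions are hypothetical; real prop drilling happens through React-style component trees, but the dependency structure is the same.

```typescript
// A hypothetical global application object.
interface AppState {
  user: { name: string };
  theme: string;
  cart: string[];
}

// Drilled: every layer takes the whole AppState, so any change to
// AppState ripples through all of them.
function HeaderDrilled(app: AppState): string {
  return GreetingDrilled(app);
}
function GreetingDrilled(app: AppState): string {
  return `Hello, ${app.user.name}`;
}

// Narrow props: each layer declares only what it needs, so AppState
// can change shape without touching these functions.
function Greeting(props: { name: string }): string {
  return `Hello, ${props.name}`;
}
function Header(props: { name: string }): string {
  return Greeting(props);
}
```

Rename `user` to `account` in `AppState` and every "drilled" function breaks; the narrow-props versions don't even know `AppState` exists.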
The Takeaway
If you’re building with AI, invest time in data modeling before you start coding. Define your data structures. Think about how your data will grow and how you’ll access it.
The AI can help you build fast. But you still need to provide the architectural vision. That’s not something you can blindly trust the AI to handle, not yet, anyway.
-
A specification for adding human and machine readable meaning to commit messages
/ Programming / Git / links / code / commit
-
JavaScript Still Doesn't Have Types (And That's Probably Fine)
Here’s the thing about JavaScript and types: it doesn’t have them, and it probably won’t any time soon.
Back in 2022, there was a proposal to add TypeScript-like type syntax directly to JavaScript. The idea was to let you write type annotations without needing a separate compilation step. But the proposal stalled because the JavaScript community couldn't reach consensus on implementation details.
The core concern? Performance. JavaScript is designed to be lightweight and fast, running everywhere from browsers to servers to IoT devices. Adding a type system directly into the language could slow things down, and that’s a tradeoff many aren’t willing to make.
So the industry has essentially accepted that if you want types in JavaScript, you use TypeScript. And honestly? That’s fine.
TypeScript: JavaScript’s Type System
TypeScript has become the de facto standard for typed JavaScript development. Here’s what it looks like:
```typescript
// TypeScript Example
let name: string = "John";
let age: number = 30;
let isStudent: boolean = false;

// Function with type annotations
function greet(name: string): string {
  return `Hello, ${name}!`;
}

// Array with type annotation
let numbers: number[] = [1, 2, 3];

// Object with type annotation
let person: { name: string; age: number } = { name: "Alice", age: 25 };
```
TypeScript compiles down to plain JavaScript, so you get the benefits of static type checking during development without any runtime overhead. The types literally disappear when your code runs.
The Python Parallel
You might be interested to know that the closest parallel to this JavaScript/TypeScript situation is actually Python.
Modern Python has types, but they’re not enforced by the language itself. Instead, you use third-party tools like mypy for static analysis and pydantic for runtime validation. There’s actually a whole ecosystem of libraries supporting types in Python in various ways, which can get a bit confusing.
Here’s how Python’s type annotations look:
```python
# Python Example
name: str = "John"
age: int = 30
is_student: bool = False

# Function with type annotations
def greet(name: str) -> str:
    return f"Hello, {name}!"

# List with type annotation
numbers: list[int] = [1, 2, 3]

# Dictionary with mixed value types
person: dict[str, str | int] = {"name": "Alice", "age": 25}
```
Look familiar? The syntax is surprisingly similar to TypeScript. Both languages treat types as annotations that help developers and tools understand the code, and neither strictly enforces them at runtime (unless you add additional tooling).
What This Means for You
If you're writing plain JavaScript, stop and use TypeScript. It's mature and widely adopted, and you can now run TypeScript directly in runtimes like Bun and Deno.
Type systems were originally omitted from many of these languages because the creators wanted to establish a low barrier to entry, making it significantly easier for people to adopt the language.
Additionally, computers at the time were much slower, and compiling code with rigorous type systems took a long time, so creators prioritized the speed of the development loop over strict safety.
However, with the power of modern computers, compilation speed is no longer a concern. Furthermore, the type systems themselves have improved significantly in efficiency and design.
Since performance is no longer an issue, the industry has shifted back toward using types to gain better structure and safety without the historical downsides.
/ Programming / Python / javascript / Typescript
-
/ Programming / Development / links / handbook / references
-
The Rise of Spec-Driven Development: A Guide to Building with AI
Spec-driven development isn’t new. It has its own Wikipedia page and has been around longer than you might realize.
With the explosion of AI coding assistants, this approach has found new life, and we now have a growing ecosystem of tools to support it.
The core idea is simple: instead of telling an AI “hey, build me a thing that does the boops and the beeps” then hoping it reads your mind, you front-load the thinking.
It’s kinda obvious, with it being in the name, but in case you are wondering, here is how it works.
The Spec-Driven Workflow
Here’s how it typically works:
- Specify: Start with requirements. What do you want? How should it behave? What are the constraints?
- Plan: Map out the technical approach. What's the architecture? What "stack" will you use?
- Task: Break the plan into atomic, actionable pieces. Create a dependency tree: this must happen before that. Define the order of operations. This is often done by the tool.
- Implement: Work with your tool of choice to build the software from the task list. The human is (or should be) responsible for deciding when a task is complete.
You are still a part of the process. It’s up to you to make the decisions at the beginning. It’s up to you to define the approach. And it’s up to you to decide you’re done.
So how do you get started?
The Tool Landscape
The problem now is that there's no unified standard. The tool makers are too busy building moats to take the time to agree.
Standalone Frameworks:
- Spec-Kit - GitHub's own toolkit that makes "specifications executable." It supports multiple AI agents through slash commands and emphasizes intent-driven development.
- BMAD Method - Positions AI agents as "expert collaborators" rather than autonomous workers. Includes 21+ specialized agents for different roles like product management and architecture.
- GSD (Get Shit Done) - A lightweight system that solves "context rot" by giving each task a fresh context window. Designed for Claude Code and similar tools.
- OpenSpec - Adds a spec layer where humans and AI agree on requirements before coding. Each feature gets its own folder with proposals, specs, designs, and task lists.
- Autospec - A CLI tool that outputs YAML instead of markdown, enabling programmatic validation between stages. Claims up to 80% reduction in API costs through session isolation.
Built Into Your IDE:
The major AI coding tools have adopted this pattern too:
- Kiro - Amazon’s new IDE with native spec support
- Cursor - Has a dedicated plan mode
- Claude Code - Plan mode for safe code analysis
- VSCode Copilot - Chat planning features
- OpenCode - Multiple modes including planning
- JetBrains Junie - JetBrains' AI assistant
- Google Antigravity - Implementation planning docs
- Gemini Conductor - Orchestration for Gemini CLI
Memory Tools
- Beads - Use it to manage your tasks. Works very well with your Agents in Claude Code.
Why This Matters
When you first start building with AI, you might dive right in and just say "go build the thing." Then you keep iterating on a task until it falls apart the moment you try anything substantial.
You end up playing whack-a-mole: fix one thing, break another. This probably sounds familiar from the olden times of two years ago, when we puny humans did all the work. The point being, even the robots make mistakes.
Another thing that you come to realize is it’s not a mind reader. It’s a prediction engine. So be predictable.
What did we learn? With spec-driven development, you're in charge. You are the architect. You decide. The AI handles the details and the execution, but it needs structure, and these methods are how we provide it.
/ AI / Programming / Tools / Development
-
Tessl - Agent Enablement Platform
Tessl helps teams build AI-native software by giving coding agents structured, versioned context. Ship AI-powered systems that hold up in real codebases.
/ AI / Programming / links / agent / automation / platform
-
Shu has a great article on performance:
Performance is like a garden. Without constant weeding, it degrades.
It's not just a JavaScript thing, it's a mindset. We keep making the same mistakes even with a solid understanding of the code because, just like agents, we too have a context window. It takes ongoing maintenance to keep the garden looking good.
Worth a read: shud.in/thoughts/…
-
I’m reading an article today about a long-term programmer coming to terms with using Claude Code. There’s a quote at the end that really stuck with me: “It’s easy to generate a program you don’t understand, but it’s much harder to fix a program that you don’t understand.”
I concur. Building it may be fun, but once you build it, you have to maintain it, and that means knowing how it works for when it doesn't.
/ AI / Programming / Claude-code
-
Security and Reliability in AI-Assisted Development
You may not realize it, but AI code generation is fundamentally non-deterministic. It's probabilistic at its core: it predicts code rather than computing it.
And while there’s a lot of orchestration happening between the raw model output and what actually lands in your editor, you can still get wildly different results depending on how you use the tools.
This matters more than most people realize.
Garbage In, Garbage Out (Still True)
The old programming adage applies here with renewed importance. You need to be explicit with these tools. Adding predictability into how you build is crucial.
Some interesting patterns:
- Specialized agents set up for specific tasks
- Skills and templates for common operations
- Orchestrator conversations that plan but don’t implement directly
- Multiple conversation threads working on the same codebase via Git workspaces
The more structure you provide, the more consistent your output becomes.
The Security Problem
This topic doesn’t get talked about enough. All of our common bugs have snuck into the training data. SQL injection patterns, XSS vulnerabilities, insecure defaults… they’re all in there.
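SQL injection is the classic example of a training-data bug. The sketch below is framework-free and the query shapes are illustrative; a real driver would bind the parameterized form for you, but the string mechanics are the whole point.

```typescript
// A classic injection payload.
const userInput = "' OR '1'='1";

// Vulnerable: the input is spliced straight into the SQL text, so its
// quote characters rewrite the query's meaning.
const unsafe = `SELECT * FROM users WHERE name = '${userInput}'`;

// Parameterized: the SQL text stays fixed and the input travels
// separately, to be bound by the database driver.
const safe = {
  text: "SELECT * FROM users WHERE name = $1",
  values: [userInput],
};
```

The concatenated version ends up as `SELECT * FROM users WHERE name = '' OR '1'='1'`, which matches every row. Because both versions appear constantly in training data, you can't assume the model will pick the safe one unprompted.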
The model can’t always be relied upon to build it correctly the first time. Then there’s the question of trust.
Do you trust your LLM provider?
Is their primary focus on quality and reliable, consistent output? What guardrails exist before the code reaches you? Is the model specialized for coding, or is it a general-purpose model that happens to write code?
These are important engineering questions.
Deterministic Wrappers Around Probabilistic Cores
The more we can put deterministic wrappers around these probabilistic cores, the more consistent the output will be.
So, what does this look like in practice?
Testing is no longer optional. We used to joke that we’d get to testing when we had time. That’s not how it works anymore. Testing is required because it provides feedback to the models. It’s your mechanism for catching problems before they compound.
Testing is your last line of defense against garbage sneaking into the system.
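Here's what a deterministic wrapper can look like in miniature. `generatedSlugify` stands in for AI-written code (the name and behavior are made up); the acceptance checks are the deterministic gate that decides whether it lands or goes back to the model.

```typescript
// Stand-in for code the model produced.
function generatedSlugify(title: string): string {
  return title.toLowerCase().trim().replace(/\s+/g, "-");
}

// Deterministic acceptance checks. If any fail, the code is rejected
// and the failure is fed back to the model instead of into the repo.
function acceptSlugify(fn: (s: string) => string): boolean {
  return (
    fn("Hello World") === "hello-world" &&
    fn("  Spaces  ") === "spaces" &&
    fn("already-slugged") === "already-slugged"
  );
}
```

The checks are trivial on purpose: the value isn't sophistication, it's that they run the same way every time against output that doesn't.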
AI-assisted review is essential. The amount of code you can now create has increased dramatically. You need better tools to help you understand all that code. The review step, typically done during a pull request, is now crucial for product development. Not optional. Crucial.
The model needs to review its own output, or you need a separate review process that catches what the generation step missed.
The Takeaway
We're at an interesting point in time. These tools can dramatically increase your output, but we should only trust the result if we build the right guardrails around them.
Structure your prompts. Test everything. Review systematically. Trust but verify.
The developers who figure out how to add predictability to unpredictable processes are the ones who will be shipping features instead of shitting out code.
/ DevOps / AI / Programming