DevOps
-
I switched to mise for version management a month ago. No regrets. No more
`brew upgrade` breaking Python. Its built-in task runner replaced the Makefiles in several of my projects. Still juggling nvm + pyenv + rbenv?
/ DevOps / Programming / Tools
-
When to Use Python Over Bash
When to use Python over Bash is really a question of when to use Bash. Python is a general-purpose language that can handle just about anything you throw at it. Bash, on the other hand, has a very specific sweet spot. Once you understand that sweet spot, the decision makes itself.
What Bash Actually Is
Bash is an interactive command interpreter and scripting language, created in 1989 for the GNU project as a free software alternative to the Bourne shell. It pulled in advanced features from the Korn shell and C shell, and it’s been commonly used by Unix and Linux systems ever since.
What makes Bash unique is its approach to data flow programming. Files, directories, and system processes are treated as first-class objects. Bash is designed to take advantage of utilities that almost always exist on Unix-based systems. So think of tools like
`awk`, `sed`, `grep`, `cat`, and `curl`. To write effective Bash scripts, you also need to understand the pipeline operator and how I/O redirection works.

A good Bash script will look something like this:
```bash
#!/bin/bash
set -euo pipefail

LOG_DIR="/var/log/myapp"
DAYS_OLD=30

find "$LOG_DIR" -name "*.log" -mtime +"$DAYS_OLD" -print0 | xargs -0 gzip -9
echo "Compressed logs older than $DAYS_OLD days"
```

Simple, portable, does one thing well. That's Bash at its best.
Where Bash Falls Short
Bash isn’t typed. There’s no real object orientation. Error handling is basically
`set -e` and hoping for the best. There's no `try/catch`, no structured exception handling. When things go wrong in a Bash script, they tend to go wrong quietly or spectacularly, with not much in between.

Python, by contrast, is strongly typed (with optional static type hints) and object-oriented. If you want to manipulate a file or a system process in Python, you wrap that system entity inside a Python object. That adds some overhead, sure, but in exchange you get something that's more predictable, more secure, and scales well from simple scripts to complex logic.
Here’s that same log compression task in Python:
```python
from pathlib import Path
import gzip
import shutil
from datetime import datetime, timedelta

log_dir = Path("/var/log/myapp")
cutoff = datetime.now() - timedelta(days=30)

for log_file in log_dir.glob("*.log"):
    if datetime.fromtimestamp(log_file.stat().st_mtime) < cutoff:
        with open(log_file, "rb") as f_in:
            with gzip.open(f"{log_file}.gz", "wb") as f_out:
                shutil.copyfileobj(f_in, f_out)
        log_file.unlink()
```

More verbose? Absolutely. But also more explicit about what's happening, easier to extend, and much easier to add error handling to.
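For instance, adding error handling might look like this, with the task reworked as a function that logs and skips unreadable files instead of aborting. This is a sketch; the function shape is illustrative, not prescriptive:

```python
import gzip
import logging
import shutil
from datetime import datetime, timedelta
from pathlib import Path

def compress_old_logs(log_dir: Path, days: int = 30) -> int:
    """Compress logs older than `days` days, skipping files that fail."""
    cutoff = datetime.now() - timedelta(days=days)
    compressed = 0
    for log_file in log_dir.glob("*.log"):
        try:
            if datetime.fromtimestamp(log_file.stat().st_mtime) >= cutoff:
                continue
            with open(log_file, "rb") as f_in, gzip.open(f"{log_file}.gz", "wb") as f_out:
                shutil.copyfileobj(f_in, f_out)
            log_file.unlink()
            compressed += 1
        except OSError as exc:
            # One unreadable file no longer kills the whole run
            logging.warning("Skipping %s: %s", log_file, exc)
    return compressed
```

Try doing that per-file recovery in Bash without your script turning into line noise.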
The Performance Question
In some cases, performance genuinely matters. Think high-frequency trading platforms, edge devices, or massive clusters. Bash scripts excel here because there’s almost zero startup overhead. Compare that to Python, which needs to load up the interpreter before it can start executing code. You’re going from microseconds to milliseconds, and sometimes that matters.
But startup time is just one factor. When you compare the actual work being done, Python can pull ahead. String manipulation on structured data? Python wins. Parsing JSON, YAML, or any structured format? Python’s core libraries are written in C and optimized for exactly this kind of work. If you find yourself reaching for
`jq` or `yq` in a Bash script, that's a strong signal you should be using Python instead.

The Guidelines People Throw Around
You’ll see a common guideline online: if your script exceeds 100 lines of Bash, rewrite it in Python. But a lot of veterans in the industry feel like that cutoff is way too generous. Experienced engineers often put it at 50 lines, or even 25.
Another solid indicator: nested
`if` statements. Some people say “deeply nested” `if` statements, but let's be honest, more than one level of nesting in Bash is already getting painful. Python handles complex branching logic far more gracefully, and you'll thank yourself when you come back to maintain it six months later.

Unit Testing Tells the Story
You can do unit testing with Bash. BATS (Bash Automated Testing System) exists, and ShellCheck is useful as a lightweight linter for catching bad practices. But despite these tools, Python’s testing ecosystem is on another level entirely. It’s fully mature with multiple frameworks, excellent mocking capabilities, and the ability to simulate network calls, external APIs, or system binaries. Complex mocking that would be difficult or impossible in Bash is straightforward in Python.
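As a taste, here's the kind of mock that takes a few lines in Python and has no real Bash equivalent: faking a network call so a test runs with no network at all. This is a sketch; `fetch_status` is a hypothetical helper, not something from a real project:

```python
import urllib.request
from unittest.mock import MagicMock, patch

def fetch_status(url: str) -> int:
    """Return the HTTP status code for a URL."""
    with urllib.request.urlopen(url) as resp:
        return resp.status

def test_fetch_status_without_network():
    # Build a fake response object so no real network call happens
    fake = MagicMock()
    fake.status = 200
    fake.__enter__.return_value = fake
    with patch("urllib.request.urlopen", return_value=fake):
        assert fetch_status("https://example.com/health") == 200
```

That's the whole test. No test server, no fixtures on disk, no flaky network dependency.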
If your script needs solid testing or if it’s doing anything important, that’s a strong vote for Python.
Bash’s Biggest Win: Portability
So what does Bash actually win at? Portability. When you think about all the dependencies Python needs to run, Bash is the clear winner. You’re distributing a single
`.sh` file. That's it.

With Python, you have to ask: Does Python exist on this machine? Is it the right version? You'll need a virtual environment so you don't pollute the system Python. You need third-party libraries installed via a package manager, and please, friends, remember that friends don't let friends use pip. Use Poetry or uv. Pip is so bad that I'd honestly argue Bash not having a package manager is better than Python having pip. At least Bash doesn't pretend to manage dependencies well.
If you want something simple, something that can run on practically any Unix-based machine without setup, Bash is your answer. Even Windows can handle it these days through WSL, though you’re jumping through a few hoops.
TLDR
The decision is actually pretty straightforward:
- Use Bash when you’re gluing together system commands, the logic is linear, it’s under 50 lines, and portability matters.
- Use Python when you’re parsing structured data, need error handling, have branching logic, want proper tests, or the script is going to grow.
If you’re reaching for
`jq`, writing nested `if` statements, or the script is getting long enough that you're losing track of what it does… it's time for Python.

In a future post we might look at when Go makes sense over Bash; there's a lot to cover there about compiled binaries. For now, hopefully this helps you make the call next time you're deciding what to script with.
/ DevOps / Programming / Python / Bash / Scripting
-
Local Secrets Manager - Dotenv Encrypter
I built a thing to solve a problem. It has helped me, maybe it will help you?
It all starts with a question.
Why isn’t there a good local secrets manager that encrypts your secrets at rest? I imagine a lot of people, like me, have a number of local applications. I don’t want to pay per-seat pricing just to keep my sensitive data from sitting in plaintext on my machine.
I built an app called LSM Local Secrets Manager to solve that problem. The core idea is simple. Encrypt your
`.env` files locally and only decrypt them when you need them (sometimes at runtime).

The Problem
If you've got a bunch of projects on your machine, each probably has its own `.env` or `.env.local` file full of API keys you're definitely not rotating every 90 days. Those files just sit there in plaintext. Any process on your system can read them. And with AI agents becoming part of our dev workflows, leaking a secret is only getting easier.

ThE CLAW EnteRed ChaT
I started looking at Doppler, specifically for OpenCLAW. Its main selling feature is injecting secrets into your runtime so they never touch the filesystem. I was like, cool. I also like that Doppler stores everything remotely. The only problem was the cost: it didn't make sense for me right now. I don't want to pay $10–20 a month for this set of features.
So what else is there?
Well GCP Secret Manager has its own set of issues.
You can’t have duplicate names per project, so something as common as
`NODE_ENV` across multiple apps becomes more work than you want to deal with. Some wrapper script that injects prefixes? No thanks. I imagine there are a thousand and one homegrown solutions to this problem. Again, no thanks.

So what else is there?
You Find A Solution
AWS Secrets Manager
A Problem for Your Solution
AWS IAM
🫣
I have a lot more to say here on this subject but will save this for another post. Subscribe if you want to see the next post.
The Solution
The workflow is straightforward:
- `lsm init` — Run this once from anywhere. It generates your encryption key file.
- `lsm link <app-name>` — Run this inside your project directory. It creates a config entry in `~/.lsm/config.yaml` for that application.
- `lsm import` — Takes your existing `.env` or `.env.local` and creates an encrypted version.
- `lsm clean` — Removes the plaintext `.env` files so they're not just sitting around.
- `lsm dump` — Recreates the `.env` files if you need them back.
But wait, there's more.
Runtime Injection with `lsm exec`

Remember that cool thing I just told you about? Instead of dumping secrets back to disk, you run:

```shell
lsm exec -- pnpm dev
```

I feel like a family man from Jersey who don't mess around. Aye, you want runtime injection? I got that.

Well, that's `lsm` anyways. It decrypts your secrets and injects them directly into the runtime environment of whatever command follows the `--`. Your secrets exist in memory for the duration of that process and nowhere else. No plaintext files hanging around for other processes to sniff.

Credit to Doppler for the idea. The difference is that with lsm your encrypted files stay local.
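Conceptually, the injection step boils down to something like this. This is a simplified Python sketch of the idea, not lsm's actual implementation; assume `secrets` came from decrypting the local file:

```python
import os
import subprocess

def exec_with_secrets(secrets: dict, command: list) -> subprocess.CompletedProcess:
    """Run `command` with decrypted secrets injected into its environment.

    The secrets live in this process's memory and the child's environment,
    never on disk.
    """
    env = os.environ.copy()
    env.update(secrets)  # injected into the child's env, never written to a file
    return subprocess.run(command, env=env)
```

The child process sees the variables like any other environment variables; when it exits, they're gone.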
What’s Next
I've got some ideas for improvements to try building.
- Separate encrypt/decrypt keys — You create secrets with one key, deploy the encrypted file to a server, and use a read-only key to decrypt at runtime. The server never has write access to your secrets.
- Time-based derivative keys — Imagine keys that expire or rotate automatically.
- Secure sharing — Right now you’d have to decrypt and drop the file into a password manager to share it. There’s room to make that smoother.
I’m not sure how to do all of that yet, but we’re making progress.
Why Not Just Use Doppler?
There are genuinely compelling reasons to use Doppler or similar services. I mean, besides the remote storage, access controls, and auditable logs, there's a lot to love.
For local development across a bunch of personal projects? I don’t think you should need a SaaS subscription to keep your secrets encrypted.
LSM is still early, but the core workflow is there and it works.
Give it a try if you’re tired of plaintext
`.env` files scattered across your machine.
/ DevOps / Programming / Tools / security
-
Doppler | Centralized cloud-based secrets management platform
Doppler’s secrets management platform helps teams secure, sync, and automate their secrets across environments and infrastructure. Experience enhanced security, agility, and automation with our cloud platform.
/ DevOps / links / platform / security / cloud security
-
Claude Code Now Has Two Different Security Review Tools
If you’re using Claude Code, you might have noticed that Anthropic has been quietly building out security tooling. There are now two distinct features worth knowing about. They sound similar but do very different things, so let’s break it down.
The /security-review Command
Back in August 2025, Anthropic added a
`/security-review` slash command to Claude Code. This one is focused on reviewing your current changes. Think of it as a security-aware code reviewer for your pull requests. It looks at what you've modified and flags potential security issues before you merge.

It's useful, but it's scoped to your diff. It's not going to crawl through your entire codebase looking for problems that have been sitting there for months.
The New Repository-Wide Security Scanner
Near the end of February 2026, Anthropic announced something more ambitious: a web-based tool that scans your entire repository and operates more like a security researcher than a linter. This is the thing that will help you identify and fix security issues across your entire codebase.
First we need to look at what already exists to understand why it matters.
SAST tools — Static Application Security Testing. SAST tools analyze your source code without executing it, looking for known vulnerability patterns. They’re great at catching things like SQL injection, hardcoded credentials, or buffer overflows based on pattern matching rules.
If a vulnerability doesn’t match a known pattern, it slips through. SAST tools also tend to generate a lot of false positives, which means teams start ignoring the results.
What Anthropic built is different. Instead of pattern matching, it uses Claude to actually reason about your code the way a security researcher would. It can understand context, follow data flows across files, and identify logical vulnerabilities that a rule-based scanner would never catch. Think things like:
- Authentication bypass through unexpected code paths
- Authorization logic that works in most cases but fails at edge cases
- Business logic flaws that technically “work” but create security holes
- Race conditions that only appear under specific timing
These are the kinds of issues that usually require a human security expert to find… or a real attacker.
SAST tools aren’t going away, and you should still use them. They’re fast, they catch the common stuff, and they integrate easily into CI/CD pipelines.
Also, the new repository-wide security scanner isn't out yet, so stick with what you've got until it's ready.
/ DevOps / AI / Claude-code / security
-
You move fast. Cloud development cycles do not.
Mixing and matching optimization strategies won’t fix your slow development loop. LocalStack streamlines your feedback loop, bringing the cloud directly to your laptop. Same production behavior. Faster feedback. Fully under your control.
/ DevOps / Development / links / localcloud / docker
-
Stop Using pip. Seriously.
If you’re writing Python in 2026, I need you to pretend that pip doesn’t exist. Use Poetry or uv instead.
Hopefully you’ve read my previous post on why testing matters. If you haven’t, go read that first. Back? Hopefully you are convinced.
If you’re writing Python, you should be writing tests, and you can’t do that properly with pip. It’s an unfortunate but true state of Python right now.
In order to write tests, you need dependencies, which is how we get to the root of the issue.
The Lock File Problem
The closest thing pip has to a lock file is
`pip freeze > requirements.txt`. But it just doesn't cut the mustard. It's just a flat list of pinned versions.

A proper lock file captures the resolution graph, the full picture of how your dependencies relate to each other. It distinguishes between direct dependencies (the packages you asked for) and transitive dependencies (the packages they pulled in). A `requirements.txt` doesn't do any of that.

Ok, so? You might be asking yourself.
It means that you can’t guarantee that running
`pip install -r requirements.txt` six months or six minutes from now will give you the same copy of all your dependencies.

It's not repeatable. It's not deterministic. It's not reliable.
The one constant in code is that it changes. Without a lock file, you're rolling the dice every time.
Everyone Else Figured This Out
Every other modern language ecosystem “solved” this problem years ago:
- JavaScript has `package-lock.json` (npm) and `pnpm-lock.yaml` (pnpm)
- Rust has `Cargo.lock`
- Go has `go.sum`
- Ruby has `Gemfile.lock`
- PHP has `composer.lock`
Python’s built-in package manager just… doesn’t have this.
That’s a real problem when you’re trying to build reproducible environments, run tests in CI, or deploy with any confidence that what you tested locally is what’s running in production.
What to Use Instead
Both Poetry and uv solve the lock file problem and give you reproducible environments. They’re more alike than different — here’s what they share:
- Lock files with full dependency resolution graphs
- Separation of dev and production dependencies
- Virtual environment management
- `pyproject.toml` as the single config file
- Package building and publishing to PyPI
Poetry is the more established option. It’s at version 2.3 (released January 2026), supports Python 3.10–3.14, and has been the go-to alternative to pip for years. It’s stable, well-documented, and has a large ecosystem of plugins.
uv is the newer option from Astral (the team behind Ruff). It's written in Rust and is 10–100x faster than pip at dependency resolution. It can also manage Python versions directly, similar to mise or pyenv. It's currently at version 0.10, so it hasn't hit 1.0 yet, but it's gaining adoption fast.
You can’t go wrong with either. Pick one, use it, and stop using pip.
/ DevOps / Programming / Python
-
Serverless and Edge Computing: A Practical Guide
Serverless and edge computing have transformed how we deploy and scale web applications. Instead of managing servers, you write functions that automatically scale from zero to millions of users.
Edge computing takes this further by running code geographically close to users for minimal latency. Let’s break down how these technologies work and when you’d actually want to use them.
What is Serverless?
Serverless doesn’t mean “no servers”, it means you don’t manage them. The provider handles infrastructure, scaling, and maintenance. You just write “functions”.
The functions are stateless and auto-scaling, and you only pay for execution time. So what's the tradeoff? Well, there are several, but the first one is cold starts: the first request after idle time is slower because the container needs to spin up.

Serverless Platforms as a Service are also sticky, making it hard to move to another provider.

They are stateless, meaning each invocation is independent and doesn't retain any state between invocations.

In some cases they don't run full Node, so they behave differently from what you build locally, which complicates development and testing.

And each request can be handled by a new function instance, so if your site gets a lot of traffic and makes a lot of requests, you're looking at expensive hosting bills, or at playing the pauper on social media.
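To make “you just write functions” concrete, a minimal Lambda-style handler in Python looks something like this. A sketch, assuming an API Gateway proxy-style event shape:

```python
import json

def handler(event, context):
    """A stateless serverless function: one request in, one response out.

    No server to manage, and no state carried between invocations;
    everything the function needs arrives in `event`.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The provider handles everything around this: spinning up the container, routing the request, scaling the number of instances, and tearing it all down when traffic stops.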
Traditional vs. Serverless vs. Edge
Think of it this way:
- Traditional servers are always running, always costing you money, and you handle all the scaling yourself. Great for predictable, high-traffic workloads. Lots of different options for hosting and scaling.
- Serverless (AWS Lambda, Vercel Functions, GCP Functions) spins up containers on demand and kills them when idle. Auto-scales from zero to infinity. Cold starts around 100-500ms.
- Edge (Cloudflare Workers, Vercel Edge) uses V8 Isolates instead of containers, running your code in 200+ locations worldwide. Cold starts under 1ms.
Cost Projections
Here’s how the costs break down at different scales:
| Requests / Month | ~RPS (Avg) | AWS Lambda | Cloudflare Workers | VPS / K8s Cluster | Winner |
|---|---|---|---|---|---|
| 1 Million | 0.4 | $0.00 (Free Tier) | $0.00 (Free Tier) | $40–$60 (Min HA Setup) | Serverless |
| 10 Million | 4.0 | ~$12 | ~$5 | $40–$100 | Serverless |
| 100 Million | 40 | ~$120 | ~$35 | $80–$150 | Tie / Workers |
| 500 Million | 200 | ~$600 | ~$155 | $150–$300 | VPS / Workers |
| 1 Billion | 400 | ~$1,200+ | ~$305 | $200–$400 | VPS / EC2 |

The Hub and Spoke Pattern
Also called the citadel pattern, this is where serverless and traditional infrastructure stop competing and start complementing each other. The idea is simple: keep a central hub (your main application running on containers or a VPS) and offload specific tasks to serverless “spokes” at the edge.
Your core API, database connections, and stateful logic stay on traditional infrastructure where they belong. But image resizing, auth token validation, A/B testing, geo-routing and rate limiting all move to edge functions that run close to the user.
When to Use Serverless
- Unpredictable or spiky traffic — APIs that go from 0 to 10,000 requests in minutes (webhooks, event-driven workflows)
- Lightweight, stateless tasks — image processing, PDF generation, sending emails, data transformation
- Low-traffic side projects — anything that sits idle most of the time where you don't want to pay for an always-on server… and you don't know how to set up a Coolify server.
- Edge logic — geolocation routing, header manipulation, request validation before it hits your origin
When to Use Containers / VPS
- Sustained high traffic — once you’re consistently above ~100M requests/month, a VPS is cheaper (see the table above)
- Stateful workloads — WebSocket connections, long-running processes, anything that needs to hold state between requests
- Database-heavy applications — connection pooling and persistent connections don’t play well with serverless cold starts
- Complex applications — monoliths or microservices that need shared memory, background workers, or cron jobs
The Hybrid Approach
The best architectures often use both. What's right depends on your use case and requirements: the team, the budget, and the complexity of your application.

Knowing the tradeoffs is the difference between a seasoned developer and a junior. Make your decisions based on your own needs and constraints.
Good luck and godspeed!
/ DevOps / Development / Serverless / Cloud
-
Switching to mise for Local Dev Tool Management
I’ve been making some changes to how I configure my local development environment, and I wanted to share what I’ve decided on.
Let me introduce you to mise (pronounced “meez”), a tool for managing your programming language versions.
Why Not Just Use Homebrew?
Homebrew is great for installing most things, but I don’t like using it for programming language version management. It is too brittle. How many times has
`brew upgrade` decided to switch your Python or Node version on you, breaking projects in the process? Too many, in my experience.

mise solves this elegantly. It doesn't replace Homebrew entirely; you'll still use that for general stuff. But for managing your programming language versions, mise is the perfect tool.
mise the Great, mise the Mighty
mise has all the features you’d expect from a version manager, plus some nice extras:
Shims support: If you want shims in your bash or zsh, mise has you covered. You’ll need to update your RC file to get them working, but once you do, you’re off to the races.
Per-project configuration: mise can work at the application directory level. You set up a
`mise.toml` file that defines its behavior for that specific project.

Environment management: You can set environment variables directly in the toml file, auto-configure your package manager, and even have it auto-create a virtual environment.
It can also load environment variables from a separate file if you’d rather not put them in the toml (which you probably want if you’re checking the file in).
It’s not a package manager: This is important. You still need poetry or uv for Python package management. As a reminder: don’t ever use pip. Just don’t.
A Quick Example
Here’s what a
`.mise.toml` file looks like for a Python project:

```toml
[tools]
python = "3.12.1"
"aqua:astral-sh/uv" = "latest"

[env]
# uv respects this for venv location
UV_PROJECT_ENVIRONMENT = ".venv"
_.python.venv = { path = ".venv", create = true }
```

Pretty clean, right? This tells mise to use Python 3.12.1, install the latest version of uv, and automatically create a virtual environment in
`.venv`.

Note on Poetry Support
I had to have mise build Python from source to get Poetry working; you'll want to leave that setting enabled. There is some problem with the precompiled binaries mise uses by default.
You can install global python packages, like poetry, with the following command:
```shell
mise use --global poetry@latest
```

Yes, It's Written in Rust
The programming veterans among you may have noticed the toml configuration format and thought, “Ah, must be a Rust project.” And you’d be right. mise is written in Rust, which means it’s fast! The project is stable, has a ton of GitHub stars, and is actively maintained.
Task Runner Built-In
One feature I wasn’t expecting: mise has a built-in task runner. You can define tasks right in your
`mise.toml`:

```toml
[tasks."venv:info"]
description = "Show Poetry virtualenv info"
run = "poetry env info"

[tasks.test]
description = "Run tests"
run = "poetry run pytest"
```

Then run them with `mise run test` or `mise r venv:info`.

If you've been putting off setting up Make for a project, this is a compelling alternative. The syntax is cleaner, and you get descriptions for free.
I’ll probably keep using Just for more complex build and release workflows, but for simple project tasks, mise handles it nicely. One less tool to install.
My Experience So Far
I literally just switched everything over today, and it was a smooth process. No major issues so far. I'll report back if anything breaks, but the migration from my previous setup was straightforward.
Now I need to get the other languages I use, like Go, Rust, and PHP, set up and moved to mise. Having everything consolidated into one tool is going to be so nice.
If you’re tired of Homebrew breaking your language versions or juggling multiple version managers for different languages, give mise a try.
The documentation is solid, and the learning curve is minimal.
/ DevOps / Tools / Development / Python
-
Two Changes in Claude Code That Actually Matter
As of 2026-01-24, the stable release of Claude Code is 2.1.7, but if you’ve been following the bleeding edge, versions 2.1.15 and 2.1.16 bring some significant changes. Here’s what you need to know.
The npm Deprecation Notice
Version 2.1.15 added a deprecation notification for npm installations.
If you’ve been using Claude Code via npm or homebrew, Anthropic will soon start nudging you toward a new installation method. You’ll want to run
`claude install` or check out the official getting started docs for the recommended approach.

This isn't a breaking change yet, but it's a clear sign they're moving away from npm for releases going forward.
Built-in Task Management
Version 2.1.16 introduces something I’m genuinely excited about: a new task management system with dependency tracking.
If you’ve been using tools like beads for lightweight issue tracking within your coding sessions, this built-in system offers a similar workflow without the setup.
You can define tasks, track their status, and—here’s the key part—specify dependencies between them. Task B won’t start until Task A completes.
This is particularly useful for repositories where you don’t have beads configured or you’re working on something quick where setting up external tooling feels like overkill.
Your sub-agents can now have proper task management without anything extra.
Should You Update?
If you’re on 2.1.7 stable and everything’s working, there’s no rush. But if you’re comfortable with newer releases, the task management in 2.1.16 is worth trying, especially if you work with complex multi-step workflows or use sub-agents frequently.
The npm deprecation is something to keep on your radar regardless. Plan your migration before it becomes mandatory.
/ DevOps / Ai-tools / Claude-code
-
Security and Reliability in AI-Assisted Development
You may not realize it, but AI code generation is fundamentally non-deterministic. It's probabilistic at its core: it's predicting code rather than computing it.
And while there’s a lot of orchestration happening between the raw model output and what actually lands in your editor, you can still get wildly different results depending on how you use the tools.
This matters more than most people realize.
Garbage In, Garbage Out (Still True)
The old programming adage applies here with renewed importance. You need to be explicit with these tools. Adding predictability into how you build is crucial.
Some interesting patterns:
- Specialized agents set up for specific tasks
- Skills and templates for common operations
- Orchestrator conversations that plan but don’t implement directly
- Multiple conversation threads working on the same codebase via Git workspaces
The more structure you provide, the more consistent your output becomes.
The Security Problem
This topic doesn’t get talked about enough. All of our common bugs have snuck into the training data. SQL injection patterns, XSS vulnerabilities, insecure defaults… they’re all in there.
The model can’t always be relied upon to build it correctly the first time. Then there’s the question of trust.
Do you trust your LLM provider?
Is their primary focus on quality and reliable, consistent output? What guardrails exist before the code reaches you? Is the model specialized for coding, or is it a general-purpose model that happens to write code?
These are important engineering questions.
Deterministic Wrappers Around Probabilistic Cores
The more we can put deterministic wrappers around these probabilistic cores, the more consistent the output will be.
So, what does this look like in practice?
Testing is no longer optional. We used to joke that we’d get to testing when we had time. That’s not how it works anymore. Testing is required because it provides feedback to the models. It’s your mechanism for catching problems before they compound.
Testing is your last line of defense against garbage sneaking into the system.
AI-assisted review is essential. The amount of code you can now create has increased dramatically. You need better tools to help you understand all that code. The review step, typically done during a pull request, is now crucial for product development. Not optional. Crucial.
The model needs to review its own output, or you need a separate review process that catches what the generating step missed.
The Takeaway
We're at an interesting point in time. These tools can dramatically increase your output, but we should only trust the result if we build the right guardrails around them.
Structure your prompts. Test everything. Review systematically. Trust but verify.
The developers who figure out how to add predictability to unpredictable processes are the ones who will be shipping features instead of shitting out code.
/ DevOps / AI / Programming
-
Gitea - Git with a cup of tea! Painless self-hosted all-in-one software development service, including Git hosting, code review, team collaboration, package registry and CI/CD
/ DevOps / Development / links / platform / self-hosted / code
-
Welcome to Woodpecker | Woodpecker CI
Woodpecker is a CI/CD tool. It is designed to be lightweight, simple to use and fast. Before we dive into the details, let’s have a look at some of the basics.
/ DevOps / links / automation
-
36 Framework Fixtures in One Session: How Beads + Claude Code Changed Our Testing Game
We built test fixtures for 36 web frameworks in a single session. Not days. Not a week of grinding through documentation. Hours.
Here’s what happened and why it matters.
The Problem
api2spec is a CLI tool that parses source code to generate OpenAPI specifications. To test it properly, we needed real, working API projects for every supported framework—consistent endpoints, predictable responses, the whole deal.
We started with 5 frameworks: Laravel, Axum, Flask, Gin, and Express. The goal was to cover all 36 supported frameworks with fixture projects we could use to validate our parsers.
What We Actually Built
36 fixture repositories across 15 programming languages. Each one includes:
- Health check endpoints (`GET /health`, `GET /health/ready`)
- Full User CRUD (`GET/POST /users`, `GET/PUT/DELETE /users/:id`)
- Nested resources (`GET /users/:id/posts`)
- Post endpoints with pagination (`GET /posts?limit=&offset=`)
- Consistent JSON response structures
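For a feel of that contract, here's a stdlib-only Python sketch of the shared shape every fixture follows. This is an illustration, not one of the actual fixtures, and the exact JSON fields are assumptions:

```python
import json

def fixture_app(environ, start_response):
    """A tiny WSGI app mimicking the fixture contract: the same paths
    and the same JSON shapes, regardless of framework."""
    routes = {
        "/health": {"status": "ok"},
        "/health/ready": {"status": "ready"},
        "/users": {"users": [], "limit": 20, "offset": 0},
    }
    body = routes.get(environ["PATH_INFO"])
    status = "200 OK" if body is not None else "404 Not Found"
    payload = body if body is not None else {"error": "not found"}
    start_response(status, [("Content-Type", "application/json")])
    return [json.dumps(payload).encode()]
```

Every fixture repo implements this same surface in its own framework's idiom, which is what makes the parser output comparable across all 36 of them.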
The language coverage tells the story:
- Go: Chi, Echo, Fiber, Gin
- Rust: Actix, Axum, Rocket
- TypeScript/JS: Elysia, Express, Fastify, Hono, Koa, NestJS
- Python: Django REST Framework, FastAPI, Flask
- Java: Micronaut, Spring
- Kotlin: Ktor
- Scala: Play, Tapir
- PHP: Laravel, Slim, Symfony
- Ruby: Rails, Sinatra
- C#/.NET: ASP.NET, FastEndpoints, Nancy
- C++: Crow, Drogon, Oat++
- Swift: Vapor
- Haskell: Servant
- Elixir: Phoenix
- Gleam: Wisp
For languages without local runtimes on my machine—Haskell, Elixir, Gleam, Scala, Java, Kotlin—we created Docker Compose configurations with both an app service and a dev service for interactive development.
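Such a Compose setup might look roughly like the sketch below — the service names, ports, and volume layout are illustrative assumptions, not copied from the actual fixture repos:

```yaml
services:
  app:
    build: .
    ports:
      - "8080:8080"         # fixture API exposed for testing
  dev:
    build: .
    command: sleep infinity  # stays up for interactive shells
    volumes:
      - .:/workspace         # live-edit source without rebuilding
```

`docker compose up app` serves the API, while `docker compose exec dev bash` gives you a shell for interactive development.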
How Beads Made This Possible
We used beads (a lightweight git-native issue tracker) to manage the work. The structure was simple:
- 40 total issues created
- 36 closed in one session
- 5 Docker setup tasks marked as P2 priority (these blocked dependent fixtures)
- 31 fixture tasks at P3 priority
- 4 remaining for future work
The dependency tracking was key. Docker environments had to be ready before their fixtures could be worked on, and beads handled that automatically.
When I’d finish a Docker setup task, the blocked fixture tasks became available.
Claude Code agents worked through the fixture implementations in parallel where possible.
The combination of clear task definitions, dependency management, and AI-assisted coding meant we weren’t context-switching between “what do I need to do next?” and “how do I implement this?”
The Numbers
| Metric | Value |
| --- | --- |
| Total Issues | 40 |
| Closed | 36 |
| Avg Lead Time | 0.9 hours |
| New GitHub Repos | 31 |
| Languages Covered | 15 |

That average lead time of under an hour per framework includes everything: creating the repo, implementing the endpoints, testing, and pushing.
What’s Left
Four tasks queued for follow-up sessions:
- Drift detection - Compare generated specs against expected output
- Configurable report formats - JSON, HTML, and log output options
- CLAUDE.md files - Development instructions for each fixture
- Claude agents - Framework-specific coding assistants
The Takeaway
Doing this today felt like having a superpower. Going from "I need to test across 36 frameworks" to actually having those test fixtures ready — with agents, Opus 4.5, and beads: BAM, done!
Beads gave us the structure to track dependencies and progress.
Claude Code agents handled the repetitive-but-different implementation work across languages and frameworks.
The combination let us focus on the interesting problems instead of the mechanical ones.
All 36 repos are live at github.com/api2spec with the `api2spec-fixture-*` naming convention. Have you tried this approach yet?
/ DevOps / Open-source / Testing / Ai-tools / Claude-code
-
Twenty Years of DevOps: What's Changed and What Hasn't
I’ve been thinking about how much our industry has transformed over the past two decades. It’s wild to realize that 20 years ago, DevOps as we know it didn’t even exist. We were deploying to production using FTP. Yes, FTP. You use the best tool that is available to you and that’s what we had.
So what’s actually changed, and what’s stayed stubbornly the same?
The Constants
JavaScript is still king. Although to be fair, the JavaScript of 2005 and the JavaScript of today are almost unrecognizable. We’ve gone from jQuery spaghetti to sophisticated module systems, TypeScript, and frameworks that would have seemed like science fiction back then.
And yet, we’re still centering that div.
HTML5 and semantic tags have genuinely helped, and I'm certainly grateful we're not building everything out of tables and spans anymore.
What’s Different
The list of things we didn’t have 20 years ago is endless but here are some of the big ones:
- WebSockets
- HTTP/2
- SSL certificates as a default (most sites were running plain HTTP)
- Git and GitOps
- Containers and Kubernetes
- CI/CD pipelines as we know them
- Jenkins didn’t exist
- Docker wasn’t even a concept
The framework landscape is unrecognizable. You might call it a proliferation: we went from a handful of options to, as the joke goes, a new JavaScript framework every week.
Git adoption has been one of the best things to happen to our industry (RIP, SVN). Although I hear rumors that some industries are still clinging to some truly Bazaar version control systems. Mercurial, anyone?
The Bigger Picture
Here’s the thing that really gets me: our entire discipline didn’t exist. DevOps, SRE, platform engineering… these weren’t job titles. They weren’t even concepts people were discussing.
We had developers in their silos and operations in their walled gardens. Now we have infrastructure as code, GitOps workflows, observability platforms, and the expectation that you can deploy to production multiple times a day without breaking a sweat.
The cultural shift from “ops handles production” to “you build it, you run it” fundamentally changed how we think about software.
What Stays the Same
Despite all the tooling changes, some things remain constant. We’re still trying to ship reliable software faster. We’re still balancing speed with stability.
Twenty years from now, I wonder what we’ll be reminiscing about. Remember when we used to actually write software ourselves and complain about testing?
What seems cutting-edge is the new legacy before you know it.
/ DevOps / Web-development / Career / Retrospective
-
Introducing api2spec: Generate OpenAPI Specs from Source Code
You’ve written a beautiful REST API. Routes are clean, handlers are tested and the types are solid. But where’s your OpenAPI spec? It’s probably outdated, incomplete, or doesn’t exist at all.
If you're "lucky", you've been maintaining one by hand. The alternatives aren't great either: runtime generation requires starting your app and hitting every endpoint, and annotation-heavy approaches clutter your code. And we should all know by now that a hand-maintained spec will inevitably drift from reality.
What if you could just point a tool at your source code and get an OpenAPI spec?
Enter api2spec
```bash
# Install
go install github.com/api2spec/api2spec@latest

# Initialize config (auto-detects your framework)
api2spec init

# Generate your spec
api2spec generate
```

That's it. No decorators to add. No server to start. No endpoints to crawl.
What We Support
Here’s where it gets interesting. We didn’t build this for one framework—we built a plugin architecture that supports 30+ frameworks across 16 programming languages:
- Go: Chi, Gin, Echo, Fiber, Gorilla Mux, stdlib
- TypeScript/JavaScript: Express, Fastify, Koa, Hono, Elysia, NestJS
- Python: FastAPI, Flask, Django REST Framework
- Rust: Axum, Actix, Rocket
- PHP: Laravel, Symfony, Slim
- Ruby: Rails, Sinatra
- JVM: Spring Boot, Ktor, Micronaut, Play
- And more: Elixir Phoenix, ASP.NET Core, Gleam, Vapor, Servant…
How It Works
The secret sauce is tree-sitter, an incremental parsing library that can parse source code into concrete syntax trees.
Why tree-sitter instead of language-specific AST libraries?
- One approach, many languages. We use the same pattern-matching approach whether we’re parsing Go, Rust, TypeScript, or PHP.
- Speed. Tree-sitter is designed for real-time parsing in editors. It’s fast enough to parse entire codebases in seconds.
- Robustness. It handles malformed or incomplete code gracefully, which is important when you’re analyzing real codebases.
- No runtime required. Your code never runs. We can analyze code even if dependencies aren’t installed or the project won’t compile.
For each framework, we have a plugin that knows how to detect if the framework is in use, find route definitions using tree-sitter queries, and extract schemas from type definitions.
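As a sketch of the approach (not api2spec's actual query), a tree-sitter query that matches Express-style `app.get("/path", …)` calls might look like the following — the capture names are made up, and the node names assume the standard tree-sitter-javascript grammar:

```scheme
(call_expression
  function: (member_expression
    object: (identifier) @router
    property: (property_identifier) @method)
  arguments: (arguments
    (string) @route.path)
  (#match? @method "^(get|post|put|delete|patch)$"))
```

A plugin runs queries like this over every source file and turns each match into a route entry in the generated spec.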
Let’s Be Honest: Limitations
Here’s where I need to be upfront. Static analysis has fundamental limitations.
When you generate OpenAPI specs at runtime (like FastAPI does natively), you have perfect information. The actual response types. The real validation rules. The middleware that transforms requests.
We’re working with source code. We can see structure, but not behavior.
What this means in practice:
- Route detection isn’t perfect. Dynamic routing or routes defined in unusual patterns might be missed.
- Schema extraction varies by language. Go structs with JSON tags? Great. TypeScript interfaces? We can’t extract literal union types as enums yet.
- We can’t follow runtime logic. If your route path comes from a database, we won’t find it.
- Response types are inferred, not proven.
This is not a replacement for runtime-generated specs (though maybe it will be one day). But for many teams, it's a massive improvement over having no spec at all.
Built in a Weekend
The core of this tool was built in three days.
- Day one: Plugin architecture, Go framework support, CLI scaffolding
- Day two: TypeScript/JavaScript parsers, schema extraction from Zod
- Day three: Python, Rust, PHP support, fixture testing, edge case fixes
Is it production-ready? Maybe?
Is it useful? Absolutely.
For the fixture repositories we’ve created—realistic APIs in Express, Gin, Flask, Axum, and Laravel—api2spec correctly extracts 20-30 routes and generates meaningful schemas. Not perfect. But genuinely useful.
How You Can Help
This project improves through real-world testing. Every fixture we create exposes edge cases. Every framework has idioms we haven’t seen yet.
- Create a fixture repository. Build a small API in your framework of choice. Run api2spec against it. File issues for what doesn’t work.
- Contribute plugins. The plugin interface is straightforward. If you know a framework well, you can make api2spec better at parsing it.
- Documentation. Found an edge case? Document it. Figured out a workaround? Share it.
The goal is usefulness, and useful tools get better when people use them.
Getting Started
```bash
go install github.com/api2spec/api2spec@latest
cd your-api-project
api2spec init
api2spec generate
cat openapi.yaml
```

If it works well, great! If it doesn't, file an issue. Either way, you've helped.
api2spec is open source under the FSL-1.1-MIT license. Star us on GitHub if you find it useful.
Built with love, tree-sitter, and too much tea. ☕
/ DevOps / Programming / Openapi / Golang / Open-source
-
Automate Folder Archiving on macOS with Raycast and 7zip
If you’re like me and frequently need to archive project folders to an external drive, you know how tedious the process can be: right-click, compress, wait, find the archive, move it to the external drive, rename if there’s a conflict… It’s a workflow that begs for automation.
Today, I’m going to show you how I built a custom Raycast script that compresses any folder with 7zip and automatically moves it to an external drive, all with a single keyboard shortcut.
What We’re Building
A Raycast script command that:
- Takes whatever folder you have selected in Finder
- Compresses it using 7zip (better compression than macOS’s built-in zip)
- Moves it directly to a specified folder on your external drive
- Automatically handles version numbering if the archive already exists
- Provides clear error messages if something goes wrong
No more manual copying. No more filename conflicts. Just select, trigger, and done.
Prerequisites
Before we start, you’ll need:
- Raycast - Download from raycast.com if you haven’t already
- 7zip - Install via Homebrew: `brew install p7zip`
- An external drive - Obviously, but make sure you know its mount path
The Problem with the Built-in Approach
Initially, I thought: “Can’t I just have Raycast pass the selected folder path as an argument?”
The answer is technically yes, but it’s clunky. Raycast would prompt you for the folder path every time, which means you’d need to:
- Copy the folder path
- Trigger the Raycast command
- Paste the path
- Hit enter
That’s not automation—that’s just extra steps with good intentions.
The Solution: AppleScript Integration
The key insight was using AppleScript to grab the currently selected item from Finder. This way, the workflow becomes:
- Select a folder in Finder
- Trigger the Raycast command (I use `Cmd+Shift+7`)
- Watch it compress and move automatically
No input required. No path copying. Just pure automation bliss.
Building the Script
Here’s the complete script with all the error handling we need:
```bash
#!/bin/bash

# Required parameters:
# @raycast.schemaVersion 1
# @raycast.title Compress Selected to External Drive
# @raycast.mode fullOutput

# Optional parameters:
# @raycast.icon 📦
# @raycast.needsConfirmation false

# Documentation:
# @raycast.description Compress selected Finder folder with 7zip and move to external drive
# @raycast.author Your Name

EXTERNAL_DRIVE="/Volumes/YourDrive/ArchiveFolder"

# Get the selected item from Finder
FOLDER_PATH=$(osascript -e 'tell application "Finder" to set selectedItems to selection
if (count of selectedItems) is 0 then
    return ""
else
    return POSIX path of (item 1 of selectedItems as alias)
end if')

# Check if anything is selected
if [ -z "$FOLDER_PATH" ]; then
    echo "❌ Error: No item selected in Finder"
    echo "Please select a folder in Finder and try again"
    exit 1
fi

# Trim whitespace
FOLDER_PATH=$(echo "$FOLDER_PATH" | xargs)

# Check if path exists
if [ ! -e "$FOLDER_PATH" ]; then
    echo "❌ Error: Path does not exist: $FOLDER_PATH"
    exit 1
fi

# Check if path is a directory
if [ ! -d "$FOLDER_PATH" ]; then
    echo "❌ Error: Selected item is not a folder: $FOLDER_PATH"
    exit 1
fi

# Check if 7z is installed
if ! command -v 7z &> /dev/null; then
    echo "❌ Error: 7z not found. Install with: brew install p7zip"
    exit 1
fi

# Check if external drive is mounted
if [ ! -d "$EXTERNAL_DRIVE" ]; then
    echo "❌ Error: External drive not found at: $EXTERNAL_DRIVE"
    echo "Make sure the drive is connected and mounted"
    exit 1
fi

# Create archive name from folder name
BASE_NAME="$(basename "$FOLDER_PATH")"
ARCHIVE_NAME="${BASE_NAME}.7z"
OUTPUT_PATH="$EXTERNAL_DRIVE/$ARCHIVE_NAME"

# Check if archive already exists and find next available version number
if [ -f "$OUTPUT_PATH" ]; then
    echo "⚠️ Archive already exists, creating versioned copy..."
    VERSION=2
    while [ -f "$EXTERNAL_DRIVE/${BASE_NAME}_v${VERSION}.7z" ]; do
        VERSION=$((VERSION + 1))
    done
    ARCHIVE_NAME="${BASE_NAME}_v${VERSION}.7z"
    OUTPUT_PATH="$EXTERNAL_DRIVE/$ARCHIVE_NAME"
    echo "📝 Using version number: v${VERSION}"
    echo ""
fi

echo "🗜️ Compressing: $(basename "$FOLDER_PATH")"
echo "📍 Destination: $OUTPUT_PATH"
echo ""

# Compress with 7zip
if 7z a "$OUTPUT_PATH" "$FOLDER_PATH"; then
    echo ""
    echo "✅ Successfully compressed and moved to external drive"
    echo "📦 Archive: $ARCHIVE_NAME"
    echo "📊 Size: $(du -h "$OUTPUT_PATH" | cut -f1)"
else
    echo ""
    echo "❌ Error: Compression failed"
    exit 1
fi
```

Key Features Explained
1. Finder Integration
The AppleScript snippet grabs whatever you have selected in Finder:
```bash
FOLDER_PATH=$(osascript -e 'tell application "Finder" to set selectedItems to selection
if (count of selectedItems) is 0 then
    return ""
else
    return POSIX path of (item 1 of selectedItems as alias)
end if')
```

This returns a POSIX path (like `/Users/yourname/Documents/project`) that we can use with standard bash commands.

2. Comprehensive Error Checking
The script validates everything before attempting compression:
- Is anything selected?
- Does the path exist?
- Is it actually a directory?
- Is 7zip installed?
- Is the external drive connected?
Each check provides a helpful error message so you know exactly what went wrong.
3. Automatic Version Numbering
This was a crucial addition. If `project.7z` already exists, the script will automatically create `project_v2.7z`. If that exists, it'll create `project_v3.7z`, and so on:

```bash
if [ -f "$OUTPUT_PATH" ]; then
    VERSION=2
    while [ -f "$EXTERNAL_DRIVE/${BASE_NAME}_v${VERSION}.7z" ]; do
        VERSION=$((VERSION + 1))
    done
    ARCHIVE_NAME="${BASE_NAME}_v${VERSION}.7z"
    OUTPUT_PATH="$EXTERNAL_DRIVE/$ARCHIVE_NAME"
fi
```

No more manual renaming. No more overwriting precious backups.
4. Progress Feedback
Using `@raycast.mode fullOutput` means you see everything that's happening:

- Which folder is being compressed
- Where it’s going
- The final archive size
This transparency is important when you’re archiving large projects that might take a few minutes.
Setting It Up
- Find your external drive path:

```bash
ls /Volumes/
```

Look for your drive name, then determine where you want archives saved. For example: `/Volumes/Expansion/WebProjectsArchive`

- Create the script:
- Open Raycast Settings → Extensions → Script Commands
- Click “Create Script Command”
- Paste the script above
- Update the `EXTERNAL_DRIVE` variable with your path
- Save it somewhere like `~/Documents/Raycast/Scripts/` or `~/.local/raycast`
- Make it executable:

```bash
chmod +x ~/.local/raycast/compress-to-external.sh
```

- Assign a hotkey (optional but recommended):
- In Raycast, search for your script
- Press `Cmd+K` and select "Add Hotkey"
- I use `Cmd+Shift+7` for "Archive"
Using It
Now the workflow is beautifully simple:
- Open Finder
- Select a folder
- Hit your hotkey (or trigger via Raycast search)
- Watch the magic happen
The script will show you the compression progress and let you know when it’s done, including the final archive size.
Why 7zip Over Built-in Compression?
macOS has built-in zip compression, so why bother with 7zip? A few reasons:
- Better compression ratios - 7zip typically achieves 30-70% better compression than zip
- Cross-platform - .7z files are widely supported on Windows and Linux
- More options - If you want to add encryption or split archives later, 7zip supports it
- Speed - 7zip can be faster for large files
For project archives that might contain thousands of files and dependencies, these advantages add up quickly.
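For example, if you later want some of those extras, the standard p7zip switches below turn on maximum compression and AES-256 encryption with a hidden file list. The folder path and password are placeholders — treat this as a sketch to adapt, not part of the Raycast script:

```shell
# Create a sample folder, then archive it with extra 7z options:
#   -mx=9    maximum compression
#   -p…      password-protect the archive (AES-256 for .7z)
#   -mhe=on  also encrypt the file list (names hidden without the password)
mkdir -p /tmp/demo-proj && echo "hello" > /tmp/demo-proj/a.txt
7z a -mx=9 -p"s3cret" -mhe=on /tmp/demo-backup.7z /tmp/demo-proj
```

Add `-v1g` to split the output into 1 GB volumes, handy for drives with file-size limits.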
Potential Improvements
This script works great for my needs, but here are some ideas for enhancement:
- Multiple drive support - Let the user select from available drives
- Compression level options - Add arguments for maximum vs. fast compression
- Notification on completion - Use macOS notifications for long-running compressions
- Delete original option - Add a flag to remove the source folder after successful archiving
- Batch processing - Handle multiple selected folders
Troubleshooting
"7z not found" error:

```bash
brew install p7zip
```

"External drive not found" error: Make sure your drive is connected and the path in `EXTERNAL_DRIVE` matches exactly. Check with:

```bash
ls -la /Volumes/YourDrive/YourFolder
```

Script doesn't appear in Raycast: Refresh the script directory in Raycast Settings → Extensions → Script Commands → Reload All Scripts

Permission denied: Make sure the script is executable:

```bash
chmod +x your-script.sh
```

Conclusion
This Raycast script has saved me countless hours of manual file management. What used to be a multi-step process involving right-clicks, waiting, dragging, and renaming is now a single keyboard shortcut.
The beauty of Raycast’s script commands is that they’re just bash scripts with some metadata. If you know bash, you can automate almost anything on your Mac. This particular script demonstrates several useful patterns:
- Integrating with Finder via AppleScript
- Robust error handling
- Automatic file versioning
- User-friendly progress feedback
I encourage you to take this script and adapt it to your own workflow. Maybe you want to compress to Dropbox instead of an external drive. Maybe you want to add a timestamp to the filename. The flexibility is there; you just need to modify a few lines.
Happy automating!
Have questions or improvements? Feel free to reach out. If you build something cool with this pattern, I’d love to hear about it!
/ DevOps / Programming
-
What you should start saying in Standup now
/ DevOps / Programming
-
You'll never guess what I was searching Perplexity AI for just now.
- Can you provide examples of successful mini rack builds?
- What are the main benefits of using a mini rack for a home lab?
- A mini rack sommelier.
- I dabble in the racks of mini
- I'm somewhat of a mini rack connoisseur.
- Jeff Geerling's sweet MINI RACK
Ok, if you made it this far, you weirdos. Here is my MiniRack Dojo
https://www.perplexity.ai/collections/minirack-dojo-qotiBcSJSQekqymojD66Ow

/ DevOps / Productivity / Online Tools
-
Kubernetes: The Dominant Force in Container Orchestration
In the rapidly evolving landscape of cloud computing, container orchestration has become a critical component of modern application deployment and management. Kubernetes has emerged as the undisputed leader among the various platforms available, revolutionizing how we deploy, scale, and manage containerized applications. This blog post delves into the rise of Kubernetes, its rich ecosystem, and the various ways it can be deployed and utilized.
The Rise of Kubernetes: From Google’s Halls to Global Dominance
Kubernetes, often abbreviated as K8s, has a fascinating origin story that begins within Google. Born from the tech giant’s extensive experience with container management, Kubernetes is the open-source successor to Google’s internal system called Borg. In 2014, Google decided to open-source Kubernetes, a move that would reshape the container orchestration landscape.
Kubernetes’s journey from a Google project to the cornerstone of cloud-native computing is nothing short of remarkable. Its adoption accelerated rapidly, fueled by its robust features and the backing of the newly formed Cloud Native Computing Foundation (CNCF) in 2015. As major cloud providers embraced Kubernetes, it quickly became the de facto standard for container orchestration.
Key milestones in Kubernetes' history showcase its rapid evolution:
- In 2015, Kubernetes 1.0 was released, marking its readiness for production use.
- 2017 saw significant cloud providers adopting Kubernetes as their primary container orchestration platform.
- By 2018, Kubernetes had matured significantly, becoming the first project to graduate from the CNCF.
- From 2019 onwards, Kubernetes has experienced continued rapid adoption and ecosystem growth.
Today, Kubernetes continues to evolve, with a thriving community of developers and users driving innovation at an unprecedented pace.
The Kubernetes Ecosystem: A Toolbox for Success
As Kubernetes has grown, so has its ecosystem of tools and extensions. This rich landscape of complementary technologies has played a crucial role in Kubernetes' dominance, offering solutions to common challenges and extending its capabilities in numerous ways.
Helm, often called the package manager for Kubernetes, is a powerful tool that empowers developers by simplifying the deployment of applications and services. It allows developers to define, install, and upgrade even the most complex Kubernetes applications, putting them in control of the deployment process.
Prometheus has become the go-to solution for monitoring and alerting in the Kubernetes world. Its powerful data model and query language make it ideal for monitoring containerized environments, providing crucial insights into application and infrastructure performance.
Istio has emerged as a popular service mesh, adding sophisticated capabilities like traffic management, security, and observability to Kubernetes clusters. It allows developers to decouple application logic from the intricacies of network communication, enhancing both security and reliability.
Other notable tools in the ecosystem include Rancher, a complete container management platform; Lens, a user-friendly Kubernetes IDE; and Kubeflow, a machine learning toolkit explicitly designed for Kubernetes environments.
Kubernetes Across Cloud Providers: Similar Yet Distinct
While Kubernetes is cloud-agnostic, its implementation can vary across different cloud providers. Major players like Google, Amazon, and Microsoft offer managed Kubernetes services, each with unique features and integrations.
Google Kubernetes Engine (GKE) leverages Google’s deep expertise with Kubernetes, offering tight integration with other Google Cloud Platform services. Amazon’s Elastic Kubernetes Service (EKS) seamlessly integrates with AWS services and supports Fargate for serverless containers. Microsoft’s Azure Kubernetes Service (AKS) provides robust integration with Azure tools and services.
The key differences among these providers lie in their integration with cloud-specific services, networking implementations, autoscaling capabilities, monitoring and logging integrations, and pricing models. Understanding these nuances is crucial when choosing the Kubernetes service that fits your needs and existing cloud infrastructure.
Local vs. Cloud Kubernetes: Choosing the Right Environment
Kubernetes can be run both locally and in the cloud, and each option serves a different purpose in the development and deployment lifecycle.
Local Kubernetes setups like Minikube or Docker Desktop's Kubernetes are ideal for development and testing. They offer a simplified environment with easy setup and teardown, perfect for iterating quickly on application code. However, they're limited by local machine resources and lack the more advanced features of cloud-based solutions.
Cloud Kubernetes, on the other hand, is designed for production workloads. It offers scalable resources, advanced networking and storage options, and integration with cloud provider services. While it requires more complex setup and management, cloud Kubernetes provides the robustness and scalability needed for production applications.
Kubernetes Flavors: From Lightweight to Full-Scale
The Kubernetes ecosystem offers several distributions catering to different use cases:
MicroK8s, developed by Canonical, is designed for IoT and edge computing. It offers a lightweight, single-node cluster that can be expanded as needed, making it perfect for resource-constrained environments.
Minikube is primarily used for local development and testing. It runs a single-node Kubernetes cluster in a VM, supporting most Kubernetes features while remaining easy to set up and use.
K3s, developed by Rancher Labs, is another lightweight distribution ideal for edge, IoT, and CI environments. Its minimal resource requirements and small footprint (less than 40MB) make it perfect for scenarios where resources are at a premium.
Full Kubernetes is the complete, production-ready distribution that offers multi-node clusters, a full feature set, and extensive extensibility. While it requires more resources and a more complex setup, it provides the robustness for large-scale production deployments.
Conclusion: Kubernetes as the Cornerstone of Modern Infrastructure
Kubernetes has firmly established itself as the leader in container orchestration thanks to its robust ecosystem, widespread adoption, and versatile deployment options. Whether you’re developing locally, managing edge devices, or deploying at scale in the cloud, there’s a Kubernetes solution tailored to your needs.
As containerization continues to shape the future of application development and deployment, Kubernetes stands at the forefront, driving innovation and enabling organizations to build, deploy, and scale applications with unprecedented efficiency and flexibility. Its dominance in container orchestration is not just a current trend but a glimpse into the future of cloud-native computing.
/ DevOps
-
Streamlining Infrastructure Management with Terraform and Ansible
In the realm of infrastructure management, Terraform and Ansible have emerged as powerful tools that significantly enhance the efficiency and reliability of managing complex IT environments. While each can be used independently, their combined use offers robust capabilities for managing and provisioning infrastructure as code (IaC).
Terraform: Declarative Infrastructure Provisioning
Terraform, developed by HashiCorp and first released in 2014, is an open-source tool that enables declarative infrastructure provisioning across various cloud providers and services. It uses its own domain-specific language (DSL) called HashiCorp Configuration Language (HCL) to define and manage resources. Key features of Terraform include:
- Multi-cloud support
- Declarative configuration
- Resource graph
- Plan and predict changes
- State management
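To make the declarative style concrete, here's a minimal, illustrative HCL snippet — the provider, region, and AMI id are placeholders, not recommendations:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# Declare the desired state; Terraform computes the changes needed to reach it
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI id
  instance_type = "t3.micro"
}
```

Running `terraform plan` previews what would change, and `terraform apply` makes it so — that's the "plan and predict changes" feature in practice.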
One of Terraform’s key competitors is AWS CloudFormation, which is specific to Amazon Web Services (AWS) and uses JSON or YAML templates to define infrastructure.
Ansible: Configuration Management and Automation
Ansible, created by Michael DeHaan and released in 2012, was acquired by Red Hat in 2015. It is an agentless automation tool that focuses on configuration management, application deployment, and orchestration. Ansible uses YAML-based playbooks to define and manage infrastructure, supporting a wide range of operating systems and cloud platforms. Key features of Ansible include:
- Agentless architecture
- YAML-based playbooks
- Extensive module library
- Idempotent operations
- Dynamic inventory
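A minimal playbook showing those pieces — the host group, package, and service names here are illustrative:

```yaml
# site.yml — idempotent: re-running it changes nothing if nginx is already set up
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Run it with `ansible-playbook -i inventory site.yml`; no agent is needed on the managed hosts, just SSH.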
Ansible competes with other configuration management tools like Puppet and Chef, which follow a different architecture and use their own DSLs.
Benefits of Using Terraform and Ansible Together
-
Comprehensive Infrastructure Management: Terraform excels at provisioning infrastructure, while Ansible shines in configuration management. Together, they cover the full spectrum of infrastructure lifecycle management.
-
Infrastructure as Code (IaC): Both tools allow teams to define infrastructure as code, enabling version control, collaboration, and automation. This approach reduces manual errors and ensures consistency across environments.
-
Multi-Cloud Support: Terraform’s native multi-cloud capabilities, combined with Ansible’s flexibility, make managing resources across different cloud providers seamless.
-
Scalability and Flexibility: Terraform’s declarative approach facilitates easy scaling and modification of infrastructure. Ansible’s agentless architecture and support for dynamic inventories make it highly scalable and flexible.
-
Community and Ecosystem: Both tools boast large and active communities, offering a wealth of modules, plugins, and integrations. This rich ecosystem accelerates development and allows teams to leverage pre-built components.
Comparing Terraform to CloudFormation
When comparing Terraform to CloudFormation:
- Cloud Provider Support: Terraform offers a more cloud-agnostic approach, while CloudFormation is specific to AWS.
- Language: Terraform uses HCL, which is often considered more readable than CloudFormation’s JSON/YAML.
- State Management: Terraform has built-in state management, while CloudFormation relies on AWS-specific constructs.
- Community: Terraform has a larger, multi-cloud community, whereas CloudFormation’s community is AWS-centric.
Comparing Ansible to Other Configuration Management Tools
In comparison to tools like Puppet and Chef:
- Architecture: Ansible is agentless, while Puppet and Chef require agents on managed nodes.
- Language: Ansible uses YAML, which is generally considered easier to learn than Puppet’s DSL or Chef’s Ruby-based recipes.
- Learning Curve: Ansible is often praised for its simplicity and ease of getting started.
- Scalability: While all tools can handle large-scale deployments, Ansible’s agentless nature can make it easier to manage in certain scenarios.
Choosing the Right Tool
The choice between Terraform, Ansible, and their alternatives depends on the specific needs and preferences of the team and organization. Consider factors such as:
- Existing infrastructure and cloud providers
- Team expertise and learning curve
- Scale of operations
- Specific use cases (e.g., provisioning vs. configuration management)
While these tools can be used together, they are not necessarily dependent on each other. Teams can select the tool that best fits their infrastructure management requirements, whether it’s provisioning with Terraform, configuration management with Ansible, or a combination of both.
Conclusion
By adopting infrastructure as code practices and leveraging tools like Terraform and Ansible, teams can streamline their infrastructure management processes, improve consistency, and achieve greater agility in an increasingly complex technology landscape. The combination of Terraform’s powerful provisioning capabilities and Ansible’s flexible configuration management creates a robust toolkit for modern DevOps practices, enabling organizations to manage their infrastructure more efficiently and reliably than ever before.
/ DevOps
-
My Recommended Way to Run WordPress in 2024
For Small to Medium Sized Sites
Here is where I would start with:
- Server Management - SpinupWP - $12/m
- Hosting - Vultr or DO - $12/m
- Speed + Security - Cloudflare APO - $5/m
With Cloudflare and the Paid APO Plugin, you will go from like 200 requests/sec to 600 requests/sec.

/ DevOps
-
Think I’m going to move all my Personal sites over to K3s. Rolling deploys and Rollbacks are just better with Kubernetes than the symlink craziness you have to do when not using Containers. Works great on Homelab. Now to get k3s set up on a VM.

/ DevOps
-
It took 20 versions for NodeJS to support ENVs, but here we are. Welcome to the future. 🎉🤡
/ DevOps
-
Programmers: Dracula theme + JetBrains Mono = ✅
/ DevOps