DevOps
-
36 Framework Fixtures in One Session: How Beads + Claude Code Changed Our Testing Game
We built test fixtures for 36 web frameworks in a single session. Not days. Not a week of grinding through documentation. Hours.
Here’s what happened and why it matters.
The Problem
api2spec is a CLI tool that parses source code to generate OpenAPI specifications. To test it properly, we needed real, working API projects for every supported framework—consistent endpoints, predictable responses, the whole deal.
We started with 5 frameworks: Laravel, Axum, Flask, Gin, and Express. The goal was to cover all 36 supported frameworks with fixture projects we could use to validate our parsers.
What We Actually Built
36 fixture repositories across 15 programming languages. Each one includes:
- Health check endpoints (`GET /health`, `GET /health/ready`)
- Full User CRUD (`GET/POST /users`, `GET/PUT/DELETE /users/:id`)
- Nested resources (`GET /users/:id/posts`)
- Post endpoints with pagination (`GET /posts?limit=&offset=`)
- Consistent JSON response structures
The language coverage tells the story:
- Go: Chi, Echo, Fiber, Gin
- Rust: Actix, Axum, Rocket
- TypeScript/JS: Elysia, Express, Fastify, Hono, Koa, NestJS
- Python: Django REST Framework, FastAPI, Flask
- Java: Micronaut, Spring
- Kotlin: Ktor
- Scala: Play, Tapir
- PHP: Laravel, Slim, Symfony
- Ruby: Rails, Sinatra
- C#/.NET: ASP.NET, FastEndpoints, Nancy
- C++: Crow, Drogon, Oat++
- Swift: Vapor
- Haskell: Servant
- Elixir: Phoenix
- Gleam: Wisp
For languages without local runtimes on my machine—Haskell, Elixir, Gleam, Scala, Java, Kotlin—we created Docker Compose configurations with both an app service and a dev service for interactive development.
How Beads Made This Possible
We used beads (a lightweight git-native issue tracker) to manage the work. The structure was simple:
- 40 total issues created
- 36 closed in one session
- 5 Docker setup tasks marked as P2 priority (these blocked dependent fixtures)
- 31 fixture tasks at P3 priority
- 4 remaining for future work
The dependency tracking was key. Docker environments had to be ready before their fixtures could be worked on, and beads handled that automatically.
When I’d finish a Docker setup task, the blocked fixture tasks became available.
Claude Code agents worked through the fixture implementations in parallel where possible.
The combination of clear task definitions, dependency management, and AI-assisted coding meant we weren’t context-switching between “what do I need to do next?” and “how do I implement this?”
The Numbers
| Metric | Value |
| --- | --- |
| Total Issues | 40 |
| Closed | 36 |
| Avg Lead Time | 0.9 hours |
| New GitHub Repos | 31 |
| Languages Covered | 15 |

That average lead time of under an hour per framework includes everything: creating the repo, implementing the endpoints, testing, and pushing.
What’s Left
Four tasks queued for follow-up sessions:
- Drift detection - Compare generated specs against expected output
- Configurable report formats - JSON, HTML, and log output options
- CLAUDE.md files - Development instructions for each fixture
- Claude agents - Framework-specific coding assistants
The Takeaway
Doing this today felt like having a superpower. Going from “I need to test across 36 frameworks” to actually having those fixtures ready used to mean a week of grinding, but with agents, Opus 4.5, and beads: BAM, done!
Beads gave us the structure to track dependencies and progress.
Claude Code agents handled the repetitive-but-different implementation work across languages and frameworks.
The combination let us focus on the interesting problems instead of the mechanical ones.
All 36 repos are live at github.com/api2spec with the `api2spec-fixture-*` naming convention. Have you tried this approach yet?
/ DevOps / Open-source / Testing / Ai-tools / Claude-code
-
Twenty Years of DevOps: What's Changed and What Hasn't
I’ve been thinking about how much our industry has transformed over the past two decades. It’s wild to realize that 20 years ago, DevOps as we know it didn’t even exist. We were deploying to production using FTP. Yes, FTP. You use the best tool that is available to you and that’s what we had.
So what’s actually changed, and what’s stayed stubbornly the same?
The Constants
JavaScript is still king. Although to be fair, the JavaScript of 2005 and the JavaScript of today are almost unrecognizable. We’ve gone from jQuery spaghetti to sophisticated module systems, TypeScript, and frameworks that would have seemed like science fiction back then.
And yet, we’re still centering that div.
HTML5 and semantic tags have genuinely helped, and I’m certainly grateful we’re not building everything out of tables and spans anymore.
What’s Different
The list of things we didn’t have 20 years ago is endless, but here are some of the big ones:
- WebSockets
- HTTP/2
- SSL certificates as a default (most sites were running plain HTTP)
- Git and GitOps
- Containers and Kubernetes
- CI/CD pipelines as we know them
- Jenkins (it didn’t exist yet)
- Docker (not even a concept)
The framework landscape is unrecognizable. You might call it a proliferation … We went from a handful of options to, well, a new JavaScript framework every week, so the joke goes.
Git adoption has been one of the best things to happen to our industry. (RIP SVN) Although I hear rumors that some industries are still clinging to some truly Bazaar version control systems. Mercurial, anyone?
The Bigger Picture
Here’s the thing that really gets me: our entire discipline didn’t exist. DevOps, SRE, platform engineering… these weren’t job titles. They weren’t even concepts people were discussing.
We had developers in their silos and operations in their walled gardens. Now we have infrastructure as code, GitOps workflows, observability platforms, and the expectation that you can deploy to production multiple times a day without breaking a sweat.
The cultural shift from “ops handles production” to “you build it, you run it” fundamentally changed how we think about software.
What Stays the Same
Despite all the tooling changes, some things remain constant. We’re still trying to ship reliable software faster. We’re still balancing speed with stability.
Twenty years from now, I wonder what we’ll be reminiscing about. Remember when we used to actually write software ourselves and complain about testing?
What seems cutting-edge is the new legacy before you know it.
/ DevOps / Web-development / Career / Retrospective
-
Introducing api2spec: Generate OpenAPI Specs from Source Code
You’ve written a beautiful REST API. Routes are clean, handlers are tested, and the types are solid. But where’s your OpenAPI spec? It’s probably outdated, incomplete, or doesn’t exist at all.
If you’re “lucky”, you’ve been maintaining one by hand. The alternatives aren’t great either: runtime generation requires starting your app and hitting every endpoint, and annotation-heavy approaches clutter your code. And by now we should all know that a hand-maintained spec will inevitably drift from reality.
What if you could just point a tool at your source code and get an OpenAPI spec?
Enter api2spec
```bash
# Install
go install github.com/api2spec/api2spec@latest

# Initialize config (auto-detects your framework)
api2spec init

# Generate your spec
api2spec generate
```

That’s it. No decorators to add. No server to start. No endpoints to crawl.
What We Support
Here’s where it gets interesting. We didn’t build this for one framework—we built a plugin architecture that supports 30+ frameworks across 16 programming languages:
- Go: Chi, Gin, Echo, Fiber, Gorilla Mux, stdlib
- TypeScript/JavaScript: Express, Fastify, Koa, Hono, Elysia, NestJS
- Python: FastAPI, Flask, Django REST Framework
- Rust: Axum, Actix, Rocket
- PHP: Laravel, Symfony, Slim
- Ruby: Rails, Sinatra
- JVM: Spring Boot, Ktor, Micronaut, Play
- And more: Elixir Phoenix, ASP.NET Core, Gleam, Vapor, Servant…
How It Works
The secret sauce is tree-sitter, an incremental parsing library that can parse source code into concrete syntax trees.
Why tree-sitter instead of language-specific AST libraries?
- One approach, many languages. We use the same pattern-matching approach whether we’re parsing Go, Rust, TypeScript, or PHP.
- Speed. Tree-sitter is designed for real-time parsing in editors. It’s fast enough to parse entire codebases in seconds.
- Robustness. It handles malformed or incomplete code gracefully, which is important when you’re analyzing real codebases.
- No runtime required. Your code never runs. We can analyze code even if dependencies aren’t installed or the project won’t compile.
For each framework, we have a plugin that knows how to detect if the framework is in use, find route definitions using tree-sitter queries, and extract schemas from type definitions.
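To make that concrete, here’s a rough sketch of what such a plugin could boil down to. The interface and the tree-sitter query below are my own illustration of the idea, not api2spec’s actual code:

```go
// Hypothetical sketch of a framework plugin; api2spec's real interface differs.
package plugin

// Route is one endpoint extracted from source code.
type Route struct {
	Method string // e.g. "GET"
	Path   string // e.g. "/users/:id"
}

// Plugin is implemented once per supported framework.
type Plugin interface {
	// Detect reports whether the framework appears to be in use,
	// e.g. by looking for the dependency in go.mod or package.json.
	Detect(projectRoot string) bool
	// Routes scans one source file and returns the route definitions
	// it can find, typically by running tree-sitter queries over it.
	Routes(source []byte) ([]Route, error)
}

// A tree-sitter query (against the Go grammar) that matches Gin-style calls
// such as r.GET("/users/:id", getUser), capturing the HTTP method, the path
// literal, and the handler name.
const ginRouteQuery = `
(call_expression
  function: (selector_expression
    operand: (identifier) @router
    field:   (field_identifier) @method)
  arguments: (argument_list
    (interpreted_string_literal) @path
    (identifier) @handler))
`
```

The captures (`@method`, `@path`, `@handler`) are the kind of thing a plugin would then translate into OpenAPI path items and operations.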
Let’s Be Honest: Limitations
Here’s where I need to be upfront. Static analysis has fundamental limitations.
When you generate OpenAPI specs at runtime (like FastAPI does natively), you have perfect information. The actual response types. The real validation rules. The middleware that transforms requests.
We’re working with source code. We can see structure, but not behavior.
What this means in practice:
- Route detection isn’t perfect. Dynamic routing or routes defined in unusual patterns might be missed.
- Schema extraction varies by language. Go structs with JSON tags? Great (there’s an example after this list). TypeScript interfaces? We can’t extract literal union types as enums yet.
- We can’t follow runtime logic. If your route path comes from a database, we won’t find it.
- Response types are inferred, not proven.
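To make the schema-extraction point concrete, this is the easy case: a plain Go struct whose JSON tags spell the schema out right in the source (the struct is just an illustration, not taken from a real fixture):

```go
// User is the kind of type static analysis handles well: field names,
// types, and JSON tags are all visible without running anything.
type User struct {
	ID    int    `json:"id"`
	Name  string `json:"name"`
	Email string `json:"email,omitempty"`
}
```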
This is not a replacement for runtime-generated specs (though maybe it will be one day). But for many teams, it’s a massive improvement over having no spec at all.
Built in a Weekend
The core of this tool was built in three days.
- Day one: Plugin architecture, Go framework support, CLI scaffolding
- Day two: TypeScript/JavaScript parsers, schema extraction from Zod
- Day three: Python, Rust, PHP support, fixture testing, edge case fixes
Is it production-ready? Maybe?
Is it useful? Absolutely.
For the fixture repositories we’ve created realistic APIs in Express, Gin, Flask, Axum, and Laravel. api2spec correctly extracts 20-30 routes and generates meaningful schemas. Not perfect. But genuinely useful.
How You Can Help
This project improves through real-world testing. Every fixture we create exposes edge cases. Every framework has idioms we haven’t seen yet.
- Create a fixture repository. Build a small API in your framework of choice. Run api2spec against it. File issues for what doesn’t work.
- Contribute plugins. The plugin interface is straightforward. If you know a framework well, you can make api2spec better at parsing it.
- Documentation. Found an edge case? Document it. Figured out a workaround? Share it.
The goal is usefulness, and useful tools get better when people use them.
Getting Started
```bash
go install github.com/api2spec/api2spec@latest
cd your-api-project
api2spec init
api2spec generate
cat openapi.yaml
```

If it works well, great! If it doesn’t, file an issue. Either way, you’ve helped.
api2spec is open source under the FSL-1.1-MIT license. Star us on GitHub if you find it useful.
Built with love, tree-sitter, and too much tea. ☕
/ DevOps / Programming / Openapi / Golang / Open-source
-
Automate Folder Archiving on macOS with Raycast and 7zip
If you’re like me and frequently need to archive project folders to an external drive, you know how tedious the process can be: right-click, compress, wait, find the archive, move it to the external drive, rename if there’s a conflict… It’s a workflow that begs for automation.
Today, I’m going to show you how I built a custom Raycast script that compresses any folder with 7zip and automatically moves it to an external drive, all with a single keyboard shortcut.
What We’re Building
A Raycast script command that:
- Takes whatever folder you have selected in Finder
- Compresses it using 7zip (better compression than macOS’s built-in zip)
- Moves it directly to a specified folder on your external drive
- Automatically handles version numbering if the archive already exists
- Provides clear error messages if something goes wrong
No more manual copying. No more filename conflicts. Just select, trigger, and done.
Prerequisites
Before we start, you’ll need:
- Raycast - Download from raycast.com if you haven’t already
- 7zip - Install via Homebrew: `brew install p7zip`
- An external drive - Obviously, but make sure you know its mount path
The Problem with the Built-in Approach
Initially, I thought: “Can’t I just have Raycast pass the selected folder path as an argument?”
The answer is technically yes, but it’s clunky. Raycast would prompt you for the folder path every time, which means you’d need to:
- Copy the folder path
- Trigger the Raycast command
- Paste the path
- Hit enter
That’s not automation—that’s just extra steps with good intentions.
The Solution: AppleScript Integration
The key insight was using AppleScript to grab the currently selected item from Finder. This way, the workflow becomes:
- Select a folder in Finder
- Trigger the Raycast command (I use `Cmd+Shift+7`)
- Watch it compress and move automatically
No input required. No path copying. Just pure automation bliss.
Building the Script
Here’s the complete script with all the error handling we need:
```bash
#!/bin/bash

# Required parameters:
# @raycast.schemaVersion 1
# @raycast.title Compress Selected to External Drive
# @raycast.mode fullOutput

# Optional parameters:
# @raycast.icon 📦
# @raycast.needsConfirmation false

# Documentation:
# @raycast.description Compress selected Finder folder with 7zip and move to external drive
# @raycast.author Your Name

EXTERNAL_DRIVE="/Volumes/YourDrive/ArchiveFolder"

# Get the selected item from Finder
FOLDER_PATH=$(osascript -e 'tell application "Finder" to set selectedItems to selection
if (count of selectedItems) is 0 then
    return ""
else
    return POSIX path of (item 1 of selectedItems as alias)
end if')

# Check if anything is selected
if [ -z "$FOLDER_PATH" ]; then
    echo "❌ Error: No item selected in Finder"
    echo "Please select a folder in Finder and try again"
    exit 1
fi

# Trim whitespace
FOLDER_PATH=$(echo "$FOLDER_PATH" | xargs)

# Check if path exists
if [ ! -e "$FOLDER_PATH" ]; then
    echo "❌ Error: Path does not exist: $FOLDER_PATH"
    exit 1
fi

# Check if path is a directory
if [ ! -d "$FOLDER_PATH" ]; then
    echo "❌ Error: Selected item is not a folder: $FOLDER_PATH"
    exit 1
fi

# Check if 7z is installed
if ! command -v 7z &> /dev/null; then
    echo "❌ Error: 7z not found. Install with: brew install p7zip"
    exit 1
fi

# Check if external drive is mounted
if [ ! -d "$EXTERNAL_DRIVE" ]; then
    echo "❌ Error: External drive not found at: $EXTERNAL_DRIVE"
    echo "Make sure the drive is connected and mounted"
    exit 1
fi

# Create archive name from folder name
BASE_NAME="$(basename "$FOLDER_PATH")"
ARCHIVE_NAME="${BASE_NAME}.7z"
OUTPUT_PATH="$EXTERNAL_DRIVE/$ARCHIVE_NAME"

# Check if archive already exists and find next available version number
if [ -f "$OUTPUT_PATH" ]; then
    echo "⚠️ Archive already exists, creating versioned copy..."
    VERSION=2
    while [ -f "$EXTERNAL_DRIVE/${BASE_NAME}_v${VERSION}.7z" ]; do
        VERSION=$((VERSION + 1))
    done
    ARCHIVE_NAME="${BASE_NAME}_v${VERSION}.7z"
    OUTPUT_PATH="$EXTERNAL_DRIVE/$ARCHIVE_NAME"
    echo "📝 Using version number: v${VERSION}"
    echo ""
fi

echo "🗜️ Compressing: $(basename "$FOLDER_PATH")"
echo "📍 Destination: $OUTPUT_PATH"
echo ""

# Compress with 7zip
if 7z a "$OUTPUT_PATH" "$FOLDER_PATH"; then
    echo ""
    echo "✅ Successfully compressed and moved to external drive"
    echo "📦 Archive: $ARCHIVE_NAME"
    echo "📊 Size: $(du -h "$OUTPUT_PATH" | cut -f1)"
else
    echo ""
    echo "❌ Error: Compression failed"
    exit 1
fi
```

Key Features Explained
1. Finder Integration
The AppleScript snippet grabs whatever you have selected in Finder:
```bash
FOLDER_PATH=$(osascript -e 'tell application "Finder" to set selectedItems to selection
if (count of selectedItems) is 0 then
    return ""
else
    return POSIX path of (item 1 of selectedItems as alias)
end if')
```

This returns a POSIX path (like `/Users/yourname/Documents/project`) that we can use with standard bash commands.

2. Comprehensive Error Checking
The script validates everything before attempting compression:
- Is anything selected?
- Does the path exist?
- Is it actually a directory?
- Is 7zip installed?
- Is the external drive connected?
Each check provides a helpful error message so you know exactly what went wrong.
3. Automatic Version Numbering
This was a crucial addition. If `project.7z` already exists, the script will automatically create `project_v2.7z`. If that exists, it’ll create `project_v3.7z`, and so on:

```bash
if [ -f "$OUTPUT_PATH" ]; then
    VERSION=2
    while [ -f "$EXTERNAL_DRIVE/${BASE_NAME}_v${VERSION}.7z" ]; do
        VERSION=$((VERSION + 1))
    done
    ARCHIVE_NAME="${BASE_NAME}_v${VERSION}.7z"
    OUTPUT_PATH="$EXTERNAL_DRIVE/$ARCHIVE_NAME"
fi
```

No more manual renaming. No more overwriting precious backups.
4. Progress Feedback
Using `@raycast.mode fullOutput` means you see everything that’s happening:
- Which folder is being compressed
- Where it’s going
- The final archive size
This transparency is important when you’re archiving large projects that might take a few minutes.
Setting It Up
- Find your external drive path:

  ```bash
  ls /Volumes/
  ```

  Look for your drive name, then determine where you want archives saved. For example: `/Volumes/Expansion/WebProjectsArchive`

- Create the script:
  - Open Raycast Settings → Extensions → Script Commands
  - Click “Create Script Command”
  - Paste the script above
  - Update the `EXTERNAL_DRIVE` variable with your path
  - Save it (like `~/Documents/Raycast/Scripts/` or `~/.local/raycast`)

- Make it executable:

  ```bash
  chmod +x ~/.local/raycast/compress-to-external.sh
  ```

- Assign a hotkey (optional but recommended):
  - In Raycast, search for your script
  - Press `Cmd+K` and select “Add Hotkey”
  - I use `Cmd+Shift+7` for “Archive”
Using It
Now the workflow is beautifully simple:
- Open Finder
- Select a folder
- Hit your hotkey (or trigger via Raycast search)
- Watch the magic happen
The script will show you the compression progress and let you know when it’s done, including the final archive size.
Why 7zip Over Built-in Compression?
macOS has built-in zip compression, so why bother with 7zip? A few reasons:
- Better compression ratios - 7zip typically achieves 30-70% better compression than zip
- Cross-platform - .7z files are widely supported on Windows and Linux
- More options - If you want to add encryption or split archives later, 7zip supports it
- Speed - 7zip can be faster for large files
For project archives that might contain thousands of files and dependencies, these advantages add up quickly.
Potential Improvements
This script works great for my needs, but here are some ideas for enhancement:
- Multiple drive support - Let the user select from available drives
- Compression level options - Add arguments for maximum vs. fast compression
- Notification on completion - Use macOS notifications for long-running compressions
- Delete original option - Add a flag to remove the source folder after successful archiving
- Batch processing - Handle multiple selected folders
Troubleshooting
“7z not found” error:

```bash
brew install p7zip
```

“External drive not found” error: Make sure your drive is connected and the path in `EXTERNAL_DRIVE` matches exactly. Check with:

```bash
ls -la /Volumes/YourDrive/YourFolder
```

Script doesn’t appear in Raycast: Refresh the script directory in Raycast Settings → Extensions → Script Commands → Reload All Scripts

Permission denied: Make sure the script is executable:

```bash
chmod +x your-script.sh
```

Conclusion
This Raycast script has saved me countless hours of manual file management. What used to be a multi-step process involving right-clicks, waiting, dragging, and renaming is now a single keyboard shortcut.
The beauty of Raycast’s script commands is that they’re just bash scripts with some metadata. If you know bash, you can automate almost anything on your Mac. This particular script demonstrates several useful patterns:
- Integrating with Finder via AppleScript
- Robust error handling
- Automatic file versioning
- User-friendly progress feedback
I encourage you to take this script and adapt it to your own workflow. Maybe you want to compress to Dropbox instead of an external drive. Maybe you want to add a timestamp to the filename. The flexibility is there; you just need to modify a few lines.
Happy automating!
Have questions or improvements? Feel free to reach out. If you build something cool with this pattern, I’d love to hear about it!
/ DevOps / Programming
-
What you should start saying in Standup now
/ DevOps / Programming
-
You'll never guess what I was searching Perplexity AI for just now.
- Can you provide examples of successful mini rack builds?
- What are the main benefits of using a mini rack for a home lab?
- A mini rack sommelier.
- I dabble in the racks of mini
- I'm somewhat of a mini rack connoisseur.
- Jeff Geerling's sweet MINI RACK
Ok, if you made it this far, you weirdos. Here is my MiniRack Dojo
https://www.perplexity.ai/collections/minirack-dojo-qotiBcSJSQekqymojD66Ow

/ DevOps / Productivity / Online Tools
-
Kubernetes: The Dominant Force in Container Orchestration
In the rapidly evolving landscape of cloud computing, container orchestration has become a critical component of modern application deployment and management. Kubernetes has emerged as the undisputed leader among the various platforms available, revolutionizing how we deploy, scale, and manage containerized applications. This blog post delves into the rise of Kubernetes, its rich ecosystem, and the various ways it can be deployed and utilized.
The Rise of Kubernetes: From Google’s Halls to Global Dominance
Kubernetes, often abbreviated as K8s, has a fascinating origin story that begins within Google. Born from the tech giant’s extensive experience with container management, Kubernetes is the open-source successor to Google’s internal system called Borg. In 2014, Google decided to open-source Kubernetes, a move that would reshape the container orchestration landscape.
Kubernetes’s journey from a Google project to the cornerstone of cloud-native computing is nothing short of remarkable. Its adoption accelerated rapidly, fueled by its robust features and the backing of the newly formed Cloud Native Computing Foundation (CNCF) in 2015. As major cloud providers embraced Kubernetes, it quickly became the de facto standard for container orchestration.
Key milestones in Kubernetes' history showcase its rapid evolution:
- In 2015, Kubernetes 1.0 was released, marking its readiness for production use.
- 2017 saw the major cloud providers adopting Kubernetes as their primary container orchestration platform.
- By 2018, Kubernetes had matured significantly, becoming the first project to graduate from the CNCF.
- From 2019 onwards, Kubernetes has experienced continued rapid adoption and ecosystem growth.
Today, Kubernetes continues to evolve, with a thriving community of developers and users driving innovation at an unprecedented pace.
The Kubernetes Ecosystem: A Toolbox for Success
As Kubernetes has grown, so has its ecosystem of tools and extensions. This rich landscape of complementary technologies has played a crucial role in Kubernetes' dominance, offering solutions to common challenges and extending its capabilities in numerous ways.
Helm, often called the package manager for Kubernetes, is a powerful tool that empowers developers by simplifying the deployment of applications and services. It allows developers to define, install, and upgrade even the most complex Kubernetes applications, putting them in control of the deployment process.
Prometheus has become the go-to solution for monitoring and alerting in the Kubernetes world. Its powerful data model and query language make it ideal for monitoring containerized environments, providing crucial insights into application and infrastructure performance.
Istio has emerged as a popular service mesh, adding sophisticated capabilities like traffic management, security, and observability to Kubernetes clusters. It allows developers to decouple application logic from the intricacies of network communication, enhancing both security and reliability.
Other notable tools in the ecosystem include Rancher, a complete container management platform; Lens, a user-friendly Kubernetes IDE; and Kubeflow, a machine learning toolkit explicitly designed for Kubernetes environments.
Kubernetes Across Cloud Providers: Similar Yet Distinct
While Kubernetes is cloud-agnostic, its implementation can vary across different cloud providers. Major players like Google, Amazon, and Microsoft offer managed Kubernetes services, each with unique features and integrations.
Google Kubernetes Engine (GKE) leverages Google’s deep expertise with Kubernetes, offering tight integration with other Google Cloud Platform services. Amazon’s Elastic Kubernetes Service (EKS) seamlessly integrates with AWS services and supports Fargate for serverless containers. Microsoft’s Azure Kubernetes Service (AKS) provides robust integration with Azure tools and services.
The key differences among these providers lie in their integration with cloud-specific services, networking implementations, autoscaling capabilities, monitoring and logging integrations, and pricing models. Understanding these nuances is crucial when choosing the Kubernetes service that fits your needs and existing cloud infrastructure.
Local vs. Cloud Kubernetes: Choosing the Right Environment
Kubernetes can be run both locally and in the cloud, and each option serves a different purpose in the development and deployment lifecycle.
Local Kubernetes setups like Minikube or Docker Desktop’s Kubernetes are ideal for development and testing. They offer a simplified environment with easy setup and teardown, perfect for iterating quickly on application code. However, they’re limited by local machine resources and lack the more advanced features of cloud-based solutions.
Cloud Kubernetes, on the other hand, is designed for production workloads. It offers scalable resources, advanced networking and storage options, and integration with cloud provider services. While it requires more complex setup and management, cloud Kubernetes provides the robustness and scalability needed for production applications.
Kubernetes Flavors: From Lightweight to Full-Scale
The Kubernetes ecosystem offers several distributions catering to different use cases:
MicroK8s, developed by Canonical, is designed for IoT and edge computing. It offers a lightweight, single-node cluster that can be expanded as needed, making it perfect for resource-constrained environments.
Minikube is primarily used for local development and testing. It runs a single-node Kubernetes cluster in a VM, supporting most Kubernetes features while remaining easy to set up and use.
K3s, developed by Rancher Labs, is another lightweight distribution ideal for edge, IoT, and CI environments. Its minimal resource requirements and small footprint (less than 40MB) make it perfect for scenarios where resources are at a premium.
Full Kubernetes is the complete, production-ready distribution that offers multi-node clusters, a full feature set, and extensive extensibility. While it requires more resources and a more complex setup, it provides the robustness needed for large-scale production deployments.
Conclusion: Kubernetes as the Cornerstone of Modern Infrastructure
Kubernetes has firmly established itself as the leader in container orchestration thanks to its robust ecosystem, widespread adoption, and versatile deployment options. Whether you’re developing locally, managing edge devices, or deploying at scale in the cloud, there’s a Kubernetes solution tailored to your needs.
As containerization continues to shape the future of application development and deployment, Kubernetes stands at the forefront, driving innovation and enabling organizations to build, deploy, and scale applications with unprecedented efficiency and flexibility. Its dominance in container orchestration is not just a current trend but a glimpse into the future of cloud-native computing.
/ DevOps
-
Streamlining Infrastructure Management with Terraform and Ansible
In the realm of infrastructure management, Terraform and Ansible have emerged as powerful tools that significantly enhance the efficiency and reliability of managing complex IT environments. While each can be used independently, their combined use offers robust capabilities for managing and provisioning infrastructure as code (IaC).
Terraform: Declarative Infrastructure Provisioning
Terraform, developed by HashiCorp and first released in 2014, is an open-source tool that enables declarative infrastructure provisioning across various cloud providers and services. It uses its own domain-specific language (DSL) called HashiCorp Configuration Language (HCL) to define and manage resources. Key features of Terraform include:
- Multi-cloud support
- Declarative configuration
- Resource graph
- Plan and predict changes
- State management
One of Terraform’s key competitors is AWS CloudFormation, which is specific to Amazon Web Services (AWS) and uses JSON or YAML templates to define infrastructure.
Ansible: Configuration Management and Automation
Ansible, created by Michael DeHaan and released in 2012, was acquired by Red Hat in 2015. It is an agentless automation tool that focuses on configuration management, application deployment, and orchestration. Ansible uses YAML-based playbooks to define and manage infrastructure, supporting a wide range of operating systems and cloud platforms. Key features of Ansible include:
- Agentless architecture
- YAML-based playbooks
- Extensive module library
- Idempotent operations
- Dynamic inventory
Ansible competes with other configuration management tools like Puppet and Chef, which follow a different architecture and use their own DSLs.
Benefits of Using Terraform and Ansible Together
- Comprehensive Infrastructure Management: Terraform excels at provisioning infrastructure, while Ansible shines in configuration management. Together, they cover the full spectrum of infrastructure lifecycle management.
- Infrastructure as Code (IaC): Both tools allow teams to define infrastructure as code, enabling version control, collaboration, and automation. This approach reduces manual errors and ensures consistency across environments.
- Multi-Cloud Support: Terraform’s native multi-cloud capabilities, combined with Ansible’s flexibility, make managing resources across different cloud providers seamless.
- Scalability and Flexibility: Terraform’s declarative approach facilitates easy scaling and modification of infrastructure. Ansible’s agentless architecture and support for dynamic inventories make it highly scalable and flexible.
- Community and Ecosystem: Both tools boast large and active communities, offering a wealth of modules, plugins, and integrations. This rich ecosystem accelerates development and allows teams to leverage pre-built components.
Comparing Terraform to CloudFormation
When comparing Terraform to CloudFormation:
- Cloud Provider Support: Terraform offers a more cloud-agnostic approach, while CloudFormation is specific to AWS.
- Language: Terraform uses HCL, which is often considered more readable than CloudFormation’s JSON/YAML.
- State Management: Terraform has built-in state management, while CloudFormation relies on AWS-specific constructs.
- Community: Terraform has a larger, multi-cloud community, whereas CloudFormation’s community is AWS-centric.
Comparing Ansible to Other Configuration Management Tools
In comparison to tools like Puppet and Chef:
- Architecture: Ansible is agentless, while Puppet and Chef require agents on managed nodes.
- Language: Ansible uses YAML, which is generally considered easier to learn than Puppet’s DSL or Chef’s Ruby-based recipes.
- Learning Curve: Ansible is often praised for its simplicity and ease of getting started.
- Scalability: While all tools can handle large-scale deployments, Ansible’s agentless nature can make it easier to manage in certain scenarios.
Choosing the Right Tool
The choice between Terraform, Ansible, and their alternatives depends on the specific needs and preferences of the team and organization. Consider factors such as:
- Existing infrastructure and cloud providers
- Team expertise and learning curve
- Scale of operations
- Specific use cases (e.g., provisioning vs. configuration management)
While these tools can be used together, they are not necessarily dependent on each other. Teams can select the tool that best fits their infrastructure management requirements, whether it’s provisioning with Terraform, configuration management with Ansible, or a combination of both.
Conclusion
By adopting infrastructure as code practices and leveraging tools like Terraform and Ansible, teams can streamline their infrastructure management processes, improve consistency, and achieve greater agility in an increasingly complex technology landscape. The combination of Terraform’s powerful provisioning capabilities and Ansible’s flexible configuration management creates a robust toolkit for modern DevOps practices, enabling organizations to manage their infrastructure more efficiently and reliably than ever before.
/ DevOps
-
My Recommended Way to Run WordPress in 2024
For Small to Medium Sized Sites
Here is where I would start:
- Server Management - SpinupWP - $12/m
- Hosting - Vultr or DO - $12/m
- Speed + Security - Cloudflare APO - $5/m
With Cloudflare and the Paid APO Plugin, you will go from like 200 requests/sec to 600 requests/sec.

/ DevOps
-
Think I’m going to move all my Personal sites over to K3s. Rolling deploys and Rollbacks are just better with Kubernetes than the symlink craziness you have to do when not using Containers. Works great on Homelab. Now to get k3s set up on a VM.

/ DevOps
-
It took 20 versions for NodeJS to support .env files natively, but here we are. Welcome to the future. 🎉🤡
/ DevOps
-
Programmers: Dracula theme + JetBrains Mono = ✅
/ DevOps
-
Got the AWS account mostly cleaned up. My Mailcoach version didn’t support PHP 8.2, so I had to ditch it after spending time migrating and getting it set up on the new server. Only two VMs on Vultr so far. The Plausible server will be third. Will be fully migrated by the weekend.
/ DevOps
-
Time to migrate off AWS. Looking at saving $100 a month and moving to Vultr.
/ DevOps
-
Breaking RSA

RSA encryption is based on the difficulty of factoring large composite numbers into their prime factors. To break RSA encryption, one must factorize the public key, a product of two large prime numbers. The number of qubits required to factorize a number N using Shor's algorithm (a quantum algorithm that can be used to factorize integers) is 2n + 3, where n is the number of bits in N.
For instance, RSA-2048, a common RSA key size used in practice, comprises two 1024-bit prime numbers. To break this, one would need approximately 2*2048 + 3 = 4099 qubits. However, this is a simplification, as the number of qubits required would depend on the specifics of the quantum computer and quantum error correction techniques used, which could significantly increase the number of qubits required.
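As a quick sanity check on that arithmetic, here is the same 2n + 3 estimate applied to a few common key sizes (logical qubits only; error correction would multiply the physical count many times over):

```go
package main

import "fmt"

// shorLogicalQubits returns the rough 2n + 3 logical-qubit estimate for
// running Shor's algorithm against an n-bit modulus. It deliberately
// ignores quantum error correction, which dominates real hardware costs.
func shorLogicalQubits(nBits int) int {
	return 2*nBits + 3
}

func main() {
	for _, bits := range []int{1024, 2048, 4096} {
		fmt.Printf("RSA-%d: ~%d logical qubits\n", bits, shorLogicalQubits(bits))
	}
}
```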
Having enough qubits isn't the only challenge. The qubits must be interconnected in a way that allows the implementation of the quantum gates needed for Shor's algorithm, and the machine must be able to maintain quantum coherence long enough to complete the operation.
/ DevOps
-
GitOps allows for version control and rollback capabilities. With NoOps, there is a lack of visibility and control over the deployment pipeline, which can lead to errors and security issues. GitOps also promotes a culture of continuous integration and delivery.

/ DevOps
-
GRANT commands in PostgreSQL allow a user to assign privileges on a database object to other users. Syntax:
`GRANT privilege_name ON object_type object_name TO role_name;`
Ex: `GRANT SELECT ON TABLE sales TO user_read;`
Revoking privileges is done using the REVOKE command.
/ DevOps