Programming
-
Security and Reliability in AI-Assisted Development
You may not realize it, but AI code generation is fundamentally non-deterministic. It’s probabilistic at its core: it’s predicting code rather than computing it.
And while there’s a lot of orchestration happening between the raw model output and what actually lands in your editor, you can still get wildly different results depending on how you use the tools.
This matters more than most people realize.
Garbage In, Garbage Out (Still True)
The old programming adage applies here with renewed importance. You need to be explicit with these tools. Adding predictability to how you build is crucial.
Some interesting patterns:
- Specialized agents set up for specific tasks
- Skills and templates for common operations
- Orchestrator conversations that plan but don’t implement directly
- Multiple conversation threads working on the same codebase via Git workspaces
The more structure you provide, the more consistent your output becomes.
The Security Problem
This topic doesn’t get talked about enough. All of our common bugs have snuck into the training data. SQL injection patterns, XSS vulnerabilities, insecure defaults… they’re all in there.
The model can’t always be relied upon to build it correctly the first time. Then there’s the question of trust.
Do you trust your LLM provider?
Is their primary focus on quality and reliable, consistent output? What guardrails exist before the code reaches you? Is the model specialized for coding, or is it a general-purpose model that happens to write code?
These are important engineering questions.
Deterministic Wrappers Around Probabilistic Cores
The more we can put deterministic wrappers around these probabilistic cores, the more consistent the output will be.
So, what does this look like in practice?
Testing is no longer optional. We used to joke that we’d get to testing when we had time. That’s not how it works anymore. Testing is required because it provides feedback to the models. It’s your mechanism for catching problems before they compound.
Testing is your last line of defense against garbage sneaking into the system.
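To make that concrete, here’s a minimal sketch of what a deterministic wrapper can look like in practice. The module and function names are hypothetical; the point is that fixed assertions gate whatever code the model produced:

```python
# A minimal sketch of a deterministic gate around probabilistic output.
# `my_project.text.slugify` is a hypothetical function an AI assistant generated;
# these fixed assertions must pass before that code is allowed to ship.
import re

from my_project.text import slugify  # hypothetical AI-generated module


def test_slugify_is_lowercase_and_url_safe():
    result = slugify("Hello, World!")
    assert result == "hello-world"
    assert re.fullmatch(r"[a-z0-9-]+", result)


def test_slugify_is_deterministic():
    # Same input, same output, every run -- no matter which model wrote the code.
    assert slugify("Testing is not optional") == slugify("Testing is not optional")
```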
AI-assisted review is essential. The amount of code you can now create has increased dramatically. You need better tools to help you understand all that code. The review step, typically done during a pull request, is now crucial for product development. Not optional. Crucial.
The model needs to review its own output, or you need a separate review process that catches what the generation step missed.
The Takeaway
We’re at an interesting point in time. These tools can dramatically increase your output, but we should only trust the result if the right guardrails are built around them.
Structure your prompts. Test everything. Review systematically. Trust but verify.
The developers who figure out how to add predictability to unpredictable processes are the ones who’ll be shipping features instead of shitting out code.
/ DevOps / AI / Programming
-
Learning to Program in 2026
If I had to start over as a programmer in 2026, what would I do differently? This question comes up more and more, and with people actively building software using AI, it’s as relevant as ever.
Some people will tell you to pick a project and learn whatever language fits that project best. Others will push JavaScript because it’s everywhere and you can build just about anything with it. Both are reasonable takes, but I do think there’s a best first language.
However, don’t take my word for it. Listen to Brian Kernighan. If you’re not familiar with the name, he co-authored The C Programming Language back in 1978 and worked at Bell Labs alongside the creators of Unix. Oh also, he is a computer science professor at Princeton. This man TAUGHT programming to generations of computer scientists.
There’s an excellent interview on Computerphile with Kernighan where he makes a compelling case for Python as the first language.
Why Python?
Kernighan makes three points that you should listen to.
First, the “no limitations” argument. Tools like Scratch are great for kids or early learners, but you hit a wall pretty quickly. Python sits in a sweet spot—it’s readable and approachable, but the ecosystem is deep enough that you won’t outgrow it.
Second, the skills transfer. Once you learn the fundamentals—loops, variables, data structures—they apply everywhere. As Kernighan puts it: “If you’ve done N programming languages, the N+1 language is usually not very hard to get off the ground.”
Learning to think in code matters more than any specific syntax.
Third, Python works great for prototyping. You can build something to figure out your algorithms and data structures, then move to another language depending on your needs.
Why Not JavaScript?
JavaScript is incredibly versatile, but it throws a lot at beginners. Asynchronous behavior, event loops, `this` binding, the DOM… and that’s a lot of cognitive overhead when you’re just trying to grasp what a variable is.
Python’s readable syntax lets you focus on learning how to think like a programmer. Fewer cognitive hurdles mean faster progress on the fundamentals that actually matter.
There’s also the type system. JavaScript’s loose equality comparisons (`==` vs `===`) and automatic type coercion trip people up constantly. Python is more predictable. When you’re learning, predictable is good.
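A quick illustration of that predictability (a minimal sketch; the JavaScript behavior is noted in the comments):

```python
# Python never coerces types in an equality check.
print("1" == 1)   # False: a string is never equal to an int
print(1 == 1.0)   # True: numeric types compare by value
# In JavaScript, "1" == 1 evaluates to true because == coerces the string to a number;
# you need === to get the strict comparison Python gives you by default.
```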
The Path Forward
So here’s how I’d approach it: start with Python and focus on the basics. Loops, variables, data structures.
Get comfortable reading and writing code. Once you’ve got that foundation, you can either go deeper with Python or branch out to whatever language suits the projects you want to build.
The goal isn’t to master Python, it’s to learn how to think about problems and express solutions in code.
That skill transfers everywhere, including reviewing AI-generated code in whatever language you end up working with.
There are a ton of great resources online to help you learn Python, but one I see consistently is Python for Everybody by Dr. Chuck.
Happy coding!
/ Programming / Python / learning
-
Two Arguments Against AI in Programming (And Why I'm Not Convinced)
I’ve been thinking about the programmers who are against AI tools, and I think their arguments generally fall into two camps.
Of course, these are just my observations, so take them with a grain of salt, or you know, tell me I’m a dumbass in the comments.
The Learning Argument
The first position is that AI prevents you from learning good software engineering concepts because it does the hard work for you.
All those battle scars that industry veterans have accumulated over the years aren’t going to be felt by the new breed. For sure, the painful lessons about why you should do something this way and not that way are worth preserving for the future.
Maybe we’re already seeing anti-patterns slip back into how we build code? I don’t know for sure; it’s going to take some PhD-level research to figure it out.
To this argument I say: if we haven’t codified the good patterns by now, what the hell have we all been doing? I think there are more good patterns in public code than bad ones.
So just RELAX! The cream will rise to the top. The glass is half full. We’ll be fine… Which brings me to the next argument.
The Non-Determinism Argument
The second position comes from people who’ve dug into how large language models actually work.
They see that it’s predicting the next token, and they start thinking of it as this fundamentally non-deterministic thing.
How can we trust software built on predictions? How do we know what’s actually going to happen when everything is based on weights created during training?
Here’s the thing though: when you’re using a model from a provider, you’re not getting raw output. There’s a whole orchestration layer: guardrails, hallucination filters, mixture-of-experts approaches, and thinking features that all work together to let the model double-check its work before responding.
It’s way more sophisticated than “predict the next word and hope for the best.”
That said, I understand the discomfort. We’re used to deterministic systems where the same input reliably produces the same output.
We are now moving from those kinds of systems to ones that are probabilistic.
Let me remind you, math doesn’t care about the difference between a deterministic and a probabilistic system. It just works, and so do we.1
The Third Argument I’m Skipping
There’s obviously a third argument: the ethical one about training data, labor displacement, and whether these tools should exist at all.
I will say this though, it’s too early to make definitive ethical judgments on a tool while we’re still building it, while we’re still discovering what it’s actually useful for.
Will it all be worth it in the end? We won’t know until the end.
-
By this “we” I mean us as the human race, but also the software we build. ↩︎
-
-
When do you think everyone will finally agree that Python is Python 3 and not 2? I know we aren’t going to get a major version bump anytime soon, if ever again, but we really should consider putting uv in core… Python needs modern package management baked in.
/ Programming / Python
-
OpenCode | The open source AI coding agent
OpenCode - The open source coding agent.
/ AI / Programming / Tools / links / agent / open source / code
-
Amp is a frontier coding agent that lets you wield the full power of leading models.
/ AI / Programming / Tools / links / agent / automation / code
-
vibe is the bait, code is the switch.
Vibe coding gets people in the door. We all know the hooks. Once you’re actually building something real, you still need to understand what the code is doing. And that’s not a bad thing.
/ AI / Programming / Vibe-coding
-
Stay updated with the latest news, feature releases, and critical security and code quality blogs from CodeAnt AI.
/ AI / Programming / blogging / links / security
-
Claude Code Learning Hub - Master AI-Powered Development
Learn Claude Code, VS Code, Git/GitHub, Python, and R with hands-on tutorials. Build real-world projects with AI assistance.
/ Programming / Development / links / tutorials / code / coding lessons / learning
-
Shadcn Studio - Shadcn UI Components, Blocks & Templates
Accelerate your project development with ready-to-use, & customizable 1000+ Shadcn UI Components, Blocks, UI Kit, Boilerplate, Templates & Themes with AI Tools.
-
Introducing api2spec: Generate OpenAPI Specs from Source Code
You’ve written a beautiful REST API. Routes are clean, handlers are tested and the types are solid. But where’s your OpenAPI spec? It’s probably outdated, incomplete, or doesn’t exist at all.
If you’re “lucky”, you’ve been maintaining one by hand. The alternatives aren’t great either: runtime generation requires starting your app and hitting every endpoint, and annotation-heavy approaches clutter your code. And by now we should all know that a hand-maintained spec will inevitably drift from reality.
What if you could just point a tool at your source code and get an OpenAPI spec?
Enter api2spec
```bash
# Install
go install github.com/api2spec/api2spec@latest

# Initialize config (auto-detects your framework)
api2spec init

# Generate your spec
api2spec generate
```

That’s it. No decorators to add. No server to start. No endpoints to crawl.
What We Support
Here’s where it gets interesting. We didn’t build this for one framework—we built a plugin architecture that supports 30+ frameworks across 16 programming languages:
- Go: Chi, Gin, Echo, Fiber, Gorilla Mux, stdlib
- TypeScript/JavaScript: Express, Fastify, Koa, Hono, Elysia, NestJS
- Python: FastAPI, Flask, Django REST Framework
- Rust: Axum, Actix, Rocket
- PHP: Laravel, Symfony, Slim
- Ruby: Rails, Sinatra
- JVM: Spring Boot, Ktor, Micronaut, Play
- And more: Elixir Phoenix, ASP.NET Core, Gleam, Vapor, Servant…
How It Works
The secret sauce is tree-sitter, an incremental parsing library that can parse source code into concrete syntax trees.
Why tree-sitter instead of language-specific AST libraries?
- One approach, many languages. We use the same pattern-matching approach whether we’re parsing Go, Rust, TypeScript, or PHP.
- Speed. Tree-sitter is designed for real-time parsing in editors. It’s fast enough to parse entire codebases in seconds.
- Robustness. It handles malformed or incomplete code gracefully, which is important when you’re analyzing real codebases.
- No runtime required. Your code never runs. We can analyze code even if dependencies aren’t installed or the project won’t compile.
For each framework, we have a plugin that knows how to detect if the framework is in use, find route definitions using tree-sitter queries, and extract schemas from type definitions.
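To give a flavor of how that works, here’s an illustrative sketch (not api2spec’s actual plugin code; the query and capture names are assumptions) of the kind of tree-sitter query a JavaScript/TypeScript plugin might use to spot Express-style route registrations like `app.get("/users", handler)`:

```python
# Illustrative only -- the query and capture names are assumptions, not api2spec internals.
# A tree-sitter query (S-expression pattern) that matches calls such as
# app.get("/users", handler) in the tree-sitter-javascript grammar.
EXPRESS_ROUTE_QUERY = r"""
((call_expression
   function: (member_expression
     object: (identifier) @router
     property: (property_identifier) @http.method)
   arguments: (arguments
     (string) @route.path))
 (#match? @http.method "^(get|post|put|delete|patch)$"))
"""

# A plugin would run a query like this over the parsed source (via a tree-sitter
# binding) and turn each @http.method / @route.path capture pair into an OpenAPI path entry.
```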
Let’s Be Honest: Limitations
Here’s where I need to be upfront. Static analysis has fundamental limitations.
When you generate OpenAPI specs at runtime (like FastAPI does natively), you have perfect information. The actual response types. The real validation rules. The middleware that transforms requests.
We’re working with source code. We can see structure, but not behavior.
What this means in practice:
- Route detection isn’t perfect. Dynamic routing or routes defined in unusual patterns might be missed.
- Schema extraction varies by language. Go structs with JSON tags? Great. TypeScript interfaces? We can’t extract literal union types as enums yet.
- We can’t follow runtime logic. If your route path comes from a database, we won’t find it.
- Response types are inferred, not proven.
This is not a replacement for runtime-generated specs, though maybe it will be one day. For many teams, it’s still a massive improvement over having no spec at all.
Built in a Weekend
The core of this tool was built in three days.
- Day one: Plugin architecture, Go framework support, CLI scaffolding
- Day two: TypeScript/JavaScript parsers, schema extraction from Zod
- Day three: Python, Rust, PHP support, fixture testing, edge case fixes
Is it production-ready? Maybe?
Is it useful? Absolutely.
For the fixture repositories we’ve created realistic APIs in Express, Gin, Flask, Axum, and Laravel. api2spec correctly extracts 20-30 routes and generates meaningful schemas. Not perfect. But genuinely useful.
How You Can Help
This project improves through real-world testing. Every fixture we create exposes edge cases. Every framework has idioms we haven’t seen yet.
- Create a fixture repository. Build a small API in your framework of choice. Run api2spec against it. File issues for what doesn’t work.
- Contribute plugins. The plugin interface is straightforward. If you know a framework well, you can make api2spec better at parsing it.
- Documentation. Found an edge case? Document it. Figured out a workaround? Share it.
The goal is usefulness, and useful tools get better when people use them.
Getting Started
```bash
go install github.com/api2spec/api2spec@latest
cd your-api-project
api2spec init
api2spec generate
cat openapi.yaml
```

If it works well, great! If it doesn’t, file an issue. Either way, you’ve helped.
api2spec is open source under the FSL-1.1-MIT license. Star us on GitHub if you find it useful.
Built with love, tree-sitter, and too much tea. ☕
/ DevOps / Programming / Openapi / Golang / Open-source
-
please make it stop
😮💨
```python
import re

pattern = re.compile(r"""
    ^                       # The beginning of my hubris
    I \s+ may \s+ be \s+    # A moment of hope
    done \s+ with \s+       # Freedom! Sweet freedom!
    regex                   # The beast I sought to escape
    \s+ but \s+             # Plot twist incoming...
    regex                   # It's back. It was always back.
    \s+ is \s+ not \s+      # The cruel truth
    done \s+ with \s+       # It has unfinished business
    me                      # I am the business
    $                       # There is no escape, only EOL
""", re.VERBOSE)
```

-
Some Rust I wrote for my Link in Bio site. I think Link in Bio is my favorite thing to build when learning a language or a new library combination. Check out GitHub’s template repository setting for base sites that you can iterate on.
-
It’s just #Vibes Man
-
Automate Folder Archiving on macOS with Raycast and 7zip
If you’re like me and frequently need to archive project folders to an external drive, you know how tedious the process can be: right-click, compress, wait, find the archive, move it to the external drive, rename if there’s a conflict… It’s a workflow that begs for automation.
Today, I’m going to show you how I built a custom Raycast script that compresses any folder with 7zip and automatically moves it to an external drive, all with a single keyboard shortcut.
What We’re Building
A Raycast script command that:
- Takes whatever folder you have selected in Finder
- Compresses it using 7zip (better compression than macOS’s built-in zip)
- Moves it directly to a specified folder on your external drive
- Automatically handles version numbering if the archive already exists
- Provides clear error messages if something goes wrong
No more manual copying. No more filename conflicts. Just select, trigger, and done.
Prerequisites
Before we start, you’ll need:
- Raycast - Download from raycast.com if you haven’t already
- 7zip - Install via Homebrew: `brew install p7zip`
- An external drive - Obviously, but make sure you know its mount path
The Problem with the Built-in Approach
Initially, I thought: “Can’t I just have Raycast pass the selected folder path as an argument?”
The answer is technically yes, but it’s clunky. Raycast would prompt you for the folder path every time, which means you’d need to:
- Copy the folder path
- Trigger the Raycast command
- Paste the path
- Hit enter
That’s not automation—that’s just extra steps with good intentions.
The Solution: AppleScript Integration
The key insight was using AppleScript to grab the currently selected item from Finder. This way, the workflow becomes:
- Select a folder in Finder
- Trigger the Raycast command (I use `Cmd+Shift+7`)
- Watch it compress and move automatically
No input required. No path copying. Just pure automation bliss.
Building the Script
Here’s the complete script with all the error handling we need:
```bash
#!/bin/bash

# Required parameters:
# @raycast.schemaVersion 1
# @raycast.title Compress Selected to External Drive
# @raycast.mode fullOutput

# Optional parameters:
# @raycast.icon 📦
# @raycast.needsConfirmation false

# Documentation:
# @raycast.description Compress selected Finder folder with 7zip and move to external drive
# @raycast.author Your Name

EXTERNAL_DRIVE="/Volumes/YourDrive/ArchiveFolder"

# Get the selected item from Finder
FOLDER_PATH=$(osascript -e 'tell application "Finder" to set selectedItems to selection
if (count of selectedItems) is 0 then
    return ""
else
    return POSIX path of (item 1 of selectedItems as alias)
end if')

# Check if anything is selected
if [ -z "$FOLDER_PATH" ]; then
    echo "❌ Error: No item selected in Finder"
    echo "Please select a folder in Finder and try again"
    exit 1
fi

# Trim whitespace
FOLDER_PATH=$(echo "$FOLDER_PATH" | xargs)

# Check if path exists
if [ ! -e "$FOLDER_PATH" ]; then
    echo "❌ Error: Path does not exist: $FOLDER_PATH"
    exit 1
fi

# Check if path is a directory
if [ ! -d "$FOLDER_PATH" ]; then
    echo "❌ Error: Selected item is not a folder: $FOLDER_PATH"
    exit 1
fi

# Check if 7z is installed
if ! command -v 7z &> /dev/null; then
    echo "❌ Error: 7z not found. Install with: brew install p7zip"
    exit 1
fi

# Check if external drive is mounted
if [ ! -d "$EXTERNAL_DRIVE" ]; then
    echo "❌ Error: External drive not found at: $EXTERNAL_DRIVE"
    echo "Make sure the drive is connected and mounted"
    exit 1
fi

# Create archive name from folder name
BASE_NAME="$(basename "$FOLDER_PATH")"
ARCHIVE_NAME="${BASE_NAME}.7z"
OUTPUT_PATH="$EXTERNAL_DRIVE/$ARCHIVE_NAME"

# Check if archive already exists and find next available version number
if [ -f "$OUTPUT_PATH" ]; then
    echo "⚠️ Archive already exists, creating versioned copy..."
    VERSION=2
    while [ -f "$EXTERNAL_DRIVE/${BASE_NAME}_v${VERSION}.7z" ]; do
        VERSION=$((VERSION + 1))
    done
    ARCHIVE_NAME="${BASE_NAME}_v${VERSION}.7z"
    OUTPUT_PATH="$EXTERNAL_DRIVE/$ARCHIVE_NAME"
    echo "📝 Using version number: v${VERSION}"
    echo ""
fi

echo "🗜️ Compressing: $(basename "$FOLDER_PATH")"
echo "📍 Destination: $OUTPUT_PATH"
echo ""

# Compress with 7zip
if 7z a "$OUTPUT_PATH" "$FOLDER_PATH"; then
    echo ""
    echo "✅ Successfully compressed and moved to external drive"
    echo "📦 Archive: $ARCHIVE_NAME"
    echo "📊 Size: $(du -h "$OUTPUT_PATH" | cut -f1)"
else
    echo ""
    echo "❌ Error: Compression failed"
    exit 1
fi
```

Key Features Explained
1. Finder Integration
The AppleScript snippet grabs whatever you have selected in Finder:
```bash
FOLDER_PATH=$(osascript -e 'tell application "Finder" to set selectedItems to selection
if (count of selectedItems) is 0 then
    return ""
else
    return POSIX path of (item 1 of selectedItems as alias)
end if')
```

This returns a POSIX path (like `/Users/yourname/Documents/project`) that we can use with standard bash commands.

2. Comprehensive Error Checking
The script validates everything before attempting compression:
- Is anything selected?
- Does the path exist?
- Is it actually a directory?
- Is 7zip installed?
- Is the external drive connected?
Each check provides a helpful error message so you know exactly what went wrong.
3. Automatic Version Numbering
This was a crucial addition. If `project.7z` already exists, the script will automatically create `project_v2.7z`. If that exists, it’ll create `project_v3.7z`, and so on:

```bash
if [ -f "$OUTPUT_PATH" ]; then
    VERSION=2
    while [ -f "$EXTERNAL_DRIVE/${BASE_NAME}_v${VERSION}.7z" ]; do
        VERSION=$((VERSION + 1))
    done
    ARCHIVE_NAME="${BASE_NAME}_v${VERSION}.7z"
    OUTPUT_PATH="$EXTERNAL_DRIVE/$ARCHIVE_NAME"
fi
```

No more manual renaming. No more overwriting precious backups.
4. Progress Feedback
Using `@raycast.mode fullOutput` means you see everything that’s happening:

- Which folder is being compressed
- Where it’s going
- The final archive size
This transparency is important when you’re archiving large projects that might take a few minutes.
Setting It Up
- Find your external drive path:

```bash
ls /Volumes/
```

Look for your drive name, then determine where you want archives saved. For example: `/Volumes/Expansion/WebProjectsArchive`

- Create the script:
  - Open Raycast Settings → Extensions → Script Commands
  - Click “Create Script Command”
  - Paste the script above
  - Update the `EXTERNAL_DRIVE` variable with your path
  - Save it (like `~/Documents/Raycast/Scripts/` or `~/.local/raycast`)

- Make it executable:

```bash
chmod +x ~/.local/raycast/compress-to-external.sh
```

- Assign a hotkey (optional but recommended):
  - In Raycast, search for your script
  - Press `Cmd+K` and select “Add Hotkey”
  - I use `Cmd+Shift+7` for “Archive”
Using It
Now the workflow is beautifully simple:
- Open Finder
- Select a folder
- Hit your hotkey (or trigger via Raycast search)
- Watch the magic happen
The script will show you the compression progress and let you know when it’s done, including the final archive size.
Why 7zip Over Built-in Compression?
macOS has built-in zip compression, so why bother with 7zip? A few reasons:
- Better compression ratios - 7zip typically achieves 30-70% better compression than zip
- Cross-platform - .7z files are widely supported on Windows and Linux
- More options - If you want to add encryption or split archives later, 7zip supports it
- Speed - 7zip can be faster for large files
For project archives that might contain thousands of files and dependencies, these advantages add up quickly.
Potential Improvements
This script works great for my needs, but here are some ideas for enhancement:
- Multiple drive support - Let the user select from available drives
- Compression level options - Add arguments for maximum vs. fast compression (sketched below)
- Notification on completion - Use macOS notifications for long-running compressions
- Delete original option - Add a flag to remove the source folder after successful archiving
- Batch processing - Handle multiple selected folders
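If you want to try the compression-level idea, here’s a rough sketch (the Raycast argument line and default level are assumptions; `-mx` is 7zip’s standard compression-level switch):

```bash
# Sketch: accept an optional compression level via a Raycast argument.
# Add this line to the script's header metadata:
#   # @raycast.argument1 { "type": "text", "placeholder": "level 0-9", "optional": true }

LEVEL="${1:-5}"   # 7z's default level; 0 = store only, 9 = ultra

# Then pass it to 7z when compressing:
7z a -mx="$LEVEL" "$OUTPUT_PATH" "$FOLDER_PATH"
```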
Troubleshooting
“7z not found” error:
```bash
brew install p7zip
```

“External drive not found” error: Make sure your drive is connected and the path in `EXTERNAL_DRIVE` matches exactly. Check with:

```bash
ls -la /Volumes/YourDrive/YourFolder
```

Script doesn’t appear in Raycast: Refresh the script directory in Raycast Settings → Extensions → Script Commands → Reload All Scripts

Permission denied: Make sure the script is executable:

```bash
chmod +x your-script.sh
```

Conclusion
This Raycast script has saved me countless hours of manual file management. What used to be a multi-step process involving right-clicks, waiting, dragging, and renaming is now a single keyboard shortcut.
The beauty of Raycast’s script commands is that they’re just bash scripts with some metadata. If you know bash, you can automate almost anything on your Mac. This particular script demonstrates several useful patterns:
- Integrating with Finder via AppleScript
- Robust error handling
- Automatic file versioning
- User-friendly progress feedback
I encourage you to take this script and adapt it to your own workflow. Maybe you want to compress to Dropbox instead of an external drive. Maybe you want to add a timestamp to the filename. The flexibility is there; you just need to modify a few lines.
Happy automating!
Have questions or improvements? Feel free to reach out. If you build something cool with this pattern, I’d love to hear about it!
/ DevOps / Programming
-
The Password Paradox: When Security Becomes Absurdity
I created an account on VRBO today and was shocked by their password policy. A password with over 30 characters was flagged as “weak” simply because it didn’t contain special characters. We can do better than this.
This is the current state of digital security: passwords so complex that humans can’t remember them, pushing us all toward password managers (which, let’s be honest, we should be using). But here’s the thing - while there are some standards and best practices for password security, implementation is wildly inconsistent. Each business decides how much they want to enforce “good” password policies, and even when they try to follow security methodologies, the execution is all over the map.
You end up with systems that reject genuinely strong passwords while accepting demonstrably weak ones, all because someone’s algorithm prioritizes symbols over entropy.
It’s security theater at its finest.
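A rough entropy comparison shows just how backwards this is (a back-of-the-envelope sketch assuming each character is chosen uniformly at random; the lengths are just examples):

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    # Bits of entropy for `length` characters drawn uniformly from a charset.
    return length * math.log2(charset_size)

# A 32-character all-lowercase passphrase (no special characters):
print(round(entropy_bits(32, 26)))  # ~150 bits, yet flagged "weak"
# An 8-character password using ~94 printable ASCII characters:
print(round(entropy_bits(8, 94)))   # ~52 bits, yet happily accepted
```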
FREE THE EMAILS AT MY blurgl blog
-
What you should start saying in Standup now
/ DevOps / Programming
-
So the Gemini CLI...
Two minor gripes with the Gemini CLI tool. I know, I know … it just came out.
1. There’s no init command to automatically generate its context file. I asked it to do it, and it created an empty file; I had to follow up with a second prompt to get it to fill the file with helpful information. I’m not going to use that until it’s easier.
2. I ran a few prompts and it automatically switched to the flash model, with no way to control which model it uses. Like maybe I’m okay with slightly slower responses if it means I can use the pro model. I’d like to be able to control this, and there doesn’t seem to be a way to switch between models.
-
I built a BlueSky Bot in 6 hours last week. I promise for good reasons.
TELL THE BOT THEY’RE DOING SUCH A GOOD JOB
It runs a query, does some analytics, then gives a progress update to socials.
-
Before: 🔴 47 Dependabot PRs clogging my repo
After: 🟢 3 meaningful updates per week
The secret? One .github/dependabot.yml config to ignore patch updates while keeping security fixes.
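For reference, a minimal sketch of that kind of config (the ecosystem and schedule are assumptions, not the exact config from the post; check GitHub’s docs for how ignore rules interact with security updates in your setup):

```yaml
# .github/dependabot.yml -- illustrative sketch
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    ignore:
      # Skip routine patch-level bumps; keep minor/major updates coming.
      - dependency-name: "*"
        update-types: ["version-update:semver-patch"]
```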
Sometimes the best feature is the one that does less work 😌 #Productivity #GitHub #LessIsMore #JavaScript
-
Claude Code is awesome. Junie is awesome. I want to try Claude Code GitHub Actions, but it’s not included in the Max Plan. What agents have been working well for you? #javascript #dev #webdev #fullstack
-
Happy 30th Birthday to PHP! To celebrate, I updated my CodeIgniter project to the latest version, refactored a bunch of stuff, and added a reference implementation of Vue to it. Claude helped. 🎉
-
New release of pkglock-rust
🎉 New release of pkglock-rust crate is out! 🎉
This update brings:
✅ Unit testing for robust performance.
📂 Modularized code for better organization and maintainability.
Check out the latest version and give it a try: https://github.com/llbbl/pkglock-rust
pkglock was created to streamline switching between local and remote npm registries and address the slowness of npm installations; rewriting it in Rust also resolved the transpiling issues.
#rustlang
-
🚀 Mastering Cloud Storage - Smmall Cloud vs. The Giants! 🚀
Just published a new Article!
Dive into my latest blog where I compare cutting-edge cloud storage solutions like Smmall Cloud, Dropshare, CleanShot X, and Google Drive. Discover which service truly stands out in the realm of digital storage efficiency and innovation.
🔑 Key Insights:
Smmall Cloud’s unique features and how it stacks up against established players.
The practicality and cost-effectiveness of each service.
An intriguing peek into building your own cloud storage!
🎁 Exclusive Bonus: Grab a 10% discount on any Smmall.cloud plan inside the article!
Don’t miss out on these vital insights that could transform your approach to digital storage. Click to read and find out which service fits your needs best!
#CloudStorage #TechBlog #SmmallCloud #DigitalInnovation #StorageSolutions
-
Launching My First Rust Crate: pkglock
Excited to share my first Rust crate: pkglock 🚀
pkglock is a CLI tool that automates the process of switching URLs in your package-lock.json. Tailored to ease the management of local and remote npm registries, it stands out as a handy utility for developers.
Key Features:
- Configurable: Set up and switch between local and remote URLs effortlessly.
- Command-Line Support: Execute with various options straight from the command line.
I rewrote it in Rust to avoid having to transpile JS in order to support CommonJS and ESM.