Two Arguments Against AI in Programming (And Why I'm Not Convinced)

I’ve been thinking about the programmers who are against AI tools, and I think their arguments generally fall into two camps.

Of course, these are just my observations, so take them with a grain of salt, or, you know, tell me I’m a dumbass in the comments.

The Learning Argument

The first position is that AI prevents you from learning good software engineering concepts because it does the hard work for you.

All those battle scars that industry veterans have accumulated over the years won’t be earned by the new breed. For sure, the painful lessons about why you should do something this way and not that way are worth preserving.

Maybe we’re already seeing anti-patterns slip back into how we build software? I don’t know for sure; it’s going to take some PhD-level research to figure it out.

To this argument I say: if we haven’t codified the good patterns by now, what the hell have we all been doing? I’d bet there are more good patterns in public code than bad ones.

So just RELAX! The cream will rise to the top. The glass is half full. We’ll be fine… Which brings me to the next argument.

The Non-Determinism Argument

The second position comes from people who’ve dug into how large language models actually work.

They see that it’s predicting the next token, and they start thinking of it as this fundamentally non-deterministic thing.

How can we trust software built on predictions? How do we know what’s actually going to happen when everything is based on weights created during training?
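To see where the randomness actually lives, here’s a toy sketch of next-token sampling. The token scores are made up and this isn’t any real model’s API; it just shows that the model scores every candidate token and a sampler picks one.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Toy sampler: softmax over token scores, then a weighted draw."""
    if temperature == 0:
        return max(logits, key=logits.get)  # greedy: always the top token
    # Higher temperature flattens the distribution; lower sharpens it.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    total = sum(weights.values())
    probs = [w / total for w in weights.values()]
    return random.choices(list(weights), weights=probs)[0]

# Made-up scores for the token that follows "return x +":
logits = {"1": 2.5, "y": 2.3, "offset": 0.4}
print(sample_next_token(logits))  # usually "1", sometimes "y"
```

Notice that at temperature 0 this collapses into plain old deterministic code: same scores in, same token out, every time.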

Here’s the thing though: when you’re using a model from a provider, you’re not getting raw output. There’s a whole orchestration layer: guardrails, hallucination filters, mixture-of-experts approaches, and “thinking” features that all work together to let the model double-check its work before responding.

It’s way more sophisticated than “predict the next word and hope for the best.”
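I don’t know any provider’s actual internals (they’re proprietary), so here’s a deliberately hypothetical sketch of what that orchestration shape might look like. Every function in it is a made-up stub, not a real API:

```python
import random

def raw_model(prompt: str) -> str:
    """Stand-in stub for the bare next-token predictor."""
    return random.choice(["4", "5", "four"])

def violates_policy(prompt: str) -> bool:
    """Toy input guardrail."""
    return "forbidden" in prompt.lower()

def fails_check(answer: str) -> bool:
    """Toy output check; a real system might verify against sources."""
    return not answer.isdigit()

def generate(prompt: str) -> str:
    if violates_policy(prompt):        # guardrail on the way in
        return "I can't help with that."
    for _ in range(3):                 # let the model re-draft a few times
        draft = raw_model(prompt)
        if not fails_check(draft):     # double-check before responding
            return draft
    return "I'm not confident enough to answer that."

print(generate("What is 2 + 2?"))
```

The raw predictor is still in there, but it’s wrapped in checks that can reject or retry its drafts before anything reaches you.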

That said, I understand the discomfort. We’re used to deterministic systems where the same input reliably produces the same output.

We’re now moving from those kinds of systems to probabilistic ones.

Let me remind you: math doesn’t care about the difference between a deterministic and a probabilistic system. It just works, and so will we1.
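One way to cash that out: you can’t assert that a probabilistic function returns one exact value, but you can still make hard, testable claims about its distribution. A minimal sketch, with a made-up noisy function standing in for a probabilistic component:

```python
import random

def flaky_double(x):
    """A 'probabilistic function': the right answer plus a little noise."""
    return 2 * x + random.gauss(0, 0.5)

# Can't test: assert flaky_double(21) == 42
# Can test: the average over many runs converges on 42.
samples = [flaky_double(21) for _ in range(10_000)]
mean = sum(samples) / len(samples)
assert abs(mean - 42) < 0.1  # holds with overwhelming probability
print(mean)
```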

The Third Argument I’m Skipping

There’s obviously a third argument: the ethical one, about training data, labor displacement, and whether these tools should exist at all.

I will say this, though: it’s too early to make definitive ethical judgments about a tool while we’re still building it, while we’re still discovering what it’s actually useful for.

Will it all be worth it in the end? We won’t know until the end.


  1. By “we” I mean us, the human race, but also the software we build. ↩︎

/ AI / Programming / Software-engineering