Claude is dramatically changing how I write software.
My biggest worry, as someone who's been writing software for a long time, is that I'll be so blinded by my hard-earned intuition about complexity that I'll subconsciously be less ambitious than the moment calls for – that some treacherous part of my mind stubbornly insists it's too hard.
When I was 25, I took it as a given that I would simply never stop learning. Back then the risk, judging from the middle-aged developers I could see, was clownishly bungling your way into some obvious technological cul-de-sac. The intervening 15 years have tested that assumption at times, but I hadn't considered that the raw intuition itself might become the problem.
Parachuting into an unfamiliar codebase, where the original author is long gone, is an experience that will be familiar to a lot of developers.
People writing with AI assistants have been encountering this in another form.
Simon Willison wrote a brief post about it recently, concluding, "I no longer have a firm mental model of what they can do and how they work, which means each additional feature becomes harder to reason about, eventually leading me to lose the ability to make confident decisions about where to go next."
The way that I've used AI for coding has changed drastically over the last year.
In fact, the rate of that change is probably its most striking feature – I can't recall any time since my first year of college, twenty years ago, when I've experienced such rapid growth in my own ability to build things.
This week I have written a fully self-contained system monitoring daemon and terminal UI, as well as custom Rust firmware for the Ulanzi TC001 desktop clock.
These are two projects that would otherwise have remained ideas, never making it out of my notes folder.