We believe that with AI the value of work goes to zero. In reality, AI makes people more impatient and less tolerant of inconvenience. Therein lies an opportunity for savvy developers: to be a salve for that new impatience.

I can now generate a photo-sharing app in Cursor over a weekend of vibe coding. This was previously a solid side project that would take a month of hacking around. Because of this, it’s said we suffer from AI brainrot: we go to our favorite LLM, whip something out in a few hours, and end up with an app we half-understand.

We see vibe coding and think “wow, I’m obsolete, my skills are worthless, and I’m going to lose my job.” Well, maybe. On the other hand, when the “80% tasks” become easier, expectations reset. We get lazier and don’t want to tolerate the old “hard way” when it pops up from time to time.

My first job was working on an embedded system in the early 2000s. I would get field reports of crashes. One gyroscope navigator I worked on, written in C on a homegrown OS, suddenly reported its lat/long as the South Pole. All I had to go on was a spreadsheet with 10 rows of position over time. With no real debugging tools, I painstakingly parsed through printed-out code, running it in my mind with a pencil. Over weeks, I eventually found a pointer being overwritten that caused the issue.

At the time, this was all just normal. It was time-consuming. Today, for most devs, that kind of work is vanishingly rare. Forget AI: a static analyzer would probably have caught this bug. For many reasons, we could probably build this system much faster with far fewer resources today. But how much more valuable is that specialization when you need it? How much more do we feel the annoyance when we step outside that 80% and actually need someone who knows their stuff?
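For a sense of the bug class (none of this is the original system’s code, just a hypothetical sketch): a write through a bad pointer, or past the end of a buffer, silently clobbers adjacent navigation state, and the corrupted bytes decode as garbage coordinates. It’s exactly the kind of overflow a modern compiler warning or static analyzer tends to flag immediately:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical navigation state (not from the original system). */
struct nav_state {
    double lat;
    double lon;
};

/* A status buffer sized too small for some messages, sitting right
 * before the navigation state in memory. */
struct telemetry {
    char msg[8];
    struct nav_state nav;
};

int main(void) {
    struct telemetry t = { .msg = "", .nav = { .lat = 40.44, .lon = -79.99 } };

    /* Bug: this strcpy writes 15 bytes into an 8-byte buffer, silently
     * clobbering the bytes of t.nav.lat. The corrupted double decodes
     * to nonsense coordinates: the "suddenly at the South Pole" class
     * of failure. GCC's -Wstringop-overflow warning (and most static
     * analyzers) will likely flag this write at compile time. */
    strcpy(t.msg, "STATUS NAV OK!");

    printf("lat=%f lon=%f\n", t.nav.lat, t.nav.lon);
    return 0;
}
```

Finding the real version of that by hand took weeks of pencil-and-paper tracing; today a tool would complain before the binary ever reached the field.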

When general tasks become this easy, suddenly trivial inconveniences feel physically painful. Problems like “my dumb vibe-coded Rails app doesn’t work quite right on this system” or, even lamer, “my LLM hallucinates 5% of the time” balloon into something that feels like a massive, unexpected, out-of-band annoyance. The value of the general goes down; the value and diversity of the specialized sides of the curve go up.

So you’re willing to pay a lot of money for something like “prompt engineering,” which at one point sounded like a joke… but is more and more the domain of what an “ML/AI Engineer” does. Going even further, training models “the old way,” with a PyTorch model built from scratch, feels even more cumbersome.

Roles we once considered part of the generalist’s job become more highly valued precisely because they’re now seen as more inconvenient than they used to be.

Of course, there’s a sweet spot here: a niche not so obscure that you can’t find an employer willing to pay for the person able to execute C in their mind, but still valuable enough that someone will pay for a specialist. There’s also the danger of becoming fully consumed by your specialization, when in reality you ought to be an inverted-T person, hedging your specialization with generalist knowledge. Because at some point, you’ll need to hop to a different “spike” on that T.


If you’re interested in getting started down the path of becoming a search specialist, check out my course, Cheat at Search with LLMs, coming in July.


Doug Turnbull
