Software developers get stuck in the wrong feedback loop. Either a slow one, a complex one, or an inaccurate one.
Many have fallen into the trap of spaghetti-coding an algorithm directly into a big app. You can only test after spending an hour yak-shaving and fixing your dev setup, and every time you make a change and want to test again, it takes another 30 minutes to set everything back up.
Instead, a wise developer spends that hour building a reasonable testing scaffold and modularizing the algorithm into a library, so the algorithm can be tested away from the heavy app and its many complexities.
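As a hedged sketch of what that extraction might look like (the algorithm and record shape here are hypothetical, invented for illustration): the core logic becomes a pure function with no app, database, or UI in the loop.

```python
# Hypothetical extracted "algorithm": keep only the most recent record
# per id. Because it's a pure function, it runs in seconds, not after
# a 30-minute environment setup.

def dedupe_keep_latest(records):
    """Return one record per id, keeping the one with the highest ts."""
    latest = {}
    for rec in records:
        rid = rec["id"]
        if rid not in latest or rec["ts"] > latest[rid]["ts"]:
            latest[rid] = rec
    return sorted(latest.values(), key=lambda r: r["id"])

# The feedback loop is now one function call:
records = [
    {"id": 1, "ts": 5, "v": "a"},
    {"id": 1, "ts": 9, "v": "b"},
    {"id": 2, "ts": 1, "v": "c"},
]
assert dedupe_keep_latest(records) == [
    {"id": 1, "ts": 9, "v": "b"},
    {"id": 2, "ts": 1, "v": "c"},
]
```

The point is not this particular function; it's that the test now exercises the algorithm directly instead of the whole app around it.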
But the point is not that "unit tests are magic". Unit tests themselves can be the wrong feedback loop. For interactive and stateful problems, you write hundreds of lines of mocks and scaffolds just to see if clicking a button adds 1 to a number. Or maybe the difficulty of writing a test is itself a kind of feedback: maybe your code's dependency graph is a mess.
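A toy illustration of that "click adds 1" problem (the `FakeButton` and `Counter` classes are invented for this sketch, not from any real UI framework): most of the test is scaffolding, and the actual check is one line.

```python
# A fake widget standing in for a real UI framework, just so the
# stateful interaction can be driven from a test.

class FakeButton:
    def __init__(self):
        self._handlers = []

    def on_click(self, handler):
        self._handlers.append(handler)

    def click(self):
        # Simulate the user pressing the button.
        for handler in self._handlers:
            handler()

class Counter:
    def __init__(self, button):
        self.value = 0
        button.on_click(self.increment)

    def increment(self):
        self.value += 1

button = FakeButton()
counter = Counter(button)
button.click()
assert counter.value == 1  # all that scaffolding for this one assertion
```

In a real app the fakes multiply: event buses, renderers, timers. The scaffolding-to-assertion ratio is itself feedback about the design.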
Or your code may be statistical in its notion of "quality", like testing a machine learning model. Here we don't run unit tests in any traditional sense. A recommendation system really cares about some X% improvement in precision. What is "precision" here, and how does it tie to actual product goals? Getting that right requires investment in careful, scientific methodology to get the feedback you need.
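To make "precision" concrete, here is a minimal sketch of precision@k for a recommender, assuming we already have each user's recommended items and the items they actually engaged with (the function name and data are illustrative, not a real system's API):

```python
# Precision@k: of the top-k recommended items, what fraction were relevant?

def precision_at_k(recommended, relevant, k=10):
    top_k = recommended[:k]
    if not top_k:
        return 0.0
    hits = sum(1 for item in top_k if item in relevant)
    return hits / len(top_k)

# Deciding that "relevant" means "the user clicked it" is a modeling
# choice -- and making that choice line up with product goals is exactly
# the scientific work the text describes.
recs = ["a", "b", "c", "d"]
clicked = {"b", "d", "z"}
print(precision_at_k(recs, clicked, k=4))  # 0.5
```

The metric itself is ten lines; the hard part is defining "relevant" and proving a measured X% improvement translates into something the product actually cares about.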
There are also, of course, explicit integration tests to make sure systems glue together well. That kind of feedback requires you to scaffold the systems themselves and run tests of the full experience.
Finally, in the end, your customer might see your blazing-fast, web-scale app with 100% test coverage and thrilling recommendations... and realize you've built the wrong thing! Geez. Maybe a bad prototype full of shitty untested code gets you that feedback faster, before you invest in all the other kinds.
Feedback loops undergird good software engineering. Not because they "make code nice". Instead, we modularize code, make it testable, make it extendable, lint it, type-check it, and so on to unlock faster and more accurate iterations. When people argue questions like "should we use a monolith or microservices?", there's no universally right answer. Which gets you the right feedback fastest? What type of feedback do you lack that you need? And what do you take for granted in the current setup that you might lose?
Hardest of all: making resourcing calls. Should you invest in type safety to prevent incidents where a string was passed where an int was expected? How much should you invest in instrumentation? In A/B testing versus offline, local ways of measuring a recommendation system? How early and often do you get the customer involved?
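The string-where-an-int-goes incident is easy to sketch (the `retry_request` function is hypothetical, made up for this example): annotate once, and a checker such as mypy flags the bug before it ships instead of after it pages you.

```python
# With type hints, a static checker catches the mismatch at review time;
# without them, the bug surfaces only when this code path runs in prod.

def retry_request(url: str, max_attempts: int) -> int:
    """Pretend to retry a request; returns how many attempts were made."""
    attempts = 0
    for _ in range(max_attempts):  # TypeError at runtime if max_attempts is "3"
        attempts += 1  # a real implementation would issue the request here
    return attempts

print(retry_request("https://example.com", 3))  # 3

# retry_request("https://example.com", "3")
# ^ a checker like mypy would reject this call: argument 2 is a str
#   where an int is expected.
```

Whether that annotation effort pays off is exactly the resourcing call the text describes: it buys one kind of feedback (compile-time) at the cost of investment you could have spent elsewhere.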
When people argue about "technical debt", it's really about "where do we lack a certain type of feedback?". Where could we have gotten feedback earlier, feedback besides the app crashing or the customer yelling at you? Maybe a specific area is a hot spot for issues and needs to be refactored to be more testable, because it's far too hard to test well in the current environment. Maybe we need better instrumentation of a layer in our API. Where do we need to make the code obvious to extend, so that our type system or compiler, not production systems, guides us?
There are no easy answers. I consider feedback loops a really hard problem. We always find our blind spots too late, when the incident happens or when the customer cancels their contract. That's why good feedback investments are up there with caching and naming things as "hard problems in software".