There is a style of development that encourages making only minimal changes: do the least amount of work needed to get your feature running.
I’ve heard this advice many times over the years.
This seems to be mostly used as a tactic when people are scared: scared of changing the code.
It’s normal to be nervous when you work with crappy code, but keeping changes to a minimum will also keep the code crappy. It’s a vicious cycle: you keep making small changes, the code doesn’t get properly refactored, it stays crappy, you stay scared, you keep making minimal changes.
The only times for minimal changes should be when you backport a bug fix, ship an emergency fix, or write a deliberate dirty hack.
While I don’t think that minimal changes are intrinsically wrong, most people get the wrong idea and try to do it all the time.
People think that the only way to add new features without introducing lots of bugs is to make the smallest changes they can. This might be true, but at the same time you're increasing entropy, and in the long run that will hurt you.
Think about it: you focus on a small part of the system. It's tunnel vision, so you implement a local solution instead of a global one.
It's like searching for a function's minimum: you can get stuck in a local minimum far away from the real one.
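The analogy can be made concrete with a toy example (the function here is invented purely for illustration): gradient descent on a double-well curve settles into whichever basin it starts in, even when a better minimum exists elsewhere.

```python
def f(x):
    # Double-well function: two minima, and the one near x = -1 is lower
    return (x**2 - 1)**2 + 0.3 * x

def df(x):
    # Derivative of f, used for the descent steps
    return 4 * x * (x**2 - 1) + 0.3

def gradient_descent(x, lr=0.01, steps=2000):
    # Repeatedly step downhill from the starting point x
    for _ in range(steps):
        x -= lr * df(x)
    return x

stuck = gradient_descent(0.8)    # starts in the shallow basin
best = gradient_descent(-0.8)    # starts in the deeper basin

# Both runs converge, but the first one ends in the worse minimum
assert f(stuck) > f(best)
```

Minimal changes work the same way: each step looks like an improvement locally, but you never leave the basin you started in.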
If you do implement a feature with minimal changes in mind, you need to follow up with refactoring. Not code refactoring, but architecture refactoring.
If you keep going without stopping to refactor, you may get your tasks done, but the system’s architecture will suffer because entropy will increase.
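To make the distinction concrete, here is a hypothetical sketch (the function and format names are invented for illustration). The first version grows by minimal changes, one branch per feature; the second is the architectural follow-up, where adding a format no longer means editing the core function:

```python
import json

# Minimal-change style: every new format is another branch bolted on.
def export_v1(rows, fmt):
    if fmt == "json":
        return json.dumps(rows)
    if fmt == "csv":  # the latest "minimal change"
        return "\n".join(",".join(map(str, r)) for r in rows)
    raise ValueError(f"unknown format: {fmt}")

# After refactoring: the decision lives in a registry, not in branches.
def to_json(rows):
    return json.dumps(rows)

def to_csv(rows):
    return "\n".join(",".join(map(str, r)) for r in rows)

FORMATTERS = {"json": to_json, "csv": to_csv}

def export_v2(rows, fmt):
    try:
        return FORMATTERS[fmt](rows)
    except KeyError:
        raise ValueError(f"unknown format: {fmt}")
```

Both versions behave the same today, but the next feature costs one new registry entry in the second version versus yet another branch, and yet more entropy, in the first.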
Like with everything else, there is a time and a place for making minimal changes, but the tactic has to be used in moderation.