Refactoring gets really bad reviews, but from where I’m sitting as a hobby programmer in relative ignorance it seems like it should be easier than a rewrite, because you could potentially reuse a lot of code. Can someone break it down for me?
I’m thinking of a situation where the code is ugly but still legible here. I completely understand that actual reverse engineering is harder than coding on a blank slate.
Refactoring is good work. Rewriting is shallow fun.
Do the math.
Exception: Perl is write-only code. Always rewrite.
Rebuild from scratch gets a bad reputation sometimes because it’s the go-to response of a junior programmer with a little experience. They know the system could be done better, and it seems like the fastest way to get there is to throw out everything.
What often happens next is the realization that the existing system was handling far more edge cases than it initially appeared. You often discover these edge cases only when the new system is deployed and someone complains that their use case broke. As you fix each one, the new system starts to look worse than the old one while supporting half its features.
This often leads people to prefer refactors over rewrites. But refactors have their own failure mode: they can take a lot longer than expected and never quite shed what made the old system bad, and budget cuts can leave the whole project in a halfway state that’s worse than if it had been left alone.
There are no easy answers, and the industry has not solved this problem.
> What often happens next is the realization that the existing system was handling far more edge cases than it initially appeared. You often discover these edge cases only when the new system is deployed and someone complains that their use case broke.
The reverse is also sometimes true, and that’s when a rewrite is justifiable.
I’ve worked with many systems that piled up a ton of edge-case handling for things that are no longer possible; it makes the code far harder to follow than it should be.
I’ve had successful rewrites that used less than a tenth of the code while delivering more features and significantly better reliability, and that eliminated many of the edge cases entirely, by design.
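To make “eliminated by design” concrete, here’s a rough sketch (the Payment type is invented, not from any real system): model the data so the impossible states can’t be represented, instead of writing handlers for them.

```kotlin
// Hypothetical example: a sealed type rules out "settled but no receipt",
// so there is no edge case to handle and no branch to forget.
sealed interface Payment {
    data class Pending(val invoiceId: String) : Payment
    data class Settled(val invoiceId: String, val receiptUrl: String) : Payment
}

// The when-expression is exhaustive; the compiler enforces that every
// representable state is handled, and unrepresentable ones never existed.
fun describe(p: Payment): String = when (p) {
    is Payment.Pending -> "awaiting payment for ${p.invoiceId}"
    is Payment.Settled -> "paid; receipt at ${p.receiptUrl}"
}
```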
My current team has had a great solution to this, which is to rebuild in parallel: build the new system alongside the old one, including the reporting and integrations, and you’ll find the edge cases pretty quickly.
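Roughly the shape of it, as a sketch (the interface and all names here are invented): keep the old system authoritative and run the rewrite in its shadow, logging every disagreement.

```kotlin
// Hypothetical "shadow" shim: callers still get the legacy answer, while the
// rewrite runs alongside it and every mismatch gets logged for investigation.
interface InvoiceCalculator {
    fun totalCents(orderId: String): Long
}

class ShadowingCalculator(
    private val legacy: InvoiceCalculator,
    private val rewrite: InvoiceCalculator,
) : InvoiceCalculator {
    override fun totalCents(orderId: String): Long {
        val expected = legacy.totalCents(orderId) // old system stays the source of truth
        val attempt = runCatching { rewrite.totalCents(orderId) } // new system must not break callers
        if (attempt.getOrNull() != expected) {
            // Each line logged here is an edge case the rewrite hasn't learned yet.
            println("MISMATCH for $orderId: legacy=$expected rewrite=$attempt")
        }
        return expected
    }
}
```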
Responding as a Java/Kotlin maintainer of a single large system with frequent requirement changes: what I call “high entropy” programs. Other developers have different priorities and may answer differently based on the kind of system they work on, and their answers are also valid, but you do need to care about what kind of systems they work on when you decide whether or not to follow their advice.
In my experience, if the builder of the original system didn’t care about maintainability, then it’s probably faster to rewrite it.
Of course, then you’d have to be able to tell what maintainable code looks like, which is the tricky part, but it includes things like the following (a small Kotlin sketch follows each list):
- Interfaces
- Dependency injection
- Avoidance of static or const functions
- Avoidance of “indirect recursion” or what I call spaghetti jank that makes class internals really hard to understand.
- Class names that indicate the design patterns being used, such as “Facade”. This indicates that the original builder was doing some top-down software design in an effort to write maintainable code.
- Data has one, and only one, source of truth. A lot of refactoring pain comes from trying to align multiple sources of truth, since disagreements cause mayhem in the program state.
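For illustration, a minimal sketch of those good signs (all names made up): an interface, constructor injection, and data read from its one source of truth instead of copied.

```kotlin
// Hypothetical example of the "good signs": depend on an interface, inject it,
// and read the data where it lives rather than keeping a private copy.
interface UserStore {
    fun findEmail(userId: String): String?
}

class WelcomeMailer(private val users: UserStore) { // injected, easy to fake in tests
    fun welcome(userId: String): String? =
        users.findEmail(userId)?.let { "Welcome, $it!" } // no cached second copy of the email
}
```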
Bad signs:
- Oops, all concrete classes.
- Inheritance. You get one base class, and only one, before you should give the code the death glare. It’s extremely difficult for a programmer to tell a true “is a” relationship from a false one; for starters, you need rock-solid class definitions. If the inheritance smells like the original builder was only using it to save time building a feature, burn it with fire! It’s anti-maintainable.
- Too much organizing - you have to open 20 files to find out what one algorithm does. That’s a sign that the original builder didn’t know the difference between organizing for organizing’s sake and keeping code together that changes together.
- Too little organizing - the original builder shoved everything into one God class so they could use a bunch of global variables. You’d probably have a hard time replacing a component that big, and it probably won’t let you replace parts of itself - this style forces you to burn down the whole thing to make a change.
- Multiple sources of truth for data: classes that keep their own copies of data as member variables are a prime example of this kind of mistake.
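And for contrast, a sketch of the bad signs (again, names invented): one concrete god object, global state, and a second copy of the data waiting to drift out of sync.

```kotlin
// Hypothetical anti-example: a god object with global state that keeps its own
// copy of the data, creating a second source of truth that can go stale.
object AppGod {
    val emailCache = mutableMapOf<String, String>() // second source of truth

    fun welcome(userId: String): String? =
        emailCache[userId]?.let { "Welcome, $it!" } // stale the moment the real store changes
}
```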
this is a big ol’ “it depends”
if it’s a hobby project, then by all means rewrite it if you want to.
if it’s a commercial project of some kind - there’s a business that’s making money, and part of the business making money relies on this code working properly - then rewrites are almost always a bad idea.
read Joel Spolsky’s Things You Should Never Do, Part I, an almost 25-year-old blog post (man, that’s a weird sentence to write) about why giant rewrites in a commercial setting are a bad idea.
in general, people greatly underestimate how much work is involved in a rewrite. it feels like it should be simpler to start from a blank slate and tell yourself you’re going to avoid all the mistakes you hate in the existing codebase. maybe you’re writing it in a new language, or at least a newer dialect/version of the same language.
if the current codebase is a mess…how did it get that way? lack of engineering discipline? a “just make it work now, we can go back and tidy it up later” attitude towards accumulation of tech debt? if those same attitudes are present on the team doing the rewrite, you’re going to end up right back where you started after the rewrite is “done”.
the main things you need for refactoring to be successful are a) tests, and b) a plan.
the tests allow you to refactor with the confidence that if you break something, the tests will point it out for you. trying to refactor something that lacks tests is the worst place to be in, because you’ll want to add tests, and often adding tests requires refactoring the code to be more testable, placing you in a catch-22.
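one way out of that catch-22 is a characterization test: pin down what the code does today, right or wrong, through whatever seams already exist, and refactor against that. a rough sketch with a made-up formatPrice function (the expected strings come from recorded behaviour, not a spec):

```kotlin
import java.util.Locale

// hypothetical legacy function we want to refactor safely
fun formatPrice(cents: Long): String =
    "$" + "%,.2f".format(Locale.US, cents / 100.0)

fun main() {
    // characterization tests: these assert current behaviour, not correctness
    check(formatPrice(0) == "$0.00")
    check(formatPrice(123456) == "$1,234.56")
    println("characterization tests pass")
}
```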
the plan allows you to make those refactoring changes gradually, over time, while still maintaining the system. in the context of a business that’s paying developers to do this work, the businesspeople tend to look poorly on an engineer coming to them and saying “we’re gonna spend the next year or two doing a big rewrite, so in the meantime you can’t ask us for any new features or bugfixes to the existing system. but once it’s done the new system will be really cool, trust me.”
successful refactoring is a Ship of Theseus - you can replace the entire thing, but you have to do it one component at a time.
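in practice that often looks like the strangler-fig pattern: the old and new components implement one interface, and you swap them in one at a time. a sketch (interface and names invented):

```kotlin
// hypothetical plank-swap: both renderers satisfy one interface, so the rest
// of the ship never notices which one is installed this release
interface ReportRenderer {
    fun render(rows: List<String>): String
}

class LegacyRenderer : ReportRenderer {
    override fun render(rows: List<String>) = rows.joinToString("\n")
}

class NewRenderer : ReportRenderer {
    override fun render(rows: List<String>) = rows.joinToString("\n") { "* $it" }
}

// swap one component at a time; callers only ever see the interface
fun rendererFor(useNewRenderer: Boolean): ReportRenderer =
    if (useNewRenderer) NewRenderer() else LegacyRenderer()
```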
This is a good answer.
At my job, there was a desire to do a big rewrite of the system. It was a disaster. We spent like 8 months on this project where we delivered no value to customers. Then there was essentially a mutiny from the engineering team and we killed it.
We’ve since built on top of the original system and had, in the words of product leadership, “the most productive quarter in the history of the company”.
Now, why was it a disaster? The biggest reason was that people, especially people in leadership positions, did not understand the existing system very well. They would then make decisions based on falsehoods and mythology.
Rereading this, I probably should have added a hedge - is it usually better to start from scratch? I do know that there are exceptions to most rules, and this isn’t actually a practical problem I’m facing.
Thanks, this is kind of how I thought it should be. I just didn’t know if I was missing something, because people on the humour communities trash talk refactoring a lot.
Edit: Wow, Netscape… Sorry to say it, but that post isn’t much younger than me. I don’t even know most of these examples. That being said, it was still a great read.
I’m almost always of the opinion that refactoring is better than a rewrite as long as the tech stack is supportable.
Everyone wants to rewrite stuff, because the old system is ‘needlessly complicated’. 90% of the time, though, they end up finding it was complicated for a reason and it all goes back in. A rewrite does let the system be built with full knowledge of its scope, instead of an old system that has been repeatedly bodged and expanded. Finally, if your old tech stack is unsupportable (not just uncool, unsupportable), a rewrite can be the most feasible way forward. It will take ages, though, with little or no return until it’s all finished.
Refactoring is more difficult, as developers need to understand the existing codebase more to be able to safely upgrade it in situ. It does mean you can get continuous improvement through the process though as you update things bit by bit. You do need to test that each change doesn’t have unexpected impact though, and this can be difficult to do in badly written systems.
Most Devs hate working on other people’s code though, so prefer rewrites.
(Ran out of time to go into more detail)
How would you define “supportable”?
> Most Devs hate working on other people’s code though, so prefer rewrites.
I now suspect this is basically where it’s coming from.