- AcmeCorporationBlah
- AcmeCorporationBlahBlah
- AcmeCorporationBlahBlahBlah
- AcmeCorporationBlahBlahBlahBlah
- Qaz102AcmeCorporationBlah
- Qaz102AcmeCorporationBlahBlah
- Qaz102AcmeCorporationBlahBlahBlah
I asked my colleagues about the significance of this number and was told it was version 1.02 of the Qaz databases.
The thing is, there are no versions 101, 103 or even 100, either in production or in any other environment. In fact, every time one of these 'Qaz' databases is created, the 102 is blindly included in the name.
I asked if the '102' was strictly necessary and the entire team looked at each other and shrugged. I then asked whether we would ever deploy different versions of the Qaz databases for the same customer and was told no, that would never happen. I then asked how long the 102 had been in the names and none of them knew. It predated them all, and the dev manager.
Since my current task involves deploying the databases for a new customer, I asked if I could omit the 102 from the Qaz database names. Now they all looked at each other reproachfully.
"No, that's the standard," one of them said. "Why do you want to take it out; just because you don't like it?"
"No," I replied. "I want to take it out because it's adding a small amount of unnecessary complexity to the domain."
He raised his eyebrows, said "Wow!" and turned back to his workstation shaking his head.
And that was the end of the conversation.
So now I'm propagating this fallacious learned behaviour and I'm irked; irked enough to write this blog post.
In the same way that code smells accumulate to clutter our code base, bad learned behaviours accumulate to clutter our working practices.
We should refactor the bad smells out of both.
Often.
Great point Mike. I agree that it's important to fix these things and to look at why there's such apathy about fixing them. Someone else is going to have the same question and waste time trying to find the answer. I've seen this attitude when there have been bigger code base issues to deal with, or when there's not enough confidence that such a change won't cause any side effects.
Thanks James - and thanks for taking the time to comment.
That apathy you're talking about is generally symptomatic of a deeper malaise in the development team: poor management, lack of support from the business, or stress. And it's incredibly perilous.
For too many years, development teams were forced to live by the adage 'If it ain't broke, don't fix it.' We now know this philosophy to be wrong-headed and responsible for most of the shitty code bases we come across. The agile movement has taught us this, and taught us that every time we touch the code base, we should leave it in a better state than the one in which we found it.
When I evangelise this view, I'm generally rebutted with the 'causing side-effects' argument, to which my standard response is that a decent set of automated regression tests will winkle out any problems before you go to production - and if a team is allowing code into production without a full set of regression tests, someone, somewhere needs to be fired.
Naturally, there should also be a full set of unit tests to highlight such issues at the compilation and integration phases, but realistically, no-one's going to go back over an entire code base that was written without unit tests to add the tests retrospectively and factor out the test-inhibiting design. You'd be better off redesigning from scratch. Every time.
So, failing a full suite of unit tests, automated regression tests are your safety net when you perform open-heart surgery on your code.
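To make that concrete, here's a minimal sketch of the kind of automated regression (characterisation) test I mean, written in Python with pytest against an in-memory SQLite database; the table, columns and data are hypothetical, invented purely for illustration.

```python
import sqlite3

import pytest


@pytest.fixture
def customer_db():
    # Stand-in for the real customer database; in practice this would point at a
    # dedicated test environment restored from a known baseline.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, status TEXT)")
    conn.executemany(
        "INSERT INTO orders (order_id, status) VALUES (?, ?)",
        [(1, "active"), (2, "cancelled"), (3, "active")],
    )
    yield conn
    conn.close()


def fetch_active_orders(conn):
    # The behaviour under test - imagine this is the query you're about to refactor.
    return conn.execute(
        "SELECT order_id FROM orders WHERE status = 'active' ORDER BY order_id"
    ).fetchall()


def test_active_orders_unchanged_after_refactoring(customer_db):
    # Characterises the current behaviour: if a refactoring changes the result,
    # this test fails long before the code reaches production.
    assert fetch_active_orders(customer_db) == [(1,), (3,)]
```

Run it before you touch anything, refactor, then run it again; if it still passes, your open-heart surgery hasn't changed the behaviour the rest of the system depends on.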
And to reinforce the point in the original blog post, we should be re-evaluating our behaviours on a regular basis to unclutter our practices and our minds. Software development is enough of a minefield without wilfully adding more complexity to it.