When semantic versioning is bad
This is a somewhat lengthy rant against some misuses of semver. Of course, I’m not against the whole idea at all, I’m just thinking out loud. :)
The problem
A few days ago I found myself in the following situation:
- I had to upgrade a project’s UI library to its next major version (needed to get some bugs fixed)…
- …which required the UI framework to be bumped to its next major version
- …which required the build tool to be upgraded to its next major version
- …which required Node.js to be upgraded to its next major LTS version
- …which required glibc to be upgraded
- …which required the whole Linux distribution of the build server to be upgraded!
In summary, in order to fix some UI glitches I had to upgrade my build server. Of course, one may (rightfully) say it’s my own fault for relying on ancient build infrastructure, but I’d also point my finger at the way semantic versioning is used across the whole stack.
Well, what do you expect from a new major version?
One of the most important rules in semver is: “only a new major version allows you to break backward compatibility”. Unfortunately, this is more and more commonly read as “when you’re releasing a new major version, you can break everything”. That leads to a situation where you try to upgrade to a new major version of something and suddenly you have to change your whole stack.
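To make that concrete, here’s a minimal sketch using the semver npm package (the version numbers are made up): a regular caret range in package.json happily follows patch and minor releases, but stops right before the next major.

import { satisfies } from "semver";

const range = "^2.3.7"; // what a typical package.json dependency entry expands to

console.log(satisfies("2.3.8", range)); // true  - patch releases are picked up
console.log(satisfies("2.4.0", range)); // true  - minor releases as well
console.log(satisfies("3.0.0", range)); // false - the breaking major never arrives on its own

So if the fix you need ships only in 3.x, escaping that range also means accepting every other break that came with 3.0.0.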
Anyone remember the good ol’ C times? Stuff like this (libxml is arbitrary):
#ifdef ANCIENT_LIBXML
// Someone's living in the 90s. Emit a warning and proceed with a hard
// fall-back piece of code you hate.
#pragma message("libxml: ancient version detected, using the legacy code path")
#elif defined(OLD_LIBXML)
// Do an ugly, but OK-ish fall-back. Turn this into a warning in a year or two.
#elif defined(NOT_THAT_OLD_LIBXML)
// Some more elegant fall-back.
#else
// Your actual, version-dependent piece of code.
#endif
This way you can be sure your software can run even in ancient environments, and people will get their warnings long before their build actually breaks.
C/C++ may be graybeards’ business and almost nobody uses native libraries directly nowadays, but there’s a lesson in it for all modern developers: this was one of the ways projects managed to last for decades.
Nowadays a lot of people know all of that, but they’re still afraid of sacrificing velocity for an approach like this. Of course, implementing such checks is costly, both in terms of developer productivity and QA effort later. It can even have a performance impact (we can’t always do the checking at build time). But we can at least try our best before giving up and breaking everything.
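As a rough illustration of what the run-time variant can look like, here’s a minimal sketch in TypeScript. The library name and its exported functions are entirely hypothetical; the point is only the shape of the fall-back: probe for the API you prefer, warn on the deprecated path, and fail only when nothing usable is left.

import * as uiLib from "ui-lib"; // hypothetical dependency

type Render = (el: HTMLElement) => void;
const lib = uiLib as Record<string, unknown>;

export function render(el: HTMLElement): void {
  if (typeof lib.renderWidget === "function") {
    (lib.renderWidget as Render)(el); // the current API
  } else if (typeof lib.render === "function") {
    // Deprecated path: keep it working, but make the timeline visible.
    console.warn("ui-lib: old render() API detected; this fall-back goes away with the next major");
    (lib.render as Render)(el);
  } else {
    throw new Error("ui-lib: no supported render API found");
  }
}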
Versions supported
Another problem with “implementing” semver is that the majority of front-end projects follow a very linear development model, supporting just one version at a time. For example:
- 2018-06-17 - version 2.3.7
- 2018-07-01 - version 2.3.8
- 2018-07-16 - version 3.0.0
- 2018-07-27 - version 3.0.1
- 2018-08-14 - version 3.1.0
…and so forth
So when the project leads decide to break backwards compatibility on multiple fronts with version 3.0.0, the people on 2.3.x are left with nowhere to go. The most annoying bugs in 2.3.8 will remain in the 2.3.x series forever, because they only get fixed in 3.x several weeks later and are never back-ported.
Of course, I realize that back-porting fixes is a hard thing more often than not, and I can’t expect it by default from open-source software. But again, maintainers could keep it in mind when deciding how to support their software.
Summary
As I said in the beginning, I don’t have anything against semver per se. I’m frustrated by some of its interpretations, which force you (as a front-end developer) to upgrade almost constantly. Of course, it’s usually better to break a little and often, by constantly upgrading your dependencies, than to break a lot at once, but things are not always that easy. Furthermore, even with a huge amount of development work behind it, it doesn’t say much for a project’s stability if, several years in, it is still producing major (breaking) versions like crazy. But that’s a different topic. :)
But hey, we’re full-stack developers, so dealing with this kind of stuff is part of the job, right? :)