Updated: October 15, 2021
Normally, I am not too nostalgic. I don't pine for the "good ole days". Yes, some things were better in the past. And some were worse. For instance, today, cars are far more reliable - and overall better - than back then, in the days of lead and velour. Healthcare is another positive example. Software? Ah, no, not that one.
While my inner demons clamor for yet another outburst the likes of what I wrote in my modern software development piece, I'm not going to. Instead, I want to do something rather different and unusual for Dedoimedo. I want to take you back to 1989. And from a perspective more than thirty years old, I'd like to talk to you about software testing, which directly impacts software quality, y'know, that thing that has been steadily declining in the past decade.
Professional versus user
When it comes to software testing, there are three major considerations that need to be taken into account: time, money, and the complexity of the end product (assuming it's a product). In general, the more complex the product, the more testing it needs, hence time, hence money. On the other hand, if you have skilled testers and/or capable testing tools, you can potentially cut down on the time element, hence you save money, but skilled testers and capable testing tools cost money, hence money. This equation has bothered the bean counters since the dawn of humanity, or at least since the World War Two era, more or less. Ideally, testing would cost zero, and the product would be perfect and without any bugs. In reality, it's an optimization function.
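To make that last point a touch more concrete, here is a minimal sketch of the trade-off, in Python. Every number and curve shape in it is an assumption picked purely for illustration - testing effort costing money linearly, the expected cost of escaped defects (support, refunds, reputation) shrinking as more effort goes in - not real industry data.

def total_cost(effort_hours,
               hourly_rate=80.0,        # assumed cost of a professional tester, per hour
               defect_cost=50_000.0,    # assumed business cost if serious bugs escape
               escape_halflife=200.0):  # assumed hours of testing that halve the escape risk
    """Money spent on testing plus the expected cost of bugs that slip through."""
    testing_spend = effort_hours * hourly_rate
    escape_risk = 0.5 ** (effort_hours / escape_halflife)
    return testing_spend + defect_cost * escape_risk

# Brute-force the sweet spot: the point where one more hour of testing
# costs more than the defect risk it removes.
best = min(range(0, 2001, 10), key=total_cost)
print(f"Cheapest point under these assumptions: about {best} hours, "
      f"total cost around {total_cost(best):,.0f}")

The real world adds plenty of terms this toy model ignores, but the shape of the problem is the same: somewhere between zero testing and infinite testing sits a cheapest point, and that is the point the bean counters are hunting for.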
Often, you can offset some of the cost by transferring it into the final price of the product. You invest more in the testing, but then you make your product pricier. Sounds fair. However, what do you do when the margins are already thin, and everyone is racing to the bottom?
You cut corners, duh!
Welcome to the modern era, the age of cheap, badly made applications, mass consumption and endless "freeware". Unsurprisingly, lots of software made today is ... made without much testing, if any. Making things worse is the blind, fad-driven adoption of modern software methodologies by companies small and large, i.e., fast/rapid/nimble/flexible/agile development (whatever you want to call it). In this brave new era of speed, software is created to the lowest common denominator (minimum viable product); there's no real end goal; software is always in constant, continuous development, so testing can't really be done properly; and lots of the features that need to be tested don't exist, haven't been defined, or have been superseded by a new set of features without the first set ever having been completed, let alone tested.
But you may ask, why would companies do this? Doesn't this work directly against their interests?
The answer is simple. Money. You want to hit that optimization sweet spot, walk the line between investing as little money as possible in testing and not bleeding your user base due to complaints or dissatisfaction. In other words, if you can maintain the user base (with some level of grumbling) without eroding your profit margins, then you've nailed it.
Now, here's a radical thought. What if ... you don't do any testing in house at all? You let your users be the testers! Sounds ambitious? Revolutionary? No, not at all. Good? Ah.
The question is, does this work?
Money aside, the only way to judge the merits of an internal testing team versus a community/user base is to check the quality of the end product. And this is where we need to go back to 1989. Now, what I'm about to do is populist, and it's also very narrow in scope, and all that. And yet, it's a very interesting lesson, with a rather prescient view on how things ought to be.
Specifically, I'm talking about the Computer Gaming World magazine, Number 56, published in February 1989. In this particular issue, on page 46, there's a nice piece on the reality behind the game, a discussion with the development team of F-19 Stealth Fighter, a flight simulator game made two years prior. We're talking MicroProse, Sid Meier. The legends of the era.
One of the topics that came up was the question of software testing, Quality Control (QC), since Stealth Fighter turned out to be a bigger, more complex game than the ones the studio had made before. I'd like to quote from the article, if I may, plus I took the liberty of taking a screenshot of that page, because it's lovely and awesome, and the words resonate so true after more than thirty years.
A full-time professional Quality Control (QC) man (or woman) can find more bugs, and more important bugs, in one day than an "evening amateur" can find in a week. Complicated products like F-19 Stealth Fighter require special testing tricks ... Even "silly" bugs require a knowledgeable tester.
One example? Yes. A notable example? Definitely. Speaking as someone who has played probably the entire range of MicroProse games back in the 80s and 90s: they had their problems and faults, but shoddy code quality wasn't one of them.
And then, after having tested hundreds of Linux distributions, thousands of applications, every release of Windows since 3.11, and then some, I can definitely say that the slow, steady erosion of professional testing in the software world is noticeable. And by that I mean, in those scenarios where it actually existed, because in some domains, it's never been there, and it shows. If anything, the longer I keep my hands on this or that application or program, the more I'm convinced that the new, casual approach to quality is simply not working. There will be a moment of reckoning.
Conclusion
The software industry is pretty much the only industry where you're asked to accept an "as is" disclaimer before use. When you move into a new building, you're not expected to sign a disclaimer that if walls come a-tumblin' down, it's your fault. Nope, it's the building engineer's fault. When you buy a car, if something goes badly a-wonk, it's the car company that will be liable. But in the software world, if some naughty code eats your disk, hey, it's your problem!
And with so-called user testing now becoming the thing, the quality is getting progressively worse. The sad thing is, most companies really have no reason to change their approach, because people seem to be okay with mediocre quality as long as they're getting the product cheaply or for free. The one thing that consoles me is the knowledge that this is not sustainable. One day, this nonsense el-cheapo model will implode. At some point, the macro-economic model will come under threat from low-quality software. It's already slowly happening today. Only, the price in human lives and data being fed into this machine isn't big enough yet. Still not critical enough to warrant a revolution. One day, there will be a rethink, and the words of game developers from the 80s will be read like the wisdom of the ancients.
Cheers.