I read a rather old book called "Technological Risk" while working on startup software test plans, and it has me thinking about the tradeoffs in a very abstract way.
In the same way that there's a value placed on a life whether one admits to it or not, there's a value placed on finding bugs. When we invest in road infrastructure or cellphone-usage laws to save lives, or choose whether to save lives at home or in a foreign country by sending aid, our personal and governmental expenditures put a value on life.
Investing in finding bugs has a similar implicit value: is a given amount of testing worth the likelihood of finding a bug? Strict testing regimes that enforce a certain coverage at certain points in the release process have a certain appeal, but it should be recognized that a strict, exhaustive test plan is not inherently better; it merely compensates for a lack of information.
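To make that tradeoff concrete, here's a toy back-of-the-envelope comparison; all of the numbers (test cost, odds of catching a bug, cost of a shipped bug) are invented for illustration:

```python
# Toy expected-value check for one extra round of testing.
# Every number here is an illustrative assumption, not a measurement.

testing_cost = 2_000          # engineer time for the extra test pass, in dollars
p_catch_bug = 0.15            # estimated chance the pass finds a shippable bug
bug_cost_if_shipped = 50_000  # estimated cost of that bug reaching customers

expected_savings = p_catch_bug * bug_cost_if_shipped
print(f"expected savings ${expected_savings:,.0f} vs. testing cost ${testing_cost:,}")
# The extra pass is worth running only if the expected savings exceed its cost.
print("worth it" if expected_savings > testing_cost else "skip it")
```

With these made-up numbers the extra pass is worth running (an expected $7,500 saved for $2,000 spent), but the same arithmetic can just as easily say the opposite for a low-risk, low-cost area.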
An ideal test process, from the perspective of maximizing effectiveness, would use as much information as possible about what is likely to have gone wrong and what mistakes in each area would cost. For example, in a product involving user login, making and seeing posts, and changing the display timezone (a rough sketch of this kind of weighting follows the examples):
- If timezones were a new feature, the ideal test for this release would involve a lot of work on timezones, including changing the system time and exercising different daylight saving time boundary cases.
- If the timezone code hadn't changed in months, the ideal release acceptance test would only perform a sanity check on timezones, not go to extremes like changing system dates.
- Every test pass would confirm user login even if nothing had changed there, because the cost of breaking login would be high.
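A minimal sketch of that kind of weighting, with hypothetical feature names and entirely made-up probabilities, costs, and threshold:

```python
# Hypothetical risk-weighted test selection: spend deep-testing effort where
# (chance the area broke this release) * (cost if it ships broken) is highest.

features = {
    # name: (estimated chance of new breakage, cost of shipping it broken)
    "login":            (0.02, 100_000),  # rarely changes, but breakage is very costly
    "posting":          (0.02, 30_000),   # stable and moderately costly to break
    "timezone_display": (0.30, 5_000),    # new this release, so likely to have bugs
}

deep_test_threshold = 1_000  # arbitrary cutoff for this sketch

for name, (p_break, cost) in sorted(
    features.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True
):
    expected_loss = p_break * cost
    depth = "deep test" if expected_loss > deep_test_threshold else "sanity check only"
    print(f"{name:<18} expected loss ${expected_loss:>6,.0f} -> {depth}")
```

With these numbers, login and the new timezone feature get deep testing while the unchanged posting code gets only a sanity check, which is roughly the allocation described above.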
An ideal test process with full balancing of risk and cost is not achievable in the real world, but it's very easy to do better than a strict test plan.