One thing I both love and hate about the hardcore agile process, particularly the Scrum process, is that it tends to work around participants' biases, weaknesses, and lack of trust, rather than try to fix them. Scrum may not even leave room for improvements in trust or reliability. Some examples:
Developers' estimation biases are worked around, not fixed
Because developers have biases when estimating tasks (biases that are consistent over time and on average), scrum's "velocity" measures estimates against results. It doesn't attempt to fix the estimates, e.g. by showing a developer the difference between their estimates and the time they actually spent. In fact, velocity doesn't measure time spent at all, and it lumps all the developers on a team, over-estimators and under-estimators alike, into one velocity measurement.
This is probably for the best -- it's one simple measure that's remarkably consistent. Still, I wonder if it wouldn't be more useful in the long run to learn to estimate better. I've never seen a really good estimation feedback loop in the software development context, but wouldn't it be neat to try?
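Just as a thought experiment, here's a minimal sketch of what such a feedback loop might look like next to the usual lumped velocity number. It's purely hypothetical, not part of any Scrum tooling; the data shape and function names are my own invention.

```python
# Hypothetical sketch of an estimation feedback loop -- not part of Scrum.
# Each record is (developer, estimated_points, actual_days) for a completed task.
from collections import defaultdict

def team_velocity(completed_tasks):
    """Scrum-style velocity: total estimated points completed, ignoring time spent."""
    return sum(points for _dev, points, _days in completed_tasks)

def estimation_bias(completed_tasks):
    """Per-developer feedback: actual days spent per estimated point.
    A ratio that stays high or low across many sprints exposes an individual's bias."""
    totals = defaultdict(lambda: [0.0, 0.0])  # developer -> [points, days]
    for dev, points, days in completed_tasks:
        totals[dev][0] += points
        totals[dev][1] += days
    return {dev: days / points for dev, (points, days) in totals.items()}

sprint = [("alice", 5, 3.0), ("alice", 3, 4.5), ("bob", 8, 2.0)]
print(team_velocity(sprint))    # 16 -- one number for the whole team
print(estimation_bias(sprint))  # {'alice': 0.9375, 'bob': 0.25} -- individual feedback
```

The velocity number alone would never surface the difference between the two developers here; the per-developer ratio is the part velocity deliberately leaves out.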
Product owners' changes are either completely allowed or disallowed
Traditionally, engineering teams have to train product owners not to change the product plans all the time, which means frequent team arguments about product plan changes. Instead, scrum tries to carve out one small space where product changes are forbidden and to allow all other changes without argument. In the current interval, the product owner cannot make any changes; if they do, the current plans are all tossed out and the estimation process starts over, a consequence severe enough to effectively forbid even small changes.
This rule does keep product owners off the developers' backs. If the product owner is thinking of a change to this week's plans, the consequences of an "abnormal sprint termination" probably stop them. If the product owner is thinking of a change to next week's plans, the upcoming sprint planning meeting is where they'll discuss it. Either way, the product owner does not walk up to a developer and say "Hey! I've got a great idea!"
If keeping the product owner off the developers' backs sounds like a really good idea to you, well, you probably haven't worked with trusted, experienced product owners. And if the team has processes that reify the distrust, then there's less chance to build trust.
Demos help people who can't analyze abstract plans
Weekly demos are a course correction mechanism. In order for anything to be marked done, it must be demoed. And when a feature is demoed, the product owner can see the practical consequences better than they could when the feature was designed. Now the product owner is able to immediately add things to the backlog, which might get done in the next iteration, and this is good. Iterative design for the win.
The practical consequence of this appears to be less specification and planning work, which is good (avoid overplanning), but it's taken to the point where product owners don't feel any particular pressure to understand and analyze the design. Instead of sitting in front of the wireframes and thinking them through ("What happens if I press this, once it's implemented? If this were a real system and the item had been deleted, what would the user need to see?"), the product owner can simply wait for the demo. That kind of what-if thinking is a difficult skill and takes practice. It doesn't make design any less iterative! It just moves the iterations into the design phase rather than the costly implementation phase.
Frequent demos, and the scrum rules which allow for any changes in the backlog, seem to remove some of the need to develop abstract design skills. That makes me a little sad. Still, frequent demos and iterations are a tool I'd use in any software development process.
Optimizing considered harmful
In hardcore scrum, developers are practically forbidden from making changes now that would make future work easier. It's discouraged, and the way the system tracks tasks makes it unrewarding to do.
Let's say that today my task is to create a feature for users to delete items. There's also a story in the backlog or icebox for undeleting items (e.g. finding them in a trash folder and restoring them). The way success is structured in scrum, I estimate the time to delete items at the beginning of the period in which I do that work, and it makes everybody happier if I figure out how to delete items without much work so that more features fit into this period. It doesn't help me now to estimate high in order to prepare for the 'undelete' feature. It doesn't help me in the future either: when we get to the scrum meeting where we estimate the 'undelete' feature, it might not even be me doing that feature. (In theory, developers are supposed to be interchangeable in agile/scrum.) Even if it is me, it's no big deal to have to do the undelete by rewriting the way delete works; I just build that into my estimates for that period.
There's no overall project schedule view that would have shown the value of doing these two features together and doing delete right the first time. There are other ways in which optimizing is actively discouraged:
- Optimizing in the literal sense, i.e. speed or resource use, is discouraged. Functional stories only say that the feature has to work. Other stories in the future might or might not say that the feature has to work within 2 seconds. Security might also be an afterthought!
- Optimizing by expertise is discouraged. Everybody on the team is supposed to be interchangeable. If all of this week's features are GUI features, then everybody does GUI work. If all of next week's features are security features, then everybody does security work.
- Optimizing the sequence of work around engineering constraints is discouraged. If it would be faster to do feature A after the more general feature B is completed, too bad. If there's a pileup of interdependent work slipping to the end of the backlog, where the dependencies will start to slow down each task, too bad. Only the product owner's priority ordering counts.
I've seen passionate arguments for "You Aren't Gonna Need It" (YAGNI), and they're right. Engineers often optimize prematurely. Engineers often predict what they think the user will need and turn out to be wrong. But none of those passionate arguments claim that YAGNI can be applied without a sense of balance, right? So when scrum methodology encourages developers to always put off optional work, scrum puts its thumb on the balance. It short-circuits the discussion of complex tradeoffs and simply says: do it later.
Summary
What's common among all these traits is an attitude in scrum not only that people are fallible, but that they're routinely wrong and can't be trusted to work together on complex issues for the greater good. That means scrum works best in environments where that attitude is closest to the truth: e.g. a contract development team, made up of decent but not particularly specialized engineers who don't know much about the end use and maybe don't care too much, working with product owners who aren't experienced in software design, are way too busy to write specs and discuss complex tradeoffs, and are always tempted to change things they shouldn't.
Agile should be done differently when the engineers are specialized, care about what they're building and what it will be used for, and work closely with product owners in the same business unit, and when those product owners are smart and can be trusted to learn good software development practices. Just how it should be done differently is an open question for me. There are agile process variants that take different attitudes (e.g. those that encourage planning a whole release), and for the kind of team I'm currently working in, and enjoy working in, I'm interested in those variants.