Defining Done in a DevOps World

InfoQ recently published an article on QA in Scrum, which included a really simple Definition of Done list.

In a counterpoint post, Matt Davey added Acceptance Testing to the list, bringing acceptance tests in as part of the definition.

That’s great but I feel they have both missed a critical point.

Your feature is worth nothing if it has not been given over to users, or if it is of such low quality that those who do use it stop.

As software delivery specialists (Developers, QAs, Project Managers, Product Managers, SysAdmins etc.) we strive to make useful software products, and those products are of little value to us or the business if no one is using them. We use “Done” to define when a unit of work (Feature, Use Case, Story, Task etc.) is complete. Rarely does that definition take into account the true realisation of the value locked inside it.

Done
“Done” is probably one of the most variable terms in Software Engineering methodologies. To be fair, it is succinct, to the point, and really should mean what it says. However, it is just far too ambiguous.

We have a myriad of different definitions of what it means to be “Done.” Every development practice focuses on what those are, and Scrum teams have their “Definition of Done” checklists to tell them when a unit of work is complete. Every team I have worked with has had a widely different definition. To get around this we have all heard awful phrases such as “Done, Done” or “Ready, Ready” when we mean more than just completing part of that unit of work. I am as guilty as anyone of using these to try and cut through that ambiguity, which in turn only creates more of it.

Almost none of the definitions I have seen across various teams completely match the DevOps culture we should be trying to instil. The idea that the software has to be used for it to have value is missing. The delivery team focuses on building the software but not on operating it. In a world of DevOps and Continuous Delivery, leaving that last mile out of such a key delivery metric has become a stumbling block. When velocity is measured against a definition of Done that allows a huge batch of work to build up towards a release, we are generally asking for failure. And with that failure comes the invalidation of the declaration of success that the project tracking has already given the team.

A DevOps definition of “Done”

“Released with a high enough level of confidence in its quality.”

This, for my team, embodies the necessary premise of DevOps: collaborating to fully deliver the product to our end users, from code to production. There is still a checklist of what that means, but now the team knows, when we say Done, how far we expect them to have taken it. They understand that it isn’t enough to have finished the development tasks plus the relevant QA; it has to be released, and they need to prepare for this early. Thinking about it up front, at the start of the project, means that the first thing a team does is ensure there is a solid and repeatable pipeline to production. That generally means development collaborating with our operational counterparts right from the start, which lets us leverage that specialism early and immediately starts to break down the synthetic silos between development and operations.

There’s also little excuse for anyone to deliver without confidence in the quality of what they are releasing. Through shared responsibility we should understand that there is enough test coverage, including those acceptance tests. It only needs to be high enough to know the feature won’t break under the majority of user circumstances; it does not need to be exhaustive. You can never truly have 100% confidence, because escaping bugs are a fact of life for software engineers, but we can minimise their impact through the right amount of testing and assurance on the path to production.

The definition also does not shy away from the fact that a unit of work cannot count towards our velocity until it has been released. Project progress is measured on whether it is delivering actual value to the user and the business. This is a bit of a leap of faith, but think about it: is it right to report success without running that last mile?

Nothing New
Nothing I am saying here is new or groundbreaking. It seems like common sense to me, but DevOps is a cultural shift for many engineers and sysadmins. Defining “Done” to be something that ensures that collaboration starts early is one way of beginning to instil the culture and values. Seeing the value in the engagement early leads to more of a willingness to collaborate further. I’ve had some success with it in my own teams. While it takes time to fully embed the practice, if your team is willing then it is definitely worth a go.

Burn Down Charts Suck

Burn Down Charts suck. For me, anyway. The frustration is that they abstract the real picture of progress away into a single line.

A Burn Down Chart is “a graphical representation of work left to do versus time”*. Let’s take a look at one.

[Burn Down chart: work left versus time, days 1 to 14]

What is happening here? It looks right: the trend is downward, which is good, and progress seems to have been good over the last few days. At the start, on day 1, there are 100 units of work to do, and by day 14 we are down to 58.

However, what happened on day 3? Why did the line go up when it’s a Burn Down chart? Well, that’s obvious: the scope increased for some reason. OK, so the scope increased to 112 units of work. Well, actually it didn’t. The Burn Down chart is hiding the fact that there was a scope increase of 20%, to 120 units, and that 8 units of work were completed on the same day. At a glance you cannot see that detail, your team cannot see it and your stakeholders cannot see it.

Day 7 is another good example. That’s a great day, 15 units of work completed, right? Well, actually, no. The Burn Down chart again fails to highlight that there was a fluctuation in the scope: this time a reduction of 10 unnecessary units of work. Add that to the 5 units of work actually completed and it’s a pretty big drop.
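The arithmetic behind those two days makes the problem obvious. As a quick sketch (in Python, using the chart data listed at the end of the post), the Burn Down line only ever shows the net change, which folds scope changes and completed work into a single number:

    # work_left_today = work_left_yesterday + scope_change - done_today
    day3 = 100 + 20 - 8   # = 112: scope grew by 20 units while 8 units of work were completed
    day7 = 103 - 10 - 5   # =  88: 10 units were cut from scope and 5 units were completed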

My preference is for the Burn Up chart. This is the equivalent of the Burn Down above: the scope is the red line and the cumulative work completed is the purple line.

[Burn Up chart: total scope and cumulative work completed versus time, days 1 to 14]
It still performs the same task of providing a graphical representation of work left against time. Rather than chasing zero, you are ticking off the work done and rising to meet the target.

It also provides a lot more detail at a glance. The fluctuations in scope are by far and away the most obvious win here. It is clear to see that the target has moved, and it becomes an easy talking point, particularly on day 7, when this could easily be missed in the Burn Down chart. The chart also gives a graphical representation of the actual momentum, or velocity, of the team. In this example it is very steady progress: the team hit their stride and were ticking off units of work at a reasonable velocity. Scope changes could easily mask this in the Burn Down chart, so it is much clearer here.

There is an argument that scope should never fluctuate and that everything should be so well defined that we always know where we will be. That might be the case in a lab or a training environment, but in reality scope changes, particularly in an environment that is fast paced and evolving, and we should embrace that change.

Realistically though, both charts have their place, and producing the two from the same data is really easy, as the sketch after the chart data below shows. However, if you only have space for one chart at your stand up, or in your stakeholder sprint review slide deck, make sure it’s the one that gives the full picture rather than part of the story: the Burn Up chart.

Chart Data:

Day               1    2    3    4    5    6    7    8    9   10   11   12   13   14
Total Scope     100  100  120  120  120  120  110  110  110  110  110  110  110  110
Done              0    0    8    5    2    2    5    8    5    2    2    5    5    3
Cumulative Done   0    0    8   13   15   17   22   30   35   37   39   44   49   52
Work Left       100  100  112  107  105  103   88   80   75   73   71   66   61   58
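
Both series really do fall out of the same two rows. Here is a minimal sketch (Python, not part of the original post) that derives the Burn Up and Burn Down lines from the daily scope and the work completed each day; the output matches the Cumulative Done and Work Left rows above:

    # Daily data from the table: total scope each day, and work completed that day.
    scope = [100, 100, 120, 120, 120, 120, 110, 110, 110, 110, 110, 110, 110, 110]
    done  = [  0,   0,   8,   5,   2,   2,   5,   8,   5,   2,   2,   5,   5,   3]

    # Burn Up series: cumulative work completed, plotted against the (moving) scope line.
    cumulative_done = []
    total = 0
    for d in done:
        total += d
        cumulative_done.append(total)

    # Burn Down series: work left = scope minus cumulative done. Scope changes and completed
    # work are collapsed into one number, which is exactly the detail the Burn Down chart hides.
    work_left = [s - c for s, c in zip(scope, cumulative_done)]

    print(cumulative_done)  # [0, 0, 8, 13, 15, 17, 22, 30, 35, 37, 39, 44, 49, 52]
    print(work_left)        # [100, 100, 112, 107, 105, 103, 88, 80, 75, 73, 71, 66, 61, 58]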