Category Archives: Sprint Planning

Mixing New Development and Support on the Same Team

There are many arguments for and against having a single Scrum team handle both new development and support work. When an organization determines that the best approach is for support and new development to be done by the same team, careful planning and a little discipline during sprint execution can help the team meet its sprint goals while staying on top of issues reported by users.

Planning

During sprint planning, a certain percentage of the team’s time needs to be allocated to support. For an existing system, that percentage can be based on historical metrics from the time tracking system. Of course this requires that people’s time for support and new development be tracked under separate categories. Also, the support allocation may vary sprint by sprint if there are extraordinary events expected in a sprint (e.g. major new features released to production in the previous sprint).

Based on the percentage allocated to support and to non-development work such as backlog grooming, company meetings, etc., the team's capacity to work on user stories is reduced. A sample capacity calculation is shown below:
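
As a minimal sketch of that arithmetic (the team size, sprint length, and percentages below are hypothetical – plug in your own numbers):

    # Hypothetical capacity calculation for a 5-person team on a 2-week sprint.
    team_size = 5
    sprint_days = 10
    hours_per_day = 8
    gross_hours = team_size * sprint_days * hours_per_day  # 400 hours

    support_pct = 0.20   # from historical time-tracking data
    overhead_pct = 0.10  # backlog grooming, company meetings, etc.

    story_capacity = gross_hours * (1 - support_pct - overhead_pct)
    print(f"Capacity for user story tasks: {story_capacity:.0f} hours")  # 280 hours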

During sprint planning, user stories are pulled into the sprint plan, tasks are identified for each story, and task hours are estimated.  When the team has reached its hours of capacity based on total task hours, the sprint plan is complete. There is no need to create placeholder tasks for support items unless that helps with time tracking (more on this later).

All concerned stakeholders (including the team!) need to be aware of what percentage of the team’s time is going to be allocated to support work.  This becomes more important if a team gets hit with a lot of support work and the sprint commitment is jeopardized. It also underscores the importance of looking at the team’s velocity as an average across many sprints, since velocity will fluctuate depending on the support needs in a given sprint.

In Flight

Ground rules that determine whether work is handled as a support item to be worked on immediately or as a new backlog item for a future sprint must be established and agreed to by team members, product owners, and representatives of the end user community.  One model for this: requests that can be resolved in 4 hours or less are handled immediately as support items, while anything larger is written up as a new backlog item and prioritized for a future sprint.
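
As a sketch, that triage rule can be written down very simply (the 4-hour threshold and the expedite path are the team's agreed ground rules, not universal constants):

    # Hypothetical triage rule agreed to by the team, POs, and end users.
    def triage(estimated_hours: float, is_production_outage: bool) -> str:
        if is_production_outage:
            return "expedite"             # swarm immediately
        if estimated_hours <= 4:
            return "support ticket"       # handled within the current sprint
        return "product backlog item"     # sized and prioritized for a future sprint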

For business stakeholders who previously worked in environments where the support team would jump on any support request as soon as it was received, the ‘4 hours or less’ rule is the most difficult one to get used to. In traditional project organizations with separate development and support teams, some stakeholders viewed support teams as a way of bypassing development teams that were mired in 6- or 12-month-long development cycles. When the organization adopts agile and stakeholders see that their requests can be considered (factoring in business value relative to other backlog items, of course) within a few weeks, they are typically much more receptive to these types of guidelines.

Who works on support requests depends on the size and composition of the team.  A couple of models are:

  • Dedicate one person per sprint to be the primary support person.  Other team members focus on sprint backlog items. This works well when the team has a good balance of knowledge and skills.
  • Pick up support tickets as they come in. This can work well and fill in some of the time gaps or delays within a sprint. However, a high volume of tickets can make it hard for team members to get traction on sprint backlog items.

Just as it is important to provide high visibility for sprint backlog items and progress, it is equally important to make support work visible. A Kanban board, complete with an Expedite lane for high-priority tickets and a sticky note for each ticket, is an effective tool for helping a team stay on top of support work.

Why use a Kanban board when your company already has perfectly good help ticket tracking software? I have found that the extra visibility of tickets on the board keeps outstanding support work in front of everyone rather than letting it fall into the background. This is especially important when the team is focused on user stories and meeting the sprint commitment. Timely resolution of support issues can impact customer satisfaction as much as or more than new features, so it warrants the same level of visibility as a Scrum taskboard.

It is also helpful to publish the actual percentage of time that the team is spending on support while the sprint is underway. Remember how we told everyone during sprint planning what the support allocation percentage was going to be for the sprint? This is where the team’s progress on sprint backlog items is weighed against the support commitment.  With a higher-than-planned support load, the team may not be able to complete all of the sprint backlog items; with a lower-than-expected support load, the team should be able to take on more backlog items than planned.
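
A minimal sketch of that mid-sprint check (the hours below are hypothetical):

    # Compare the actual support load to the planned allocation mid-sprint.
    planned_support_pct = 0.20
    support_hours_logged = 58      # from the time-tracking system
    total_hours_logged = 210

    actual_support_pct = support_hours_logged / total_hours_logged
    print(f"Planned: {planned_support_pct:.0%}  Actual: {actual_support_pct:.0%}")
    # Planned: 20%  Actual: 28% -> the sprint commitment may be at risk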

Reflections:

  • Adjust team capacity to reflect the anticipated support load
  • Establish ground rules for what is handled as a support item vs. a new backlog item
  • Track and report support effort vs. planned effort so that impact on sprint commitment is understood

Lean Requirements with User Acceptance Test Cases

The Agile Manifesto extols the virtues of working software over documentation as a true measure of progress and business value. Teams in the early stages of agile adoption sometimes misinterpret this to mean that they should not write anything down.  Other teams that are accustomed to waterfall and documentation-heavy methodologies continue creating extensive and redundant documentation under the guise of User Stories and Acceptance Criteria.

Lean Agile Requirements is a middle ground that provides a framework in which complex requirements are captured in living documents that speak to many audiences and serve a purpose beyond the initial development of the application. The flow for discovering and capturing these requirements while developing new software is described in the sections below.

User Story Writing Workshop

Product Owners, business representatives, and the development team meet to identify user stories. User roles are identified and a couple of sentences are written to describe the goal and value of each story. Stories are mapped into releases and become the basis of the product backlog.

Grooming

Grooming conversations occur at regular intervals in advance of the sprint in which a story is actually turned into software.  Going into grooming, a user story may contain just a couple of sentences describing the feature increment.  Each story is discussed, and the UI might be sketched on a whiteboard or in a UI mockup tool. Acceptance criteria may also be identified – these are high-level user acceptance test cases (hint: not detailed tests, yet) and are only discussed and captured to the extent needed to size the story.

Sprint Planning

Sprint planning conversations are where the team starts to think in more detail about how it will build the feature increment described in a story. The discussion becomes more detailed, but it is important to avoid getting into too much detail too soon (e.g. avoid questions like “Should we show transactions dated less than, or less than or equal to, the Invoice Date?”). Any details that do surface can be captured as initial User Acceptance Test Cases, but they are not always necessary at this point. The technical design is reflected in the tasks and estimates that the team creates. Note that the user story itself does not need to contain any of this additional detail.

What is a User Acceptance Test Case?

Acceptance Test Driven Development (ATDD) is based on the premise that one of the most effective ways of communicating requirements is by understanding how the user will use/test a feature and the expected results.  Acceptance tests avoid many of the pitfalls of business rule statements because they eliminate the layer of interpretation and ambiguity involved in understanding the outcome of a business rule in various scenarios.  Writing these tests is as simple as asking the user “What tests will you run to validate this feature?”  The test cases capture the user’s actions and expected results using real examples as much as possible. For the developer, it is like having the answers to an exam before taking it.
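
For example, the invoice question raised earlier could be settled by capturing a user acceptance test case as an executable test. This is only a sketch – the Invoice and Transaction types and the report_transactions() function are hypothetical stand-ins for whatever feature is being built:

    from dataclasses import dataclass

    @dataclass
    class Invoice:
        date: str  # ISO-format dates compare correctly as strings

    @dataclass
    class Transaction:
        date: str
        amount: int

    def report_transactions(invoice, transactions):
        # The behavior the test pins down: include the Invoice Date itself.
        return [t for t in transactions if t.date <= invoice.date]

    def test_transactions_on_invoice_date_are_included():
        # Given: an invoice dated 2015-06-30 and transactions around that date
        invoice = Invoice(date="2015-06-30")
        transactions = [
            Transaction(date="2015-06-29", amount=100),
            Transaction(date="2015-06-30", amount=250),  # same day as invoice
            Transaction(date="2015-07-01", amount=75),
        ]
        # When: the user runs the invoice transaction report
        shown = report_transactions(invoice, transactions)
        # Then: transactions dated on or before the Invoice Date are shown
        assert [t.amount for t in shown] == [100, 250]

The real example data answers the “less than or less than or equal” question far less ambiguously than a business rule statement would.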

Sprint

When the team has pulled a user story into a sprint, the first step in working on the story is to meet with the users and detail the user acceptance test cases. This is when and how detailed requirements and behaviors are captured.  Note once again that the user story itself does not need to contain any of this detail. The detailed user acceptance test cases are then used by developers, QA staff, and users to create and validate the working software.

Requirements Life Cycle in Support

Support

Once the feature is deployed to production, bug reports that are handled as support tickets may uncover additional user acceptance test cases. As the one living document that the team maintains, the user acceptance test cases are updated as the software evolves.  They are used to run regression tests to make sure that other functionality is not impacted by the defect correction.

By capturing complex requirements in the form of user acceptance test cases, the team can build and maintain a product with the confidence that the user’s needs are being satisfied.

Are You Being Agile or Doing Agile?

The Goal

The starting point for adopting agile is for a team to learn the mechanics of an approach such as Scrum.  They write user stories instead of use cases, do daily standups dutifully, and replace their lessons-learned meetings with sprint retrospectives. Eventually the rest of the business comes on board, and the Product Owner takes over writing the user stories and maybe even attends the retrospectives. The product backlog gets populated in time for the next sprint planning session.  The team might spend some time talking about some of the upcoming stories before the next sprint.

A novice golfer often focuses too much on swing mechanics and less on the desired end result of putting the ball on the target.  In doing so, he is “doing golf” rather than “playing golf” and his improvement typically plateaus at a mediocre level. Agile teams can also plateau and fail to realize the full potential of an agile approach by “doing agile” rather than “being agile”.  Here are some of the smells of teams that are doing agile:

Detailed user stories

One of the principles of the Agile Manifesto is that “the most efficient and effective method of conveying information to and within a development team is face-to-face conversation.” Product owners who are accustomed to waterfall methodologies may still be tempted to put all of the requirements in the user story rather than using stories as placeholders for conversations with the developers.  Detailed user stories are both an inhibitor to conversation and wasteful, since it makes more sense to capture detailed requirements as test cases.

Obsession with metrics

The first step in replacing traditional project management methodologies with agile approaches is to stop equating hours worked with progress.  In agile, stories that are “done” are the only true measure of progress. As such, it can be very tempting for an organization to emphasize user story completion rates as a key metric for judging the performance of agile teams. The danger arises when a team is considered a failure if it does not deliver 100% of planned stories in every sprint. The natural reaction is for the team to work very hard to eliminate all uncertainty and risk before starting a sprint. This can quickly lead to a culture of big, detailed user stories and excessive story grooming and sprint planning efforts, all in order to eliminate any possibility of not delivering 100% of stories in the sprint. In extreme cases, a team may spend an entire week planning a 4-week sprint!

Fear of building features over many sprints

The misconception that it is best to develop an entire feature in one fell swoop may still be present in some organizations.  Usually the rationale is something along the lines of: building the entire feature in a single sprint makes the process more efficient because it only has to be tested once.  This misses the point of agile, which is to build feature and product increments, learn from user feedback, and make adjustments by adding to the feature in subsequent iterations. This brings to mind the old adage: “Users don’t really know the requirements until they fail to be met by the software.” But what about having to regression test as increments are added?  Automated testing can eliminate a lot of that effort.

Stories not considered done when new requirements discovered

New requirements will surface as the software comes to life in a sprint.  That’s good!  The key is recognizing these new requirements as new user stories rather than as defects that must be corrected in order to consider the original story “done”. This requires strong trust among the users, Product Owners, and development team, and a shared recognition that the new requirements were not factored into the story sizing for the current sprint (remember – requirements are not detailed before starting a story).  If the team starts to feel like a story is a lot bigger than they thought when they sized it, this culture of trust allows a new user story to be created to handle the additional work in a subsequent sprint.  The “doing agile” version of this is where CYA kicks in and the Product Owner and team members run for cover to avoid the organizational fallout from additional work coming out of a sprint.

Reflections

  • Being agile allows a team to focus on building great software that delights customers rather than merely creating the illusion of progress.
  • Just like becoming a great golfer, this is often the most challenging part of an agile transition.

Set the Stage for Swarming

Swarming is the essence of agile teamwork. Everyone on the team pitches in to push stories over the finish line. Egos are left at the door and team members sometimes operate outside of their comfort zones so that the team can deliver on its commitments.

Sprint planning sets the table for effective swarming.  Tasks for all types of work are represented in the sprint plan.  For a software development effort, this includes GUI creation, web development, database development, QA, and any other work needed for the sprint. A user story task list with the team’s estimates might look like:
Sprint Plan with Hours Only
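
For illustration (the task names and hours are hypothetical), a single story’s task list covering every type of work might be:

    # One user story's tasks span all of the work needed to reach "done".
    story_tasks = {
        "Create account summary screen (GUI)":     6,
        "Build account service endpoint (web)":    8,
        "Add account tables and views (database)": 4,
        "Develop QA test scripts (QA)":            5,
        "Execute QA test scripts (QA)":            3,
    }
    print(f"Total task hours for this story: {sum(story_tasks.values())}")  # 26 hours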

In sprint planning, the team estimates stories and tasks until it fills its capacity (available hours to work on tasks) for the period of time covered by the sprint.  This is different from velocity, which is based on user story points or ideal days and provides a rough measure of what can fit in a sprint as a high-level release planning tool.

During sprint execution, the team determines based on its WIP limits or other factors whether to swarm on a story.  For example, Web developers could help out with the Develop QA test scripts task if needed to get a story done as the end of the sprint approaches.

Teams who are having trouble swarming may be the victims of their own sprint plan.  Some toolsets or teams treat certain types of work differently in the sprint plan.  For example, Microsoft’s Team Foundation Server (TFS) uses the concept of Test Points as a way of estimating the QA testing tasks and provides progress reporting based on burning up these points.   The plan for this story then looks like:

Sprint Plan with Tasks and Test Points

Notice that the QA tasks are no longer in the same task list as the developers’ tasks. As a result, the developers create a sprint plan using only the development tasks.  They add stories and tasks to the plan until they have filled all of the projected development-hours capacity for the sprint. The QA people create a separate sprint plan based on test points. While developers and QA are working on the same user stories, they have lost their common pool of work to be completed to get the stories to “done”.

Effective swarming is based on the premise that all team members are responsible for all of the work needed to complete the stories.  Creating a second sprint plan for other work in the same sprint creates a barrier that inhibits people from working outside of their own pool of tasks.

Common arguments for this approach are:

“QA should not have to provide separate task estimates because test cases completed are a truer measure of progress rather than hours spent on testing.”

This argument might hold water in the purest sense, but does anyone really want to sacrifice team cohesiveness for the sake of generating status reports that are slightly more accurate?

“QA work cannot be accurately estimated because the level of effort depends on how many bugs are in the code.”

The reality is that all work has a certain level of uncertainty and team members use their collective experience to factor that into their time estimates.

If someone insists that QA work be planned separately from the developers’ tasks, I pity the fool who approaches the ScrumMaster mid-sprint and says, “There is more QA work than the QA person can handle. Can we just tell the developers to swarm on the QA work?” If the developers filled their available sprint capacity with coding tasks during sprint planning, they don’t have capacity to help with testing!

Reflections:

  • Maintaining separate plans for different types of work on a story is a barrier to swarming.
  • A sprint plan that includes tasks for ALL of the types of work for the stories provides better visibility and whole-team ownership of all tasks.
  • All of the tasks should be estimated in hours and counted against the team’s overall capacity.