From the outside, agile planning can seem the same at every level. You make a Kanban board, visualize the work in progress, create a backlog of the same-size elements, order the backlog, and collect metrics to predict when things will be done.
Except that the problems that planning solves are entirely different at each level in enterprise agile.
Prediction is also different at the various levels. An individual team can predict velocity by breaking down the work, estimating it, and making projections. Programs need to worry about dependencies and coordination, while portfolios are largely trying to figure out what initiative to fund, based on when the previous one is scheduled to finish.
All those differences mean the results of each planning phase will look different, depending on what level you're looking at. This is a quick guide that clarifies what agile planning means at each level of enterprise agile.
Planning at the team level

The core questions the individual technical team is trying to answer are what it can commit to in a sprint and when the work will be done.
Velocity is the total relative effort delivered over time. To estimate effort, teams break work down into small pieces called stories, then assign those stories point scores. The point scores are relative to each other, hence "relative effort." Velocity is the number of points the team finishes in a sprint, or the rolling average of those sprints over time.
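That arithmetic is simple enough to sketch. A minimal example, with hypothetical sprint numbers, that computes a rolling-average velocity and projects how many sprints a backlog will take:

```python
import math
from statistics import mean

def rolling_velocity(sprint_points, window=3):
    """Rolling average of points finished over the last `window` sprints."""
    return mean(sprint_points[-window:])

def sprints_remaining(backlog_points, velocity):
    """Rough projection: sprints needed to burn down the remaining backlog."""
    return math.ceil(backlog_points / velocity)

history = [21, 25, 23, 27, 25]       # points finished per sprint (hypothetical)
v = rolling_velocity(history)        # (23 + 27 + 25) / 3 = 25.0
print(v, sprints_remaining(120, v))  # 25.0, then ceil(120 / 25) = 5
```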
Remember, velocity is a tool for planning, not a score. As soon as an executive exhorts the team to get velocity “up,” the velocity number will go up—because teams will start inflating their estimates.
To get predictability at the team level, the team needs a few things.

Small stories

With a level of detail that ranges from a sentence to two paragraphs, "big" stories leave room for imprecision and information loss. For velocity to be stable, the individual team members need to genuinely understand each story's acceptance criteria. Over time, the team breaks stories down toward an ideal size, where one story equals about one day of code writing.
Consistent story sizing
A group exercise such as relative mass valuation allows every member of the team to have input—and even argue—about the effort involved in each story, then come to a consensus on an entire backlog quickly.
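The exercise itself is conversation, but its output, an ordering of stories plus a few agreed-upon anchor values, is easy to turn into point scores. A minimal sketch, with hypothetical story names and anchors:

```python
FIB = [1, 2, 3, 5, 8, 13]  # common point buckets

def assign_points(ordered_stories, reference_points):
    """Map stories, ordered smallest to largest effort, onto point buckets.
    `reference_points` maps a few agreed anchor stories to their values;
    every story inherits the most recent anchor's value."""
    points = {}
    current = FIB[0]
    for story in ordered_stories:
        if story in reference_points:
            current = reference_points[story]
        points[story] = current
    return points

ordered = ["fix typo", "login form", "search page", "payment flow"]
anchors = {"fix typo": 1, "search page": 5}
print(assign_points(ordered, anchors))
# {'fix typo': 1, 'login form': 1, 'search page': 5, 'payment flow': 5}
```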
A consistent planning window

At the team level, the planning window, or the amount of time a group has to complete a set of work, is typically two weeks. This needs to plug into the program level, to help the program predict and plan the team-of-teams-level work.
Retrospective as a part of planning
Agile teams try to continually improve. If velocity is dependent on the team, not some external blocker, it should improve over time. Fine-tuning the planning process is an often overlooked but important task to address in the retrospective. Ideally, each retrospective ends with a concrete thing that can be improved, a trial period, a person responsible, and a way to measure success.
Planning at the program level

The program gathers data from the teams in order to answer the questions the portfolio will ask. The portfolio wants to know when a program will finish so it can reassign that team of teams to its next project. As such, the program is stuck in the middle.
Program planning has some things in common with team-level planning. The group needs to create the work in meaningful chunks, visualize them, and track how chunks are progressing against the plan.
Organizations that work as teams of teams and use SAFe are likely to plan by the program increment, or PI, which represents four to six sprints (8 to 12 weeks) followed by an innovation and planning sprint. During the planning sprint, the group holds a PI planning event, ideally getting the entire team of teams (50 to 150 people) in one room for two days to discuss what has to be built.
The PI planning event has two purposes. First, it's an opportunity for the group members to get to know one another, explain how each team works, and describe what problems they solve. This allows team members to understand the greater goals of the project.
Second, the event gives teams a high-level plan for the next 8 to 12 weeks. That means product owners come with an extremely rough plan, get it refined, figure out what the team will commit to, and leave with tasks that are defined well enough for teams to break down further into stories yet large enough that they can be visualized and placed on a Kanban board. In some cases, the goal is to define all the stories themselves—at least at the rough-cut level.
In their experience at Lego, Henrik Kniberg and Lars Roost found that the team accomplished about 80% of what was planned, added 26% of unplanned work, and had to take out about 19%.
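Those figures are straightforward to compute once planned and delivered work items are tracked; a small sketch (story identifiers hypothetical):

```python
def pi_accuracy(planned, delivered):
    """Compare planned vs. delivered work for a program increment.
    Returns (plan completed, unplanned added, planned dropped),
    each as a fraction of the original plan."""
    planned, delivered = set(planned), set(delivered)
    done = len(planned & delivered) / len(planned)
    added = len(delivered - planned) / len(planned)
    dropped = len(planned - delivered) / len(planned)
    return done, added, dropped

planned = {"a", "b", "c", "d", "e"}
delivered = {"a", "b", "c", "d", "f"}   # "e" dropped, "f" added mid-PI
print(pi_accuracy(planned, delivered))  # (0.8, 0.2, 0.2)
```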
The core leaders of the program get together weekly to discuss the program trendline and what to do about it. One term for this meeting is the “Scrum of Scrums.” A key part of that meeting is dependency management—what team is waiting on whom. This involves micro-level planning. It might mean, for example, switching tasks for a few team members in the middle of a sprint for as little as an hour or two. Solving four or five of these dependencies a week can have an overall impact of doubling or tripling the work that is done.
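The "who is waiting on whom" list lends itself to a tiny data structure. A sketch with hypothetical team names, which also surfaces the team blocking the most others:

```python
from collections import defaultdict

# Each pair reads: (waiting team, team it is waiting on). Hypothetical data.
dependencies = [
    ("checkout", "payments"),
    ("checkout", "inventory"),
    ("mobile",   "payments"),
]

blocked_by = defaultdict(list)  # team -> teams it is waiting on
blocking = defaultdict(list)    # team -> teams waiting on it
for waiter, provider in dependencies:
    blocked_by[waiter].append(provider)
    blocking[provider].append(waiter)

# The team blocking the most others is a candidate for the
# micro-planning attention described above.
bottleneck = max(blocking, key=lambda team: len(blocking[team]))
print(bottleneck)  # payments
```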
The leaders at the program level—a release train engineer, product manager, and system architect—work together to identify the bottleneck: the team, feature, or component that is the anchor slowing down the project. Again, this is a micro-plan—what those three will focus on in the next day to week to speed the program.
Johanna Rothman, author of Agile and Lean Program Management and Predicting the Unpredictable, suggests several tweaks to the delivery process that result in more predictable performance. For example, frequent integration (referred to as continuous integration, or CI), on the order of hourly, combined with the ability to continuously release, makes actual progress visible and allows the organization to change direction easily. Frequent system demos that show actual working software can show decision makers what has been built in a much more meaningful way than statements such as "450 out of 920 story points at the halfway point of the PI" ever could.
Jennifer Fawcett, a SAFe fellow, refers to the innovation and planning sprint as extra “gas in the tank.” If the team needs to work on features, it can use that time to finish up the sprint. It's tempting to skip that, to save a week on the schedule—but think of it as an investment in risk management.
Planning at the portfolio level

Like the program, the portfolio needs to take the progress to date of all of the programs, then make forward-looking predictions. Here are some core elements of that process.
Rationalize the portfolio
In many cases, the portfolio doesn’t make sense or does not exist. Instead, the company is a collection of essentially independent business units. That can work, but it means that each business unit is self-funding and deciding its priorities. A portfolio can look at the projected value of the business units (or programs, or value streams) and decide what to increase funding for, what to start, what to stop, and so on. Figuring out where the money is going now and visualizing it is a good first step.
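That first step, seeing where the money goes, can be as simple as a rollup of current spend by program. A toy sketch with hypothetical figures:

```python
from collections import Counter

# Hypothetical line items: (program or business unit, annual spend).
line_items = [
    ("storefront",    120_000),
    ("storefront",     80_000),
    ("logistics",     150_000),
    ("data-platform",  60_000),
]

spend = Counter()
for program, amount in line_items:
    spend[program] += amount

# Largest investments first: a starting point for fund/stop decisions.
for program, total in spend.most_common():
    print(f"{program:>14}: {total:>9,}")
```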
Gather historical data
Kniberg and Roost were onto something when they calculated the percentage of planned stories completed at the end of a program increment. The portfolio needs to look not only at how much work was done versus the plan, but the revenue and cost of delay as well, along with the effects of increasing staff in one program by decreasing it in others.
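One common heuristic for weighing cost of delay, an assumption here rather than anything prescribed above, is CD3: cost of delay divided by remaining duration. A sketch with hypothetical figures:

```python
def rank_by_cd3(programs):
    """Rank programs by cost of delay divided by remaining duration
    (the CD3 heuristic), highest urgency first. Figures are hypothetical."""
    return sorted(
        programs,
        key=lambda p: p["cost_of_delay"] / p["months_left"],
        reverse=True,
    )

programs = [
    {"name": "rewrite",   "cost_of_delay": 30_000, "months_left": 6},  # 5,000/mo
    {"name": "mobile",    "cost_of_delay": 40_000, "months_left": 4},  # 10,000/mo
    {"name": "reporting", "cost_of_delay": 18_000, "months_left": 2},  # 9,000/mo
]
print([p["name"] for p in rank_by_cd3(programs)])
# ['mobile', 'reporting', 'rewrite']
```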
Large planning window
At the portfolio level, investments typically tie into a corporate funding and strategy cycle. This could be an annual plan, a multiyear plan, or, most likely, both. Senior managers expect to "get budget" once a year and use that budget to hire and allocate resources. That means a flurry of activity before the annual process. Often, the company has contractors whose contracts expire the day the fiscal or planning year ends, and the portfolio team will want to make funding decisions early so those contractors can renew. This avoids a risky situation where you lose contractors, and their subject matter expertise, because they find more secure positions at other companies.
Smaller planning windows
The portfolio team should meet weekly to review progress and discuss adjusting plans. This does not need to be as detailed as finding budget so that one team can bring in a contractor, but it might include discussions of unplanned work, requests for budget, staffing issues, and coordination issues. In an emergency, the company might shift resources from one program to another; this small group can make an effort to minimize the disruption and pick the right team. (On a short assignment, the wrong team will simply create onboarding work and chaos.)
Putting it all together

As stated in the introduction, breaking the work down into chunks and visualizing it at all levels on a Kanban board is probably a smart idea. Walking between these boards and playing "spot the difference" can be an eye-opening problem-solving method.
Rothman is quick to point out that communication is key to all of this—and that means actually talking together, as humans, and not just sending emails. Work together to build a mutual understanding of what is being built, by whom, and some idea of how to quantify the value of the work. Then try to figure out when the group can have it done, and work iteratively to hone the idea and make it fit into a planning window that makes sense.
The three (or four) levels work together, iteratively diving deep into the software to get some real data, pulling it up to make predictions about performance, analyzing the business value of those pieces, and then going back down again. In the end, the picture looks more like a rolling wave, with the pieces becoming fuzzier and more approximate the further out you go.
The classic book on all this is Mike Cohn's Agile Estimating and Planning, which focuses on the team level. Rothman's two books, Agile and Lean Program Management and Manage Your Project Portfolio, provide higher-level guidance for a larger organization. Finally, Troy Magennis' company, Focused Objective, provides simple spreadsheet and Windows tools to calculate velocity and predict performance over time.
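In the same spirit as those forecasting tools, here is a toy Monte Carlo forecast that resamples historical sprint velocities to estimate a likely and a conservative number of sprints remaining (all numbers hypothetical):

```python
import random

def forecast_sprints(history, backlog_points, trials=10_000, seed=42):
    """Toy Monte Carlo forecast: repeatedly simulate burning down the
    backlog by sampling past sprint velocities, then report the 50th
    (likely) and 85th (conservative) percentile sprint counts."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        remaining, sprints = backlog_points, 0
        while remaining > 0:
            remaining -= rng.choice(history)
            sprints += 1
        outcomes.append(sprints)
    outcomes.sort()
    return outcomes[trials // 2], outcomes[int(trials * 0.85)]

likely, conservative = forecast_sprints([18, 22, 25, 19, 30], 200)
print(likely, conservative)
```

The spread between the two numbers is itself useful: a wide gap signals volatile velocity and argues for more hedging in the plan.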
For now, though, you have enough to start. Go forth and plan—but don’t forget to gather data.