Agile By Stealth – Converting a Waterfall Project to Agile

One thing I’ve noticed about agile is that it’s difficult to really understand it unless you’ve actually done it. This makes it tricky to sell agile – especially to people who are used to a plan-driven (waterfall) approach. In other words, pretty much everyone who hasn’t already gone agile.

In my experience, agile delivery is just too different from waterfall for some folk to jump in head first. It is, however, possible to take people on a journey from waterfall towards agile delivery that doesn’t involve too much of a leap of faith. In this article I’m going to describe a strategy that I’ve used a few times with some success.

What’s Wrong with Waterfall Anyway?

First of all, why change? Here are a few problems I’ve had with waterfall:

  1. Scope bloat. Waterfall projects are very “now or never” – and so stakeholders pack the scope with all sorts of nice-to-have functions that they think they’ll never get if they don’t ask for them now.
  2. It’s slow. Each phase (“requirements”, design, build, test) has to be finished before the next one can start, including all the tricky bits and the lower priorities. And because of the 80:20 rule, that last 20% always takes 80% of the time to complete.
  3. Architectural risk. Waterfall projects carry architectural risk until quite late on. Some architectural issues surface during build, but others remain hidden until well into testing.
  4. Functional risk. You only really discover whether you’ve delivered the right thing when you get it in front of the users – either in UAT or in production. By which time it’s often too late to do anything about it.
  5. Resistance to change. Change happens – either due to business change or in response to architectural or functional issues. But change impacts the plan, and we’re tracking progress against the plan, so we resist the change. Sometimes this is a good thing (it avoids scope creep) but sometimes the result is that we continue to work on low priorities (because they are in the planned scope) at the expense of higher priorities (the changes).
  6. Uncertain cost/duration. Up-front estimating and planning aims to give certainty to project cost and duration, and yet few waterfall projects seem to deliver on time and to budget. More on this later.
  7. Resource levelling and availability. Intensive periods of analysis, design, build and test involve ramping up and ramping down of BAs, designers, developers and testers. This is tricky logistically and also means that resources involved in the earlier phases aren’t always around to support the later phases, and knowledge is lost.
  8. Team motivation. Working on a waterfall project means waiting a long time before seeing any meaningful results, other than documentation. In such circumstances it’s easy to lose focus/motivation.

This is not an exhaustive list, nor a comprehensive critique of waterfall – it’s just the things that are particularly relevant to the journey I describe below.

And What’s So Scary About Agile?

Most people I know who have worked on agile projects have never looked back. So what is it that makes agile such a hard pill to swallow?

I do know some people who have had negative experiences of agile. But usually it’s because the project wasn’t properly agile – the most common example being the development team trying to be agile but without management/business stakeholder buy-in.

Negative experiences aside, some people struggle with the concept, especially project managers with a long pedigree of plan-based methods. I think the problem is that, on paper, it just doesn’t sound like it would work. It’s counter-intuitive.

Let me draw an analogy from the world of economics. Most people would agree that a free market economy is more successful than a planned economy. Countries that have adopted free market economies, where companies compete with one another to win business, have generally outperformed those that adopted centrally planned economies, as evidenced by the collapse of communism in the USSR and elsewhere. And yet in theory a planned economy sounds like a great idea – everyone has their place and they work together to produce goods in the most efficient way. Conversely, the competitive free market sounds like a really bad idea. Several companies all working independently to produce the same products sounds massively inefficient. But in practice things are different. People are not machines, and the planned economy doesn’t provide the right incentives to get people working at their best, whereas the free market does. Despite its inefficiencies it produces better results. It’s not perfect and it has its drawbacks (greedy bankers spring to mind), but the evidence suggests it’s better than the alternative.

I think something similar happens with agile. The plan-driven (waterfall) approach makes sense on paper. Understand the “requirements” before you design. Design before you build. Build before you test. Simple! Whereas the agile approach sounds a bit mad. What do you mean you don’t know up-front what the scope will be for every iteration? What do you mean you don’t know when you’ll be finished? And you say you’ll also have to refactor the design as you go? Surely it would be better to do the design up-front. And you say we’re going to build stuff and show it to people and expect to have to change it? Can’t we just get it right first time?

So it’s understandable that some people are sceptical about agile. And it’s no surprise that the sceptics are often the project managers and stakeholders – they’re the ones whose heads will roll if things do go wrong.

The Safety of Plans

The thing about plans is that they make people feel safe. A detailed up-front plan gives the impression that we know what we’re doing, and that we know when we’ll be done and how much it will cost. We can track progress against a plan and we’ll know whether we’re on target. If we’re running behind we can take action to bring things back on track. In short, a plan gives the impression of certainty.

I’ve never worked on a plan-based IT delivery project that has run to plan.

Here’s the problem: software delivery tasks are tremendously difficult to estimate. Analysis and design are so open-ended. Build is perhaps slightly more predictable, but even then every project is different, and it seems that you only really know how long something will take once you’ve done it – or at least some of it.

Up-front detailed plans also assume that (a) we’ll get it right first time and (b) nothing will change. In my experience, neither of these is normally the case.

In truth, a detailed up-front plan gives the illusion of certainty, where no such certainty exists.

Nevertheless, most project managers and business stakeholders I’ve worked with feel very uncomfortable without some kind of plan. I’ve been told many times that “failing to plan is planning to fail”.

Agile by Stealth – One Step at a Time

On a recent waterfall project I felt that we were suffering from some of the problems I described above, so I proposed a change to a more agile delivery method. I talked about product backlogs, I talked about phased and time-boxed delivery, I talked about prioritization and embracing change, and all the benefits it would bring. I explained that I’d done it before on a very similar project and it had worked really well. I thought I had made a very compelling argument.

My proposal was blown out of the water. Both the project sponsor and the project manager felt the change was too risky and also they couldn’t really see the point. They didn’t like the idea of open-ended scope, nor the idea of breaking the project into phases. We know what we want and we want it all, so just get on and deliver it!

I skulked despondently back to my desk and cursed my lack of persuasive power.

In hindsight, my mistake was to propose a wholesale change to the method. It was too much of a radical switch for my stakeholders to stomach. It didn’t help that they were already wary of agile from previous bad experiences of sort-of-agile.

It was time for Plan B.

I decided to try to introduce agile by stealth – one small step at a time. Each individual step makes sense in its own right and also takes the project slowly but surely on a journey to a more agile way of working.

I’ve done this a few times now for a number of clients. Sometimes there has been support from within the client organisation – little pockets of agile here and there. Other times I’ve been largely on my own. Here’s what I have found to be key factors to success:

  1. Introducing change one step at a time
  2. Making sure each individual change has merit in its own right
  3. In particular, by implementing changes as project risk mitigators
  4. Introducing the changes only when the time is right, not before
  5. Avoiding agile terminology and the ‘a’ word itself, and if anyone suggests I am trying to introduce agile by stealth, vigorously denying it :)

Here’s a proposed sequence of steps based on my experiences. I didn’t use every step in every project, only what seemed appropriate.

The approach assumes the person doing the agile-by-stealthing is the BA on the project (the person who is specifying what is to be built). It’s possible to execute the strategy from other roles (such as development), but bear in mind it’s important to have the BA’s buy-in.

Step 1: Create a Feature List

My first step is usually to create a feature list – this is simply a list of all the system functions to be built, sometimes referred to as a scope list, use case catalogue or (high level) requirements catalogue. I’m not talking about a detailed requirements catalogue here – the items on the list are at a higher level of granularity, for example:

  • Register account
  • Log in
  • Browse products
  • Add product to cart
  • View cart
  • Check out

I don’t worry too much about the granularity at this point because I know I’ll be splitting features down later on. I do my best to capture all the scope items but I acknowledge that some will surely come up later that I’ve missed or we didn’t think of.

I give each feature a unique ID – F001, F002 etc. I do not call the feature list a “product backlog” and I do not call the items on the feature list “stories”. I also do not use the classic “as a/I can/so that” story format – I stick with simple free-format descriptions.

Also, at this point, my feature list is likely to be electronic (in a spreadsheet). No sign of any index cards or a card wall just yet. Here’s an example:

So far, so normal. Most projects start out with a scope list of some sort.

Step 2: Estimate Features

Very early in a waterfall project, the project manager is likely to want to draw up a plan, and for that he or she is going to ask for some analysis/build/test effort estimates.

There’s no reason to deny this request. So I find a convenient senior developer and get him or her to estimate the build effort for each feature on the scope list, in man-days. Some features will be easier to estimate than others, but as long as I get something for each feature it’s good enough for now.

I refer to these estimates as perfect man day (PMD) estimates. Now, it takes more actual man days than perfect man days to build an IT system because (a) no day is perfect and (b) developers are eternal optimists, and the job always takes longer than expected, for a variety of reasons. So we need a contingency factor. On most projects I’ve worked on, the contingency factor was a whopping 100% (i.e. double the estimates) – way more than anyone can believe it needs to be. Usually at this point, the PM can only stomach adding 40% or 50% at most.

For analysis and test I usually size these as a percentage of build e.g. using a ratio of 30:40:30 for analysis:build:test. Here’s my example feature list, updated with PMD estimates:

Perfect Man Day (PMD) estimates

Whatever contingency factor and ratios I settle on, the key point is that I now have a relative size for each of my features.

These are absolutely not story points :)
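The arithmetic behind these estimates can be sketched in a few lines. The feature sizes, the 30:40:30 ratio and the 50% contingency below are purely illustrative:

```python
# Step 2 arithmetic, with hypothetical figures.
# Build estimates are in perfect man days (PMDs); analysis and test
# are derived from build via the 30:40:30 ratio, and contingency
# is applied on top of the whole lot.

build_pmds = {              # hypothetical feature list
    "F001 Register account": 3,
    "F002 Log in": 1,
    "F003 Browse products": 5,
    "F004 Add product to cart": 2,
    "F005 View cart": 2,
    "F006 Check out": 8,
}

CONTINGENCY = 0.5                                  # the 50% the PM can stomach
RATIO = {"analysis": 30, "build": 40, "test": 30}  # analysis:build:test

total_build = sum(build_pmds.values())
# Build is 40% of the whole, so scale up to get analysis + build + test
total_all = total_build * (sum(RATIO.values()) / RATIO["build"])
with_contingency = total_all * (1 + CONTINGENCY)

print(f"Build: {total_build} PMDs")
print(f"Analysis + build + test: {total_all:.1f} PMDs")
print(f"With contingency: {with_contingency:.2f} PMDs")
```

Whatever the exact numbers, the useful output is the relative size of each feature and a defensible total.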

Step 3: Prioritize Features

The next thing I do is call my stakeholders to a meeting and explain that I’m going to start work on the detailed analysis for the project, but first I want to prioritize the features – I can’t do all the analysis at once and I want to start on the most important features first.

My rationale for this is as follows: the project plan says I have a certain window to complete the analysis so that build can get started. Whilst I’m confident I will get it done, there’s always a risk that things take longer than planned. So, as a de-risking exercise, I want to do the analysis on the most important features first. That way, if we’re running behind, we always have the option to start development on the important features whilst I’m finishing off the analysis on some of the less important features.

Importantly, my intention at this point is not to de-scope any features. It’s just so I know what to do first. We are fully intending to deliver all the scope. This is important in building confidence with stakeholders. If they think my plan is to de-scope stuff, they are generally less willing to co-operate.

Also, framing the argument to prioritize in terms of risk mitigation makes perfect sense to a project manager. PMs are comfortable with the language of risks. A really good way to set up for this step is to raise the risk first. So, I raise a risk that the analysis might take longer than planned. Then, I offer up feature prioritization as the risk mitigation.

I’ve previously written in detail on how I prioritize features. Here’s the summary:

  1. I get some index cards
  2. I write each feature on a card, including the size estimate
  3. I lay them out on the table
  4. I get my stakeholders to put them “in order”

Specifically, I want the features ordered primarily by business value. This is not always straightforward, especially when there are dependencies between features. At this stage I don’t worry about getting it perfect. The main objective for now is to separate the critical features from the less critical ones.

I usually avoid using MoSCoW terminology (must, should, could, won’t) – stakeholders tend to say that everything is a “must”. Rather, I go either for a stack ranking, or for three buckets labelled 1, 2, 3 or “first”, “second”, “third”. Remember, I’m not looking to de-scope anything. All I want to know is what to do the detailed analysis on first.

And if anyone suggests that I appear to be grooming my backlog, I just give them a strange look.

An example task board with features in priority order

Step 4: Set up a Task Board

Now that I have my features written up on index cards, and prioritized, it seems like a good idea to get them up on a whiteboard or wall somewhere visible. It will allow me, and the team, to see at a glance what’s in scope, and also the (analysis) priority order.

At this stage, I don’t have any columns on my feature board. My features are in a single column, in priority order.

Once I have a feature board, which is, after all, highly visible, I’m likely to start attracting attention to myself. Passers-by will comment that I’ve “gone all agile”. I’ll play it down of course. This is a waterfall project; I’m just using a feature board to keep track of my progress, nothing more.

And in any case, it can’t be agile because my cards represent features, not stories :)

Step 5: Do Incremental Analysis

With my groomed backlog, sorry, prioritized feature list in hand, I can start my detailed analysis work. Some people call this requirements gathering/elicitation. I call it functional design. The ultimate output is a functional specification, including things like use cases, logical data models and interface mock-ups/wireframes.

In line with my risk mitigation strategy as declared in step 3, I do the functional design incrementally. Specifically, I do it feature by feature. For each feature, I collaborate with my stakeholders to produce the relevant use cases, logical data model fragments and interface mock-ups. In an ideal world, I have techies and/or testers involved in this process from the outset, but this depends on whether they are available early on in the project – more on this in step 7.

As my analysis technique has matured, I’ve developed a mini analysis lifecycle that I execute on each individual feature, including a “Define” phase (high level approach and options) and a “Design” phase (detailed functional design):

BADM - Overview

An important aspect of the analysis technique is “feature splitting” – breaking down a given feature into sub-features, either to divide the feature into bite-sized chunks or, more importantly, to separate out the low value parts of the feature from the high value parts. I complete the analysis only on the high value sub-feature and put the low value sub-feature back on the feature board for re-prioritization.

I prefer to create a separate (lightweight) functional design specification for each feature, but it’s also possible to build up a single document incrementally, which might be more appropriate in organisations where specific monolithic deliverables are mandated.

When the functional design is complete for a given feature, I try to get my primary stakeholder to sign it off. If I meet resistance to signing off the functional design piecemeal, I ask them to agree that we can “baseline” it instead, with the understanding that yes, of course we can change it later if we want.

And at this point I can also split my feature board into columns:

  1. Pending Analysis
  2. In Analysis
  3. Analysis Complete

Or, if I’m using my define/design analysis lifecycle, I’ll have these columns:

  1. Requested
  2. In Define
  3. Define Complete
  4. In Design
  5. Design Complete

So the team can see at a glance where I’m up to with my work, as per this example:

A task board with analysis columns

For more details on how I do feature-by-feature analysis, including sample outputs, see Business Analyst Designer Method.

Step 6: Measure Analysis Velocity

Once I have completed my analysis on a few features, my PM is going to start pestering me to know when I’ll be done.

I’ll be able to point him/her at the feature board to see the current status, but that doesn’t give them any idea of how fast I am going and how much longer I need.

So I create a burn-up chart to track my progress over time. The burn-up chart shows, week-by-week, how many PMDs’ worth of features I have completed (signed off/baselined). It also shows what the total scope is in terms of PMDs. By extrapolating my progress we can see roughly when I’ll be done, as shown in this example:

Burn-Up Chart - Analysis
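The extrapolation behind a chart like this is simple division. Here is a sketch with made-up weekly figures:

```python
import math

# Cumulative PMDs of analysis signed off/baselined at the end of
# each week so far (hypothetical figures).
completed_by_week = [4, 9, 13, 18]
total_scope = 52   # total analysis scope in PMDs

# Average velocity to date, and a straight-line projection forward.
velocity = completed_by_week[-1] / len(completed_by_week)  # PMDs per week
remaining = total_scope - completed_by_week[-1]
weeks_to_go = math.ceil(remaining / velocity)

print(f"Velocity: {velocity:.1f} PMDs/week; "
      f"done in roughly {weeks_to_go} more weeks")
```

As more weekly actuals arrive, the velocity figure firms up and the projection gets steadily more trustworthy.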

If the burn-up chart shows that I’m on track, great. If not, my PM is likely to want to discuss how we can recover the situation. And at this point he or she probably won’t want to change either the deadline or the scope. So we’ll look at the usual options: work smarter, work faster, work overtime, get help, reduce quality. Maybe we can use one or more of those strategies to recover some time. We can but try, and the burn-up chart will tell us, week by week, how we’re doing.

And because I’ve been doing incremental analysis, and baselining/signing off as I go, as a last resort we always have the option to start development on plan whilst I’m just finishing off the last bits of analysis. That was, after all, the risk I was looking to mitigate.

Step 7: Run a Proof of Concept

Once I get a little way into the detailed analysis work, I call a meeting with the PM to raise another risk. This time, the risk is technical. I’m worried that some of the proposed system functionality seems either functionally or architecturally novel, and hence there’s a risk that we might not be able to build it, or that it might not work functionally for its end users. I suggest that a proof of concept might be a good idea to mitigate this risk.

I also point out that getting a lead developer started on the project early has some other benefits:

  • They can participate in the analysis process and make sure that the functional design is technically feasible, and also contribute to the creative process
  • They can get the development environment, tools and ways of working set up
  • The proof of concept can potentially be productionised and so wouldn’t be wasted effort

I choose the subject of the proof of concept to be something that is both high business value and also interesting either functionally or architecturally. I create a separate feature for it, and I write a brief functional design for it, which might be a few simple acceptance criteria. Ideally, it implements an end-to-end thin slice of usable functionality that I can demonstrate to a user. If I have any testers on board yet (probably not), I might even get them to test it against the acceptance criteria. If not, I test it myself.

Finally, I showcase it to the project stakeholders. I get their feedback on how it looks functionally and whether there is anything they would like to change. I create a separate feature for any changes suggested and get them to prioritize it with respect to the other features in scope.

I point out that we’ve had an early victory: a piece of real, working functionality that we can build upon.

Step 8: Start Development Early

My next step is a proposal to the project manager. We have one or more developers on board. They’ve got going. We’ve mitigated some architectural and/or functional risks early.

We could do more of the same.

Because I have been doing the analysis feature by feature, we already have some functional designs agreed with the business – “baselined”. We could take one or two of the baselined features and start coding them. We’ve already talked about the possibility of starting development before analysis is complete (if I’m running late on analysis). We’re just taking that concept a little further by starting development early. Hey, we might even be able to bring the whole project in a little earlier if we start development earlier. Or, alternatively, we could start early and code for longer with a smaller development team. Fewer developers means less overhead in terms of team leadership and information sharing (as per The Mythical Man Month).

OK, there is a risk. As I go through the analysis of some of the later features, we might discover that we need to change some of the baselined features. That would involve rework and additional cost.

The first line of defence to this challenge is to claim that there are at least one or two functions that are very unlikely to change and we could get started on those at very low risk.

If that doesn’t swing it, I try this line. From the proof of concept we know that we will get some change requests from the business only after they see the built system. Better to find these changes sooner rather than later. This argument is particularly persuasive if, like most projects, time is more critical than money.

Another good argument is that once we have developers on board, we don’t want to lose them, so we want to keep them busy.

At this point, all we have done is started development early as a risk mitigation. We’re still planning to deliver a waterfall project with one big go-live at the end. Really.

Step 9: Measure Build Velocity

Once build is under way, we can start tracking build progress on the burn-up chart. This shows, on a week-by-week basis, how many PMDs of features are “build complete”.

Ideally, I count the PMDs for a given feature only once it is declared build complete (including unit testing) and ready for system test. However, in the real world this can give a very pessimistic view of progress, especially early on in a project, so sometimes I concede to show the build position including “work in progress”, relying on the developers to tell me whether they are 20%, 50% or 80% complete on a given feature. This probably gives an overly optimistic view of progress, so maybe I’ll show both measures on my burn-up chart – the reality being somewhere in between.
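Here is a sketch of the two measures side by side, with hypothetical features and progress percentages:

```python
# Two ways of counting build progress (hypothetical data):
# a pessimistic measure that counts only build-complete features,
# and an optimistic one that also credits work in progress at the
# developers' self-reported percentage.

features = [
    # (build PMDs, build complete?, % done if in progress)
    (3, True, 100),
    (5, True, 100),
    (8, False, 50),
    (2, False, 20),
    (5, False, 0),
]

pessimistic = sum(pmds for pmds, done, _ in features if done)
optimistic = sum(pmds if done else pmds * pct / 100
                 for pmds, done, pct in features)

# Plot both on the burn-up chart; reality is somewhere in between.
print(f"Complete-only: {pessimistic} PMDs; with WIP: {optimistic} PMDs")
```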

After a few weeks of measurement, this gives me and my PM our first hint as to just how optimistic the developer estimates were and whether our 40%-50% contingency factor was high enough, bearing in mind that build complete doesn’t mean test complete, because there will be defects to fix.

It does, however, give us an estimate of when we are likely to be build complete on all features in scope. Usually this comes as a bit of a shock.

Here’s an example burn-up chart showing both analysis and build progress. It shows both the actuals to date and the estimates going forward. The estimated velocity is, of course, based on past actuals, so it hopefully gives a realistic view of how long it will take us to complete build.

Burn-Up Chart - Analysis and Build

Step 10: Re-plan

At this point, the PM is likely to get a little twitchy, having had their first clear sign that things aren’t on track. They will very probably call for a re-planning exercise. Even if things look like they are on track, it’s a good idea to do some re-planning at this point anyway.

By now, I’ve done several weeks of detailed analysis and I have a much better idea of the size and complexity of at least some of the features. Also, the development team has a better idea on how long it takes to develop a feature of a certain complexity.

So, I get the PM and the lead developer in a room for an hour or two for some re-planning.

We go through the features and re-estimate each one in PMDs. Importantly, I ask the developer to try to size the outstanding features in relative terms compared to the ones already completed – that way our velocity is being measured on a consistent basis.

Or, I might suggest that we use a relative sizing method instead – say, um, “feature points”. I give one of the built features an arbitrary size in FPs and ask the developer to size the other features relative to that. We can then look at the actual time spent so far to calculate our velocity.

The new estimates give us a new total scope size in PMDs or feature points, and we can update the burn-up chart to see when we think we’ll be build complete.
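Here is what the relative-sizing calculation might look like, with hypothetical sizes and timings:

```python
# Relative sizing sketch (hypothetical figures): one built feature is
# given an arbitrary size in "feature points" (FPs), the rest are
# sized relative to it, and velocity is derived from the actual
# elapsed time spent on the completed features.

sizes = {"F002": 2,   # baseline: arbitrarily 2 FPs
         "F001": 3, "F003": 5, "F004": 3, "F006": 8}

completed = ["F002", "F001"]   # built so far
weeks_spent = 2.5              # actual elapsed build time so far

velocity = sum(sizes[f] for f in completed) / weeks_spent  # FPs per week
remaining = sum(fp for f, fp in sizes.items() if f not in completed)
weeks_to_go = remaining / velocity

print(f"{velocity:.1f} FPs/week; about {weeks_to_go:.0f} weeks of build left")
```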

Here’s an example of how that might look:

Burn-Up Chart - Re-Plan

Unfortunately, as shown in the example, the most likely outcome is that, with the benefit of the knowledge we’ve gained in the past weeks, the scope is actually bigger than we originally thought (hence the kink in the blue scope line), and it’s going to take even longer than planned to deliver.

At this point, the project manager’s shock usually turns to panic :).

The next week or two will then be spent trying to work out how we can recover. Can we get more developers? Can we do some overtime? Can we somehow work more efficiently or reduce quality?

Usually, we’ll find one or two things we can do to increase build velocity, but we’re unlikely to find a way to recover to the original plan dates.

We need a Plan B.

Step 11: Split Delivery Into Two Phases

Once it’s become apparent that we won’t be able to deliver all of the scope by the planned go-live date, sooner or later somebody will come up with the idea of some form of phased delivery.

This usually starts with the question: “so what can we deliver by the original go-live date?”

Fortunately, we have already prioritized the scope in terms of business value, and we’ve been analysing and building the most important features first. We could probably split the scope into two parts – part 1 being the stuff we absolutely must have to go live and part 2 being “the rest”.

This is a significant milestone in the agile by stealth strategy. Once the idea has bedded in of delivering some subset of functionality first, with the rest to follow “later”, the project dynamic takes a sudden shift. In particular, the concept of prioritization suddenly takes on a real importance to the stakeholders. Before, it was just to help the delivery team arrange the sequencing of their tasks. Now, it actually determines what will go in that all-important Phase 1. Likewise, feature size estimates were previously just a tool for tracking progress. Now, they will be used to determine what fits in Phase 1.

Further re-planning sessions are likely to follow to see what “fits” and to check with the stakeholders that we have the right things in Phase 1. Because we have already done much of the ground work (the estimates, measuring analysis and build velocity, the burn-up chart), it’s relatively easy to work out how many PMDs of functionality we can deliver in a given timeframe. We still need to factor in test and implementation, but we can be fairly confident on analysis and build.
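The “what fits” calculation is a simple walk down the prioritized list until the PMD budget runs out. The velocity, timescale and feature sizes below are hypothetical:

```python
# "What fits in Phase 1?" sketch (hypothetical figures): the budget is
# the measured build velocity multiplied by the weeks remaining to the
# go-live date; fill Phase 1 from the top of the priority order.

velocity = 6.0      # PMDs build-complete per week, from the burn-up chart
weeks_left = 10     # weeks until the original go-live date
budget = velocity * weeks_left

# (feature, build PMDs), already in business-value order
backlog = [("F003", 18), ("F006", 20), ("F001", 12),
           ("F004", 9), ("F002", 4), ("F005", 7)]

phase1, used = [], 0
for feature, pmds in backlog:
    if used + pmds > budget:
        break       # everything from here down goes to Phase 2
    phase1.append(feature)
    used += pmds

print(f"Phase 1: {phase1} ({used} of {budget:.0f} PMDs)")
```

In practice the cut-off line then gets haggled over with the stakeholders, but the budget itself is grounded in measured actuals rather than wishful thinking.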

Every time I have done this, I’ve always been amazed (and amused) to discover what the real must-haves are. We’re able to find interim business workarounds for features that were previously non-negotiable, and we realise we can manage without certain other features, at least initially.

It turns out there is a big difference between making a first/later decision versus an in/out decision. The former is way easier, even when your stakeholders know that some of the things in the “later” bucket might end up in the “never” bucket.

As a result, we now have a plan for Phase 1 which:

  • Contains only high value “must have” features
  • We are relatively confident we can deliver on time

Of course, this is absolutely not a Minimum Viable Product.

And at this point I can enhance my burn-up chart to show the Phase 1 scope, so we can see how we’re tracking against that.

Burn-Up Chart - Two Phases

Suddenly, things are looking a lot more do-able.

Step 12: Manage Change

There’s just one factor we haven’t accounted for yet.

On software projects, change is inevitable. In the time it takes to deliver a project, the business changes. And even if it doesn’t, as we build the system, we learn more about what we want it to be (put another way, we rarely get it 100% right first time). And even if we do get it right first time, the features we originally conceived turn out to be more complicated than we thought, because we didn’t think of every last scenario (the dreaded alternative flows).

So, in the previous step we drew up a plan that we are confident in, so long as the scope doesn’t change. But, of course, it will.

The standard plan-driven approach to managing change is, as ever, to resist it. This is especially true if we already know things are taking longer than planned. There is usually some form of change process that involves raising a change request, having it impact assessed and then deciding whether or not it gets added into scope. Generally, this process is also tied to the release of additional funds for the project, and thus is often rather heavyweight. The same process applies whether the change is large (and financially significant) or small.

The key trick for the agile-by-stealthist is to decouple the financial aspect of the process so that we make it easier to allow changes (especially small changes) into scope.

Here’s how I do this:

  1. Any proposed change becomes a new feature. The feature is added to the backlog, sorry, feature list, but very clearly marked as “not in scope yet”. This is effectively a lightweight and informal change request.
  2. The feature gets estimated and prioritized just like any other feature. This is effectively a lightweight and informal impact assessment.
  3. If the feature is not a “must have” then it can go in Phase 2, which doesn’t affect the Phase 1 go-live date.
  4. If the feature is a “must have” then it can go in Phase 1 but either something else has to come out (of equivalent size in PMDs) or the date has to move. And the swap only works if we haven’t already started work on the thing that’s coming out. If we’re making a like-for-like swap, the PM is usually OK – no extra work. If we’re adding extra scope without taking anything out we need him or her to agree we can start work “at risk”, because the new feature is not formally in scope yet. An email from the business sponsor confirming this is usually a good idea.
  5. If we have added scope, at some point we need to reconcile the finances. Usually this means waiting until the upcoming release is looking relatively stable, and then raising a single formal change request for all the features that have been added to it. Once the CR has been formally approved, the new features can all be marked as “in scope” on the backlog.
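The swap rule in step 4 boils down to a simple check on PMDs. As a rough sketch of that decision logic (all feature names and figures here are hypothetical, not from any real project):

```python
# Sketch of the "must have" swap rule from step 4 (hypothetical names).
# A change can enter Phase 1 scope only if an unstarted feature of at
# least equivalent size (in PMDs) comes out; otherwise the date moves,
# or the PM agrees we start work "at risk".

def assess_change(new_pmds, candidates):
    """candidates: list of (name, pmds, started) tuples for features
    that could be swapped out. Returns (decision, swapped_feature)."""
    # Only unstarted features of at least equivalent size qualify.
    swappable = [(name, pmds) for name, pmds, started in candidates
                 if not started and pmds >= new_pmds]
    if swappable:
        # Prefer the closest size match, to avoid giving up extra scope.
        name, _ = min(swappable, key=lambda f: f[1])
        return ("swap", name)
    # No like-for-like swap available: the date moves, or we proceed
    # "at risk" until a formal CR catches up with the finances.
    return ("date-moves-or-at-risk", None)

decision, swap = assess_change(
    new_pmds=5,
    candidates=[("report-export", 8, False), ("audit-log", 5, True)],
)
print(decision, swap)  # swap report-export (audit-log is already started)
```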

Because change is inevitable, I sometimes attempt to show this on the burn-up chart. I put a gentle upwards slope on the “scope” line, indicating that we expect change to arrive at an average rate of X PMDs per week. Here’s an example:

Burn-Up Chart - Including Scope Change Estimate

This is usually an unpopular move initially, because it suggests that build will finish later than if the scope doesn’t change. At this point I remind people that the scope has already changed – we can pretend it won’t change again, but it’s more likely that it will. A good way to satisfy the PM is to draw two scope lines on the burn-up chart – a flat one and a sloping one – like this:

Burn-Up Chart - Including Scope Best And Worst Case Estimates

This approach effectively gives a range for the completion date, which (truthfully) reflects the level of planning uncertainty. As we get closer to completion, and we have more actuals, the range decreases.
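The two scope lines translate into a simple projection: build progresses at our actual velocity, and the completion point is where the “built” line crosses each scope line. A minimal sketch of that arithmetic (all figures illustrative, not from the article):

```python
# Projecting a completion range from a burn-up chart. All quantities
# are in PMDs; the numbers below are illustrative only.

def weeks_to_complete(scope, done, velocity, scope_growth=0.0):
    """Weeks until the 'built' line meets the 'scope' line.

    scope: current total scope; done: PMDs delivered so far;
    velocity: PMDs built per week; scope_growth: PMDs of new scope
    arriving per week (0 gives the flat 'best case' line)."""
    if velocity <= scope_growth:
        raise ValueError("scope is growing faster than we can build it")
    return (scope - done) / (velocity - scope_growth)

best = weeks_to_complete(scope=400, done=100, velocity=20)                   # flat line
worst = weeks_to_complete(scope=400, done=100, velocity=20, scope_growth=5)  # sloping line
print(f"completion in {best:.0f} to {worst:.0f} weeks")  # 15 to 20 weeks
```

As the article says, the range narrows over time: `done` grows with every week of actuals, and the velocity and scope-growth figures become better grounded.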

Step 13: Move to Phased Delivery

When we split the project into two phases, we took a significant step in avoiding some of the problems of waterfall delivery as outlined at the beginning of this article. We moved from an “all at once” approach to a “first things first” approach, thus preventing analysis paralysis, scope bloat, and architectural and functional risk.

We also overcame a major hurdle in terms of mindset shift. Once the concept of a phased delivery was on the table, the project dynamic shifted and we were able to have sensible conversations about what the priorities really were.

Managing change also became easier because we only really had to worry about change if it was a “must have” for Phase 1. Any change going into Phase 2 we could capture as a new feature on the backlog and then worry about it later.

And at some point, “later” is going to arrive. Once Phase 1 is well into build (maybe even test), the business sponsors are going to shift their focus to Phase 2, and will start demanding a plan for when that will be delivered.

This is the ideal time to propose a move to a more regular phased delivery. We’ve discovered that we are able to split the scope down into features and deliver them a bit at a time. Rather than doing the remainder all in one go, why don’t we continue the phased approach and define a number of roughly equal-sized phases and deliver additional functionality to the business at regular intervals?

That way, they can get some of the extra stuff sooner, with the higher priorities coming first. And we can level our resources better, keeping our analysts, developers and testers busy with an ongoing stream of work. It’s also easier to manage change because we can drop changes into a later phase and thus avoid disrupting the upcoming phase.

If this idea gets the nod, an important question to answer is how big each phase will be – how frequently will we deliver? At this stage it’s not a good idea to suggest moving to two-week sprints – that’s too big a change. It’s more likely to be two- or three-month phases initially – we still have a mini waterfall lifecycle to execute in each phase.

Once the phase duration is agreed, we should be able to determine the scope of the next phase relatively easily. We calculate our capacity in PMDs, then ask the business to select the features they want delivered in the next phase, based on the PMD estimate for each one.
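Scoping a phase is then a matter of filling the capacity in priority order. A rough sketch, with hypothetical features and figures:

```python
# Scoping the next phase (hypothetical data): walk the backlog in
# business-priority order and fill the phase until the PMD capacity
# runs out.

def plan_phase(features, capacity_pmds):
    """features: list of (name, pmds) already sorted by business
    priority, highest first. Returns the names of features that fit."""
    selected, used = [], 0
    for name, pmds in features:
        if used + pmds <= capacity_pmds:
            selected.append(name)
            used += pmds
        # Features that don't fit simply wait for a later phase.
    return selected

backlog = [("search", 15), ("checkout", 30), ("wishlist", 20), ("reviews", 10)]
print(plan_phase(backlog, capacity_pmds=55))  # ['search', 'checkout', 'reviews']
```

In practice the capacity figure comes from the team: roughly, team size × working days in the phase, discounted for holidays, support work and so on.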

It’s important for us to resist planning more than one phase ahead – by now everybody will have gotten used to the idea that things change, and I can usually convince them that planning any further ahead is a waste of energy. As long as I’m able to show them a burn-up chart that gives a rough idea of the total length of the project they are usually happy not to have a detailed plan for every phase.

Step 14: Trim The Tail

And when I do present the burn-up chart showing how long it will really take to deliver everything the business has asked for, based on our actual velocity, the PM tells the stakeholders how much it’s all going to cost. That’s usually when the business sponsor tells the business minions that they aren’t going to get everything they want, and that they had better decide what they can manage without.

And so a culling session begins – a run through of the various features to decide which ones can be sacrificed.

At this stage, this activity makes very little difference to the work that’s happening on the ground right now – we’re only working on the upcoming phase anyway. And it’s relatively little effort to do, because it’s merely a case of updating the product backlog with some kind of marker showing that some features have been withdrawn, or at least put on death row. I also have to decide how to reflect this on the burn-up chart – probably by excluding the culled features from the “scope” line once the cull is complete, so the sponsors can see when we’ll really be done.

But of course this will rarely be the end of the scope story. Once Phase 1 is live, we will surely discover changes that absolutely must be delivered, and these will eventually end up in scope as new features (see above re. managing change). Managing scope becomes an ongoing process – adding features, estimating, re-prioritizing, culling.

You could almost call this “grooming” :)

Step 15: Repeat and Improve

It’s taken some time, and it’s been a long and drawn-out process (probably many months), but by now, we have broken the back of agile by stealth – we have moved from a single phase waterfall delivery to a phased incremental delivery, where the phases are of roughly equal size and we are delivering features on a priority basis.

Put another way, we have achieved early and continuous delivery of valuable software.

Whether you call this agile depends on your particular view of agile. We don’t have two week sprints. We don’t even have time-boxed phases – we’re still defining the scope of each phase up-front. We’re not doing TDD, or BDD, or pair programming, or refactoring, or continuous integration.

But what we do have is a starting point for all of these things. In particular, we can strive to make the delivery phases shorter and shorter. And then other agile techniques become relevant, such as:

  • Responding to change (especially user feedback)
  • Automated testing to reduce the manual burden of regression testing
  • Moving away from heavyweight process and documentation to a more collaborative style, in order to support shorter delivery cycles
  • Regular reflection on process in order to further improve the team’s productivity for the next cycle

That last bullet point is the most important one of course. There’s always room for improvement. And at this point, you can probably allow yourself to start using some agile terminology too.

So you can call it a retrospective, rather than a lessons learned session :)

One Size Doesn’t Fit All

In the above strategy I have tried to break the agile journey down into as many tiny steps as possible. It represents my experience from four separate projects and it’s worth noting that:

  • I didn’t use every step in every project
  • In some cases I was able to implement several steps at once

So it’s important to tailor the approach to the specific project context. In particular, you really need to judge the project manager and project sponsor’s openness to change. In politics, there is a concept called the Overton Window which relates to how far it’s possible to push policy either to the left or to the right of the prevailing political climate. The trick is to make changes that are within the Overton Window. Over time, this shifts the window in one direction or the other, as illustrated below:
Overton Window

As mentioned at the start of this article, on a recent project I misjudged the Overton Window and had my phased delivery proposal knocked back. On that project I had to resort to smaller steps that were within the Overton Window, but I did eventually get there.


For all its benefits, agile delivery does have some drawbacks, and the agile-by-stealthist can expect to be subject to some challenges, such as:

  • If you don’t design the whole system up front, you might discover something in a later phase which alters how you would have built an earlier phase. In other words, there is a risk of re-work.
  • Doing things in phases is inefficient because you end up re-visiting the same parts of the system over and over as you add features to it.
  • Every phase will require regression testing, and the amount of testing goes up each phase as the system gets bigger.

These challenges are all fair. For seasoned agilists, the answer to at least two of them lies in the discipline of automated testing. But introducing automated testing to an organisation that hasn’t done it before is a big undertaking, and probably not one to tie to the decision to adopt a phased approach. It also sends your stakeholders the message that they are wrong to be hesitant about the approach. So the answer I usually give to these challenges goes something like this:

It’s true that there are disadvantages to a phased approach. But in my past experience, the upsides have always outweighed the downsides.

I might also point out that when Winston W. Royce first described the waterfall model back in 1970, he also pointed out that it probably wasn’t the best way to do things, and that an incremental approach would probably be better. So it’s not like this is a crazy new idea.

How successful this response is depends very much on how much the challenger trusts me. If I’m new to the team, I might not yet have earned that trust, and baby steps might be in order.

Another challenge is working with a third party supplier under a fixed price contract. This makes it harder to implement some of the above practices, but it’s not impossible, and I’ve had at least one success in this area. But that’s another article!


Being an agile-by-stealthist is, by definition, a lonely job. You don’t have a mandate so you’re relying on your own motivation and gut instinct to drive things forward. I’ve found it to be both frustrating and stressful at times…but also very rewarding when it goes well. But it does take a lot of self belief, courage, perseverance and sheer bloody-mindedness!

If you have had any success implementing agile by stealth, or if you have any other strategies to suggest, I’d love to hear about it – please leave a comment below.

4 thoughts on “Agile By Stealth – Converting a Waterfall Project to Agile”

  1. Stuart Rossiter

    This is a fantastic article; thanks Tony. That’s always one of the issues with agile: that the possible *transitions* to agile (and which paths you choose dependent on the culture and personnel) are much less talked about.

    Plus, you bring out that lots of the oddball-sounding agile terms/practices are in many ways just a logical extension of best-practice responses to risk in ‘waterfall’ projects (and maybe should be couched more often in such terms).

    (Plus I’m a fan of your more general ‘analysis and design are the same thing’ argument.)

  2. Stuart Rossiter

    Actually, I was curious as well as to how much you think the approaches you introduced ‘by stealth’ were the most important agile ones, or just the ones that you could ‘get past the culture’.

    Was there some dissonance because you couldn’t introduce things that more naturally go together? The obvious one is automated testing to partner potential iterative re-design (which you mention). Have you in some ways introduced new forms of (smaller) risk by the piecemeal approach?

    1. Tony Heap Post author

      Hi Stuart,

      For me, the most important agile practice is phased delivery, coupled with relentless prioritization and feature splitting, with the overall goals of (a) delivering high value functions sooner, (b) getting early feedback on those things and (c) avoiding delivering low value functions at all and thus not falling foul of the 80:20 rule.

      So, yes, I think I did deliver the most important agile practices.

      Re. dissonance – in the main, the agile practices I’ve introduced “by stealth” have seemed to follow on naturally from one another. Re. automated testing specifically, I haven’t really had the chance to introduce it on any of my “agile by stealth” projects so far, either because the technology hasn’t supported it or because I’ve been too far removed from the development team (e.g. offshore development). We do seem to have managed without it – it just means we’ve had to do some manual regression testing before each “go live”. On one project we went live every 4 weeks and somehow managed to fit in enough regression testing to keep the system stable – and this was a national UK retailer’s eCommerce site.

      I guess in an ideal world you’d introduce automated testing as early as possible, so as to build up your test suite. But it would really depend on whether you could sell it – the benefits only become apparent once you’ve agreed you’re delivering in phases.

      Anyone else had any luck introducing automated testing in a non-agile environment?

  3. Mikk Laos

    Amazing article. Thanks man! We have the same issues in our organization. Leadership supports agile but clients are still learning what it really means, so it is difficult to explain the benefits of agile to them. I never understood why, but the Overton Window is a good explanation of that. Thank you again – I will use this sometime.
