How to Prioritize Requirements

On software development projects, requirements prioritization is a well-established practice, and there is a wealth of information, both in books and on the Internet, describing the various techniques you can use to do it.

But in my experience, there are two key points that nobody ever seems to emphasize enough:

  1. Prioritization is really important
  2. Prioritization is really difficult

In this article I’m going to explain why I think this is the case, and why we therefore need to be absolutely ruthless about prioritization if we want our projects to succeed.

There’s No Such Thing as a Requirement

Before we start, I want to address a terminology issue. Apparently we have requirements, and apparently we are going to prioritize them. So some requirements are required more than others. And it’s likely that some requirements are not required at all.

So why do we call them requirements?

In my view, the term “requirement” isn’t helpful when you’re trying to prioritize. I have previously argued that actually there’s no such thing as a requirement. So let’s lose “requirement” and replace it with something more appropriate. How about “scope item”? Or, even better, “candidate scope item”, because not all items will make it into scope if we do our prioritization ruthlessly. But that’s too wordy, so I’m going to settle on “feature”, which coincidentally is the term I use in Business Analyst Designer Method. Agile teams usually use the term “user story”, or just “story” for short.

Why is Prioritization so Important?

The simple answer to this question, and the one commonly given, is that most projects are limited in either time or resources or both, and it is therefore not possible to deliver everything requested, hence we must choose what to deliver and what not to deliver. In projects with multiple phases, this choice extends to deciding what to deliver in each phase.

And that’s all very true. But putting it so simply fails to emphasize what I think is a really important point.

That point is to do with the Pareto principle, more commonly known as the 80-20 rule. The 80-20 rule basically says that you can get 80% of the job done in 20% of the time. More importantly, it says that the last 20% of the job takes 80% of the time, which is why so many tasks seem to take much longer than you expect, and why so many tasks are often “almost finished”.

I do a fair bit of DIY around the house. A typical DIY task goes something like this:

The Wife: “How long do you think it will take then?”

Me: “Oh, about an hour.”

The Wife: “I’ll come and check in 2 hours…”

[2 hours later]

The Wife: “Aren’t you finished yet?”

Me: “Nearly…I’m just doing the fiddly bits.”

[one further hour later]

Me: “Finished! That took longer than expected.”

The Wife (smugly): “It didn’t take longer than I expected.”

My wife freely admits to tripling any estimate I give her for a DIY task. They do always seem to take longer than expected. And the main reason seems to be this: the fiddly bits.

Software projects are especially prone to the 80-20 rule. Here’s why.

When we think about a particular feature, we usually think about the “happy path” execution of the system. So, for example, when we think about a log-in feature, we think about the user typing their username and password (correctly), and then being allowed into the system. And that happy path scenario generally gives the majority of the value of the feature.

But for every happy path, there are usually several alternative flows (aka the fiddly bits). For the log-in feature, I can think of the following:

  • incorrect username and/or password
  • forgotten password
  • forgotten username
  • account lock-out after 3 failed log-in attempts

Oh look, there was one happy path and four alternative flows. So the happy path gave us the majority of the value (let’s say 80%), but was only 20% of the total effort. That’s the 80-20 rule in action.

And in fact it’s worse than that, because often the alternative flows are more complex to build than the happy path (more fiddly). So for example, account lock-out probably involves some back-end function to allow the support team to unlock the account, and maybe some permissions model that specifies which support team members are allowed to unlock accounts, and so on and so on. So sometimes the 80-20 rule ends up being the 90-10 rule, or the 95-5 rule.

Here’s another example. A retail system I’m currently working on has to deal with sales transactions. 95% of sales transactions are just that: sales. But not all sales transactions are sales – they can be:

  • returns
  • “negative” sales (lottery prize handouts being the main example)
  • voided sales (where the customer forgot their purse)
  • partially voided sales (where the customer didn’t quite have enough money)
  • voided returns
  • partially voided returns

These, again, are alternative flows which are small in terms of number of transactions, but large in terms of development effort, because they all require special treatment.

It’s useful to draw a graph to illustrate the 80-20 rule, as follows:

Cost Value Graph

Notice that the graph has an extremely long tail – the amount of time/effort/cost to deliver that last 20% of value is huge. And because it consists largely of alternative flows (fiddly bits), it’s easy to miss when estimating the project size up front, because the alternative flows tend to surface only when you get into detailed analysis/design – just like the fiddly bits in a DIY task only surface once you’re already up the ladder with your finger pressed over the hole that started leaking water when you removed that piece of tape from that pipe!
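The shape of that curve is easy to reproduce in a few lines of code. Here’s a minimal Python sketch – the flow names, costs and values below are entirely invented for illustration – that sorts flows by value-per-unit-cost and prints the cumulative cost required to reach each level of value:

```python
# Hypothetical flows for a log-in feature: (name, cost, value).
# The numbers are invented purely to illustrate the 80-20 shape.
flows = [
    ("happy path log-in",        2, 80),
    ("incorrect credentials",    2,  8),
    ("forgotten password",       3,  6),
    ("forgotten username",       2,  3),
    ("account lock-out + admin", 6,  3),
]

def cumulative_curve(flows):
    """Sort by value density (value per unit cost), then accumulate cost and value."""
    ordered = sorted(flows, key=lambda f: f[2] / f[1], reverse=True)
    total_cost = sum(f[1] for f in flows)
    total_value = sum(f[2] for f in flows)
    curve, cost, value = [], 0, 0
    for name, c, v in ordered:
        cost += c
        value += v
        curve.append((name, round(100 * cost / total_cost),
                      round(100 * value / total_value)))
    return curve

for name, cost_pct, value_pct in cumulative_curve(flows):
    print(f"{cost_pct:3d}% of cost -> {value_pct:3d}% of value  ({name})")
```

With these made-up numbers, the happy path delivers 80% of the value for 13% of the cost, while the final alternative flow consumes 40% of the budget for the last 3% of value – the long tail in miniature.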

The 80-20 Rule Kills Projects

The Standish Group’s 1995 CHAOS Report states that around 55% of software projects fail to deliver on time or budget, and a further 30% fail to deliver anything at all. Only 15% deliver on time and on budget. Various reasons are cited for this, and many of the reasons are related to the “requirements” process (unclear requirements, changing requirements, lack of user input, unclear objectives and so on).

I don’t have any real evidence to back this up, but I have a strong suspicion that a large part of the problem is to do with the 80-20 rule. The 80-20 rule means that software projects are way bigger than they initially appear to be (as per the graph above). Left unchecked, they cost way more and take way longer than planned. In extreme cases they take so long that they get canned.

So, I suggest that the 80-20 rule kills projects. And your only chance of project survival is to cut the long tail off the cost/benefit graph. Cutting that tail means de-scoping a full 80% of the project scope, delivering only the first 20% in terms of cost but 80% in terms of benefit.

And that’s why prioritization is so important. We don’t just need to prioritize. If we’re going to cut the long tail off the 80-20 graph, we need to be absolutely ruthless about prioritization.

Prioritization is Difficult

So we’ve established that prioritization is really important. Now here comes the double whammy.

It’s also really difficult.

Why is that? Well, actually there are quite a few reasons, but the executive summary is that prioritization has to be done by people, and people are irrational, emotional, egotistical idiots.

People are Idiots

Here’s what I’ve noticed. Business stakeholders tend to be enthusiastic about software projects. That’s not always true, especially if the plan is to replace the stakeholders with the software, but in general, the stakeholders are positive. They are getting a new toy and they are excited. They are emotionally invested. They want the new software to be the best it can possibly be, not just for their sake, but for the sake of all of their colleagues who will be using the new software once it’s finished. They want it to be right.

The very last thing business stakeholders want to be doing is deciding which of the features are not going to be delivered. Or even which ones are low priority, because we all know that low priority is a synonym for de-scoped, don’t we?

So when asked, business stakeholders tend to pretend that features are more important than they really are. Because they really want to have it all. They are emotional and they are irrational.

This is especially true if the people doing the prioritization are not the people holding the purse strings. This is often the case on larger projects. The project sponsor holds the purse, but doesn’t understand the detail well enough to make sensible prioritization decisions, so the prioritization task is delegated to the people “on the ground”. On the one hand this is a good thing – the people on the ground have the right information to make informed decisions. But on the other hand, their lack of budget visibility tends to leave them thinking that the purse is a magic purse which never runs out, so they are less good at making rational cost/benefit judgements – to them, there is no cost, only benefit.

The situation is further exacerbated by the existence of the human ego. Upon being presented with a priority decision which is clearly inflated, the BA challenges the priority call. But the stakeholder sees the challenge on the priority as a challenge on their competence, and so they stand their ground even more firmly. The BA (who also has an ego) sees the stakeholder’s disagreement as a challenge on their competence, so they stand their ground.

I’m not being rude about stakeholders (or BAs) here. They’re only human. But humans are well known for being emotional, irrational and egotistical.

So, people are a part of the reason why prioritization is so difficult. Probably the largest part. But there are other reasons too.

Apples and Oranges

The next problem is that judging the relative priority of software features is like comparing apples and oranges. It’s sometimes very difficult to say whether feature X is more important than feature Y.

The classic response to this is to attempt to determine the costs and benefits for each feature. In theory, that ought to take the emotion out of the decision-making process. But it’s never as simple as that. Benefits are often very difficult to quantify, especially soft benefits such as usability. And it’s also rare for individual features to have benefits that stand alone. There are usually dependencies between features. X depends on Y. Or Y depends on X. Maybe X depends on Y, but X has the big benefit, so I have to do X and Y to get the benefit. Maybe I only get benefit B once I deliver X, Y and Z. So how do I prioritize X, Y and Z?

And then there’s risk to consider. It’s a good idea to start early on the features that carry the most functional or architectural risk, even if they don’t carry the best cost/benefit ratio.

Feature size is another aspect. Do you deliver a small, high value feature first or do you get started on a large, medium value feature, because you know it’s going to take longer to finish?

As I mentioned earlier, there are a number of techniques out there to help to compare apples with oranges. But they can be quite complicated. And none of them are perfect.

So even if you are able to remove all the people problems (emotion, irrationality, egotism) from the process, prioritizing inter-dependent apples and oranges with varying risk and size profiles is still really difficult.

Orange Segments

The next problem you’re up against is to do with the 80-20 rule and the particular way it manifests itself in software projects. As mentioned above, many of the high cost, low value parts of a system tend to be the alternative flows for a given system function, not functions in their own right. So for example, the account lock-out scenario is an alternative flow within the log-in function.

If your features are at “function” granularity, it’s impossible to split out the high cost, low value alternative flows from the low cost, high value happy path flows. But that’s exactly what you need to do if you are to trim the long tail of the 80-20 cost/benefit curve.

So in order to prioritize well, you need to split your oranges into segments. You need to separate out the high cost, low value alternative flows so that you can give them a lower priority and, potentially, de-scope them altogether.

But be warned – if you split your oranges into too many segments, or into the wrong segments, you’re in danger of delivering something that isn’t coherent – it doesn’t hang together well. Too much pruning in the wrong place delivers a half-baked solution.

In my experience, getting the granularity of features right is really difficult.

Oranges and Lemons

As if it weren’t enough to prioritize apples and oranges (and orange segments), on software projects there is the added complication of change. Extra apples and oranges are thrown into the mix. The stakeholders announce mid-project that they have decided to sell lemons instead of oranges (which, as everyone knows, cost more than oranges and are way more bitter).

Things change. The business changes whilst the software is being developed. If the software is delivered in increments, then the users’ experience of using early increments throws up further change. We are constantly having to re-evaluate our priorities to account for change.

So even if you have successfully and perfectly prioritized all your apples, oranges, orange segments, lemons, limes and kumquat slices, you’ve wasted your time because you’re going to have to go and do it all again.

Like I said – prioritization is really difficult.

Help!

As Business Analysts, we have a big challenge on our hands. Prioritization is really important, but also really difficult.

But fear not – help is at hand. There are techniques that we can use to make the task a lot easier. But first, let’s take a look at how not to do it.

How Not to Prioritize – MoSCoW

MoSCoW is probably the most well-known prioritization method. In my experience, it’s also the least useful. I’ve used it many times and the result is always the same. 80% of the features are classified as musts, 10% are deemed to be shoulds, 10% are coulds and 0% are won’ts. And so then you have to go round the loop again, asking which of the musts are most musty, until you end up with categories like high must, medium must, low must and so on, which is just plain silly.

MoSCoW gives slightly better results if you set out the ground rules properly beforehand.

Firstly, you need to make sure everyone is very clear on the project objectives, and the objectives have to be written in the right way. So for example:

Project X objective: to replace legacy system Y

is not a well-written objective, because it implies that each and every feature of the legacy system is to be replicated in the new system, regardless of whether it is still useful. A better version might be:

Project X objective: to build a new system Z which allows department D to continue to function effectively and efficiently without legacy system Y

This version is a better description of the desired business outcome – the words effectively and efficiently allow for some robust discussions about how important a given feature is.

Once you have some well-defined objectives, it’s possible to be more objective about MoSCoW priorities.

But you also need to ask for the priorities in the right way. Specifically, I find it useful to ask the following question:

Are you saying you would refuse point blank to let system Y go live if it didn’t have feature F?

And even then, this isn’t a perfect solution, because the person you are asking is an emotional, irrational, egotistical idiot who isn’t paying for system Y out of their own pocket.

MoSCoW isn’t completely useless – it can give you a rough idea of relative priority, but you really have to take the term “must” with a pinch of salt.

But as a technique for trimming the long tail of the 80-20 curve, my experience is that MoSCoW sucks. It’s just not ruthless enough.

Re-framing the Question

The trouble with the standard approach to prioritization is that it’s somewhat like asking your stakeholders the following question:

Given that we have a bunch of apples and oranges that are really hard to value individually, but most of which contain at least some really important segments, and all of which have some value, and given that you’re really excited about getting this collection of fruit, and given that you aren’t paying for it yourself:

Which ones do you want to throw mercilessly to the lions so they never see the light of day?

In short, it’s a pretty tough ask, and that’s why the answer you so often get is: “I need it all”. Nobody wants to throw their apples and oranges to the lions, they want to keep them for themselves. Let’s ignore for a minute the fact that lions are carnivorous and probably don’t even eat fruit – it’s only an analogy after all.

We need an easier way. And thankfully, there is an easier way. We have to ask a different question. We have to ask this:

What shall we do first?

This is a much easier question to answer. It only requires our stakeholders to decide what is most important. Choosing what you want to do first is still tricky – you still have to compare apples and oranges, but it’s much easier than choosing what to throw away for good. It’s much less final.

Once we know what to do first, we can get on and do it. When we’re done (or nearly done), we’ll come back and ask this:

What shall we do next?

This is, of course, the same question as “what shall we do first?”, only it’s asked at a later point in time and we select from whatever is remaining. Again, the stakeholder merely has to choose what they want most. Nobody is being asked to de-scope anything. Not yet.

We keep doing the same thing over and over, asking the same question, delivering a little bit more, until we are “done”.

The tricky bit is, of course, deciding when we are done. More on that later.

Incremental Delivery

By now, you have probably recognised that I am describing incremental delivery – delivering software in phases.

Incremental delivery is most commonly associated with agile methodologies (such as XP, Scrum or DSDM). Indeed, the Agile Manifesto includes a principle of early and continuous delivery of valuable software, and you could therefore argue that any project using incremental delivery is agile to some extent. In the most extreme case, software can be delivered feature by feature (Kanban style). That said, incremental delivery has been a recognised practice since long before the Agile Manifesto was published, dating back as far as the mid 1960s.

The key property of incremental delivery that makes it useful for prioritization is that there is limited capacity in each increment. Ideally, the capacity is set first (for example by setting a go-live date), and then the scope is chosen to fit the capacity. This practice is referred to as time-boxing. So long as you have decent estimates on the size of each feature (more on this later), the scoping session is as simple as deciding what fits into the increment, as opposed to arguing about how “musty” various features are.
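A time-boxed scoping session of that kind can be sketched in a few lines. This is a deliberately naive first-fit illustration (all the feature names and sizes are invented): walk the features in priority order and take each one that still fits in the remaining capacity.

```python
# Minimal sketch of time-boxed increment scoping (names and sizes invented).
# Features are assumed to be already in priority order; we simply take each
# one that still fits until the increment's capacity is used up.

def scope_increment(features, capacity):
    """features: list of (name, size) in priority order; returns the scoped names."""
    scoped, remaining = [], capacity
    for name, size in features:
        if size <= remaining:
            scoped.append(name)
            remaining -= size
    return scoped

backlog = [("log-in happy path", 5), ("sales transaction", 8),
           ("returns", 6), ("account lock-out", 7), ("lottery payout", 2)]

print(scope_increment(backlog, capacity=15))
```

Note that a smaller, lower-priority feature can slip in ahead of a bigger, higher-priority one that no longer fits – in a real scoping session that’s a judgement call for the stakeholders, not an automatic rule.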

Here’s what I have noticed about how incremental delivery makes prioritization so much easier:

  1. Stakeholders love talking about what to do first – it’s very exciting – especially if breaking the project down into increments means they get something sooner.
  2. You don’t have to prioritize every single feature all at once – all you need to do is agree on the most important ones – enough to fit your first/next increment – so each prioritization event is shorter and much less painful.
  3. Prioritizing features against one another is easier – often it’s possible to prioritize based on gut feel rather than a rigorous cost-benefit analysis – because you’re not trying to prioritize all features, merely pick out the most important ones.
  4. The approach deals with change very well – so long as you lock the scope of your current increment, and delay the decision on what goes in the next increment until as late as possible.
  5. You avoid coming up against one another’s egos – so long as you set the rule that each increment has a fixed size, you never have to challenge the stakeholder for over-prioritizing all the features, because they are only choosing which ones to do next, not how important they all are.

When people talk about the benefits of incremental delivery, they usually focus on the point that it delivers business value early and continuously (as per the Agile Manifesto). They also talk about the opportunity to get feedback from the business users and incorporate changes into later increments. These are indeed huge benefits, and are well worth recognising. But it’s less common for people to point out (or even to notice) the equally important benefit of trimming the long tail of the 80-20 curve. It’s great to deliver business value early, but not killing your project is, to my mind, even better.

Trimming The Tail

A key objective of the prioritization effort is, of course, to trim the long tail of the 80-20 curve. To avoid delivering the high cost, low value features at all. To maximise the amount of work not done.

Incremental delivery supports this objective especially well. By the time anyone is talking about not delivering anything, the users already have a live system that is delivering real business benefit. They have discovered that they don’t need half the things they originally thought would be really useful (but they have of course thought up some extra things that really would be useful).

And, importantly, the budget holders can make an evidence-based judgement on whether the system is “good enough” yet – whether the originally-stated business objectives have been met. They can choose to fund further increments or not. They can choose to trim the tail.

They can also choose to reduce the capacity of the delivery team, effectively moving from project mode to BAU (or maintenance) mode. New features can still be requested, and prioritized, but the capacity to deliver them has been throttled. The tail is not trimmed entirely, but only the highest value, lowest cost features on the backlog will get delivered.

Feature Splitting

Earlier I talked about orange segments – splitting features apart in order to separate the simple, “happy path” flows (high value, low cost) from the gnarly edge case flows (low value, high cost).

I can’t emphasize enough how important this technique is. It’s vital if we are to trim the tail of the 80-20 curve. It’s not easy, because it involves constantly challenging the scope of a given feature – questioning whether to include everything in a single feature (to be delivered together) or whether it’s possible to separate some bits out and deliver them later without spoiling the coherency of the initial delivery. The process is, however, familiar to agilists – it’s often referred to as splitting epics into stories, and so the agile community is getting more and more used to the concept.

Incremental delivery is really helpful when splitting features, because it allows you to say to your stakeholders, “let’s do these high value bits now, and we’ll do these low value bits in a later increment”. That’s much more palatable than “can we please de-scope these low value bits?” The funny thing is, the stakeholders usually know deep down that they probably aren’t going to get the low value bits, but delaying the decision helps to soften the blow – especially if you’ve already delivered something useful when the decision point arrives. It allows you to get on with something rather than waiting for a decision on everything.

As with prioritization overall, feature splitting combined with incremental delivery has a double benefit. Most people focus on the benefit of identifying something high priority to deliver early and thus derive business value sooner. But the happy side effect is that the low value, high cost part of the feature slips down to the bottom of the backlog, eventually to be culled in the Trimming Of The Tail. You really do get the best of both worlds.

An Ongoing Process

In my experience, prioritization within the context of incremental delivery is best done continuously, not just once. The initial prioritization is only the beginning, the starting point.

There are two key reasons to prioritize continuously:

  1. Business change. As time passes, the business changes and features which were high priority suddenly aren’t. More likely, some emergency will arise, pushing a newly-identified feature right to the top of the list. This is especially the case once the business starts using the system and discovers what they really wanted!
  2. Chicken-and-egg. For a given feature, before you have done any analysis you don’t really know how big it will be (in terms of delivery effort), and also the business value is usually mere conjecture. So you can only guess at the cost/benefit ratio, and hence prioritization will be approximate. Once you complete the analysis and the team provides a delivery estimate, you can re-evaluate the cost/benefit ratio. Often, the cost is higher than expected (the devil being in the detail) and often the benefits aren’t as great as initially hoped. Sometimes, a supposedly “must have” feature turns out to be not needed at all because the BA has identified a simple business workaround that can be used instead. So it’s a good idea to re-visit the priorities after the analysis phase is complete.

I generally aim to have a weekly catch-up with my business stakeholders to discuss new features, check the priorities, and give them a general progress update.

Just-In-Time Prioritization

As mentioned earlier, when prioritizing for incremental delivery, it isn’t necessary to prioritize every single feature. The question you’re trying to answer is “what shall we do next?” You’re aiming to identify enough features to fill the capacity of the next delivery increment.

But the chicken-and-egg problem means that initially (before analysis) you don’t know enough about either the size or the eventual priority of each feature – so you don’t know up front which features should go into the next increment.

If you think about it, you only need to prioritize enough features to keep your BA team busy. If you have a team of 4 BAs, and each BA can work on at most 2 features, then you only need to prioritize 8 features for analysis work. Placing a limit on work-in-progress like this is referred to in agile circles as a Kanban style of delivery. Some agile teams use Kanban all the way through the delivery lifecycle – for analysis and build. But it can be used just for the analysis phase, with build happening in fixed-scope increments.
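The WIP-limit arithmetic above can be sketched as follows (the team sizes and feature names are invented for illustration):

```python
# Sketch of a Kanban-style WIP limit for the analysis queue (numbers invented).
# With 4 BAs each handling at most 2 features, the analysis WIP limit is 8:
# we only prioritize and pull that many features from the backlog at a time.

def pull_for_analysis(backlog, in_analysis, bas=4, per_ba=2):
    """Return the backlog features to pull so analysis stays at its WIP limit."""
    wip_limit = bas * per_ba
    free_slots = max(0, wip_limit - len(in_analysis))
    return backlog[:free_slots]   # backlog assumed already in priority order

backlog = [f"feature-{i}" for i in range(1, 21)]
print(pull_for_analysis(backlog, in_analysis=["feature-A", "feature-B", "feature-C"]))
```

With three features already in analysis, this pulls the top five from the backlog; once the queue is full, it pulls nothing until a slot frees up.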

So, the BA team works on features in priority order until they have enough to fill the next increment. At this point, you can agree the scope of the increment with the business stakeholders. But actually it’s a good idea to delay increment scoping until the last possible moment (i.e. just before build starts). The more features that are sized and ready for build, the more choice you give your stakeholders. It also allows you to react to late changes in priority.

So what you end up with is a stack of features that have completed analysis and are waiting to be scoped into the next increment. Ideally, the BA team is ahead of the game so that there are always enough features that are ready for build, and with some to spare. This means the business stakeholders can make a pro-active choice as to what goes in the next increment. In practice, it doesn’t always work out that way. In my experience, the BA team is often on the back foot trying to get enough features analysed to fill the next increment. This is less than ideal, because it can result in lower priority features going into delivery just to fill up an increment and keep the development team busy.

Conversely, if the BA team works too far ahead (Waterfall being the most extreme case), then they are liable to waste time on features which are subsequently de-prioritized due to business change. There is a sweet spot where the BA team is a little ahead of the game, but not too far ahead.

What Do We Do First?

So what do we do first? As mentioned above, this is still not an easy question to answer. All else being equal, we are looking to deliver features with the highest benefit and the lowest cost first. We’ve already talked about how the chicken-and-egg problem makes this difficult to assess. But also, the cost/benefit ratio is not the only factor to consider. Other factors include:

  • Dependencies – if feature A depends on feature B then we need to do feature B first.
  • Functional risk – if a given feature is novel, there is a risk that the first delivery of it will need refining, and we will only find that out by putting it live. Hence an early delivery is advisable.
  • Architectural risk – features that are architecturally significant or that might not work at all should be started early – better to find out your project is doomed sooner rather than later.
  • Size – say feature A has a cost (effort) of 5 and a benefit of 5, and feature B has a cost of 25 and a benefit of 25. Is it better to deliver feature A first for a “quick win” or is it better to deliver feature B first because it will take longer to finish? In my experience there is no easy answer to this.

Various techniques have been devised over the years to help with this challenge. As I mentioned above, I’ve used MoSCoW in the past but I don’t rate it that highly. There are other techniques out there that are more sophisticated. Karl Wiegers describes an approach in this article which sounds like a good, pragmatic balance between quantitative analysis and gut feel.
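For a flavour of what a more quantitative technique looks like, here’s a rough sketch of a value-versus-cost-and-risk scoring scheme. To be clear, this is my own simplified illustration, not a faithful implementation of any published method, and all the feature data and weights are invented:

```python
# A rough, simplified sketch of quantitative prioritization:
# score = value / (cost + weighted risk), higher is better.
# All feature data and weights below are hypothetical; real schemes
# typically normalise these figures as percentages and weight each term.

features = [
    # (name, value, cost, risk) on arbitrary 1-9 scales
    ("log-in happy path",   9, 2, 1),
    ("sales transaction",   9, 5, 3),
    ("account lock-out",    3, 6, 2),
    ("partial void return", 2, 7, 2),
]

def score(feature, cost_weight=1.0, risk_weight=0.5):
    name, value, cost, risk = feature
    return value / (cost_weight * cost + risk_weight * risk)

for f in sorted(features, key=score, reverse=True):
    print(f"{f[0]:20s} score={score(f):.2f}")
```

Even a crude score like this can be a useful starting point for the conversation – the ranking it produces is only ever as good as the guessed inputs.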

I’ll be honest though – I haven’t really tried anything more sophisticated than qualitative reasoning – otherwise known as gut feel! With so many competing factors to consider, I can’t help feeling that there is no perfect answer anyway, so there’s not much point trying to be too clever about it. I stand by my earlier statement: prioritization is really difficult.

I’m open to suggestion though. If anyone out there has had some actual success with a particular technique then I’m keen to hear about it.

Prioritization in Action

Here’s what one of my prioritization sessions looks like.

First of all, as mentioned above, I do prioritization on an ongoing basis as part of a weekly catch-up with my business stakeholders.

As is common with agile teams, I use a task board to track our project’s progress. The task board is situated near the delivery team, and we have daily stand-ups around the board. Each feature is represented on the board by an index card, and the position of the card indicates both its status and its priority. Here’s an example feature card:

Feature Card

I hold the weekly stakeholder catch-up sessions at the task board. It’s the ideal place for stakeholders to get a very visual progress update, and we can also take full advantage of the power of index cards and re-prioritize features on the fly in a very tactile, collaborative manner.

Here’s how I like to set out my task board:

Task Board Layout

In general, features flow from left to right across the board.

The Delivery section of the board contains features which have already been scoped into an increment and are in the process of being delivered (build/test/deploy).

The Backlog section of the board contains features that we’re not working on at all yet.

The Analysis section contains features that have been prioritized for analysis work and are either waiting to be worked on, are being actively worked on, or have completed analysis and are waiting to be scoped into a delivery increment. Here’s a zoomed-in view of the Analysis section:

Task Board Layout - Analysis Section

The various columns indicate how far each feature has progressed through analysis. There are two analysis phases – Define and Design. Full details of what happens in each of these phases are given in Business Analyst Designer Method, but here’s a summary:

  • In the Define phase, enough work is done to understand the cost and benefit of the feature.
  • In the Design phase, enough detail is provided so that the developers know what to build and the testers know what to test.

The horizontal bands indicate the priority of each feature, from 1 (highest) to 4 (lowest). The priority levels are relative, rather than absolute, so the definition of priority 1 is “higher than priority 2”.

The objectives of the weekly prioritization are as follows:

  • Make sure there are enough features in the “Ready for Define” column to keep the BA team busy for at least a week (i.e. until the next session).
  • Make sure all features in the Analysis section are prioritized correctly, relative to one another.
  • If the next increment is due to go into build within the next week, then agree the scope of that increment.

Features can be re-prioritized regardless of which column they are in. A given feature might go through Define as priority 1 but then get moved to a lower priority once the costs and benefits are better understood. Re-prioritizing a feature is merely a case of moving it up or down the board, whilst keeping it in the same column.

To assist in the prioritization task, it helps to have as much relevant information written on the feature cards as possible. At a minimum, I make sure we have the feature size on the card, if we know it yet. Indicators of high architectural or functional risk, and of dependencies, are also useful.

The great thing about this layout is that it’s very easy to see how things are going. Assuming the BA team is working in priority order, you can expect to see a pattern in the Analysis section – cards that are further up the board should also be further to the right, forming a diagonal line from top right to bottom left. This won’t always be the case – if a higher priority feature is taking a long time for one BA to finish, or is blocked pending a business decision, lower priority features might be further to the right.
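This diagonal expectation can even be checked mechanically. Here’s a minimal sketch in Python – the column names and card fields are illustrative assumptions, not from any real tool – that flags pairs of cards breaking the diagonal, i.e. a higher priority card that has progressed less far than a lower priority one:

```python
# Columns of the Analysis section, in left-to-right order (assumed names).
COLUMNS = ["Ready for Define", "In Define", "Ready for Design",
           "In Design", "Ready for Delivery"]

def diagonal_violations(cards):
    """Return (higher, lower) name pairs where the higher priority card
    (lower priority number) sits further left than the lower priority one."""
    violations = []
    for a in cards:
        for b in cards:
            if (a["priority"] < b["priority"]
                    and COLUMNS.index(a["column"]) < COLUMNS.index(b["column"])):
                violations.append((a["name"], b["name"]))
    return violations

cards = [
    {"name": "Login",  "priority": 1, "column": "In Design"},
    {"name": "Search", "priority": 2, "column": "Ready for Design"},
    {"name": "Export", "priority": 3, "column": "In Design"},  # further right than Search
]
print(diagonal_violations(cards))  # [('Search', 'Export')]
```

In practice you wouldn’t script this, of course – a glance at the board gives the same answer – but the check makes explicit what the eye is doing: looking for low priority cards overtaking high priority ones.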

My experience is that business stakeholders really like this approach to prioritization and scoping. It’s very transparent and it puts them very much in control.

The Downside

I have presented incremental delivery as somewhat of a silver bullet to solve the prioritization problem. And in my experience, the benefits are indeed great – which is why it has become so popular, especially within agile circles.

But there are some disadvantages to be aware of. Namely:

  • Delivery overheads. Every increment that is delivered requires some form of regression testing and also needs to be deployed. The regression testing burden can be especially acute when practising feature splitting, because of the need to revisit the same functions over and over as edge cases are added to them. Agile teams try to minimise these overheads by automating testing and deployment, but nevertheless they are overheads.
  • Architectural coherence/rework. The architecture implemented in earlier increments can be invalidated by features arriving in later increments. The team can either re-work (“re-factor”) the architecture or leave it incoherent. The former requires effort and the latter leaves the system less maintainable, neither of which is ideal. Delivering architecturally-significant features earlier helps to mitigate this problem, and automated testing makes re-factoring less painful, but even so it’s not uncommon to hear someone saying “if only we’d known at the time…”
  • Perception of poor planning. People (especially managers) who are used to plan-driven software delivery can struggle with time-boxed incremental delivery, because their mindset tends to be that scope should be fixed and a plan drawn up that says when we’ll be done. Incremental delivery is perceived as encouraging scope creep and thus never-ending – especially when there is a backlog of work that keeps growing. In my experience, the best way around this is to agree the number of increments to be delivered up-front, which gives you a fixed budget and a fixed delivery date. The tricky bit is getting the capacity right. But you can still expect to have many battles with plan-driven folks.

That said, my experience is that the upsides of incremental delivery far outweigh the downsides. It’s just useful to be aware of them, and to accept them – especially in retrospective or lessons learned sessions where there is a tendency to focus on negatives.

Incremental Delivery by Stealth

What do you do if your project is not delivering in increments and your project manager doesn’t seem keen to change? Well, you could either live with it, or you could do what I have done in the past – introduce incremental delivery by stealth.

Here’s how it works.

First of all, you tell your PM that you’re going to deliver the functional specification in increments, just to break the work up into manageable chunks, and also to allow you to track progress better. And as a risk mitigation, you’re going to specify the highest priority functions first.

Then you suggest that, hey, maybe we could speed things up a bit by getting the development team started on building the high priority functions once they are specified. There’s a small risk that later analysis might invalidate earlier development, but the benefits probably outweigh the risk. It will also spread the work out and maybe even mean you can get away with fewer developers.

Then you suggest that, hey, maybe the test team could get started earlier too and spread their work out.

The final step is to suggest getting the high priority features deployed into a “trial” environment for the business to “have a play with” – again, this is a risk mitigation – the sooner you get deployed, the sooner you can resolve any deployment or functional snags.

Before you know it, your PM will be delivering incrementally without even knowing it.

The next challenge is trimming the long tail of the 80:20 curve. This is harder if your project has a fixed scope – you have to explicitly raise the question of de-scoping functionality. There are a few tricks you can try:

  • Trading changes. Inevitably, the business will request changes. You could trade out low priority functions in order to keep the overall size of the project roughly the same.
  • Delay tactics. The longer you leave it before proposing a de-scope, the better it will be received, because there will (hopefully) be more to show on the higher priority features, and the business might be more willing to de-scope lower priorities.
  • Keeping the project on track. Let’s say you allowed 6 weeks for analysis. At the end of week 5 you can look at how much is still to be done and propose to only do what can be done in the next week – in the interests of keeping the project on track. At this point it’s useful to start hinting at a potential “phase 2” for the project. This is slightly sneaky, because phase 2 doesn’t necessarily exist, but I find the following sentence to be a good one to trot out: “well, if we have enough features left over that are worth doing, we should be able to develop a business case to justify a second phase”. And of course, if you can’t develop a business case then it’s absolutely right for the lower priority features to be binned.

Some of the above tricks are particularly difficult to pull if there are fixed-price commercials in play. In which case, basically, you’re doomed. Next time, don’t go fixed price.

Being Ruthless

At the top of this article, I talked about being absolutely ruthless in prioritizing features – because the long tail of the 80:20 curve can kill projects.

And on projects that don’t use incremental delivery, I think this is very true – I have witnessed some very bitter scope battles between the business and IT as the “evil” project managers draw swords with the “stubborn” stakeholders, neither side willing to back down.

But in my experience, incremental delivery does take some of the ruthlessness out of the process. In particular, it helps to remove the conflict between business and IT. Ruthlessness is still needed – in particular, carefully-considered, judicious use of feature splitting is absolutely the order of the day in order to maximise the amount of work not done. But my experience is that this is met with much less business resistance when using incremental delivery, because the business sees that the benefit of getting something sooner outweighs the downside of getting a low priority thing later.

With incremental delivery, business and IT can be ruthless together.

Conclusion

To summarise, in this article I have proposed the following:

  • Prioritization is really important – the long tail of the 80:20 curve can kill projects.
  • Prioritization is really difficult – people are irrational, emotional, egotistical idiots!
  • Incremental delivery is by far the best way to crack the prioritization nut…
  • And is most powerful when done in conjunction with ruthless feature splitting.
  • Prioritization (within incremental delivery) is an ongoing process which is best done collaboratively in front of a task board.
  • And even then it’s still one of the hardest parts of the BA’s job!

These proposals are based on my own experience. If your experience differs, I’d be interested to hear, so please tell me about it by leaving a comment below.

7 thoughts on “How to Prioritize Requirements”

  1. Andre

    Great insight, thanks. You are correct, the word “requirement” is confusing.

    “…I’m going to settle on ‘feature’.”
    Please help me out, what is your definition of feature and function?

    “My wife freely admits to tripling any estimate I give her for a DIY task.”
    I double my estimates on everything and sometimes (mostly) take a bit longer. I think I will be following your wife’s advice, please thank her for me.

    “The fiddly bits”
    Yes, you will never identify all the fiddly bits at the start of a project, but from my experience, the more people there are on the project with an aptitude (and passion) for IT, the fewer fiddly bits seem to pop up later on. I’ve been on projects where I was just about jumping up and down about features being missed/excluded, because to me it was clear the project could not proceed without them, but I was overruled by people with (in my eyes) no aptitude for IT. I have also been on projects where it was clear the majority had the aptitude to “see” the system design in their mind, and could tell when a fiddly bit was a small screw in the works that was vital to the project.

    “The 80-20 rule means that software projects are way bigger than they initially appear to be.”

    Indeed. I would however like to add that many projects sink because of incompetence starting at board level downwards. If decisions are made by people that do not understand (can’t “see”) the connection (in the design) or impact of their decision . . . . . . . and there is never any comeback, then it is a recipe for another “lessons will (not) be learnt” project.

    “…..and people are irrational, emotional, egotistical idiots.”
    I would like to know how you handle these scenarios. I’ve come to the conclusion that when “the company” tells you they want a rubbish system then, as a colleague once told me, you just smile and take the nice man’s money . . . but I admit, I struggle with it.

    “Or even which ones are low priority, because we all know that low priority is a synonym for de-scoped, don’t we?”

    The “best” de-scope I’ve ever seen happened as the project progressed – patching in new work and re-testing, which will make the final bill of the project much more than any estimates. An incremental delivery can save time and money, but you need an overall technical design, as detailed as you can get it, at the start, and you need people on the project who have the aptitude for prioritising and identifying what is really needed – even if that means adding something early on that is not required right now, to avoid massive re-work and re-testing later on. From my experience, in the long run an “Agile” project will end up costing more (in development, production and future modifications) than a project with a design at the start that is then delivered in increments. Unless new increments are designed in, you end up with spaghetti, because the last time I saw real Quality Assurance on a project was more than 15 years ago.

    “Some of the above tricks are particularly difficult to pull if there are fixed-price commercials in play. In which case, basically, you’re doomed. Next time, don’t go fixed price.”

    Incremental delivery, forget fixed price.

    I agree with your statement that it is all about design, but I would trump it with aptitude. IT projects are a mess because there are so many people in IT with no aptitude for it. For 10 years I had the privilege of working for an IT company that vetted its IT employees for aptitude. We nearly always delivered complicated and large projects on time and in budget, and if we missed then it was by very little. It was fantastic to work in teams of people that had an aptitude for IT . . . how I miss those days.

    1. Tony Heap Post author

      Hi Andre,

      Thanks for your comments. Here are some responses:

      1) I define a feature simply as a unit of scope delivery. There is no fixed size for a feature – the only rule is that, as with an agile story, it should deliver some useful business functionality. You can read more about features in Business Analyst Designer Method.

      2) The wife says you’re welcome. The invoice is in the post.

      3) It is tempting to just shut up and take the money, but I’m a bit like you – I find it hard. Recently I’ve become rather passionate about de-scoping non-essential work – to the point where our project got in trouble because the BA team wasn’t delivering enough work to the build team – all the features we were analysing kept “evaporating” as we showed them to be not worth doing! On my projects, I try to keep the decision making down at the level of “on the ground” business stakeholders – unless they appear to be making bad decisions, in which case I will escalate. The good thing about incremental delivery is that, if you get it right, you get the board to set the project objectives and then fund the increments without them getting too involved in what’s actually being delivered at the detail level. You can then use the project objectives as an anchor to stop the on-the-ground stakeholders from going too wild on scope creep. Plus if you are time-boxed then there is a hard cut-off on what they can have.

      That said, sometimes management feels the need to get involved – especially if I have escalated some particularly tough problem. Often they make the “wrong” decision and sometimes you can fight it but sometimes you just have to roll with it – learn to choose your battles or you will exhaust yourself. That’s my experience anyway.

      4) We are successfully doing incremental delivery under a “fixed price” contract. But the contract is for a fixed capacity (i.e. a fixed number of increments), so you could argue it’s time & materials in disguise. That said, the supplier commits to the scope at the start of each increment, so once build starts it is truly fixed price.

      5) Re. aptitude, I’ve come to the conclusion that IT is plain hard. For more thoughts on this see Why is Business Analysis So Hard?.

  2. Kevin Chase

    I found this article very interesting. It touches on a lot of pain points from my waterfall past. I am interested in your thoughts on the biggest pain point in my current, semi-agile experience.

    Some background:
    We began a project 18 months ago, planned to be agile. It had a nicely prepared backlog, which the project team spent a calendar month reviewing with the business. From those initial meetings, a 9 month project was decided upon, with 4 week sprints, each of which was to be delivered to production.

    Well, the results were what you might expect. When we got into the details of some of the backlog items, they were much bigger than we thought. We also had not taken into account (the entire team being new to agile) the large amount of re-architecting of the code that was required to support the new features (and current company standards). Of course, we also had emergency compliance items come up several times during the project that had to be added to the backlog. And, as you discuss, we discovered things that were more important than some backlog items once we rolled out features.

    My largest pain point has been, how could we have avoided spending twice as much time on this project, and still not have delivered some important items from the backlog? We delivered a lot that wasn’t in the original backlog, but sponsors focus on the work not done. Clearly, we didn’t do enough work up front to truly understand the scope, and to plan for the unanticipated.

    I say “clearly” above, but it’s not really clear how to best estimate scope for an agile project. How DO you do a better job of defining scope early enough in the project to properly match the capacity and time box (thus budget) you have?

    I’ve asked a lot of experienced agile practitioners this question, and have gotten surprisingly little helpful advice. I think you are on to the answer, and am interested if you have any further ideas.

    Thanks!

  3. Tony Heap Post author

    Hi Kevin,

    I’m glad you found the article useful. I could probably write a whole article on what I have learned about estimating project size/duration. Let me see if I can summarise what I think are the key points:

    1) Estimation is really hard!

    2) We have a tendency as an industry to pretend we know how big a project (or a feature) is when actually we don’t. So when a business stakeholder asks when a project will be done, we draw up a plan and give them a date, based on what little information we have about the scope at the time. Due to the nature of software (80-20 rule), the initial estimates are wildly inaccurate and we end up in cycles of re-planning.

    In my view we would do a lot better as an industry to stop pretending we know how big something is when we don’t. We could have a much more honest relationship with our stakeholders by confessing that actually building software is nothing like building bricks and mortar, and truthfully we have no idea how big a project really is until we are at least part way through it.

    3) The trouble is, that approach won’t get the budget signed off.

    4) So you still need to do some kind of high level estimate. Here’s the algorithm I have used in the past, with some success:

    Step 1: Identify as much scope as you can
    Step 2: Get your dev team to estimate the size
    Step 3: Double the estimate. Seriously, double it. Don’t add 20% or 40% or even 60% contingency. Double it. This caters for the fact that, even though (a) you won’t deliver all the features on the original scope list, (b) the features you do deliver will be bigger than you think, and (c) there are plenty of features that you don’t yet know about. The idea of doubling the estimates came from Kent Beck back in 1999, and I’ve found it to be a useful heuristic.
    Step 4: Set out your delivery schedule – timeboxed increments – with a clear end date.
    Step 5: Set out a clear expectation early on that you will not deliver everything that is on the original scope list – rather, you will deliver features by priority with an aim of delivering against the project objectives.
    Step 6: Practice ruthless prioritization (especially including feature splitting), as described in the article above, to make sure you are only delivering high value functionality.
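    Steps 2–4 boil down to some simple arithmetic. Here’s a sketch with made-up numbers – the feature names, sizes and per-increment capacity are all illustrative assumptions:

```python
# Step 2: the dev team's raw size estimates (ideal dev-days, made-up numbers)
dev_estimates = {"login": 10, "search": 15, "reporting": 20, "export": 5}

raw_total = sum(dev_estimates.values())  # 50 dev-days

# Step 3: double it - no 20%/40%/60% contingency, just double
budget = raw_total * 2  # 100 dev-days

# Step 4: carve the budget into time-boxed increments with a clear end date
capacity_per_increment = 25  # assumed team throughput per increment
increments = -(-budget // capacity_per_increment)  # ceiling division

print(budget, increments)  # 100 4
```

    The point of the doubling is that it absorbs both the growth of known features and the arrival of unknown ones, without pretending you can estimate either precisely.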

    Steps 4 and 5 are important because together they change the behaviour of your stakeholders when prioritizing. Once they realise that they won’t get everything they’ve asked for, they will start doing prioritization properly.

    You also mention time lost due to architectural changes/refactoring. There are two mitigations you can apply:

    1) Identify architecturally significant features and deliver them early (these are sometimes referred to as “spikes”)
    2) Practice TDD and rely on automated tests to de-risk architectural re-factoring

    For more ideas, see this article on Business Analyst Designer Method.

    Hope that helps.

  4. Alexandre Klaser

    After some years working with agile methodologies, there’s no doubt that an iterative and incremental model is a very good way of delivering working software in most of the project/product scenarios I’ve been involved in.

    But we are still failing when it comes to leaving room for discovery and innovation. So, one thing I am doing on the projects I work on is to switch from an inductive model to a generative model.

    When writing epics and then dividing them into stories, we are coming from a presumed set of functionalities (which will be described as user stories). By doing this, we assume that these requirements are already known and are part of the imagined solution for a given problem. This is the inductive model, which goes like this: Problem → Solution → Features → Epics → Stories → Working software.

    This sequence assumes that the best solution to the problem is the one initially thought of, and that the software that represents the solution (and therefore solves the problem) will be obtained when all features are implemented as described in the epics (and consequently in the stories). So far, so good. But how do we know that all the features described are necessary to provide the solution? Why are we so sure that the imagined solution actually solves the problem? What’s more, we base all of our prioritization on this fixed set of features and leave little (or no) room for learning. The scope may increase without control, and there is no opportunity to validate how much is enough.

    In a generative model, we come up with the goals for a particular project and use them to derive progressively more well-rounded solutions. This model makes the business goal “generate” the user stories that will fulfil it. To do this, we assume that there is a problem to be solved, and come up with hypotheses about what we believe will bring us closer to meeting that goal. This is the sequence: Objective → Hypothesis → Stories → Working software.

    Someone may wonder whether this model could ever deliver the “required” functionality, given that it is not formally described. It is impossible to answer this without another question: who “requires” that functionality? The analyst? The manager? In my opinion, the only person who can legitimately need some software is its end user. The design team should meet only the needs of this user, so any functionality “required” by the project team is solely the staff’s opinion as to what would meet the user’s needs. No amount of analysis, in my experience, is enough to identify it. Working software is the best measure of success in this regard.

    I recently posted an article describing a few exercises a team can run to reduce its backlog while prioritizing it given a set of goals, trying to ensure that they build just enough to validate their assumptions and move forward, delivering working software and real business value.

    1. Tony Heap Post author

      Thanks for your input Alexandre. Your “generative” approach is interesting. I’m a big fan of making sure we all understand the project objectives and are working towards them. I try to practice this approach on every single feature I analyse/design – and in particular I make sure I consider all the options (including the “do nothing” option) before we agree what we are going to build for a given feature. I call this “Options Engineering” and I’ve included it as an explicit step in my own personal BA methodology, Business Analyst Designer Method. Knowing the project objectives is really useful for prioritization too – you can test each feature against them to see whether and how much it contributes towards achieving them.

      I also like the use of the word “hypothesis” in your model – it nicely gets across the idea that we’re really not sure whether what we are building is going to deliver on the objectives until it gets used for real – and then we may have to iterate a little to get it right. There’s a nice parallel to the scientific method here.

      The challenge of course is knowing when you’re done. The iterative model doesn’t have a clearly defined end point – you can keep iterating, improving, honing for as long as you like. So when do you stop? I guess the answer is you stop when you’ve met the objectives – in which case you probably need some way of measuring that. Hmm…that sounds like an idea for another article!

      1. Alexandre Klaser

        “The challenge of course is knowing when you’re done” – exactly!

        My first attempt to tackle this issue was to create a matrix with multiple levels of attainment of goals (see this presentation).

        With small and finite increments at each of those levels, you can have functional tests, deploy to production, and test user experience. This is validated learning. We measure what we accomplished and check whether or not a given hypothesis led us to meet our goal. This helps us to either pivot or persevere.
