In my previous article I described my first ever agile project as a Java developer. I highlighted some of the challenges that we faced in terms of the requirements gathering and analysis work.
In this article I’m going to talk about my first agile project as a business analyst, and how I took on some of the learning from the previous project.
I was hired for this project in Summer 2009 (a very rainy summer here in the UK, since you asked – 2010 was soooo much better). When I arrived I discovered that there was no project manager to speak of and no particular process I had to follow. The assumption was that I would take a standard waterfall approach. However, the project was relatively small (1 application, 4 developers, 8 months), and the lead developer was open to trying new things. So I decided we would instead take a ride down Agile Avenue.
Defining the Agile Business Analyst Role
As mentioned in my previous article, there is much debate over the role (if any) of a BA on an agile project. Writing myself out of a job didn’t seem like such a smart option, so I decided that we would indeed have a BA on this project. And if any agilists asked, I would describe myself as the (Scrum-style) Product Owner in order to put them off the scent.
Articulating the Vision
The first artifact I produced was the Vision. The terminology is taken from RUP, but basically it’s a Terms of Reference or Project Initiation Document by another name – background, objectives, high level scope, methodology, timelines, stakeholders, assumptions, constraints, risks and issues.
The key ‘agile’ thing I did with this document was to produce it as a slide deck instead of a standard text-rich document. My main objective here was to avoid all the ‘window dressing’ that usually goes with a written document (you know – full grammatical sentences, fonts, formatting, alignment and so on). But it had a very useful side effect too. On numerous occasions I needed to give a project overview to various interested parties. The Vision was a ‘ready to go’ presentation perfectly suited to this purpose.
You can download a copy of the Vision.
Moving from document to slide deck was also the start of an important mindset shift for me. In the past I’d been used to producing high quality documentation, and I took pride in my work, often spending valuable time fiddling around with the wording and sentence structure to get it ‘just right’. In slide deck format, with the window dressing stripped away, and the content laid bare, my artifact was revealed for what it really was: a means to an end, not an end in its own right.
Identifying Scope and Priorities
The Vision identified 5 or 6 key scope areas that the project would deliver. During the scoping stage I engaged with the various stakeholders (interviews, workshops, questionnaires, the usual stuff) and put together what we called the Feature List. Each feature was a single paragraph description of some desired system behaviour. The Feature List was a spreadsheet so, as with the Vision, there was a focus on content over format.
You can download a copy of the Feature List.
The Feature List was (deliberately) unprocessed. Some of the features were very small and specific. Others were huge and rather woolly. In some cases the features looked like they would probably overlap, and others looked like they might conflict. I tried not to let any of that worry me too much at this stage – I was trying to avoid doing too much detailed analysis up-front. I called them features rather than the more agile term user stories because I didn’t feel they were concrete enough to be classified as stories just yet. This was probably a mistake, of which more later.
I worked with the stakeholders to prioritise the features (using MoSCoW). I repeatedly made it very clear that the project was to be time-boxed (there was, in any case, an immovable go-live date) and we would deliver features in priority order until we ran out of time.
I then worked with the lead developer to estimate each of the features. Of course he complained that some of the features were too woolly and gave correspondingly high estimates. And as is common with developers, he also put high estimates against the features he didn’t agree with. We added a contingency of 50% to each estimate to account for the lack of detail.
We used the estimates to determine how many developers we needed to deliver all the ‘must’ and ‘should’ priority features in the available time for go-live (the answer was 4), and sized the development team accordingly.
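The sizing arithmetic can be sketched roughly as follows. Everything here is invented for illustration – the feature names, estimates and capacity numbers are not from the actual project:

```python
import math

# Hypothetical feature list: (name, MoSCoW priority, raw estimate in developer-days)
features = [
    ("Search accounts",   "must",   10),
    ("Export to CSV",     "should", 15),
    ("Audit trail",       "must",   20),
    ("Custom dashboards", "could",  25),
]

CONTINGENCY = 0.50  # the flat 50% uplift to cover woolly, under-specified features

def padded(estimate):
    """Apply the contingency to a raw estimate."""
    return estimate * (1 + CONTINGENCY)

# Only 'must' and 'should' features count towards the go-live commitment.
committed = sum(padded(e) for _, p, e in features if p in ("must", "should"))

# Team size = committed effort / developer-days available per developer before go-live.
available_days_per_dev = 8 * 15   # e.g. 8 three-week increments of ~15 working days
team_size = math.ceil(committed / available_days_per_dev)
```

The same back-of-an-envelope calculation, run over the real Feature List, is what gave the answer of 4 developers.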
We then organised the remainder of the project duration into 8 three-week increments, and that’s where the real fun began!
Detailing the Functional Design
From my previous experience on an agile project, I knew I wanted to go into each increment with more than just a list of features – I’d seen that approach cause problems last time. With this in mind, I deliberately planned in a couple of weeks before the first increment started, to give me time to get ahead of the game.
I worked with the business team to elaborate the features into something more concrete. I called these more concrete things user stories, and used the standard “As a…I want…so that…” format. I captured the detail of each user story in a fairly cunning use case/acceptance criteria hybrid notation (which I have discovered recently is not dissimilar to that used in Behaviour Driven Development).
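The article doesn’t reproduce the hybrid notation itself, but a story captured in that style might look something like this sketch – the story, ID and criteria are all made up for illustration:

```python
# A hypothetical user story in the "As a... I want... so that..." format,
# with acceptance criteria in the Given/When/Then shape used in BDD.
story = {
    "id": "US-042",
    "narrative": (
        "As a registered customer "
        "I want to reset my password via email "
        "so that I can regain access without calling support"
    ),
    "acceptance_criteria": [
        "Given a registered email address, "
        "when I request a reset, "
        "then a single-use reset link is emailed to me",
        "Given an unregistered email address, "
        "when I request a reset, "
        "then no email is sent and no error reveals whether the address exists",
    ],
}

def as_spec(s):
    """Render the story and its criteria as a readable block of text."""
    lines = [f"{s['id']}: {s['narrative']}"]
    lines += [f"  - {c}" for c in s["acceptance_criteria"]]
    return "\n".join(lines)
```

The point of the shape is that each criterion is independently testable – which is exactly what made the later sign-off tracking possible.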
I elaborated the top priority features first and after two weeks I had enough detailed stories to take into the first increment with a few to spare. This ‘just in time’ approach to elaboration was my attempt to capture the benefits of agile (deliver benefit early, respond to change etc.) whilst avoiding the problems raised by starting a development increment with incomplete requirements.
Rather than writing the user stories on index cards, I put them all together in a single spreadsheet and called it the Functional Specification, or FS for short. The FS was a living document and it was also a shared document (on a shared network drive) – I was adding new stories to it whilst at the same time the developers were reading it and ticking off acceptance criteria as they coded them.
I think it worked really well. The increment planning sessions ran smoothly because most of the tricky questions had already been answered and so the estimates produced were relatively accurate. The developers were able to get on with coding pretty much as soon as the planning session was over.
As ever, once development started on a story, questions were unearthed and further detail required. The benefit of the living document was that I could update it very quickly and with very little effort. Change control was managed simply via a column in the spreadsheet which the developers used to ‘sign off’ each acceptance criterion. Any gaps in the sign off column indicated a new or amended criterion. I used the same technique for business sign-off too – no need for lengthy and repeated review cycles after every change.
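In spreadsheet form the change control was nothing more than a sign-off column per audience; the same idea in code, with invented column names and criteria, looks like this:

```python
# Each row mirrors a spreadsheet row: an acceptance criterion plus two
# sign-off columns. An empty cell flags a new or amended criterion.
rows = [
    {"criterion": "Reset link expires after 24 hours", "dev_signoff": "JB", "biz_signoff": "TH"},
    {"criterion": "Reset link is single-use",          "dev_signoff": "JB", "biz_signoff": ""},
    {"criterion": "Lockout after 5 failed attempts",   "dev_signoff": "",   "biz_signoff": ""},
]

def needs_review(rows, column):
    """Return the criteria with a gap in the given sign-off column."""
    return [r["criterion"] for r in rows if not r[column]]

# Gaps in 'dev_signoff' tell the developers what changed since they last coded;
# gaps in 'biz_signoff' tell the stakeholders what to review next.
```

No review cycle needed: each audience simply filters for the gaps in its own column.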
You can download a copy of the Functional Specification.
Prototyping the User Interface
I also built a prototype of the user interface. As this was a web application, I produced the prototype as HTML, using Adobe Dreamweaver.
I built the prototype in parallel with the user stories i.e. ‘just in time’, adding new pages to it as and when required. For a given feature, I would sometimes do the prototype first and other times I would write the user stories first. Generally, one informed the other, so I might start with a story, then do the prototype, then go back and amend the story with something I learned during prototyping. Sometimes I didn’t bother with a prototype for a story if I didn’t feel like it added any value.
The prototype was pivotal to the stakeholder workshops. I would display the prototype on the projector screen and have the FS on my laptop screen in front of me (good old dual-screen technology!). This allowed me to talk through the user stories using the prototype as a visual focus. If I’d had enough coffee that morning and was especially on the ball, I could even make changes to the prototype on the fly.
This method was a real success – stakeholder participation was high and we evolved and refined the prototype (and associated stories) over the course of a few workshops, even before ‘proper’ development had begun on those stories. Again, this was done ‘just in time’ rather than all up front.
The prototype also made life easier for the developers because they were able to lift the prototype HTML directly into the application (I made sure the prototype used all the same layout and format as the application itself).
Creating Other Artifacts
In the earlier increments, I produced two other artifacts: a Logical Data Model and a Screen Navigation diagram. Again, I took a ‘just in time’ approach and only included the changes that were relevant for the upcoming increment. By increment 3 or 4 it became apparent (during the ‘retrospective’ session) that neither artifact was being used: the developers were able to infer the Logical Data Model from the FS and the Screen Navigation from the prototype. The artifacts were duly discontinued, and a whole load of unnecessary work avoided – a real triumph of the agile approach.
Incorporating User Feedback and Evolution
One of the stated key benefits of incremental development is the ability to get user feedback early, and to evolve the system based on that feedback. Ideally this is done by actually putting the system live as soon and as frequently as possible. At the very least you are supposed to ‘showcase’ the system at the end of each increment.
Showcasing the system turned out to be unnecessary – the user feedback usually received at this point had already been received during the ‘just in time’ prototyping phase. But I was keen to get user feedback based on actual system use, and sooner rather than later.
We were restricted to a single go-live at the end of the project, so multiple go-lives were unfortunately not an option.
Instead we conducted two ‘mini’ phases of User Acceptance Testing (UAT) – after increments 4 and 7. Each UAT phase was focused on specific areas of functionality that we felt would most benefit from hands-on user feedback. We got feedback from around 30 ‘friendly’ users, looked for common gripes and scheduled extra user stories into later increments for the most important ones.
Tracking Progress and Managing Change
I used burn-down charts to track progress through the project. I had one chart per increment (tracking against user stories) and also a high-level chart for the project as a whole (tracking against the Feature List).
My charts were actually inverted burn-down charts (burn-up charts?) in that the progress line worked its way upwards over time towards a ‘100% scope complete’ target line (a normal burn-down chart works downwards towards the x axis). This allowed me to show scope creep on the chart by moving the 100% line upwards (e.g. 20% scope creep would take the line up to 120%).
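The mechanics of the moving target line can be sketched as below – the per-increment numbers are invented, not the project’s actual figures:

```python
# Burn-up data: cumulative scope completed per increment (in estimated days),
# plus a target line that moves upwards as scope creeps.
baseline_scope = 100                                # the original '100% complete' line
creep_per_increment = [0, 0, 5, 0, 10, 0, 5, 0]     # new scope requested each increment
completed_per_increment = [12, 14, 13, 15, 14, 15, 16, 15]

target_line, progress_line = [], []
target, done = baseline_scope, 0
for creep, completed in zip(creep_per_increment, completed_per_increment):
    target += creep      # e.g. 20% total creep would push the line up to 120
    done += completed
    target_line.append(target)
    progress_line.append(done)

# The widening (or narrowing) gap between the two lines is the
# conversation-starter with stakeholders at each increment review.
gap = target_line[-1] - progress_line[-1]
```

Plot `progress_line` against `target_line` per increment and you have the inverted burn-down described above.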
This was an excellent tool for managing stakeholder expectations and worked hand-in-hand with the MoSCoW prioritisation – every time a new feature or change was requested mid-project (including those arising out of the UAT phases), I would ask the stakeholders to prioritise it, get it estimated, then show them the increased gap between current progress and the ‘complete’ line. At one point the gap got too big and we spent some time re-prioritising all features to make sure we were definitely focusing on the right things.
By the end of the project we had hit the (original) 100% line. We had delivered all of the ‘musts’, most of the ‘shoulds’ and a few of the ‘coulds’. Most importantly, the stakeholders were happy because they had been involved in deciding what to deliver every step of the way.
You can see the overall project burn-up chart on the ‘Progress’ tab of the Feature List. You can see the per-increment burn-up charts on the ‘Stats (i1)’-‘Stats (i8)’ tabs of the Functional Specification.
Learning from the Retrospective
Overall, I was really pleased with how this project went. By pretty much any measure it was a success. The system went live and is being used today by around 30,000 users.
In terms of managing the analysis artifacts, I did have one major headache, in that I was dual-maintaining two separate lists of scope items – the Feature List and the user story list (in the FS). In order to keep track of progress (and to keep the various burn-up graphs accurate) I constantly had to make sure the two were in sync.
With hindsight, it might have been better to combine the two into a single list. I had wanted to keep a distinction between ‘woolly’ high-level features and detailed, elaborated user stories. But really I think that the latter are just a progression from the former. Agilists commonly refer to large, high-level stories as ‘epics’ and maybe I could have done that too.
I appreciate that I’ve glossed over some of the juicier details in this article – it would have been too long otherwise. In future articles I hope to deep-dive into specific aspects of the artifacts and techniques I used on this project. If they worked for me, they might work for you too, so if there’s anything you’d particularly like to hear about, please leave a comment.
Since writing this article, the method I follow has matured enough to be worth publishing more formally – see the more recent article Business Analyst Designer Method.
And if you’re interested in becoming an agile business analyst, you could do worse than to take a look at my Distance Learning Course.
Pingback: Tweets that mention A Case Study: A Business Analyst on an Agile Project -- Topsy.com
Great post! I’ve been planning to do some research on the Agile BA, and it’s fantastic to hear it straight from the horse’s mouth. I appreciate the insights.
It’s always so interesting to hear of different agile experiences, every one is a little different, and the role of a BA in particular is so malleable in the Agile world. What it seems like you were grappling with, just as I do on every project, is how much information you need to gather upfront, and how much should be delivered just-in-time.
Iteration Zero is what we use to try and gather as much broad scope information as possible on the project, whilst also expanding on the stories to be worked on in Iteration 1. I then work one iteration ahead of the team during the Iterations themselves (so I will look at stories in Iteration 2 while the developers are coding Iteration 1).
Keep your Agile BA stories coming!
Excellent case study! It shows how agile techniques can be adapted to a situation. It also shows how traditional waterfall artifacts can be blended with agile artifacts without doubling the amount of work. Nicely done!
Absolutely fascinating! I’m in the middle of helping to define a SDLC and trying to find a good balance between a traditionally waterfall culture and a desire to be more Agile. As a BA with most of my experience in waterfall and only a little in what I’d call “dirty” Agile, this article gave me some great ideas!
I’d love to know how the system has continued to evolve. Do you feel that the evolution of the system could be at risk since there is no detailed technical documentation (physical data models, integration mapping, etc) produced? What if you have high turnover in your development staff? This is a concern that I have about agile as I don’t like developers learning a system by digging in without documentation.
I’m also curious about how much time your users had to invest in this project. Agile seems to be very time intensive for users whereas waterfall invests more time up front but backs off a bit until UAT. Did your users have other efforts that required their attention or were they able to dedicate time to this effort?
Tony, thanks for sharing your story – great case study and example of hybrid and creative practices to deliver value to customers:
it’s indeed interesting that your acceptance criteria overlaps with BDD. (creating the actual acceptance tests as part of work-ahead is something we’ve found useful – using Given/When/Then or data tables). your story is a good example of applying a tester’s mindset to the requirements.
i found your story about the developer’s experience with the data model interesting.
On a number of agile projects i’ve coached or worked on, we organically built the logical data model during iteration work-ahead when we prepared stories for the next iteration, i.e. making them “ready”. (here’s a link to a technique for making ready we’ve found useful, fyi: http://ebgconsulting.com/Pubs/Articles/SlicingRequirementsForAgileSuccess_Gottesdiener-Gorman_August2010.pdf )
The data model became a key tool, along with the interface sketches, for requirements clarification with the team (developers, testers and of course the product owner).
i wonder if any of your work-ahead involved developers? in our situation, that data model has become an asset for maintaining the application today.
many thanks for sharing your story in so much depth tony!
I agree with everyone, Tony, great case study. Jenni touched on what I consider the biggest issue with agile projects: what about evolving the system? She mentions technical documentation, but information about the business logic behind system behavior is also easily lost when it is “part of the conversation” started with a story, but not documented anywhere.
A recent project (which I joined midway) suffered from this problem. The business needs were changing, and identifying which parts of the system needed to change as a result was made much more difficult because we had the feature list and stories, but they left out a series of implementation details, system constraints defined in conversations between developers and stakeholders, etc. A lot of time was wasted by the business analysis team and developers trying to figure out how the system behaved under different circumstances (requiring, in some cases, that a BA log in with multiple user roles to go through some test scenarios and then document the behavior).
“As ever, once development started on a story, questions were unearthed and further detail required. The benefit of the living document was that I could update it very quickly and with very little effort. ” What Tony describes is such an important part of solving the problem Jenni mentioned. The idea of “working software over comprehensive documentation” is fine until you later need to know exactly what the system is doing in order to make it adapt to new situations. The price can be high if you only worry about the “here and now”, and forget that the system will have to evolve with the business after going into production.
@ Jenni and Adriana –
I agree that good documentation for a (bespoke) system is an asset – the benefits are obvious in terms of training and hand-over. I used to think that it was a no-brainer that we should strive to produce such documentation. But these days I’m not so sure.
My experience is that it is extremely difficult and time-consuming to produce and maintain comprehensive, readable documentation. What I’ve noticed is that unless the docs are perfect, they are pretty much worthless – because people are never sure which bits to trust. I’ve also noticed that most people learn and understand a system best by either looking at it from the outside (i.e. via its user interface) or from the inside (i.e. via its source code).
The agile alternative is to make the *code* the documentation. Good agile developers (that I have worked with) are fanatical about code readability – to the point where they will re-factor, hone, perfect their code until it is virtually plain English plus brackets! Likewise the database – every table and column name is chosen to make plain its meaning and usage – so the database itself becomes the physical data model.
The final piece of the puzzle is automated acceptance testing (using a tool like Selenium) – the executable tests (again written in plain-English-Java) *are* the requirements – they tell you what the system is *supposed* to do – including all the business rules and logic that are “part of the conversation” as mentioned by Adriana.
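In practice those executable tests would be Selenium scripts driving the real UI; the same idea can be sketched with a stubbed in-memory application standing in for the browser – the class and the requirement here are invented for illustration:

```python
# A stub standing in for the real application under test. In the setup
# described above, these checks would drive the UI through a tool like
# Selenium; an in-memory stub keeps the example self-contained.
class AccountApp:
    def __init__(self):
        self.balances = {}

    def open_account(self, name, deposit):
        if deposit < 0:
            raise ValueError("deposit must be non-negative")
        self.balances[name] = deposit

    def balance(self, name):
        return self.balances[name]

# The acceptance test doubles as the requirement: it states, in
# near-plain English, what the system is supposed to do.
def test_opening_an_account_records_the_initial_deposit():
    app = AccountApp()
    app.open_account("alice", 100)
    assert app.balance("alice") == 100
```

Because the test runs against the built system, it can never drift out of sync with it in the way a written specification can.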
The point about making the *code* the documentation (and in particular, the acceptance tests) is that they always reflect the built system exactly – there is no danger of them being out of sync.
Now I’m not saying this is the perfect solution, and I still have some doubts over whether fanatical developers are any better than fanatical BAs in producing the “documentation”. It also requires that BAs are sufficiently technical to be able to read the acceptance test code. But my experience so far is that (for bespoke systems) it seems to be better in terms of cost/benefit.
Re. user involvement:
Interesting question! The “just in time” approach to analysis means that it is spread out over the project. On my project, I ran regular twice-weekly workshops of 90 minutes each – so 3 hours per week commitment from users – which they were able to provide no problem – working their business-as-usual tasks around it. If anything, this is probably easier on users than having to block out several weeks for full-time analysis.
I dare say the user commitment is greater for agile projects with no BA (or Product Owner) – because business users will be having to field developer queries directly. But on my project I was there, “bridging the gap” and filtering out some of the noise.
Great post Tony. I really like the idea of using the “FS” document, even more so, I like the idea of making it shared throughout the process. We have a community for IM professionals (www.openmethodology.org) and have bookmarked this post for our users. Look forward to reading your work in the future.
Really useful post
I am involved with big projects with lots of linkages to other projects and integrating into Business as Usual business operations so agile approach is difficult and documentation essential. However I think I could use the ideas in the Vision, Features List and FS to improve the clarity of such documents.
I am familiar with the suggestion from agilists that the code should be the documentation, but based on my experience, I find this concept works only in theory…
Here’s an example. Last week we had to report a change request to the designer of a system. The system (which is very complex) was displaying a date that was not the one the users needed (say, it was showing begin date where the users needed to see end date).
When we raised the matter with the designer, he took a look at the code and said, “but, I’m seeing here that it is displaying the end date like you say you want”. “No, the story says to display begin date, and we tested in the application, it is indeed showing the begin date as stated in the requirement. The requirement now has changed and we need you to change the code.” The designer seemed puzzled and said he would investigate. (Note that this is not a case of badly written code; independent developers have agreed that it is very well written and easy to read).
This is not the first time I have seen this happen with complex systems — even the original designer or developer not being able to answer a question about system behavior accurately (the only way of knowing was actually testing the scenario in the actual application). The problem worsens when the original developers go away — it may take a long time for the replacement to be able to “read the code” correctly when the number of modules or components is very high. Very few organizations can afford this inability to quickly answer a question about the functionality of their key systems, especially when a business rule changes and you need to understand the impacts the change will have in the system.
I’ve written an article about the BA and agile documentation for BTG in the past (http://www.bridging-the-gap.com/the-positive-influence-bas-have-in-the-quality-of-documentation-in-agile-projects/), and after some additional experiences with agile, I’m even more convinced that for many organizations it will never work to replace documentation with code.
Re: “What I’ve noticed is that unless the docs are perfect, they are pretty much worthless – because people are never sure which bits to trust. ” I think that over the years I have found a good solution for this problem, but since I don’t want to make this a bigger comment than it already is, I’ll make it the topic of my next article for Bridging the Gap :-).
“A Case Study: A Business Analyst on an Agile Project” by Tony Heap. Informative article, well worth reading. http://bit.ly/hgluie #baot
Oh, I almost forgot to post here the links to the article inspired by our discussion:
It would be great to have you offering your thoughts, especially in part 2, as I’m curious to learn if someone has already used a similar strategy, and if not, the reasons why you think it would work for you, or not work.
Hi Tony, great post – how do you manage to write all that while doing actual work?! It’s really good to see solid evidence that all the techniques we recommend in our training courses, based on our experience on our own projects, do actually work in other contexts too.
Pingback: An Agile Functional Specification | Business, Technology and the Future
Re. ‘Make the *code* the documentation’
This is the developer’s stand in regards to documentation: ‘We do not need documentation; it is all in the code’. What if we’re talking about a big legacy system – let’s say 1 million+ lines? Thousands of class files, projects within the solution, components that are referenced (but managed by your development team), etc. I know ‘all’ about readability – but at some point down the road you’ll end up searching for something and find it in a completely different place than you expected – maybe it was added a long time ago, changes to the dev team, etc. Code = documentation is definitely true. But how to make it readable to others besides the developers?
We use a combination of Jira, Confluence and, of course, the code, to document our project. Requirements are documented in Confluence per module and transferred as user stories in Jira. The different tasks in Jira are linked back to the original Confluence page so both developers and QA can check if they feel something is missing. QA again use Confluence when they write their test cases in SpiraTest, so we’re pretty sure that we’re hitting the different rules set out by the business. Selenium is used to verify behavior with different browsers (haven’t thought about using it for acceptance testing yet – will look into that).
As our project doesn’t have a user that sits next to the dev.team, the traditional BA is the one closest to this role – liaising with the business’ ‘product owner’ (e.g. user). Until now this has turned out pretty good.
My main challenge at this moment is a new risk that came up over the last few weeks – ‘What to do when no-one in the business really knows the business rules that should be implemented’.
The project is pretty straightforward: take the legacy system and reimplement it from scratch. Make it as efficient as possible, new user interface, etc. Make people work smarter and more efficiently, and let their working day be a bit more joyful.
Until now the progress is good – the development team is picking up speed, QA is happy, few bugs are found during testing, etc., and we’re closing in on beta test (by the product owner first). Prototypes have been used throughout, so we’re pretty sure we’re hitting close to home with the users.
However – requirements are being put in place by the BA close to the planning meeting for the next sprint. And one business rule in regards to how some documents should be treated was still unanswered. The product owner did not know, other ‘power users’ could not answer either. And going out broader in the organization showed the same. The functionality is important, but no-one can really tell how this should be solved. There is no documentation on this, and the legacy system would most likely cost me 1 month of research before the rules are put down (spaghetti-system).
So how should I move forward? And how can I reduce this risk in the future?
Tony, thanks for sharing this experience . Well documented, it was a great to read your post!
Thank you for sharing your experiences, Tony. I will definitely refer back to your articles as I work on teams transitioning to Agile methods. It will be interesting to see if and how Agile fits into development in highly regulated industries. I’ve worked in IT organizations in financial services and now healthcare, and both require very detailed documentation in certain circumstances. Have you had experience balancing Agile’s light documentation approach with regulatory requirements? On another note…in general, I’m disheartened with the trend toward Agile methods and rethinking my long-term career choices. While I understand the market-driven need for Agile and I’m trying to remain open minded, most of what I’ve read and experienced relegates the BA role to that of a communication specialist/facilitator and strips out most of the “analyst” in “business analyst.” Executing the analytical process and then communicating the results to stakeholders in the form of BA produced artifacts is the most elegant and gratifying portion of being a BA and most of that is eliminated in Agile’s user-developer centric model.
A few thoughts on this statement: “most of what I’ve read and experienced relegates the BA role to that of a communication specialist/facilitator and strips out most of the “analyst” in “business analyst.” Executing the analytical process and then communicating the results to stakeholders in the form of BA produced artifacts is the most elegant and gratifying portion of being a BA and most of that is eliminated in Agile’s user-developer centric model.”
I understand what you are saying here and initially had some similar perceptions. What I found on my initial agile projects was that while the agile methodology does appear to “strip out” much of the analyst role, the analysis process does not go away. (Of course, as analysts, we know it can’t!)
Instead, the analysis begins to take new, purer, more action-oriented and less document-centric forms. It’s actually extremely freeing to have what you are best at pared back to its essence and removed from the strict nature of formal artifacts. You find yourself reinventing your analysis process and deliverables to focus on what’s absolutely needed to be successful. If you have the opportunity to be an agile practitioner, I hope you have a similar experience and find it equally gratifying.
I’ve worked in Financial Services and Healthcare, but not using agile approaches. In particular, the healthcare project I worked on, which was huge, was literally swimming in documentation, and was much the worse for it. The project process was completely document-driven, and everyone on the project was focussed primarily on producing their defined document, rather than on delivering value to the customer – and IMHO this is exactly what is wrong with such an approach.
Remember that the Agile Manifesto doesn’t say documentation is bad – it just says that other things (like working software) are more important – and as practitioners we need to keep our minds focussed on the end goal, even if we do produce some documentation along the way. For me, Agile is more about a mindset shift than it is about throwing away all formal process – so at every stage of the process we need to be asking ourselves – is what I am doing adding value, and if not, what should I be doing instead? Very difficult to do on a large process-driven project, but if enough of us think that way, sooner or later there will be a sea change.
But the good news for BAs is that I don’t think Agile is the death knell for the BA role. My experience so far is that the user-developer model doesn’t always work – many developers either don’t want to do the analysis or aren’t very good at it. And in many environments it’s not possible to have full-time access to a single business user who has all the answers. So the BA can still fulfil a bridging-the-gap role – e.g. in Scrum by playing the “Product Owner” role – but this only works if the BA is willing to actually *own* the product, which means being empowered to make decisions and being willing to take responsibility and accountability for them. IMHO, BAs need to be solution definers, not just requirements gatherers – and such BAs will, in my opinion, prosper.
Re. artifacts – I still produce these, but they are a lot less formal than before – and they are “barely sufficient” – which in practice means they are there to support the face-to-face communications I have with the business and dev teams – not to replace them. I’ve had to throw away some of my preconceptions about my role, but overall it’s been a good thing to do that.
Thank you, Tony and Laura, for your insightful and encouraging feedback. I will remain open-minded and see if I am as enthusiastic in the Agile realm as I was in the non-Agile realm (and try to ignore the sites that insist BAs are obsolete). Tony, in the highly regulated industry in which I’m currently working, “barely sufficient” means everything has to be documented in great detail with clear audit trails, so it will be interesting to see how the company accommodates regulatory needs in the transition to more Agile approaches. Thanks again for this site and the great information you’ve all shared! :^)
You are welcome.
Re: “it will be interesting to see how the company accommodates regulatory needs in the transition to more Agile approaches.”
Sounds like a great opportunity to step in and lead the charge so BA as you know and love it is not left behind.
Pingback: To spec or not to spec, that is the question… » pseudoplace.com
I’m 19 years old and haven’t commenced studying business analysis yet, due to the foundational course (I.T.) that I’m currently doing. I do, however, surf the net and read BA-based books. I have no practical experience in the field, but I’m well aware of the complex theory behind systems analysis projects. I personally think the agile methodology worked well for Tony because he mastered the concept by making it work to HIS advantage and according to the constraints of the project. With an agile methodology, I think the later evolution of the system can be commendable for minor projects because the source code is relatively legible and comprehensible. User involvement, for me, is the most crucial part of the analysis process and a fundamental decider as to whether a project is headed in the right direction – at the end of the day, the users are the integral, functional system of an organisation, and what better way than for both to be in sync? A very insightful article, Tony – thank you for sharing.
Reading your case study was enlightening. I too was particular with the FS and took great pride in perfecting the document. The problem with this approach was that I was probably the only one impressed with it. The business felt they were simply reading information I had already gathered from them (and they rarely read it), since I had adopted a collaborative approach to gathering requirements. The developers simply wanted to know what the stories would be from the document and to look at the prototypes. Again, with a collaborative approach, they knew exactly what was required without reading the document. So in short, the detailed requirements document became redundant.
The one challenge I am faced with is setting priorities without creating dependencies between stories. Could you elaborate on the categories “Must”, “Should” and “Could”? And how do I prevent dependencies between stories, so that individual pieces of work can be completed and tested?
I’ve moved away from MoSCoW (Must, Should, Could, Won’t) – because business users tend to put everything as “Must”. That said, I do have a good way of unearthing the difference between “Must” and “Should” – I ask: “so, you are saying that the software can’t go live without this feature, even if it means delaying the launch date?”
These days I prefer to force rank the stories – see my more recent post The Power of Index Cards.
In terms of preventing dependencies – you can’t. There will always be stories that build on top of other stories, especially because the whole idea is to embrace change once the users have seen working software. You just have to make sure you manage the dependencies.
Jacqui, just to add to Tony’s answer, an additional tip for you when you are struggling with setting priorities: talk to your product owner in terms of the goals and timelines. Which goals are more important to be achieved first?
For example, if you are building a web application that allows you to create and manage a contact list, obviously “delete contact” will be a “must”, but that function most likely could be left to go live one month later (the assumption being, most users would probably be just adding and editing contacts for a while). “Building and viewing a contact list” could be the goal of the first release, whereas “Trimming down the contact list” could be the next goal.
The product owner can then prioritize goals (and stories) for each iteration within the various themes, such as “Managing Contact Lists”, “Security”, “Reporting”, etc., determining which smaller goals are most important at that point for each theme.
Regarding dependencies, there are things the BA can identify and discuss with the product owner (e.g., a feature providing contextual help on navigating the settings in an admin panel only makes sense after the actual settings are available). Sometimes it’s not as obvious that a story depends on another being built first (or, more commonly, that building the stories in a certain sequence will make the product much more expensive). Arriving at such conclusions and making decisions accordingly should always be a collaborative effort with your development team. Show them the prioritized backlog, and they will be able to help you identify dependencies among your stories (e.g., predecessors, or user stories that must be completed before another user story can start or finish).
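The idea of predecessors can be made concrete with a small sketch. Assuming a hypothetical contact-list backlog (the story names and dependency map below are illustrative, not from any real project), Python’s standard-library `graphlib` can order stories so that every predecessor is scheduled before the story that needs it:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical backlog: each story maps to the set of stories it
# depends on (its predecessors).
dependencies = {
    "view contact list": set(),
    "add contact": set(),
    "edit contact": {"add contact", "view contact list"},
    "delete contact": {"view contact list"},
    "admin settings panel": set(),
    "contextual help for settings": {"admin settings panel"},
}

# static_order() yields the stories so that every predecessor comes
# first; it raises CycleError if the dependencies are circular.
build_order = list(TopologicalSorter(dependencies).static_order())
print(build_order)
```

This doesn’t replace the conversation with the development team – the hard part is discovering the dependencies in the first place – but once they are written down, a check like this makes an impossible sequencing (a cycle) visible immediately.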
Tony, thank you for such a great case study! It really helps a lot to organize knowledge about Agile in practice.
Thanks for sharing your experience. I am starting out as an Agile BA and it was very helpful for me.
I had a question about how the Agile methodology works for a module that is relatively big in size – say, x standalone features and y integrated features, where both x and y are large. Does the approach of an Agile Business Analyst change after a project passes some threshold size?
I ask this because we faced this problem recently. There were so many features to test that the full testing scope was not understood at the start. This eventually caused delays and confusion. In this scenario, considering the size, what role should a BA play?
Hi Pushkar. The only way you can eat an elephant is one bite at a time. As an agile BA I try to add value by breaking the elephant (project scope) down into individual scope items (features/user stories) that can be delivered (built and tested) separately. Where the features are “integrated” it gets really tricky, because it’s hard to know where to start. It’s worthwhile spending some time (but not too much time) up front with the business stakeholder(s) and the technical team, making a best guess at the dependencies and putting the features in a sensible order. That said, most agile teams would endorse the following strategy: start with a feature which delivers a thin slice of functionality *end to end* – something that has a visible outcome (no matter how small) for a business user. Subsequent features should build on that by adding more user-visible functionality. Also recognise that you won’t get it right first time. You *will* get later features which “break” earlier features and cause them to be re-designed or even binned. So plan some contingency for re-work.
Thank you for the reply. That was useful information, especially the part about identifying a thin end-to-end slice of functionality and then building on it. A lot of good practices seem to work in small and medium projects, but chaos starts once there is a huge amount of functionality, boundaries are vague, and timelines are tight.
Great article. I found it quite educational.