Feature creep versus Bugzilla



  • Hi all

     we're in the middle of a testing phase, which in itself is neither special nor noteworthy. But this time, instead of getting 9 bug reports plus 1 improvement, we're getting 5 bug reports plus 5 improvements. So now I'm wondering: how do you handle "improvements" during a testing phase?

    Our development process is pretty typical: we have a bunch of change requests and minor improvement stuff in a "requirement pool". The scope of a new release is defined by picking out CRs and improvements (mostly based on customer feedback). We then implement the changes over the next 2-6 weeks, test for one or two weeks, then deploy. Repeat. In earlier releases (and with earlier applications), the people working on the requirements were used to very "dynamic" scope planning, essentially running a code-and-fix approach. Only in the last few months have we been able to define a fixed scope for a release and then stick with it until deployment.

    With the current release, scope planning and development have gone fairly smoothly. Now we're testing, and to me, all these new improvements look like feature creep. So I'm wondering: how do other developers handle feature changes or enhancements during the test phase? If the author of a CR clearly forgot one scenario, it would seem narrow-minded not to adapt and implement the additional code to handle that scenario. But then, you could just hand in a first draft of the CR for development, then "code-and-fix" during the test phase to get where you want. What do we do? What would you do? Postpone any feature changes to the next release and risk deploying a half-finished release? Implement all feature changes and allow feature creep through the back door? Something in between? But where's the line?

    Any thoughts or "here's what we do" are welcome.

    Cheers
    Simon



  • The general rule I follow is that I will implement late enhancements if I feel I can do it properly, without disrupting the stability of the release by shipping a half-finished feature, introducing a regression, or delaying the release. Most requests can be deferred until the next release if the party making the request is given a firm date for when the feature will be in place. Additionally, it helps to highlight the pitfalls of rushing something in at the last minute. Most people are quite reasonable and would rather wait a few months and have something that works.

    The important thing is to let the customer know that their concerns are being listened to and addressed. For most people, this is sufficient, and they will be grateful you took the time to assess their request and plan to handle it in the best way possible. Sometimes you run up against someone who cannot be satisfied in this way, though, and you are forced to rush a feature out. When that happens, the best thing to do is just do the best you can and let the customer know that there will most likely be issues resulting from the rushed schedule. If they are the type that insists on being tended to right away, they probably will not be particularly understanding when there are flaws in the rushed product, but all you can do is grin and bear it and remind yourself that they are still paying you money for this. The rushed feature will probably take longer to bring to a completed state than one that was deferred until it could be handled appropriately, but that is a price the pushy customer is paying, not you.



  • In my case, we're developing standard software, so there is no single customer we're dealing with. Rather, our internal requirement department acts as a customer. Scheduling and risk-taking aren't even that much of an issue if everything is communicated properly, just as you pointed out. (The requirement engineers will happily agree to a revised deployment date if their new requirements are implemented.)

    My concern is that if we prolong the test phase to finish all late enhancements, this might lead to the following, exaggerated scenario: the requirement department neglects to put much effort into writing a well-defined and well-thought-out change request up front, giving the developers, say, 50% of the ultimately required features to start construction with. The actual construction phase then takes not the usual 50-60% of the release, but more like 30-40%. Then, during the test phase, the remaining 50% of the requirements 'pop up' as the requirement department now, and only now, really starts to think about the change ("Oh, we didn't consider that scenario", "I've got an even better idea how to do that", etc.). The test phase becomes the major phase of development, during which most requirements are defined (in a rather ad-hoc manner), implemented and tested.

    In such a scenario, you might be under the impression that we still have a design, a construction and a test phase; but really, we have a 'build a prototype' phase followed by a code-and-fix phase. How do we prevent slipping into such a "process"?

    Cheers
    Simon



  • @sniederb said:

    In my case, we're developing standard software, so there is no single customer we're dealing with. Rather, our internal requirement department acts as a customer.

    Yeah, but the same thing pretty much applies.  I always treat the rest of the company as my customer and let them worry about the end-customer.


    @sniederb said:

    In such a scenario, you might be under the impression that we still have a design, a construction and a test phase; but really, we have a 'build a prototype' phase followed by a code-and-fix phase. How do we prevent slipping into such a "process"?

    Just make it clear that every late requirements change pushes the project back and may end up deferring the feature until a future release. It sounds like most of the people in your org are pretty comfortable with the development cycle, but some things are slipping due to a lack of rigor. Sometimes it takes something like a delayed release to remind people why up-front requirements are a good idea, but it doesn't sound like you have a big political battle ahead of you.

