Does Agile work for standard software development?


  • ♿ (Parody)


    @Dave Nicolette said:

    Why is it a "problem" that agile methods don't apply to every project, or that they can't be successfully applied by people who have poor technical skills? Is it also a "problem" that you can't tighten a slotted screw using a Phillips-head screwdriver? The average person can't build a set of kitchen cabinets; the job requires the skills of a competent carpenter. Does that mean wood is fundamentally flawed? Does the fact you can't perform brain surgery with a boomerang mean the concept of "boomerang" is fundamentally flawed?

    This goes back to the maintenance issue. With a traditional methodology, we (ideally) get a comprehensive set of requirement and design documents. These documents do not go away, nor should they become outdated; if business logic changes, the docs and the code change. But with Agile (and correct me if I'm wrong here), we do not get such documentation. Now, on top of this, due to the "design as you go" approach, trying to understand the system by looking at the code will be difficult.

    Now, these shortcomings are no obstacle for a strong programmer, especially one who had a hand in developing the system. But these guys aren't the ones who are going to stick around and maintain the system. That's the job of the weaker programmers, oftentimes newbies.

    Code decay is inevitable -- but when you sic weak coders on such a system, they won't understand it and will hack it to pieces to do what they were tasked to do. As you approach "critical mass" (i.e., the point at which a system becomes so convoluted that 100% of the resources are dedicated to keeping it running), the cost of maintaining the system grows exponentially. The amount saved in both opportunity and development costs is quickly eaten up in a few months as you near critical mass.

    Of course, the "decay" logic only applies to larger systems (1M+). But then again, who really follows that strict of a process on small projects, anyway?



  • @Alex Papadimoulis said:

    This goes back to the maintenance issue. With a traditional methodology, we (ideally) get a comprehensive set of requirement and design documents. These documents do not go away, nor should they become outdated; if business logic changes, the docs and the code change. But with Agile (and correct me if I'm wrong here), we do not get such documentation. Now, on top of this, due to the "design as you go" approach, trying to understand the system by looking at the code will be difficult.

    According to one proponent of Agile, Robert Martin, you get documentation if it's a requirement that the client asks for, done at any time during the project, i.e. whenever the client says "hey, let's get some software maintenance documentation, 'cause I know I'll need it!". In other words, Martin was basically saying "never", because the client will never think that it's important, add to that the fact that Mr. Martin said "documents always lie" at least 4 times during his presentation, I think you can safely assume that Agile folks don't do documentation.



  • Fixed previous post:

    According to one proponent of Agile, Robert Martin, you get documentation if it's a requirement that the client asks for, done at any time during the project, i.e. whenever the client says "hey, let's get some software maintenance documentation, 'cause I know I'll need it!". In other words, Martin was basically saying "never", because the client will never think that it's important; add to that the fact that Mr. Martin said "documents always lie" at least 4 times during his presentation, and I think you can safely assume that Agile folks don't do documentation.



  • @Alex Papadimoulis said:

    This goes back to the maintenance issue. With a traditional methodology, we (ideally) get a comprehensive set of requirement and design documents. These documents do not go away, nor should they become outdated; if business logic changes, the docs and the code change. But with Agile (and correct me if I'm wrong here), we do not get such documentation. Now, on top of this, due to the "design as you go" approach, trying to understand the system by looking at the code will be difficult.

    Now, these shortcomings are no obstacle for a strong programmer, especially one who had a hand in developing the system. But these guys aren't the ones who are going to stick around and maintain the system. That's the job of the weaker programmers, oftentimes newbies.

    Code decay is inevitable -- but when you sic weak coders on such a system, they won't understand it and will hack it to pieces to do what they were tasked to do. As you approach "critical mass" (i.e., the point at which a system becomes so convoluted that 100% of the resources are dedicated to keeping it running), the cost of maintaining the system grows exponentially. The amount saved in both opportunity and development costs is quickly eaten up in a few months as you near critical mass.

    Of course, the "decay" logic only applies to larger systems (1M+). But then again, who really follows that strict of a process on small projects, anyway?



    In most projects I know, maintenance is done by members of the original development team.
    Anyway, let's assume they are suddenly all gone. In an ideal traditional project, we have a complete and correct set of requirement and design documents. Because it is an ideal project, the design is sound and consistent and it has been implemented well.
    In an ideal agile project (let's assume XP), we have a complete and up-to-date set of "stories" (which are basically requirements documentation) and a complete set of unit tests with full coverage. Because it is an ideal project, the code covers nothing but the stories (no dead or "for future use" code) and has been refactored to well-known design patterns.
    Basically, the main difference is that the traditional project has design docs and the XP project has unit tests.
    What gives us more confidence that the new programmers (who have no experience with the system yet) do not break it with their well-meant "improvements"? If I had to choose, I would go with the unit tests. In my experience, a lot of code decay is caused by workarounds for bugs introduced by incorrectly implemented changes.
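
    To make this concrete, here is a minimal JUnit-style sketch (the class, the names and the 15% rule are all invented for illustration) of the kind of test that turns red the moment a well-meant "improvement" changes existing behaviour:

        import org.junit.Assert;
        import org.junit.Test;

        // Hypothetical pricing rule, standing in for any behaviour a new
        // maintainer might not know about.
        class DiscountPolicy {
            double discountFor(double orderTotal) {
                // The 15% cap was a negotiated, contractual requirement.
                double discount = orderTotal > 10000 ? 0.20 : 0.10;
                return Math.min(discount, 0.15);
            }
        }

        public class DiscountPolicyTest {
            @Test
            public void largeOrdersKeepTheNegotiatedCap() {
                // A well-meant "improvement" that removes the cap
                // turns this bar red immediately.
                Assert.assertEquals(0.15, new DiscountPolicy().discountFor(50000.0), 0.0001);
            }
        }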

    Alex, what do you mean by (1M+)? One million LOC or more? One man-year or more?


  • @Whiskey Tango Foxtrot Over said:

    According to one proponent of Agile, Robert Martin, you get documentation if it's a requirement that the client asks for, done at any time during the project, i.e. whenever the client says "hey, let's get some software maintenance documentation, 'cause I know I'll need it!". In other words, Martin was basically saying "never", because the client will never think that it's important, add to that the fact that Mr. Martin said "documents always lie" at least 4 times during his presentation, and I think you can safely assume that Agile folks don't do documentation.



    If you are (or represent) the customer, and you do not ask for the SW maintenance documentation, it's your fault, not the fault of the agile team or the agile method. Likewise, if I take over a traditionally made system, including the docs, but neither read them nor keep them up-to-date, I'm to blame, not the traditional method or the original team.



  • @ammoQ said:

    Basically, the main difference is that the traditional project has design docs and the XP project has unit tests.
    What gives us more confidence that the new programmers (who have no experience with the system yet) do not break it with their well-meant "improvements"? If I had to choose, I would go with the unit tests. In my experience, a lot of code decay is caused by workarounds for bugs introduced by incorrectly implemented changes.

    Unit tests are easy to get around through bad programming, and should not be relied upon as the sole documentation of a system. Only an understanding of the system can allow a programmer to maintain it, and that can only be achieved through well-made design documents. That's not to say that unit tests should be ignored or dropped; it's just that Agile says "that's all you need: the code and the tests", and that's just not true. You need to know how the system is supposed to work before you can maintain it. You cannot get that just by reading the code for even minimally complex systems; the 7 +/- 2 rule applies very heavily when trying to maintain a system you do not fundamentally understand. A programmer needs to know *why* as well as what.
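
    For example (hypothetical code, invented rule): a unit test pins a single sample value, so a lazy "fix" keeps the bar green while the intent -- which a design document would record -- is lost:

        import org.junit.Assert;
        import org.junit.Test;

        // Hypothetical requirement: "shipping is free above 100 euro."
        // The code below passes the only existing test without actually
        // implementing the rule -- the *why* lives nowhere in the code.
        class ShippingCalculator {
            double shippingFor(double orderTotal) {
                return 0.0; // green bar, but orders under 100 now ship free too
            }
        }

        public class ShippingCalculatorTest {
            @Test
            public void ordersOver100ShipFree() {
                Assert.assertEquals(0.0, new ShippingCalculator().shippingFor(150.0), 0.0);
            }
        }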



  • ammoQ wrote

    >Consider a successful company that reaches its limits with the old processes. [etc.]

    Point taken, but isn't the situation you describe a result of a long series of events rather than a single project?

    >In most projects I know, maintenance is done by members of the original development team.

    Okay, but that's not the only way IT departments might be structured. In larger enterprises, development, maintenance, and support roles are often separated.

    Where we have found value in agile methods for production support and for maintenance projects is that the test suite - not only unit tests, but automated tests at all levels of abstraction - provides a regression test base for the team to begin its work. When they check out, they get the code, tests, and documentation. By running the test suite they can assure themselves they're beginning with a clean slate. Otherwise, it means someone has "gone around" the process and modified the code without going through the usual testing procedure. They have to remediate that before they begin making their own changes.

    If you can keep that cycle going throughout the production lifetime of the application, you can slow the rate of code rot over the years. The difficulty here is not that the approach "doesn't work" but simply that it requires sustained self-discipline on the part of personnel in all professional roles in the organization. That's just plain hard work.



  • Alex wrote

    >I'm not saying that Agile doesn't work in the right environment / right people. But my argument is that it doesn't/cannot work in most environments due to this very requirement. That's why I consider it to be more of a "fad" that works wonders for a select few, but a failure for most overall.

    This appears to be a confirmation of my earlier comment:

    >[people who say agile methods don't work] may be thinking of things in a different frame of reference - for instance, some posters seem to be looking for a single approach that works in all cases, and they define any methodology that doesn't achieve that goal to be "flawed."

    But what methodology actually DOES work in all cases? The Chaos Reports that were brought up earlier show a success rate between 16% and 29% for IT projects in the time frame 1996 to 2004. During that time frame, nearly all enterprise software development was carried out with traditional methods. So, what is the definition of "flawed?" What is the standard of excellence to which you feel agile methods fail to rise?

    Even that really misses the point, though. A better question would be: Why do we need a methodology that works in all cases? What's wrong with having a palette of methodologies from which we can choose the appropriate one for each problem, just as we have a palette of development tools from which we can choose the appropriate one for each programming task? You wouldn't try to write scripts in C++ to embed in web pages, or JavaScript to write a mission-critical banking application, would you? Why, then, would you use agile methods for a predictive project, or vice versa? It's the same type of decision - choosing the right tool for the job.

    >But with Agile (and correct me if I'm wrong here) we do not get such documentation. Now, on top of this, due to the "design as you go" approach, trying to understand the system by looking at the code will be difficult.

    I mean no disrespect by this - it's only an observation - but this is just the sort of comment that led me to say some critics of agile methods just don't seem to understand how it is done. They seem to have read or heard something about it, but they don't quite get it. I can understand their concerns, because I've been where they are. But these are not new questions, guys.

    These issues have been covered already thousands of times all around the world, including several times in this very thread. Why are people still making comments based on the misconception that there is no documentation at all in agile projects, or that there is no design at all except "design as you go"? Is there any point in addressing these misconceptions again and again, when the evidence suggests no one is reading?

    Understanding code is always going to be difficult. It's not like reading a magazine. (No pictures, for one thing.) But is it really more difficult to read code that has grown like a tumor, with no refactoring, than it is to read code that expresses a clean design because it has been continually refactored during development and enhancement projects? I know which sort of code I would prefer to deal with in a production support situation. But that's just me. And I'm having a hard time visualizing how I could yank a page out of a document and paste it over the top of, say, a database deadlock to close a production ticket. The cause of the deadlock is unlikely to be described in the design document, because the deadlock was never meant to occur at all. The fact you have to examine the code, not to mention server logs, network traces, and database logs, has nothing at all to do with whether the code was originally built using agile or traditional methods.

    When people say they don't trust the documentation and they have to examine the code to see what's going on, they aren't necessarily expressing a preference, they're just describing reality. A certain amount of high-level design documentation is useful. It gives you the big picture of how the application is structured. Beyond that...?

    >Code decay is inevitable --

    Yes, and tracking of real applications in production over a period of a few years is starting to show that code developed the agile way decays slower than code developed the traditional way. There's not enough empirical data yet to measure the difference accurately.

    Because people have this bizarre notion that changing the design somehow proves their initial design was "wrong", and that this would reflect badly on them, they go out of their way NOT to refactor when they change the code (and by implication, the design). As a result, "code rot" has already started to set in even before an application is deployed to production the very first time.

    How is documentation going to mitigate that sort of problem? Will the documentation say, straight out and honestly, "Two methods at a different level of abstraction than the rest of the class were tacked on to this class because the developers were afraid to refactor them into a superclass, lest someone laugh at them for getting the design 'wrong' the first time around?" Have you ever seen documentation that honest? I'm not one to say that "all" documentation "lies," but let's be realistic about just how accurate a document could possibly be, once the code has passed through a few hands.

    >Of course, the "decay" logic only applies to larger systems (1M+). But then again, who really follows that strict of a process on small projects, anyway?

    Code decay applies to all code. A lot of people are very rigorous on small projects as well as large ones. Every enterprise project, large or small, has a responsibility to deliver value to the enterprise. Part of doing that is to pay attention to engineering principles that affect code maintainability, because that has a direct impact on TCO. That's not a methodology question, of course. I don't think it's right to dole out one's professionalism in proportion to the size of the project one is working on. Based on your previous comments, I don't think that's what you meant, but some people might get that impression.

    >I agree that the proponents...

    Know what I'm a proponent of? Value. The only reason I became interested in agile development three years ago is that a group of us at our company were looking for ways to deliver quantifiable business value to our internal customers in a sustainable way. Agile development was one of the things we experimented with to see if it would help us do that. Guess what? It did. It was never a question of jumping on a bandwagon or following a fad. Whatever works, works. Whatever doesn't work, doesn't. This works. We consistently deliver an average of slightly more than 9 times the ROI of our traditional development group. We do that because we have the right people on board, we choose appropriate projects, and we constantly self-assess to ensure we're following best practices rigorously. It isn't magic, it's just work. I'll stand beside you and criticize anyone who thinks it's magic, or who promotes agile from a quasi-religious viewpoint.



  • Whiskey Tango Foxtrot Over wrote

    >According to one proponent of Agile, Robert Martin, you get documentation if it's a requirement that the client asks for, done at any time during the project, i.e. whenever the client says "hey, let's get some software maintenance documentation, 'cause I know I'll need it!". [...] I think you can safely assume that Agile folks don't do documentation.

    What this means is that documentation is not a deliverable in its own right unless the customer wants it. The thing to be delivered is the working software. Quite often there is user documentation to be delivered, as well. But of course a business customer isn't going to request technical documentation explicitly.

    Robert, as he often does, was going to extremes. He's great at describing an ideal world for us to strive for. But he's not describing practical reality. Tools don't even exist yet to support some of the practices he considers indispensable for "true" agile development. So don't read too much into his comments about documentation.

    Let's see if we can address your misconception about documentation on agile projects in a more down-to-earth way.

    First, development teams have other requirements besides the user-defined functional requirements. Non-functional requirements include things like performance, recoverability, availability, scalability, technical documentation, adherence to UI standards, adherence to government regulations (if applicable), security, business event monitoring, hooks to support chargebacks, and so forth. Those factors don't magically disappear on agile projects, even though the business customer may be unaware of them and won't ask for them. This is true regardless of methodology.

    Second, development teams produce whatever interim artifacts they need in order to meet their goals. These may include test harnesses, test data, scripts, assorted utilities, and other items that won't be deployed into production. They will almost certainly include some documentation, too, and some of that documentation may be worth keeping for reference purposes. This is also true regardless of methodology.

    But the more detailed the documentation, the more likely it is to get stale very quickly. High level documentation is more stable than detailed documentation simply because the architecture of a solution tends to change less radically than the detailed implementation. This is also true regardless of methodology.

    So you're trying to make a point about methodology using arguments that aren't about methodology.

    I see you followed up and "fixed" the post where you stated that assumption. I wish you had fixed the assumption itself while you were at it! There's still time. ;-)

    >Unit tests are easy to get around...

    Nice! Now we've got two people who don't understand agile development debating each other about agile development. It would be great if some of you guys could get an opportunity to work on a real agile project alongside people who knew what they were doing. I think you would discover there are some distinct benefits to it, provided it's used for the appropriate sorts of problems. Don't you want to know about tools that can make you more effective professionals? Seems like you would. You're interested enough to discuss this whole subject intelligently, after all. (Having misconceptions says nothing negative about your intelligence.)

    Let's face it: Traditional methods yield code like the WTFs we all enjoy laughing about on this site. I can't imagine anything like "To the Hexth Degree" or "Enterprise SQL" being produced by pair programming. It would be a statistical anomaly to find two people on the same team, pairing together on the same story, who were both such unbelievable morons. And it boggles the mind to consider the mathematical probability that after four hours or so, when the pairs on the team switched around, that two equally stupendous morons would pick up the same story and continue writing the code in the same vein, without noticing anything or saying anything about it. And a black hole would spontaneously open up between my left and right mouse buttons before anything like that would pass a code review.

    Hey...come to think of it, is that the reason you're so dead-set against agile methods? Are you afraid we'd run out of WTFs?

    No worries.


  • ♿ (Parody)

    @ammoQ said:

    In most projects I know, maintenance is done by members of the original development team.

    This is very rarely the case -- most often you want your best and brightest designing and developing the new projects, and the others following up with support and maintenance, unless a big enough request comes along that needs those people. Plus, let's not forget that most systems will outlast their creators.

    @ammoQ said:

    Basically, the main difference is that the traditional project has design docs and the XP project has unit tests.

    "WTF?O." has addressed this pretty well -- all I'd like to add is that unit tests are useless to business analysts. If the client wants to change the workflow, it is the job of the BA to determine the business impact (e.g. if we add logic that says "all Texas policies will have a fire rider" -- what will happen to the rule that says "policies greater than $500,000 need a fire rider").

    Without design documents, this is impossible without going to a programmer to ask. And a programmer is going to feel about as comfortable answering that question as a BA would answering questions about code. Totally different domains of expertise.

    The unit tests are important too, and I see no reason why they shouldn't be there. I've used nUnit on a number of projects, big and small, but there is just no way someone can understand the system from looking at the tests.
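
    To illustrate (a hypothetical sketch, in JUnit syntax rather than nUnit): even when the $500,000 rule is pinned down by a test, nothing in the test tells a BA how a new "all Texas policies get a fire rider" rule would interact with it. That impact analysis lives in the functional docs, not here:

        import org.junit.Assert;
        import org.junit.Test;

        // Hypothetical domain model, just enough to express one rule.
        class Policy {
            final String state;
            final long amount;
            Policy(String state, long amount) { this.state = state; this.amount = amount; }
        }

        class PolicyRules {
            // "Policies greater than $500,000 need a fire rider."
            static boolean needsFireRider(Policy p) {
                return p.amount > 500000;
            }
        }

        public class FireRiderRuleTest {
            @Test
            public void policiesOver500kRequireFireRider() {
                Assert.assertTrue(PolicyRules.needsFireRider(new Policy("TX", 600000)));
            }
        }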

    @ammoQ said:

    Alex, what do you mean with (1M+)? One million LOC or more? One man-year or more?

    Sorry, forgot the $. With a development cost of $1,000,000 -- about 10k hours, or 5 man-years I suppose?



  • @Alex Papadimoulis said:

    @ammoQ said:

    In most projects I know, maintenance is done by members of the original development team.

    This is very rarely the case -- most often you want your best and brightest designing and developing the new projects, and the others following up with support and maintenance, unless a big enough request comes along that needs those people. Plus, let's not forget that most systems will outlast their creators.



    Depends on how much work maintenance is. I know a lot of cases where "the best and brightest" are supposed to work on new projects, while also doing the maintenance for old projects along the way. But I admit that most of my projects are relatively small, say 0.5-1 man year. My largest project so far is in the 10-15 man years range; currently I do the maintenance mostly by myself, but I have an excellent programmer on standby.

     all I'd like to add is that unit tests are useless to business analysts. If the client wants to change the workflow, it is the job of the BA to determine the business impact (e.g. if we add logic that says "all Texas policies will have a fire rider" -- what will happen to the rule that says "policies greater than $500,000 need a fire rider").

    I don't see unit tests as a replacement for docs or as the docs themselves. It's just something that I expect to have in an agile project. If anything at all can replace the design docs, it's well-written code. IMO, at that stage, unit tests are an insurance against unwanted side effects of changes.

    Regarding the workflow and the impact of changes: IMO this is something outside the scope of the IT system. It's not the concern of the programmer, the system architect or anyone else in the IT department how changes in the business rules affect the workflow. The IT department just has to make sure that those business rules, and any changes to them, are correctly implemented.


    Sorry, forgot the $. With a development cost of $1,000,000 -- about 10k hours, or 5 man-years I suppose?



    Here in Austria, it's maybe 7 man years or so, but the magnitude is the same.


  • @Dave Nicolette said:



    Let's face it: Traditional methods yield code like the WTFs we all enjoy laughing about on this site. I can't imagine anything like "To the Hexth Degree" or "Enterprise SQL" being produced by pair programming.


    I think it's fair to assume that most of these WTFs are not the result of traditional methods done right, but rather the result of either:
    - pure cowboy programming
    - kind-of-a-method, laxly enforced
    - the brainchild of a lone moron without review



  • @Dave Nicolette said:

    What this means is that documentation is not a deliverable in its own right unless the customer wants it. The thing to be delivered is the working software. Quite often there is user documentation to be delivered, as well. But of course a business customer isn't going to request technical documentation explicitly.
    [...]
    Second, development teams produce whatever interim artifacts they need in order to meet their goals. These may include test harnesses, test data, scripts, assorted utilities, and other items that won't be deployed into production. They will almost certainly include some documentation, too, and some of that documentation may be worth keeping for reference purposes. This is also true regardless of methodology.

    The point that Alex and I are making is that in order to maintain software, accurate detailed documentation *is* necessary. 100% absolutely-cannot-possibly-do-your-job-properly necessary. If you're going to change the way an object works, which is likely during maintenance, you need to know how that change will affect the rest of the program, and you can only ever discover that information with accurate detailed design documents. Even if you already "know" the whole system, odds are you're gonna forget that changing the way a method works will break other objects. Yes, your test suite should go "bing!" when you do that, but you don't want to make the change and *then* find out you broke stuff, especially if you have a circular relationship such that fixing the "bing!" only leads to another "bing!", and by the time you've followed the "bing!" path to its completion, you've messed up the whole program.

    But the more detailed the documentation, the more likely it is to get stale very quickly. High level documentation is more stable than detailed documentation simply because the architecture of a solution tends to change less radically than the detailed implementation. This is also true regardless of methodology.

    100% true. Detailed documentation *is* far more likely to go stale. It is the job of the effective programmer to ensure that detailed documentation is accurate after each change the programmer makes. Waving your hands and saying "documentation always lies" only makes excuses for bad programmers. *That* is the problem that most Agile proponents refuse to face.

    Look. The basic concepts behind Agile are good. Iterative processes are good. Being able to change based upon new realizations and/or client mood is good. But if you don't maintain your documentation accurately from requirements to design to code, you're setting whoever gets the job of maintaining your software up for many Maalox(tm) moments, and are likely creating many WTFs in the process. If you want to *really* be Agile, you must work in an iterative, waterfall fashion. Requirements. Design. Code. If you find an error, or realize something different at any step, go back to step one. Repeat until you are finished. That is the only "good" methodology. Call it Traditional, call it Agile, it's the only way to write software.



  • @Whiskey Tango Foxtrot Over said:

    The point that Alex and I are making is that in order to maintain
    software, accurate detailed documentation is necessary. 100%
    absolutely-cannot-possibly-do-your-job-properly necessary.


    I think it would be easier for all of us if you specify what "detailed documentation" means for you. I'm confident you don't mean the useless kind of docs an automated tool can extract from the source.


    If you're going to change the way an object works, which is likely during maintenance, you need to know how that change will affect the rest of the program, and you can only ever discover that information with accurate detailed design documents.

    The better the design, the easier this information is to get from the code. A change in one method should never have unexpected consequences throughout the whole system.


    Even if you already "know" the whole system, odds are you're gonna forget that changing the way a method works will break other objects. Yes, your test suite should go "bing!" when you do that, but you don't want to make the change and *then* find out you broke stuff, especially if you have a circular relationship such that fixing the "bing!" only leads to another "bing!", and by the time you've followed the "bing!" path to its completion, you've messed up the whole program.

    In an ideal world, you use source control. So, if all your changes lead to a circular "bing!" and it all becomes a big mess, better check out the last-known-good version and try again.


    It is the job of the effective programmer to ensure that detailed documentation is accurate after each change the programmer makes.

    We use CVS to make sure that if two people are concurrently making changes to the same source module, both changes (as long as they are not conflicting) are merged into a final version without manual work. Can we do the same with detailed documentation? If yes, which format should it have? Is the documentation tied to the CVS, so that if we have to branch the source, we also branch the documentation? There are a lot of tools to help us make good code (like debuggers, lint, unit tests, metrics etc.), but the quality of documentation relies completely on the discipline of the programmers. Assuming that the maintenance programmers are average or below, this is a dangerous dependency.



  • >brainchild of lone moron without review

    That one gets my vote. I think I said the same thing as you, but in a more long-winded way. With pair programming, there are no lone morons. ;-)




  • >
    The better the design, the easier this information is to get from the
    code. A change in one method should never have unexpected consequences
    throughout the whole system.


    Right...and that's exactly what refactoring buys you. Seems we don't disagree about fundamentals, only about word choices.


  • >It is the job of the effective programmer to
    ensure that detailed documentation is accurate after each change the
    programmer makes.

    Sounds more like the job of the effective documenter.

    >
    Waving your hands and saying
    "documentation always lies" only makes excuses for bad programmers.
    That is the problem that most Agile proponents refuse to face.

    The behavior you describe would certainly pose a problem, if it were real. But why do you think "most Agile proponents refuse to face" it? Your comments to date about agile methods indicate you've never actually used them, so where would you have come into contact with "most Agile proponents?"

    For example, you write:

    >
    If you want to really be Agile, you must work in an iterative, waterfall fashion.

    ...which proves you're missing something basic. All agile methods are iterative and some waterfall methods are iterative (e.g., RUP), but you can't be both agile and waterfall. It's a non sequitur.

    >
    That is the only "good" methodology.

    Ah, there's only one good methodology. Well, it sounds as if I was right when I wrote that some of the disagreements expressed here are a result of different people having different frames of reference. In my frame of reference, there's room for more than one good methodology for the same reason there's more than one type of screwdriver in my toolbox. That difference alone would lead the two of us to different conclusions.





  • ♿ (Parody)

    @ammoQ said:

    I think it would be easier for all of us if you specify what "detailed documentation" means for you. I'm confident you don't mean the useless kind of docs an automated tool can extract from the source.

    I'm referring to a functional specification for every piece of logic in the system. Each specification has a number (like Requirement #22.5.3a), and that number is what's used as the comment in the code (along with any other technical info helpful to understanding the code). Yes, this is a *lot* of documentation -- hundreds, even thousands of pages.

    This is more work and requires some strict rules to keep up to date, but the advantages are huge throughout the system's lifecycle. Simply put, no code can be changed unless the specifications change or new specs are added. Once the new specs come in, analysts develop new test cases while programmers make the required changes to the code. The system is tested against the new specs, and everything is kept up to date.
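
    A sketch of what that looks like at the code level (hypothetical classes; the requirement numbers follow the #22.5.3a style above, and the rules are the fire-rider examples from earlier in the thread):

        // Hypothetical illustration of spec-numbered comments in code.
        class Policy {
            final String state;
            final long amount;
            Policy(String state, long amount) { this.state = state; this.amount = amount; }
        }

        class FireRiderRules {
            // Req #22.5.3a: all Texas policies carry a fire rider.
            // Req #22.5.3b: policies greater than $500,000 carry a fire rider.
            // Do not change except against a revision of spec section 22.5.3.
            static boolean needsFireRider(Policy p) {
                return "TX".equals(p.state)   // #22.5.3a
                    || p.amount > 500000;     // #22.5.3b
            }
        }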

    Now, I have yet to see a system like that. I've seen some that came fairly close (1/2 of the modules were like that, the others were a mess). But that is the direction I believe we should be striving for, not away from it. We need tools to make this process less painful (perhaps some system that integrates code and documentation), not a process that avoids this documentation.



  • @Alex Papadimoulis said:

    @ammoQ said:

    I think it would
    be easier for all of us if you specify what "detailed documentation"
    means for you. I'm confident you don't mean the useless kind of docs an
    automated tool can extract from the source.

    I'm referring to a functional specification for every piece of logic in the system. Each specification has a number (like Requirement #22.5.3a), and that number is what's used as the comment in the code (along with any other technical info helpful to understanding the code). Yes, this is a *lot* of documentation -- hundreds, even thousands of pages.



    This sounds almost like the "stories" from XP, but I assume the difference is that you talk about technical specifications while the stories are functional requirements.

    This is more work and requires some strict rules to keep up to date, but the advantages are huge throughout the system's lifecycle. Simply put, no code can be changed unless the specifications change or new specs are added.

    What about bugfixes? (I know, in a perfect world, there are no bugs to fix.) Do you consider a trouble ticket ("crashes if clicked twice, shouldn't do that") a specification?


    Once the new specs come in, analysts develop new test cases while programmers make the required changes to the code. The system is tested against the new specs, and everything is kept up to date.


    Again, this is not too far from XP. Just call the spec "story", make the test cases automated and pair the analyst with the programmer.


    Now, I have yet to see a system like that. I've seen some that came fairly close (1/2 of the modules were like that, the others were a mess). But that is the direction I believe we should be striving for, not away from it. We need tools to make this process less painful (perhaps some system that integrates code and documentation), not a process that avoids this documentation.



    When it comes to the detailed technical documentation, I think Javadoc and the similar mechanisms in the .net languages are a good way to go. Easy to keep up-to-date, easy to use, revisions managed by the version control system. Of course there should be some higher-level abstractions of the overall structure, like diagrams, but these change less frequently.
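
    For instance, a minimal Javadoc sketch (hypothetical class and rate); the version control system then carries the comment and the code through every revision together:

        /** Hypothetical rating component, shown only to illustrate the Javadoc style. */
        public class FireRiderRating {

            /**
             * Calculates the yearly fire-rider surcharge for a policy.
             * Implements spec section 22.5.3 (fire rider eligibility).
             *
             * @param amount insured amount in dollars; must not be negative
             * @return the yearly surcharge in dollars
             * @throws IllegalArgumentException if {@code amount} is negative
             */
            public double surchargeFor(long amount) {
                if (amount < 0) throw new IllegalArgumentException("negative amount");
                return amount * 0.001; // hypothetical rate, stands in for the spec value
            }
        }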


  • @Dave Nicolette said:

    ammoQ wrote

    >Consider a successful company that reaches its limits with the old processes. [etc.]

    Point taken, but isn't the situation you describe a result of a long series of events rather than a single project?



    Implementing the system that runs the new distribution center is a single project. Within a few days, all the stock has to be moved from the old warehouse to the new one. From then on, all work has to be done on the new system. Most parts of the system must work correctly and perform well from day one; some parts (like inventory taking) can be delayed since they are not required immediately -- though inventory taking is required sooner than expected if other parts of the system fail. ;-)



  • Alex wrote about functional specifications and tracking them throughout the product lifecycle.



    One can cull a generalization from this - that the work we do is
    inherently complex and the methodologies we use require consistent,
    knowledgeable effort. The approach you describe to tracking
    requirements and managing change is probably the most common one across
    the industry. Like you, I have not seen it fully implemented or fully
    successful, but I have seen a number of organizations that make a
    pretty good stab at it. I have also noticed something about those
    organizations, though: Software development takes them a long time and
    there are a lot of administrative costs around project initiation and
    tracking. I think that is necessary when it brings business value, but
    is needless overhead if it doesn't bring business value. I know of several cases when a business unit decided not to pursue an IT project specifically because the project initiation costs were higher than the expected ROI of the final product. That sort of thing stagnates a business.



    The approach you describe is advisable for certain types of software
    development because it does bring business value. The example that
    springs to mind is product development, where you have a product or
    suite of products that is developed through a long series of releases
    over a long period of time, and may be sold commercially to a large
    customer base. In that context, the detailed, formal documentation is a
    business asset and the software development activity is a core business
    process.



    There is nothing in that model that precludes companies using lean
    development methods to streamline the process and minimize the amount
    of wasted effort. Process-centric approaches tend to grow by adding
    more and more checkpoints or handoffs and more and more "required"
    documents. Process improvement methods such as Lean Six Sigma and
    empirical process control methods such as the Toyota Production System
    can mitigate the built-in sluggishness of those approaches. RUP is a
    very mature and practical iterative development methodology that fits
    in well with process-centric (typically waterfall-like) development.



    Where some people miss the mark is in thinking that this model
    represents the only viable approach to developing software of all
    kinds. Not all companies that do software development work are in the
    software development business. Many companies do software development
    to support business operations that are unrelated to technology. A very
    common situation in that sort of company is the creation of a tactical,
    vertical, customer-facing business application. This is project work as
    opposed to product development work. Often these projects are
    characterized by an urgent need for rapid delivery combined with high
    uncertainty about detailed requirements at the outset. The process
    overhead of traditional approaches can simply be too slow to meet the
    rapid delivery requirement, and can impose such stringent change
    controls that it becomes too costly to adapt as the customer learns
    more about his needs in the course of development.


    This is a situation where agile methods deliver better value than
    traditional methods. In this case, the software development process
    itself is not a core business process of the enterprise, it is only a
    supporting function. Similarly, the documents the team produces to help
    themselves build the code are not business assets in their own right,
    they are merely means to an end.



    An example at our company was a certain application that a business
    unit wanted to put into the field to see if it could speed up the loan
    origination process for high-end recreational sales such as boats and
    RVs. Our in-house traditional development group presented a proposal to
    do the work in 10 months with a team of 20 people at a cost of about
    $850,000. Because of the proposed timeline, there was an opportunity
    cost associated with missing the first year's peak sales season,
    estimated by the customer at $2 million. Our agile development group
    won the bid and completed the work in 6 weeks with a team of 4 people
    at a cost of $73,000. Did we build an "enterprise" solution that would
    stand the test of time, like the Pyramids? No. But we delivered a
    working solution in time for the peak sales season. A year later, the
    business unit was able to assess the results and make a decision about
    whether it was worth revamping the solution to make it more robust and
    "enterprise-ish." They decided it wasn't worth it. There wasn't as much
    profit to be made in that market sector as they had guessed. In this
    case, then, using the agile approach yielded greater business value
    than the traditional approach in several respects: (a) Immediate ROI,
    (b) opportunity cost, and (c) not wasting a lot of time and money
    building the Pyramids when it ultimately turned out to be a short-term
    business opportunity anyway.



    I offer that example to demonstrate that we aren't automatically doing
    the right thing if we do everything "big" from Day One. There are many
    times when it makes good business sense to do things "small" at first
    and see how they're going to play out financially.



    For that reason, I disagree that there is exactly one "right" way to do
    software development. I think our goals as IT professionals must be
    aligned with our customers' goals, and their goals are not always the
    same. In some cases, we need to choose a
    methodology that delivers results quickly even in the face of uncertain
    requirements. This is very different from the circumstance when
    detailed documentation is a genuine business asset, and when strict
    process controls enhance the market viability of the product.



    As far as your original comment about cowboy coding is concerned, if
    you get an opportunity to work with an agile team yourself I think you
    will discover it's every bit as hard to do correctly as any other
    methodology.


  • ♿ (Parody)

    @ammoQ said:

    This sounds almost like the "stories" from XP, but I assume the difference is that you talk about technical specifications while the stories are functional requirements.

    These are largely different. Correct me if I'm wrong, but the Client is expected to develop user stories which are, for the most part, the system from the user's point of view, as far as what the system is supposed to do. Stories attempt to build the system from a "ground up" user POV, whereas functional requirements build the system from the "top down" systematic (how the system integrates within the existing workflow) POV.

     

    @ammoQ said:

    What about bugfixes? (I know, in a perfect world, there are no bugs to fix.) Do you consider a trouble ticket ("crashes if clicked twice, shouldn't do that") a specification?

    There are two categories of defects -- requirements problems (i.e., the requirement was incorrect) and coding problems (the requirement was programmed incorrectly). For the former, the specifications need to change (and the programmer now has something to work with). For the latter, the requirements are the same, but the code just doesn't comply with them.


    When it comes to the detailed technical documentation, I think Javadoc and the similar mechanisms in the .net languages are a good way to go ...

    I agree, this is a good start, but certainly not enough.



  • ammoQ,

    What you're looking at are two different approaches to solving the same problems. One approach depends on formal procedural steps, formal documentation (usually conforming to a standard format or template), and checkpoints or quality gates in which each stage of the work is handed off to the next group of specialists (requirements, architecture, design, coding, testing, deployment, support).

    The other approach deals with these issues differently, but it still must deal with the same issues.

    User stories in XP are not like formal specifications. They are only a one- or two-sentence statement of a requirement. The details of the requirement are elaborated when the story is played. Playing a story in XP involves all the work that takes place across the various stages of the waterfall in a traditional process, but only within the scope of the single story.

    There are also story cards for non-functional requirements. It would be hard to build a working software solution without them.

    I agree with your comment about javadoc and so forth. Tools have evolved to the point that the code itself can be largely self-describing. There remains a need for architectural diagrams, high-level design such as class models and data models, and summary narrative descriptions of purpose and of the rationale for particular implementation choices. When it comes to detailed technical documentation, a combination of things can provide more useful information than separate documentation:
    - (a) build code to well-known design patterns - new people will understand what they are looking at when they see pattern-based code;
    - (b) establish and follow an enterprise-wide data dictionary and common naming conventions;
    - (c) include comments in source code for anything that isn't obvious from looking at the code itself;
    - (d) use the full range of features offered by tools like javadoc to provide documentation at the package level and above, rather than just the default method and class descriptions;
    - (e) when you do create a separate document, check it into version control along with the code so that the next person doesn't have to go searching through disk archives or dusty storerooms looking for it or, worse, have to deal with a POS package like RequisitePro.
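
    On point (d), here is a sketch of package-level documentation, assuming Java 5's package-info.java mechanism (the package name and the rationale are invented):

        /**
         * Rating rules for policy riders (hypothetical package).
         * <p>
         * Classes in this package implement the rules from spec chapter 22.
         * They follow the Strategy pattern: one class per rider type, chosen
         * by a factory, so state-by-state rule changes stay isolated.
         * <p>
         * Rationale: rules change independently per state, so each rule sits
         * behind a common interface instead of growing switch statements.
         */
        package com.example.rating;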

    Ideally, everything you need to work on an application should come to you when you check out - code, tests, test data, examples, supporting scripts and utilities, and documents. One of the root causes of the problem Alex describes about tracking requirements is the fact that these various items are often stored separately from each other. That tends to lead to things getting out of sync.




  • Alex, re ground up vs top down...

    Seems like most systems are approached from both directions at once, regardless of the formal methodology used on the project. Agile projects start out with a general high-level design that represents a top-down view. It is not elaborated as fully as it would be with traditional methods before beginning development, but it is still a top-down view. Similarly, in traditional projects, developers usually build a lot of components from the bottom up even though the design approach is top-down. They will often build the foundational layers of code first. The same thing happens on agile projects and for the same reason - nothing will run without that foundational layer of code.

    When you look at the earned value chart from an agile project, you will typically see an S curve that shows relatively little business value is delivered early in the project, then a lot of value in the middle, then only a little value toward the end. The reason for the flat curve at the beginning of the project is that the team has to build a certain amount of foundational code and may have to do things like setting up servers and defining database schema and arranging for network security before they can implement any of the customer-requested functionality. Then the value curve rises sharply because of the agile principle to deliver features in the order prescribed by the customer, which usually means the features that  represent the greatest value to the customer are delivered first. Finally, the team gets around to building the lower-value features that were farther down the customer's priority list.

    This is a time when agile methods allow us to do something traditional methods can't, due to the way they are structured: We can terminate early with success. The customer may decide they have gained enough value and there is no need to build the remaining, low-priority features. I have had this happen sometimes on customer-facing projects.

    It doesn't happen on internal, technical-type projects because the value of those projects is calculated differently. They are usually cost-justified on the basis of reducing operating costs rather than on the basis of increasing revenue. Technical projects are usually funded from an IT budget pool rather than directly by a business unit, too. Business units don't want to fund enterprise projects such as SOA implementation because they feel like they're funding someone else's ROI. That's another difference between types of projects that might influence our choice of methodology.

    I mentioned the earned value chart - traditional projects usually don't track progress in that way, because they expect to deliver all the specified requirements at once; therefore, it doesn't matter in what order the features are built, and there is no need to pay attention to which features correspond to the most business value. That's okay for technical projects funded out of the IT budget pool. It's less okay for projects funded directly by business units, because they need to know the ROI.




  • @Alex Papadimoulis said:



    These are largely different. Correct me if I'm wrong, but the Client is expected to develop user stories which are, for the most part, the system from the user's point of view, as far as what the system is supposed to do. Stories attempt to build the system from a "ground up" user POV, whereas functional requirements build the system from the "top down" systematic (how the system integrates within the existing workflow) POV.

    @Dave Nicolette said:


    User stories in XP are not like formal specifications. They are only a one- or two-sentence statement of a requirement. The details of the requirement are elaborated when the story is played. Playing a story in XP involves all the work that takes place across the various stages of the waterfall in a traditional process, but only within the scope of the single story.


    This is true. TBH, I'm a bit sceptical about that part. For instance, in Wikipedia we find the following examples of user stories:
    Starting Application
    The application begins by bringing up the last document the user was working with.
    Closing Document
    If user closes the application, they are prompted to save.

    While those are valid requirements, I cannot imagine a team running through a full iteration cycle for just one of those cards, since it takes hardly more than 10 lines of code to implement them (assuming the methods for loading and saving the document are already implemented). At that level of detail, a system is likely to have hundreds or thousands of stories.
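
    For illustration, the "Closing Document" card above really is only a handful of lines (a hypothetical Swing sketch with a stubbed document model):

        import java.awt.event.WindowAdapter;
        import java.awt.event.WindowEvent;
        import javax.swing.JFrame;
        import javax.swing.JOptionPane;

        // Hypothetical editor shell, just big enough to implement the
        // "Closing Document" story above.
        public class EditorFrame extends JFrame {

            private boolean dirty = true; // stands in for a real document model

            public EditorFrame() {
                setDefaultCloseOperation(DO_NOTHING_ON_CLOSE);
                addWindowListener(new WindowAdapter() {
                    @Override
                    public void windowClosing(WindowEvent e) {
                        if (dirty) {
                            int choice = JOptionPane.showConfirmDialog(EditorFrame.this,
                                    "Save changes?", "Closing document",
                                    JOptionPane.YES_NO_CANCEL_OPTION);
                            if (choice == JOptionPane.CANCEL_OPTION) return; // stay open
                            if (choice == JOptionPane.YES_OPTION) saveDocument();
                        }
                        dispose();
                    }
                });
            }

            private void saveDocument() { /* hypothetical save */ }
        }
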
    On the other hand, if a user story goes like this:

    Order entry
    The user enters an order.

    It's hardly more than a title. Obviously not enough to start working on it, so the developers must request more detailed instructions from the customer. It's hard to imagine that such information is never written down.




  • ammoQ wrote,

    >I cannot imagine a team running
    through a full iteration cycle for just one of those cards, since it
    takes hardly more than 10 lines of code to implement them (assuming the
    methods for loading and saving the document are already implemented).

    I can't imagine that, either. There's a lot more rigor involved than you seem to realize.

    The statements on the card serve as a starting point for discussion. "Hardly more than a title" is okay. Remember that agile methods emphasize "individuals and interactions over processes and tools." The customer is present, as are team members having all the skillsets needed for the project. During the first half of the iteration planning meeting (IPM), the customer identifies the requirements he/she would like to be implemented next, in priority order. The team asks any questions they want to clarify the requirements. The team may recognize technical hurdles the customer's requirements present, and they can explain that to the customer. Collaboratively, they come up with some reasonable set of expectations for the iteration.

    During the IPM the description of the requirement is going to be expanded considerably beyond the statements on the story card. One difference between agile methods and traditional methods is that this elaboration is not done for the purpose of producing a comprehensive requirements specification document, but only to help the team build the actual code. For that reason, whatever the team members write down during the IPM may not be retained beyond the end of the iteration. It's only interim documentation.

    The second half of the IPM is the time when the team decomposes the stories into chunks of work, sometimes called "tasks" but sometimes the work is just broken up into additional stories. Technical dependencies are identified. For instance, if the methods for loading and saving the document are not already implemented, then that work is scoped out at this time.

    One goal is to decompose the work into chunks that are roughly the same difficulty. A common rule of thumb is that a story (having been sized and estimated) should take between 4 and 16 ideal hours to complete. Some stories will only take a few minutes to implement. Others may take days. The work needs to be organized into chunks that are roughly the same size. A number of trivial stories might be collected into one, and a very large story might be decomposed into smaller stories using a skill from the good old days, "functional decomposition."

    The reasons to break the work down into similar-sized chunks are (a) to make it easier for the customer to change priorities or substitute stories as needs change, and (b) to help with tracking progress. A popular tool for tracking progress is the burndown chart, a line graph showing time on the x axis and some unit of value on the y axis. The y axis could show story points, features delivered, or even the monetary value of the completed stories, depending on how progress is to be measured. If the stories were of wildly different sizes, the burndown chart could become meaningless. A team might deliver 45 tiny stories in one iteration and 1 giant story the next, and you wouldn't be able to tell what was going on by looking at the burndown chart.
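
    The arithmetic behind a burndown chart is trivial; here is a toy sketch with invented numbers, just to show why similar-sized stories keep the line meaningful:

        // Toy burndown arithmetic: remaining story points after each
        // iteration, i.e. the line the chart plots. All numbers invented.
        public class Burndown {
            public static void main(String[] args) {
                int remaining = 120;                               // story points at project start
                int[] donePerIteration = {18, 22, 20, 19, 21, 20}; // roughly even-sized chunks
                System.out.println("iteration 0: " + remaining + " points remaining");
                for (int i = 0; i < donePerIteration.length; i++) {
                    remaining -= donePerIteration[i];
                    System.out.println("iteration " + (i + 1) + ": " + remaining + " points remaining");
                }
            }
        }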

    All this activity is time-boxed. IPMs typically take 8 hours. The first half takes 4 hours, is led by the customer and supported by the team, and has specific goals as mentioned above. The second half takes 4 hours, is led by the team and supported by the customer, and has a different set of specific goals. All this estimation and analysis takes place quickly. The experience is very intense and interactive.  It's been mentioned on the thread already that agile development requires highly skilled team members. This is one of the reasons why. Junior programmers lack the experience to be able to make reasonable estimates that quickly.

    The resulting estimates are not set in
    stone; they are a best guess at a point in time. That's one of the
    reasons frequent and open communication among all stakeholders is so
    important when you use this approach. When things change, everyone
    needs to know why they are changing immediately so they can adapt to
    the change in the most appropriate way. It's a very exciting and rewarding way to work, IMO.

