Does Agile work for standard software development?



  • Hi all



    I work for a company which develops standard software for the travel
    industry. After several intense discussions about our development
    method, here's what's still bugging us (and what I'd like to hear
    other developers' thoughts on):



    Agile development methods suggest working in iterations, each
    iteration resulting in a work package which benefits the customer. The
    scope of each iteration is defined on short notice, reacting to
    requirement changes and the like. If an overall schedule cannot be
    met, the customer still gets "most" of what was planned, and the
    remaining iterations are delivered either as a small upgrade package
    or in the next release.



    We get our scope for the next release from customer demands, in-house
    ideas, etc. Most requests from external sources are not defined in
    detail, and the requestor is usually not available to specify the
    requirements further. Nevertheless, our customers ask for a delivery
    date for the new version along with a list of new or improved features.



    Our current process is that we make an overall effort estimate for all
    requirements, based on assumptions and previous, similar tasks. From
    this estimate, the delivery date is derived and FIXED! We all know that
    this delivery date is a mere guess, but the customers want a delivery
    date (communicated as soon as it is known) and rely on it (e.g. in
    order to plan the migration).



    How do we manage the development process to

    1. meet the delivery date if at all possible
    2. provide our management with reliable data on our progress



      Item (2) is important so our management can react quickly if the delivery date is in danger.



      After all the blah, here's the thing:

      If we develop in agile iterations and define WHAT we're going to do in
      the next iterations only on short notice, we can easily react to
      changes, but we can't say much about overall progress, as we don't
      have an "Iteration Master Plan".

      If we develop in a kind of waterfall, maybe doing iterations but
      essentially planning ALL iterations beforehand, we can easily report
      our progress, but we can't easily handle unexpected issues (which
      always arise).

      Either way, on the delivery date we need to fulfill all requirements;
      there's no "we'll deliver that feature in two weeks" option (deployment
      of our software is expensive).



      How do we best plan and manage our development work?



  • ♿ (Parody)

    I believe that it's easiest to put "agile development" into perspective when we refer to it by its name from yesteryear: Cowboy Coding. The fact of the matter is, "agile development" does not work and cannot work. You've identified one of the many problems (a complete inability to report progress and to plan), but there are many more. Two of the deal-killers are:

    • Agile requires strong developers. After seeing the stuff I post each day, does anyone believe that it's possible to have such a "dream team" of developers?
    • The primary goal of Agile is to deliver a product that meets the requirements. If you were to buy a car that just did that, you'd be buying a lemon. It's all about the maintenance (75%-95% of the TCO of software is maintenance). When *all* the pieces of the puzzle aren't considered from the get-go, it's impossible to design an architecture that will work.

    Another way to get a clear head and avoid this nonsense is to use the weight-loss analogy. The only way to lose weight is to have a greater burn rate than intake rate. Any other method of weight loss (the cabbage diet, the peanut butter diet, the remove-extraneous-organs diet) is a "fad" diet -- yes, it gets the job done for the first week, but it's impossible to maintain. When you go with an "Agile" method of development, you'll pay dearly later.



    Heh heh. I mostly agree with Alex's reply. Having worked at a company with no formal methodology that does something similar to 'agile', here is a quick thought about what Agile may really be:

    The success of Agile Development stems from having no alternative: You have customers who think they are professional software designers and architects, and you have to make them happy by giving them lots of control and doing exactly what they say, instead of solving the problem from your own experience. The only way to achieve this is to convince them to act in a framework of constant feedback and critical thinking, so they can see their own design mistakes quickly and agree to correct them. I have first-hand experience working this way. It sounds utopian when phrased the right way, but the basis of it is that the market for custom programming is filled with ignorant control freaks.

    This means that you can't build software in a way that doesn't provide instant feedback; otherwise, it will take forever to iterate ideas with the clients, and the clients will spend more time tapping their fingers and less time feeling like geniuses for realizing that the logo SHOULD go on the top left after all. That is why tools like Rails are especially handy to 'Agile' developers.



  • Alex, quamaretto



    So what you're saying is that the entire release should be planned and
    managed according to the waterfall approach, with a short initial
    period of ruling out uncertainties, then the classic OOA/OOD/OOP?

    How would you handle late requests? How do you handle the situation
    where you show a demo version of the new release at an industry fair
    and customers give you valuable feedback which you want to integrate
    ASAP?

    One of the things I do like about agile is that it expects changes
    throughout the development process. If I do waterfall, obviously I can
    squeeze in some additional features at a late stage, but they won't get
    the full OOA/OOD because that's already been done. If I do iterations,
    the new requirements will get the same design attention as all the
    others.


    I'm tempted to say that if you are willing to accept requirement
    changes (or new requirements) during development, you simply CAN'T
    report progress, because you never know for sure what 100% actually is.


    Thanks for the insights; I still hope to get some more feedback ... this seems to me a major issue for SW managers.


  • ♿ (Parody)

    Simon,
    It's unreasonable to require that no variation from the original specifications is permitted post-analysis. The difference between traditional (waterfall, spiral) methodologies and these new "agile" methodologies is how changes are handled: as exceptional versus as expected.

    When building anything, change can be detrimental. Just imagine how impractical it would be to add a basement to new construction once the framing is up. The same is true with software; if your software is designed to manage widgets, but halfway through you want it to manage sprockets as well, you're going to be in the same trouble.

    Traditional methodologies have all post-analysis changes go through a change control process of some sort that assesses the feasibility and additional cost of the change. Some changes will never see the light of day (think of the basement request), while others are perfectly reasonable midway (wiring the laundry room on the second floor).

    Agile methodologies have minimal up-front analysis, meaning no "details" are worked out in the beginning. But the devil is always in the "details" (sorry for the pun), and the client is not qualified to understand what's a detail and what isn't. Either way, the developers are committed to implement whatever is requested, regardless of whether the client anticipated it or not. This leads to major hacks and an *extremely* costly-to-maintain system.

    Are you a software manager, or is your company looking to mature its processes? My company specializes in helping organizations build effective and maintainable processes for software development.




  • Hi Alex



    I'm a software developer, but the development department I'm working
    in is so small (3-5 people) that management issues are a common topic.



    We do have software development processes in place, but we aren't
    really happy with how planning relates to actual progress. We've had
    quite a few discussions in the team on how to plan and manage a SW
    project and feel pretty comfortable with most issues. One of the main
    issues where we just can't seem to work out a "best practice" (my
    consultant years showing through) is this gap between developing
    iteratively (as seems to be the modern way to go) and thus planning
    mostly short-term, versus developing traditionally and thus handling
    changes as exceptions.



    From your posts, I gather that despite the "agile hype", you prefer
    traditional methods and tend to treat changes as exceptions, doing an
    impact assessment and implementing only those changes which "fit in".



    That's definitely an interesting statement for me; I assumed that the
    vast majority of SW development was being done "agile" these days.



    Thanks for your offer to help our company; I guess we're doing OK for
    now, despite the discussion. Some of it is actually on an almost
    "philosophical" level, and our daily development is going well.
    (Nevertheless, do you actually do work in Switzerland?)



    Thanks all

    Simon


  • ♿ (Parody)

    There's a significant gap between the hype and reality.

    You will find that many organizations claim to be actively doing "agile" development but are really just shoving the development team into a conference room and handing them the requirements. With the lack of accountability that "agile" provides, I don't think it's even possible to do it "properly" in a larger organization.

    I've found that the best practice for a situation such as yours is a "team-oriented" process. Meaning, you develop and improve your process based on the strengths & weaknesses of the individuals on your team. I think the natural tendency will be to have an intertwined requirements/development process, mostly because small organizations don't have the clout to bill for analysis separately from development.

    And no -- I'm based out of the States. I have yet to see a client from out of the country, but I'd definitely love the opportunity, especially in a "fun" place. Maybe I should try marketing across the pond ;-).



    Having developed an application with a team of 3 programmers using agile development, I must say that it was stressful (it was our first major project), but HIGHLY effective in our case.

    It was for a game company; we were contracted to create MATEU - a Model / Animation / Texture Export Utility. (The client hated the name, but hey, who cares - he could call it what he liked once he got it.)

    We found the ONLY way agile works is when the project can be broken down into parts that can ACTUALLY be developed and implemented AT LEAST fortnightly. Our project required conversions of file types left, right, and center, so each of these could be prepared separately from the rest, and the client was able to use them as we developed them.

    We found it worked really well in the end, and as the client was also a developer, the constant stream of updates and usable solutions assisted THEIR development cycle.

    In reality, you have to consider your choice of development cycle the way you would the language you write in. Just because your current language is what you're used to doesn't mean it's the best choice for that particular job. Every language has limitations, and knowing what they are and the impact they will have on your project is what makes a developer all the more valuable.


    If I had 2 cents, that's what they would be.



  • Sao,



    I believe the most important point here is that your customer was
    itself integrating your deliverable into its product, whereas we
    deliver to customers who want the finished thing. Also, maybe even more
    importantly, deployment of your deliverable was virtually without cost.

    I'm coming to believe that if you plan to deliver an ENTIRELY COMPLETED
    SOFTWARE PACKAGE to the customer (which means that the results of any
    iteration would only be used in-house, for testing and the like) and
    DEPLOYMENT of that software IS A COST FACTOR, there's no point in agile
    development.

    What I'm having a hard time with is that the above two conditions must
    apply to a great number of software projects, yet from the literature
    it seems the entire SW community is doing agile. As Alex already
    pointed out, this seems to simply be a misconception.






    Perhaps the client is not who you think it is. Are you developing a system to be deployed to several 'clients' for their use? Is it your company that decides what is in the next release? At my company, we focus on US as the client for the purposes of this style of project. The team reports to senior management as the client. Why are small internal iterations a bad thing? Maybe a strict Agile styling would be detrimental, but a hybrid will suffice.

    Maybe that's the model you already use, and therefore you don't think of it as Agile. I think the most important point is to gather all the information you can on Agile and its processes; then, if you're not keen on being 'strictly by the book', just consider which principles you could include in your current dev cycle.

    On the other hand, if your company decides release objectives, then aren't you developing a product for yourselves, to be supplied to others? Would this perspective alter anything?


    I had a teacher who was THE greatest coder I have ever met; I might try to chase him up and see if he has any input.



    The agile process calls for a Release Plan and Iteration Plans.

    The Release Plan and the first IPM (Iteration Planning Meeting) happen in iteration 0 (aka i0).

    The plan calls for an exhaustive list of stories as known at that time.

    All stories are compared to a "known" story from the past. This is an 'order of magnitude' comparison: new stories that are about the same amount of work get a 2, half the size get a 1, twice as big get a 4, and huge ones get an 8.

    These are called story points. Add up all the story points and you get the total release story points.

    Divide this total by your team's velocity to get the number of iterations needed, then multiply by the iteration length. This gives you your projected (forecast) delivery. It is what it is, and you report it in the Release Plan. Then, after each iteration, you compare actual to plan in your IPM and forecast hitting the target, plus or minus some number of days.

    Your team's velocity is how many story points you have been completing in previous iterations. You keep a chart of this and update it during your IPM session. A team that is working well together will see a consistent velocity. It often creeps up (a good thing), but expect it to vary by some number of SPs each iteration.
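
    As a rough sketch of that arithmetic in Python (the backlog, velocity,
    and start date below are invented for illustration, not taken from any
    real project):

        from datetime import date, timedelta

        # Hypothetical backlog, sized against a known reference story:
        # same size = 2, half = 1, twice = 4, huge = 8 (the scheme above).
        story_points = [2, 1, 4, 2, 8, 2, 1, 4]
        total = sum(story_points)          # total release story points

        velocity = 6            # points completed per iteration (from history)
        iteration_days = 14     # two-week iterations

        iterations_needed = -(-total // velocity)   # ceiling division
        start = date(2006, 1, 9)                    # assumed start of i1
        forecast = start + timedelta(days=iterations_needed * iteration_days)

        print(f"{total} points at velocity {velocity}: {iterations_needed} "
              f"iterations, forecast delivery {forecast}")

    Re-running this after each iteration with the team's actual velocity
    shows how many days ahead of or behind the target you are tracking.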

    We are a large shop and find it works great. We do have a need for change control when a change impacts scope, resources, or delivery dates; otherwise, we encourage continual refinement of the specs.

    There was a comment that this does not work because dev just codes to spec. That is actually the classic complaint about waterfall. With short iterations (1 or 2 weeks), you are constantly adjusting the steering of the project. During each iteration, the business analyst should be drawing the client/customer into the dev shop to show them the results (user interfaces) and observe their reactions. This gives the team and the BA the users' expectations. The project should always be measured against expectations, not specs. That way you adjust the solution to what they expect.



  • Simon,



    I'm sorry to see you haven't received many serious replies. Every
    mention of the word "agile" seems to set off a flurry of
    quasi-religious posts.



    When you say "standard software", it sounds as if you mean you are
    developing a commercial product. Agile development methods usually work
    best for one-off projects that have a single customer or user group.
    Once development is done and the solution is deployed, that's the end
    of it.



    For commercial product development, you are really dealing with an
    ongoing release cycle and incremental improvement of a product over
    time. In that situation, a more traditional management structure would
    probably give you the kind of controls and tracking you're looking for.




    Regarding iterations - that's not the defining characteristic of agile
    methods, although all agile methods are iterative. There are iterative
    methodologies that fit very well with a product development scenario.
    The best known (and probably most effective) one is RUP (Rational
    Unified Process). A lot of commercial software companies use it. It's
    also flexible and customizable, so you can tailor it to your needs
    pretty well.



    There are always unexpected issues. Different methodologies deal with
    that in different ways. The fact agile methods are touted to handle
    change doesn't mean there's no other way to handle change. RUP
    iterations are longer than, say, Scrum sprints or XP iterations. That's
    because each RUP iteration contains four little waterfall phases. That
    gives you a high degree of traditional-style process control, but the
    fact RUP is iterative also gives you more than one opportunity to
    manage change control between product releases.



    If you want to minimize unproductive process overhead while retaining a
    generally traditional project model, you might check into lean
    development methods as opposed to purely agile ones. Lean development
    basically tries to eliminate useless procedural overhead from a
    traditional project process. Based on your brief explanation, it sounds
    like that might be a better fit for your situation than agile
    development.




    Incorporating SOME agile elements can work fine, but don't buy into the entire bandwagon or you're bound to fail in all but the smallest projects.

    Waterfall I'd not recommend except for extremely static environments (maintenance projects on mainframe applications with release cycles of half a year or more); something like UP works a lot better. It has the agile system of shorter feedback loops without the problem of there being no design and no division of responsibility.

    How do we manage the development process to
    1) meet the delivery date if at all possible

    By setting a realistic release schedule, not by releasing whatever you have at the time the deadline approaches.

    2) provide our management with reliable data on our progress

    By keeping track of progress, something agile methods prohibit (they basically scrap all tracking and tracing as part of their drive to be "flexible").

    What we do is create a list of modules. Each module gets a booking code and a time estimate.
    Hours get booked against it, to track whether our planning was accurate and to check on budget status.
    A large sheet is printed with every module and boxes for each phase of development (FA, TA, implementation, testing, signoff), as well as a priority number. When a phase is complete, it's marked off on that list, which hangs in a central location. When all phases are complete, the module is highlighted with a fluorescent marker.
    That way it's easy to see how far along you are in the project. Of course it's not perfect (some days you may program 5 small modules while a massive one may take 2 months), but it's a nice thing for managers to see the list growing smaller and smaller. That's why we divide up the small modules and do a few of those whenever there's been no visible progress for a while; keeps the managers happy :).
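
    In code, that wall sheet might look something like this minimal Python
    sketch (the module names and their completion states are invented):

        # One row per module, one box per phase; a fully completed module
        # gets the "fluorescent marker".
        PHASES = ["FA", "TA", "implementation", "testing", "signoff"]

        modules = {
            "booking-import": {"FA", "TA", "implementation"},
            "rate-engine":    {"FA"},
            "user-admin":     set(PHASES),
        }

        for name, done in modules.items():
            boxes = " ".join(f"[{'x' if p in done else ' '}] {p}" for p in PHASES)
            marker = "  <== DONE" if done == set(PHASES) else ""
            print(f"{name:15} {boxes}{marker}")

        finished = sum(1 for d in modules.values() if d == set(PHASES))
        print(f"{finished}/{len(modules)} modules complete")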



  • I don't think agile processes work well for standard software
    development.  The assumed flexibility of agile processes buys you
    nothing if you have to make many customers happy with the same product.



  • Simon,



    I see that you are receiving two types of replies - those that are
    using your question as a springboard to express subjective opinions
    about agile methods, and those that offer well-intentioned advice but
    cannot possibly have a thorough understanding of your company's
    situation.



    If you are trying to get information on which to base business
    decisions, I think you need objective advice based on a good
    understanding of your particular circumstances. Your profile states you
    are located in Switzerland. Maybe it would be convenient for you to
    contact Laurent Bossavit in Paris. He is a very well qualified,
    professional, and objective consultant who has experience in a variety
    of methods. His website is bossavit.com.



  • @Alex Papadimoulis said:

    I believe that it's easiest to put "agile development" into perspective when we refer to it by its name from yesteryear: Cowboy Coding. The fact of the matter is, "agile development" does not work and cannot work.



    I felt I had to reply when I read this, as it is outright wrong. Agile is not the best fit for everyone, but I have had the pleasure of working in agile environments at several companies, as well as with traditional waterfall and RUP, and IMHO agile is a great way to go.

    However, this is subjective; it really depends on the team, the people, and how well it is implemented. Agile is not "cowboy coding" (in fact, if you ask me, waterfall is!), but rather just best practices that have been around for ages, branded under a shiny new name. TDD ensures that you build code that is well tested; having the client write FIT tests ensures you implement requirements correctly. Pair programming helps share knowledge, so that if your resident expert gets hit by a bus tomorrow, things still go smoothly. Not only that, but while pairing, novices can learn the system quickly, and experts (not experts per se, but experts on the software system) can help each other make better design decisions while coding.

    Having worked through several Death March projects as well as on agile teams, it is my opinion that agile is the best way to develop software. On all the agile teams I have been on, we always met the release date, and not only that, but we had a workable system the client could use before the release date. I have never been stressed out while on XP teams, and I have always been able to understand the entire system being developed very quickly (it's great when you can write working code for a system on your first day).

    I'd suggest doing research and subscribing to the XP mailing list on Yahoo Groups to seriously learn about XP, rather than listening to subjective nonsense. There seems to be a lot of hate for agile methods like XP from people who have never tried them for a 12-week period, and from experience I have seen that companies who have tried it for 12 weeks have never gone back to anything else.
    You should also check out the many great books out there on XP and software development: Extreme Programming Installed, Extreme Programming Explained, and Planning Extreme Programming. Agile Software Development by Craig Larman is also an excellent (and short) read. Finally, I would suggest "Ship It!" by Jared Richardson and William Gwaltney.

    Like I say about everything: although Agile works for A LOT of people, Your Mileage May Vary. ;)





  • jamesCarr


    How did you control your agile projects? How did you report progress,
    and how did management know if the release date was in danger?


    Simon


  • ♿ (Parody)

    However, this is subjective; [Agile] really depends on the team, the people, and how well it is implemented ... Agile works for A LOT of people

    And therein lies the problem. A "lot" is a very relative term; do you mean 100 people? 1,000? The data (yes, I've done my research) show that it works in a small percentage of situations. This is because, as you said, it depends on having a good team and a good environment.

    In the real world, the average programmer is not "good" (that's why he's called "average"), nor is the environment, nor is the team. The overwhelming majority of "things" (this is a statistics rule) are either "average" or "below average". That's a world that you (plural, as in agile fanatics) don't live in. You live in the "fantasy world" of IT, where refrigerators are filled with free Jolt cola and interviewers screen candidates with stupid questions like "how do they put the M on M&M's?"

    The development methodologies you so callously toss aside were built and perfected to work in this real world: the world with mediocre programmers and dreadful corporate environments. When one takes away all these "bad things," any hack can create a process that will work. And wouldja know it, a bunch of them did!

    I've read enough of them (from the "original" eXtreme Programming to some of the more "refined" Scrum books) to get the gist. The books are 75% fluff (hard to find a trade book that isn't these days) and could be compressed to a thesis that, if submitted, would get the graduate student laughed out of his master's program.

    They're nothing more than anecdotes and speculation; they're not built upon any known foundation (waterfall, spiral, etc.), nor do they take into account centuries of knowledge about human nature (back to that whole "average" thing, but it's a lot more than that). As I mentioned before, they're processes built from scratch on painfully incorrect assumptions.

    That said, yes, "agile" will work for some (the "good" teams), but then again, so would most processes. When you have strong people and a strong team, it's pretty hard to mess up. I will reiterate why Agile, as a whole, does *not* work:

    * it mandates that you have good people and a good environment
    * it does not take into account the long-term maintenance of the software it builds



  • @Alex Papadimoulis said:

    In the real world, the average programmer is not "good" (that's why he's called "average"), nor is the environment, nor is the team. The overwhelming majority of "things" (this is a statistics rule) are either "average" or "below average".

    Really? Given that in most categories there are very few "very good" and significantly more "very bad" things, and most other things fall in between, I would have expected the majority of things to be "slightly above average", just like the majority of humans have more than the average number of legs.


    * it mandates that you have good people and a good environment


    There is much truth in this argument, but even good people will eventually write bad programs without a process to guide them. And honestly, I can't see how a team without good people could ever build good software with any process.


    * it does not take into account the long-term maintenance of the software it builds



    This is not true. XP cares a lot about code quality, readability and simplicity. After all, the main point about XP is "making changes possible (at reasonable costs)".



  • @sniederb said:

    jamesCarr


    How did you control your agile projects? How did you report progress,
    and how did management know if the release date was in danger?


    Simon





    Well, what usually happens is that we have features that must be
    implemented, and at the beginning of each iteration our team sits down
    and talks about what needs to be done to implement the feature(s) for
    that iteration. Each small task is written on a card and sized by how
    many units it will take (units vary from company to company; for us,
    one unit is one 90-minute session). Tasks are kept small; usually the
    largest is about 4 units, and if it's larger, the card is broken up.
    The units for each task are summed, and that sum is the estimated
    number of points we should complete during the iteration (which for
    us is 2 weeks).



    If 2 iterations pass with the actual points completed not meeting the
    estimate, it suggests that a problem exists somewhere: either wrong
    estimates or code that needs to be refactored to make changes easier.
    The points completed also help measure the velocity of our team.



    Hope that helps,

    JC


  • ♿ (Parody)

    @ammoQ said:

    Really? Given that in most categories there are very few "very good" and significantly more "very bad" things, and most other things fall in between, I would have expected the majority of things to be "slightly above average", just like the majority of humans have more than the average number of legs.

    The "number of legs" does not apply here at all -- that's a statistical "trick" because no one has more than two legs. All it takes is one to drag down the average to 1.999999999999999999999999999.

    In the case of talent, it's all about the bell curve ...

    [picture of a bell curve]

    ... take everything to the left of +1 SD, and you now have an overwhelming majority.
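
    As a numerical check on that claim (a quick Python sketch using the
    standard normal CDF; note the reply further down makes the symmetric
    point in the other direction):

        from math import erf, sqrt

        def phi(z):
            # Standard normal CDF: share of a population below z SDs from the mean.
            return 0.5 * (1 + erf(z / sqrt(2)))

        print(f"below +1 SD: {phi(1):.4f}")       # ~0.8413
        print(f"above -1 SD: {1 - phi(-1):.4f}")  # ~0.8413, by symmetry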

    @ammoQ said:

    I can't see how a team without good people could ever build good software with any process.

    If they can't, then they aren't "good". "Good" is not how much C++ you know, but how well you develop software. A good team will naturally have experience and know what works (requirements, testing, etc.) and what doesn't (just blindly hacking away).

    @ammoQ said:

    This is not true. XP cares a lot about code quality, readability and simplicity. After all, the main point about XP is "making changes possible (at reasonable costs)".

    I've discussed this in a number of previous posts, but the key problem with these methods is that change is an expectation, not an exception. This leads to your initial design documents (for the less liberal agile methods that actually allow documentation) being inaccurate to the point of uselessness.

    Poor design docs lead to a situation where coders have to reverse engineer the system in order to maintain it. This is bad news for the system and leads to painful code decay (and absurd maintenance costs). This is just one of the many reasons that the "traditional" methodologies mandate strong documentation before developing and a "painful" change control process that involves updating the documentation.

    Yes, agile might get you there faster. But the tortoise will win the race with a lot less effort.



  • @Alex Papadimoulis said:

    In the case of talent, it's all about the bell curve ...


    So there's a genius for every idiot out there? Where are all those geniuses hiding? ;-)


    I've discussed this in a number of previous posts, but the key problem with these methods is that change is an expectation, not an exception. This leads to your initial design documents (for the less liberal agile methods that actually allow documentation) being inaccurate to the point of uselessness.

    Poor design docs lead to a situation where coders have to reverse engineer the system in order to maintain it.


    The idea of XP is that the code is simple and readable, so it isn't harder to read than documentation. The code covers the given stories and nothing else.


    This is bad news for the system and leads to painful code decay (and absurd maintenance costs).


    The idea of XP is that code decay is avoided through refactoring. The promise of XP is a low cost of change, because the code is simple and readable, and because the obligatory unit tests make sure changes don't break anything. If you take away the parts of XP that make low maintenance costs possible, it's no longer an agile method, just cowboy programming.
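
    To make that concrete, here is a minimal, invented Python example: the
    unit test pins down the observable behavior, so a refactoring can be
    verified mechanically instead of by re-reading the code:

        import unittest

        def total_fare(legs):
            # Original implementation: explicit accumulation.
            total = 0
            for price in legs:
                total += price
            return total

        def total_fare_refactored(legs):
            # Refactored implementation: same observable behavior, simpler code.
            return sum(legs)

        class TotalFareTest(unittest.TestCase):
            def test_refactoring_preserves_behavior(self):
                for legs in ([], [100], [100, 250, 75]):
                    self.assertEqual(total_fare(legs), total_fare_refactored(legs))

        if __name__ == "__main__":
            unittest.main()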


    This is just one of the many reasons that the "traditional" methodologies mandate strong documentation before developing and a "painful" change control process that involves updating the documentation.

    If documentation contains as many details as the code, it isn't easier to read. It's just as difficult to maintain, and since there is no way to "test" it the way you test code, it's likely to contain just as many bugs as an equally sized piece of untested code.




  • Your perception of agile development is understandable, and far from
    unique. Gartner reports that the average failure rate for organizations
    trying to adopt new technologies or new methods is 55%. There's no
    reason to assume agile methods would somehow be immune from the typical
    failure rate. Quite the contrary, when you consider the depth of
    mindset change and the extent of organizational culture change required
    to make agile methods work effectively, not to mention the extreme
    level of individual professional discipline required, I would be
    surprised if any organization's initial efforts reach even an average
    level of success.



    The fact that we as a profession have come to accept not having "good
    people and a good environment" as normal reflects a deeper problem
    than anything inherent in agile methods, or in any other methods. I
    certainly agree with you that this is the norm today, but I prefer not
    to sit still for it. Because of the extensive reading you have done, I
    can see that your doubts about the effectiveness of agile development
    are based on a careful and intelligent assessment of information. I
    have to say that is <a
    href="http://www.davenicolette.net/agile/index.blog?entry_id=1432837">the
    right reason</a> to doubt any new idea, and I applaud you for it.



    However, your argument is weakened by some "painfully incorrect
    assumptions" of its own. The "known foundation" on which traditional
    software development methods rest is, basically, the principles of
    assembly-line manufacturing. Once the kinks have been ironed out of a
    repeating process, the process can be run more or less unattended, and
    can be guaranteed to produce widgets that are identical within whatever
    tolerances are required. This can work in software development
    operations that are highly predictable, although of course we don't
    actually create exactly the same "widget" over and over again in the
    software industry. However, reduction of variation can apply to
    engineering practices, QA practices, and so forth. This is how
    statistical control methods like Six Sigma are applied to software
    development operations.



    The "known foundation" you feel is absent from agile methods also comes
    from the manufacturing sector. It is called "empirical process
    control." This is a method that applies to manufacturing processes that
    are subject to minor failures or differences all along the way. In
    manufacturing this is usually a result of the sheer complexity of the
    item being manufactured. In software, it's a consequence of the fact
    that customers really cannot specify their exact requirements in detail
    in advance; they have to discover their needs as they begin to work
    with partial solutions. In either case, empirical process control deals
    with unpredictability by introducing a cycle of inspection and
    adaptation at the relevant points. This approach matches up nicely with
    the realities of software development in many situations.



    You also mention agile methods don't take into account the long-term
    maintenance of software. Isn't it the case that no development
    methodology deals with production support or ongoing maintenance? It
    isn't within the scope of "development." Even if this particular point
    were correct, it would not be a differentiating factor between agile
    and traditional methods.



    Agile development methodologies are concerned with development and not
    maintenance. Even so, when agile methods are applied correctly, one of
    the results of any development effort is a comprehensive set of
    executable test cases at the unit, integration, and functional levels.
    At the functional test level, this amounts to a regression test suite.
    In our company, we have noted a significant reduction in the cost of
    production issue resolution owing to the fact the support people can
    run the test suite before and after they work on the code, giving them
    a good starting point and leaving them with a clean result afterwards.
    Similarly, an enhancement project begins with a complete test suite
    that is checked out along with the source, giving them a solid starting
    point for their analysis and modification of the application. Assuming
    they also follow agile methods in their project, the degradation of
    code quality is much slower than the traditional norm. Agile methods
    have not been in use on a large scale long enough to demonstrate just
    how much slower.



    The difficulty is not that agile methods are only a bunch of fluff, but
    that agile development is one of those things that's easy to talk about
    but hard to execute. It takes a lot of professional discipline. Another
    inhibitor is the entrenched mindset and culture of most large IT
    organizations.



    When it comes to larger projects, the key is portfolio management.
    Large initiatives can be decomposed into smaller projects, some of
    which may be amenable to agile methods. A thousand developers on a
    single project is really not manageable no matter what methodology you
    use. We in our profession have settled into a comfort zone with
    traditional project management processes because they are so good at
    hiding unpleasant truths about the status of a large project. One of
    the key concepts of agile methods is transparency. The root cause of
    every problem is exposed. You can imagine how the majority of
    traditional project managers feel about that. ;-)



    On a personal note, it's really amusing to read you insisting, time and
    again, that agile methods don't work when we have been using them
    successfully for the past three years, and they are being adopted by
    more and more companies every day. If the Standish Group's Chaos Report
    is any guide, it's the traditional approach to software development
    that doesn't work...and never has. A predictive manufacturing process
    is usually deemed adequate if the widgets come off the line with 99.9%
    acceptable quality. A quality level of 95% would be considered a major
    problem. In contrast, Standish reports that only about 29% of
    traditional software projects meet their stated goals on budget and on
    time. If this is the benchmark against which agile methods are to prove
    themselves, it is surely not much of a challenge. One could achieve 50%
    success by pure random chance. That suggests pure random chance is a
    better methodology than any "proven" traditional software development
    process.




  • @Alex Papadimoulis said:

    ... take everything to the left of +1 SD, and you now have an overwhelming majority.

    That's just ridiculous.  Take everything to the right of -1 SD and you also have an overwhelming majority.  Assuming a symmetric bell curve, the number of things which are better than average is exactly equal to the number of things that are worse than average.

    @Alex Papadimoulis said:
    If they can't, then they aren't "good". "Good" is not how much C++ you know, but how well you develop software. A good team will naturally have experience and know what works (requirements, testing, etc.) and what doesn't (just blindly hacking away).

    You're right.  Bad programmers are those who produce bad code.  No matter the methodology, you will not get good code from bad programmers.  So why choose a methodology based on getting the best from the worst?  Isn't it more productive to get the best from the best?

    You also seem to throw some serious straw-man attacks at agile methods whenever you speak against them.  If you think agile methods are "just blindly hacking away", then I'd say you don't know what agile methods are.

    @Alex Papadimoulis said:

    I've discussed this in a number of previous posts, but the key problem with these methods is that change is an expectation, not an exception. This leads to your initial design documents (for the less liberal agile methods that actually allow documentation) being inaccurate to the point of uselessness.

    One of the fundamental ideas behind agile methods is that change will happen.  You said it yourself: "It's unreasonable to require that no variation from the original specifications is permitted post-analysis."  Doesn't it make more sense to expect change than to pretend that it won't happen and then force it to have a huge cost?  Change will happen.  It only makes sense to reduce its cost.

    I'm also forced to wonder how big-design-up-front fixes the documentation issue.  If, as you say, change will happen, isn't the design document already doomed?  The only way the design document can be guaranteed correct is if it's actively updated throughout the coding process (to incorporate those inevitable changes).  In that case, I fail to see how the methodology is really even relevant (with respect to the design document).  It seems to me that if you want a design document, it'd be better written after the design and implementation are complete, to document the final state of the product rather than the initial (and now inevitably incorrect) vision.

    @Alex Papadimoulis said:

    Poor design docs lead to a situation where coders have to reverse engineer the system in order to maintain it. This is bad news for the system and leads to painful code decay (and absurd maintenance costs). This is just one of the many reasons that the "traditional" methodologies mandate strong documentation before developing and a "painful" change control process that involves updating the documentation.

    I fully agree that poor design docs are bad news.  That's why producing a huge document up-front seems counterintuitive.  If it's bound to need change, and it's extremely painful to change, isn't it harming more than helping?  I can see the value of planning, and in fact would be afraid of any methodology that espoused no planning (agile methods do espouse planning, just not huge up-front design).  However, investing large amounts of resources into a document which will be either 1) doomed to incorrectness or 2) expensive and painful to maintain seems like the wrong approach.




  • ♿ (Parody)

    Dave, thank you for the well thought-out reply. You make a lot of strong points, some of which I'd like to respond to.

    The first is on "highly predictable" development processes. Let's consider that the overwhelming majority of information systems fall into this category (in fact, I'm unable to think of one that would not) and that the overwhelming majority of programming resources are devoted to information system development. This means that most software development is used to integrate software into a new, existing, or improved business workflow.

    I have yet to see a business workflow ("requirements") that cannot be known before development of the software, and I therefore question whether such a workflow exists. I've worked with clients who were simultaneously building their offices (physically) and developing their supporting software, but they knew what they wanted ("a system to manage real estate") long before they laid the first brick. It's simply a matter of understanding what they want; the approach that agile seems to take is "assume now, code it now, fix it later." And it is precisely this approach that I argue leads to maintenance problems down the line.

    When I refer to "maintenance problems", I'm not talking about production support, but about the quality of the product. With the assumption approach, code needs to be changed each time the requirements are more clearly defined. This creates a patchwork of code instead of a consistent product. Yes, it works when it gets out the door, but maintaining a patchwork is much more costly than maintaining a solid code base.

    I have heard that this is mitigated through refactoring, but I just can't see how that could work. Whenever code is changed, test cases need to be run for modules that are directly and indirectly impacted. This results in lots and lots of testing, which gets to be pretty expensive; not only do you have to pay the tester for her time, but now she can't do her regular job, and her team's workload suffers.

    I have heard that this is mitigated through automated unit testing, but again, I don't see how that's useful for anything but a slightly stronger "does it compile" test. Bugs in information systems are often not programmer errors, but requirement errors. Unless AI has significantly advanced to the point of replacing human testers, I don't see how it would be possible to unit test non-obvious requirement errors.

    But I'm just reiterating my earlier points.

    Actually, I'm hoping you could elaborate on how agile is more transparent. I think transparency lies more in the competence of the PM than anything else; how does agile help with this? The main issue I have with agile as far as PM is concerned is the lack of accountability; some books I've read take the approach that no one is to blame for a problem but the process. That's all warm and fuzzy, but in the end someone needs to be "billed" for all the time spent fixing the problem.

    And finally -- I wanted to comment on the Chaos Report. I'm sure you're familiar with the objections to its methodology, so I won't go into them in much detail. But with their definition of "success", wouldn't all agile projects end up in the "challenged" classification?



  • @Alex Papadimoulis said:

    I have yet to see a business workflow ("requirements") that cannot be known before development of the software, and I therefore question whether such a workflow exists. I've worked with clients who were simultaneously building their offices (physically) and developing their supporting software, but they knew what they wanted ("a system to manage real estate") long before they laid the first brick. It's simply a matter of understanding what they want; the approach that agile seems to take is "assume now, code it now, fix it later." And it is precisely this approach that I argue leads to maintenance problems down the line.

    I think part of the problem with traditional methods is that they assume the "requirements" are complete.  "A system to manage real estate" is pretty vague.  But let's say you hammer it down and force the client to define exactly what he wants.  What's the likelihood that you'll really get everything the client needs?  He'll forget something, or explain it poorly (so that you don't understand it, but possibly think you do).  He'll decide he needs something else added that's "vital" but wasn't vital a week ago when he signed off on the requirements.  And what happens if you put together exactly what he asked for, deliver it, and it isn't what he really needed?  If instead you constantly had feedback from him, you'd catch misunderstandings and missing features much sooner.  That's part of agile methods.  Rather than assuming that you can really get everything the client needs up front, you get it from him incrementally, and it works fine because the implementation and design are also incremental.

    To be honest, I think traditional methods are more likely to fall prey to assumptions than agile methods.  With agile methods, you're constantly asking, "is this right?"  With traditional methods, you're looking at the requirements and (potentially) saying, "this seems wrong, but it's what's on the paper."

    @Alex Papadimoulis said:

    I have heard that this is mitigated through refactoring, but I just can't see how that could work. Whenever code is changed, test cases need to be run for modules that are directly and indirectly impacted. This results in lots and lots of testing, which gets to be pretty expensive; not only do you have to pay the tester for her time, but now she can't do her regular job, and her team's workload suffers.

    You're mixing requirements and design too much.  If a chunk of code is refactored, the requirements that code fulfills haven't changed; only the way in which the requirements are implemented has changed.  Automated tests can be used to make sure that the changed modules still exhibit the same behavior.  If the behavior is still correct, why would the requirements suddenly be unfulfilled?

    @Alex Papadimoulis said:

    Bugs in information systems are often not programmer errors, but requirement errors. Unless AI has significantly advanced to the point of replacing human testers, I don't see how it would be possible to unit test non-obvious requirement errors.

    I'm sorry, but it sounds like you're saying that testers are supposed to refine the requirements.  That doesn't seem to be the testers' job at all, and allowing testers to refine requirements would invite lots of assumptions, something you accuse agile methods of.  Requirements need to be refined by the client, which is, again, why agile methods stress constant interaction with the client.  Iterative cycles allow the client to clarify any requirement errors, rather than assuming that testers can somehow divine the correct behavior.

    @Alex Papadimoulis said:

    Actually, I'm hoping you could elaborate on how agile is more transparent. I think transparency lies more in the competence of the PM than anything else; how does agile help with this? The main issue I have with agile as far as PM is concerned is the lack of accountability; some books I've read take the approach that no one is to blame for a problem but the process. That's all warm and fuzzy, but in the end someone needs to be "billed" for all the time spent fixing the problem.

    I think the transparency he's referring to comes from the constant exposure to the client.  If the client comes in every two weeks or every month to see the latest iteration's results and sees nothing new, he knows something's wrong (and so do the PM and everyone on the team).  With traditional methods, it's a lot easier to say, "It's going great!" and keep problems in the dark.  This applies to both the client and the PM (hello, Paula?).  It's not impossible to keep things in the open with traditional methodologies, but it can be harder.  Agile methods should make it easier to expose problems earlier.



  • Alex,



    IMO this is a good discussion. I don't think people are as far apart on
    this question as it might appear at first. Some of the differences are
    a matter of semantics. Others amount to assumptions on the part of
    people who haven't worked in both types of projects. And maybe a few
    are bona fide disagreements.



    Re: Predictable requirements. You're right that many, if not most,
    software development projects begin with a pretty clear statement of
    requirements. You say you haven't seen a case when adaptive methods
    would be better than predictive ones, and maybe that's literally true.
    It could be that all the projects you've worked on lent themselves to
    the predictive approach.



    It's easy to be too glib about categorizing projects one way or the
    other, since most projects and organizations have a number of unique
    characteristics that have to be considered in choosing an approach.
    Generally I've found that projects to build vertical, tactical,
    customer-facing apps often cannot be fully specified in advance. Sure,
    the customer knows his business process and has a general vision for
    improvement, but doesn't know (in advance) exactly the best ways to
    improve that process. Adaptive methods work well in those cases,
    because the customer can discover the details of the requirements as
    the solution is delivered incrementally. In the best case, solutions
    are literally moved into production at the end of each iteration
    (except the first couple of iterations, when a lot of the work consists
    of code frameworks and so forth). That enables the customer to declare
    victory at any time. I've had projects that were declared successful
    before their planned end date because the customer decided sufficient
    ROI had been achieved at the end of an iteration.



    But that's not the only type of work we do. Companies that sell
    commercial software products don't have "projects" in that sense. They
    have ongoing product development. That's a whole different ballgame. A
    generally waterfall-ish approach is often a better fit, even if
    iterative development methods are used. In corporate IT environments
    there are similar ongoing efforts to support technical infrastructure,
    such as Service Oriented Architectures and the like. The nature of the
    work is unlike customer-facing one-off business solution development,
    and the methodology has to be appropriate to the task.



    There are also other movements in the IT industry meant to address
    problems in project delivery, such as lean development and iterative
    development within a waterfall context. Vendors and consultants have a
    habit of labeling all these different things "agile" because that
    happens to be a marketable buzzword at the moment. Then there's
    "business agility" which really has nothing to do with software
    development, but gets mixed up with it anyway. No doubt this is a
    source of confusion.



    Re: Refactoring, test-driven development, etc. We could talk about the
    pros and cons of specific agile practices in isolation, but I'm not
    sure how meaningful that would be. I think it was you who wrote a few
    days ago that any methodology can work if good people are using it, and
    any methodology can fail if incompetent people are using it.
    Refactoring, TDD, and other agile practices are only tools. The best
    hammer in the world can't build a cabinet by itself.



    TDD practices and tools are actually very mature and can deliver a lot
    more value than just "does it compile" sort of testing. It's a powerful
    technique but not well understood or widely used (yet?).



    Re: Transparency. This just means that people are open to talking about
    what's going on. In many organizations, the formal project process is
    used to hide unpleasant truths, delay the delivery of bad news, and
    shift blame. Whether a project is using agile methods or not,
    transparency results in problems being exposed early and conflicts
    among stakeholders' priorities being worked out quickly.



    Clearly there are many superficial similarities and overlaps among
    different methodologies, whether traditional, agile, lean, or
    what-have-you. IMO the key difference between the agile approach and a
    process-centric approach is the emphasis on people. The first principle
    in the Agile Manifesto states that we value "individuals and
    interactions over processes and tools." It does not say we don't value
    processes and tools at all. The implied converse of that is that
    traditional approaches value processes and tools over people. The
    assumption is that a well-crafted process will cause the people to do
    the right things by guiding them along every step of the way. With
    agile methods, the assumption is that competent people who feel a sense
    of ownership of the problem and who are enabled to act on their best
    professional judgment will do the right thing. The process exists not
    to direct their activities but to facilitate their work. So the process
    is far more lightweight and non-intrusive, but also does not provide
    guidance about how to get the work done.



    That long-winded paragraph was meant to set up my response to the
    question of transparency. Transparency is always better than trying to
    hide the truth. That's not a question of methodology. But a
    process-centric approach provides a kind of safety net in that results
    can't be catastrophic provided everyone follows all the steps in the
    formal process. Agile methods don't have that safety net. If the
    project team doesn't step up and actually perform as an enabled group
    of competent professionals who accept collective ownership of the
    problem, there's no "process" for the project to fall back on. Failures
    can be catastrophic. By the same token, a process-centric approach
    imposes a sort of "ceiling" on the amount of business value that can be
    delivered. Agile methods remove the ceiling, but they also remove the
    safety net. Therefore, transparency becomes much more important than
    just a "good idea" when you use agile methods. It becomes a critical
    success factor.



    To me, this means one of the factors we need to consider in choosing an
    adaptive vs predictive approach for any given project is whether the
    customer thinks it's worth the risk of working without a net in hopes
    of breaking through the ROI ceiling of traditional methods. The
    technical staff can't make that call; it's a business decision. IMO
    this strongly contradicts the notion that agile development is
    tantamount to cowboy programming, because it implies a high level of
    professional discipline is required. There's no "process" to save the
    team if they drop the ball. They must understand best practices and
    apply them rigorously, without much supervision or direction. Agile
    development is not for everyone, it's not automatic, it's not easy, and
    it's not magic. And if you fail, you can't point to the process or
    anywhere else to assign blame. That's not for everyone, either. If the
    organizational culture can't support those things, then agile methods
    won't work even on projects that appear to be appropriate.



    Re: Chaos Report. I like to bring this up in presentations that
    introduce agile concepts. IMO the intermediate categories like
    "challenged" are only politically-correct ways to describe "failure."
    Success means that the customer is satisfied and the work is done on
    time and on or under budget. Everything else is failure. The 1996 Chaos
    Report (if you add up the percentages) showed that only about 16% of IT
    projects were successful. The Chaos Report for 3Q 2003 showed about 31%
    of projects were successful. That's substantial improvement, although
    still dismal. The time period corresponds exactly with the years in
    which practices such as iterative development / incremental delivery,
    lean development, and agile development began to gain traction in the
    industry. Strictly speaking, that is a correlation rather than a proven
    cause-and-effect relationship, but it's a pretty compelling one.



    It's easy to see how you could have doubts about how agile methods
    could work, if you haven't actually had a chance to live through an
    agile project. We have an agile group at our company, and even they
    lose focus. It really isn't easy to sustain this mode of work, and
    we've been at it now for three years. Even now I'm preparing a workshop
    for our agile practitioners to reiterate exactly how each agile and XP
    practice contributes to business value realization, and how business
    value decreases when you start to slack off on one or two practices.
    There's not really a quick and simple answer to a question like,
    "what's the value of TDD, exactly?" There's an answer; just not a quick
    and easy one.




  • Thank you all for sharing your insights on agile and non-agile
    development. Personally, I find it very interesting to see that
    apparently traditional development methods are far from extinct.



    One question is still bugging me which I hope the "pro agile" group
    will be able to answer. Assume our project has the following
    cornerstones:

    • Delivery date Oct 1, 2006
    • Delivery goes to 1-5 large customers and a few hundred small customers
    • Large customers have the power to add new requirements or change existing ones during the development process



      How does the development team KNOW if they're going to meet the
      delivery date? They'll know around Sep 15, but what about in March 06?
      Then, they obviously haven't yet specified what to do in iterations
      happening in the summer (otherwise they wouldn't be embracing change),
      so essentially they don't yet know what they'll be doing a month from
      then.

      Now the project manager asks:
    • how much work is already done (%)
    • are we on track for the delivery date?



      Take the four variables of XP. If one of the large customers adds
      scope, and the team stays the same (and - usually - quality is a
      given), then it can only be time which changes. So, does EACH new
      requirement prolong the development process? Does agile mean you gain
      flexibility by getting rid of a binding delivery date?



      I'm coming to believe that a software development process applies to
      the first 60% of project time, while everyone's still relaxed. When the
      late changes start coming in, the binding delivery date is looming and
      the bugs just won't go away, it's the time for the GOOD developers to
      step in and do their thing. And that is usually not very structured,
      but more like sophisticated code-and-fix.



      Does a good software development process ensure that the first 60% are
      so efficient that the above scenario simply doesn't come up? I believe
      there is no process to prevent a change request 4 weeks prior to
      delivery except telling the customer you won't do it. And to me, that's
      really the major thing. This isn't about internal processes, but about
      stepping up and simply telling your key account that, no, we're not
      able to implement this in a few days without compromising quality. And
      if a competitor says he can, then he's lying.



      Cheers

      Simon


  • @sniederb said:

    How does the development team KNOW if they're going to meet the
    delivery date? They'll know around Sep 15, but what about in March 06?
    Then, they obviously haven't yet specified what to do in iterations
    happening in the summer (otherwise they wouldn't be embracing change),
    so essentially they don't yet know what they'll be doing a month from
    then.

    Now the project manager asks:

    • how much work is already done (%)
    • are we on track for the delivery date?

    In a similar vein, how does any development team ever KNOW if they're going to meet a date?  Answer: They don't.  Software projects are chronically over budget and past deadlines, because they cannot know when they'll be finished.  Instead, they make estimates.  This is the case whether the team is using agile methods or traditional methods.  A significant part of XP (for example) is dedicated to learning how to better estimate time required to complete tasks.  So while XP can't magically guarantee a delivery date, it aims to provide better estimates than traditional methods.
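
    To make "better estimates" concrete: many XP teams track velocity (how much work actually gets finished per iteration) and project it forward.  Here's a back-of-the-envelope sketch in C#; all of the numbers are invented, and a real team would pull them from its own iteration tracking:

        using System;

        class VelocityProjection
        {
            static void Main()
            {
                // Invented figures, for illustration only.
                int remainingStoryPoints = 120;           // estimated work left in the release
                int[] recentVelocities = { 18, 22, 20 };  // points finished in the last three iterations
                int iterationsUntilDeadline = 7;          // iterations remaining before the fixed date

                // Project forward using the average of recent iterations.
                double total = 0;
                foreach (int v in recentVelocities)
                    total += v;
                double avgVelocity = total / recentVelocities.Length;

                double iterationsNeeded = remainingStoryPoints / avgVelocity;
                Console.WriteLine("~{0:F1} iterations of work remain; {1} remain before the date.",
                    iterationsNeeded, iterationsUntilDeadline);

                // If iterationsNeeded exceeds iterationsUntilDeadline, the team
                // knows in March -- not on Sep 15 -- that scope must be cut or
                // the date renegotiated.
            }
        }

    That projection is re-run at the end of every iteration, so the answer to "are we on track?" gets more accurate as the date approaches, instead of arriving as a surprise at the end.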

    @sniederb said:
    Take the four variables of XP. If one of the large customers adds
    scope, and the team stays the same (and - usually - quality is a
    given), then it can only be time which changes. So, does EACH new
    requirement prolong the development process? Does agile mean you gain
    flexibility by getting rid of a binding delivery date?

    Yes and no.  Yes, adding requirements extends the development process.  That's just a fact of life, no matter what development process you use.  Agile methods can't change that.  They can, however, allow you to substitute functionality, basically for free.  If you decide that you don't really need feature A, but feature B is vital, you can swap the (unimplemented) feature A out for feature B, assuming of course that they are of approximately the same complexity.

    In a very similar vein, agile methods allow you to leave certain features ill-defined
    until later, with the understanding that a certain amount of time is
    allocated to the to-be-clarified features.  Of course, leaving enough
    time for a simple search and then deciding that you want Google
    rewritten for this "simple search" is not going to work.  Again, you
    have to make estimates about what you're going to want; have a general
    idea.  This isn't precise, but it's better than locking yourself to a
    design document and then realizing that the document captures the wrong
    functionality.  Basically, it allows you to set the scope for a feature without defining the specifics of that feature.


    @sniederb said:
    I'm coming to believe that a software development process applies to
    the first 60% of project time, while everyone's still relaxed. When the
    late changes start coming in, the binding delivery date is looming and
    the bugs just won't go away, it's the time for the GOOD developers to
    step in and do their thing. And that is usually not very structured,
    but more like sophisticated code-and-fix.

    I'd have to agree that this is somewhat true.  But that's exactly why agile methods exist.  They aim to minimize the problems caused by changes, and maximize flexibility.  The last part of development shouldn't be hack-and-go.  It should be as thought-through as everything else.  That's one of the promises of iterative cycles.  Every cycle is as important as the ones before.  So the last cycles shouldn't be hacked together.  They should be of the same quality as the first cycles.


    @sniederb said:
    Does a good software development process ensure that the first 60% are
    so efficient that the above scenario simply doesn't come up? I believe
    there is no process to prevent a change request 4 weeks prior to
    delivery except telling the customer you won't do it. And to me, that's
    really the major thing. This isn't about internal processes, but about
    stepping up and simply telling your key account that, no, we're not
    able to implement this in a few days without compromising quality. And
    if a competitor says he can, then he's lying.

    It really depends.  If the request is to add, say, a search for accounts by user's social security number, it might be reasonable and easy.  If instead, the request is for an entire search infrastructure which doesn't currently exist at all, then it might be unreasonable.

    Agile methods aren't magic.  Changing things on release day still doesn't work.  Agile methods aim to minimize the negative impact of change.  They can't always eliminate it, though.  They recognize that change is inevitable, so you might as well expect it.  They also recognize that change can improve the product.  If the client needs something he didn't originally expect, why not give it to him?  (Possibly at the expense of other features he decided he can do without.)  It results in a better product, a happier customer, and possibly a repeat customer.

  • ♿ (Parody)

    Dave and Jörg, you have both made some great points, but I'm afraid I've only got a few moments to reply to the one key issue that I see which makes the methodology a failure: it requires people with an above-average competence level.

    @Jörg said:

    @Alex Papadimoulis said:
    ... take everything to the left of +1 SD, and you now have an overwhelming majority.

    That's just ridiculous.  Take everything to the right of -1 SD and you also have an overwhelming majority.  Assuming a symmetric bell curve, the number of things which are better than average is exactly equal to the number of things that are worse than average.

    Think of this more like a Reading Comprehension Level (scale of 1-10). People with a level of 5 have no problem reading something written at level 3, but struggle with something written at level 10. Therefore, it stands to reason that the higher the level something is written at, the fewer people can read it, and the lower the level, the more people can read it.

    The same is true with a process. I do not see Agile working at all for people with an average or below-average competence level. It relies too much on personal responsibility and has too little accountability.

    @Jörg said:

    You're right.  Bad programmers are those who produce bad code.  No matter the methodology, you will not get good code from bad programmers.  So why choose a methodology based on getting the best from the worst?  Isn't it more productive to get the best from the best?

    You need to work with what you have. Getting the best programmers is not an easy task at all. Have you ever tried to find a good programmer? Statistically speaking, you will get many more average and below-average coders than anything else.

    What saves the day with below-average people is an above-average process. Given the right coding guidelines, the right instructions (i.e. a design document), you can squeeze out an above-average product with a statistical mixture of people (handful of really good, lots of average, handful of really bad). Agile is definitely not this process.

    @Dave said:

    The fact we as a profession have come to accept the situation of not having "good people and a good environment" as normal reflects a deeper problem than anything inherent in agile methods, or in any other methods.

    I think that this is a sign of rapid growth injected into an immature industry. We're still spinning from the PC revolution, the Internet, and Moore's Law; the meta-industry (trade book publishers, software tool vendors) doesn't help much with all the fads (Agile, XML, Windows DNA, etc). Once things slow down a bit, which they are starting to, I suspect we'll be closer to other professions.



  • Hmmm.... what's the difference between average programmers and above-average programmers?

    • incomplete knowledge of languages, libs, and design patterns
    • inability to find efficient algorithms for sophisticated problems
    • inability to find abstractions, leading to too much copy-paste
    • a tendency to introduce more bugs
    • generally slower work
    (Of course, not all of these have to apply.)

    So how does an above-average process enable them to make a great product? Probably only if you have a by-far-above-average designer and team leader who tells them which libs and patterns to use, which algorithms to create, which class hierarchy to implement, and how to test.
    I don't think there is a magical above-average process that enables a team without a single overperformer to create anything better than average.



  • I'd like a definition of "average competence level", please, Alex. (Boy, that sounds like I'm on Jeopardy!)

    If we maintain your 1-10 scale, with 1 being unable to understand anything programming-related and 10 being complete knowledge of everything programming-related (including architecture/design, algorithm design, UI, blah blah blah; there are tons of factors), what is the average competence level? I realize that the WTFs on this site make it appear that the average competence level is zero, but is that actually the case? I'm not defending Agile here, Alex, but it looks like you're equating your 1-10 scale with your average... you're assuming the mean is always 5. It's like comparing apples to oranges.

    It's quite possible that the "average" is 9, while what Agile defines as a "good" coder is a competency level of, say, 7. That would mean that the "average" programmer is "good or better".

    That's not to say that the average isn't 2. This site provides plenty of anecdotal evidence for that.

    BTW, my competency level can only be measured on this scale if I invoke Automan: "On a scale of one to ten, think of me as, and eleven." [;)]



  • BTW, my competency level can only be measured on this scale if I invoke Automan: "On a scale of one to ten, think of me as, and eleven."

    Of course, my *typing* competency must be around 4ish.


  • ♿ (Parody)

    A meaningful measurement of competence is an entirely different area of discussion. For the workplace, there has been an entire industry dedicated to figuring out how to accurately do this for the past century: human resources. Because employee performance evaluation is by far the most pertinent, I can provide my [non-expert] opinion on the matter.

    Technical competence is easy to measure; take a test, write a program, etc. But a programmer's competency level is not just how well he knows Perl or the System.Web namespace; that's only about 30% of the overall employee. There are far more "soft skills" essential to life as a programmer, such as communication, leadership, responsibility, reliability, etc. All of these are measurable (Google "employee performance evaluation" for a myriad of ways to do this) and, on the whole, the measurements are accurate.

    So, when I say "average competence level", I'm referring to the statistical median of all programmers. I erred in using integers to refer to this -- I should have used percentiles. So, wherever I said 5/10, think of the 50th percentile.

    To rephrase my core point, Agile seems to only work above the 65th percentile (estimate) -- meaning that only the top 30% or so of programmers are able to use this methodology successfully. This is a serious problem because that is an unattainable goal for everyone.

    BTW, my competency level can only be measured on this scale if I invoke Automan: "On a scale of one to ten, think of me as, and eleven."

    I know this is a joke -- but it raises an interesting point as well. The average person thinks that he is above average (which is obviously statistically impossible). This is by far the worst type of person to have on your team -- someone who is unable to recognize his own in/competence. A programmer who scored in the 40th percentile on a technical test and estimated his score in the 45th is a much better asset than a programmer who scored in the 70th and estimated in the 90th.


    @ammoQ said:

    So how does an above-average process enable them to make a great product? Probably only if you have a by-far-above-average designer and team leader who tells them which libs and patterns to use, which algorithms to create, which class hierarchy to implement, and how to test.
    I don't think there is a magical above-average process that enables a team without a single overperformer to create anything better than average.

    You are exactly correct -- you put the strongest people doing the most complicated work (architecture, design, leadership, etc) and divvy the less complex work *with very specific instructions* out to the weaker people. This is the whole idea behind a detailed design document -- it's very specific instruction with trackable requirement numbers. The guy in the 40th percentile is never going to walk away from the "daily scrum" and be able to understand or work from the notes he took. But give him specific coding guidelines and specific requirement numbers to implement, and he'll do it in a predictable amount of time. The same goes for testing and the rest of the lifecycle.



  • @Alex Papadimoulis said:

    A programmer who scored in the 40th
    percentile on a technical test and estimated his score in the 45th is
    much better asset than a programmer who scored in the 70th and
    estimated in the 90th.


    Good people (70th percentile) are normally also good at estimating their score.


    you put the strongest people doing the most complicated work (architecture, design, leadership, etc) and divvy the less complex work *with very specific instructions* out to the weaker people.



    So how many (assumed average) programmers can you assign to one (assumed overperforming) designer? IMO creating detailed instructions is nearly as much work as implementing them, especially if they are implemented in a high-level language that doesn't require too much stupid work.


    If I am a 40th-percentile programmer, I'm going to produce that quality of code, regardless of how the requirements are delivered to me.  Frankly, no matter how clearly stated the requirements may be, I'm certain that a 40th-percentile programmer will implement them badly.  Oh, they may "work", and may meet the letter of the law.  However, they will not work well.  This is a problem that both "agile" and "classic" methodologies must face.  I'd guess that, if we tracked the methodology used as part of the DWTF submissions, it would be a statistical wash.

    First, let me point out that "Agile" doesn't really need an entire team of above-average developers, as long as key individuals (the leadership) are above average.  Average developers will implement "Agile" just as badly as they would "Classic", if left to their own devices.  Conversely, a couple of very strong programmers can do wonders with an average team.

    Agile methodologies generally attempt to address these problems in the following ways:

    1.  Substitute automated, repeatable, "easy" processes in place of manual, laborious, "difficult" ones.  You don't have to buy into TDD to accept that NUnit is a great tool in your toolbox.  Use computers to do the jobs they're best at; use people to do the jobs they're best at.  Automated build tools not only keep the code deployable for testing, they also keep the team informed of build breaks, automated BVT results, etc.  All without unnecessary human intervention.  (A sketch of such an automated check follows this list.)

    2.  Use code oversight (code reviews, buddy builds, pair programming, "scrum", etc.) to elevate the quality of the code.  Two 40% programmers working together do not necessarily produce 40% code, because their relative strengths will tend to complement one another.  It won't be 80% code, of course - but take what you can get.  Likewise, teaming stronger/weaker pairs (like a 40% with a 75%) will help to train the 40% developer in the "right" way to do things rather than perpetuating their existing experience.

    3.  Make informed decisions, and keep people informed.  40% developers make poor decisions.  Don't let this happen.  Engage in debate.  Defend your own decisions and ideas.  Beat each other to a bloody pulp (metaphorically), then go out for a beer.  When a decision has been made, communicate it to everyone.

    4.  Don't over-engage anyone beyond their ability.  Learning is one thing.  Being thrown to the wolves is quite another.  Keep people talking to one another so that, if someone is stuck on something, they know who to approach for help.
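
    To illustrate point 1: here is the sort of cheap, automated check a build server can run on every check-in, written in C# with NUnit.  The ConfigParser class is a made-up stand-in for whatever component you'd actually smoke-test:

        using System.Collections.Generic;
        using NUnit.Framework;

        [TestFixture]
        public class BuildVerificationTests
        {
            // Tagged so the build server can run just the fast smoke tests
            // on every check-in and save the full suite for nightly runs.
            [Test]
            [Category("BVT")]
            public void ConfigParser_ReadsAWellFormedEntry()
            {
                Dictionary<string, string> settings = ConfigParser.Parse("timeout=30;retries=3");
                Assert.AreEqual("30", settings["timeout"]);
                Assert.AreEqual("3", settings["retries"]);
            }
        }

        // Stand-in component; in real life this would be part of the product.
        public static class ConfigParser
        {
            public static Dictionary<string, string> Parse(string raw)
            {
                Dictionary<string, string> result = new Dictionary<string, string>();
                foreach (string pair in raw.Split(';'))
                {
                    string[] kv = pair.Split('=');
                    result[kv[0]] = kv[1];
                }
                return result;
            }
        }

    The test itself is trivial by design; the point is that the machine, not a person, is the one that notices a break, minutes after it happens.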

    "Agile" does not mean that you don't engage in due diligence, or that you turn into a "code-like-hell" shop.  "Agile" doesn't mean that you have to fire your PMI guy.  "Agile" doesn't mean that you never produce design documents or estimates - or other forms of documentation.  "Agile" may shift the focus or priority of some of those items, but in reality some of them claim far more priority than they merit.



  • I hate this forum software.  It just ate my post.

    Summary:
    If you've got bad programmers, nullify them or make them better (both can be hard, I know).

    Traditional big-design-doc methodologies clearly aren't working to make poor programmers produce good code.  Agile methodologies won't either.  I think it's impossible to get good code from poor programmers.  You'd do better to have them sit in the corner while the above-average programmers do all the work.

    I agree that it is hard to hire good programmers.  I blame the schools.  (Seriously.)



  • @Alex Papadimoulis said:

    Technical competence is easy to
    measure; take a test, write a program, etc. But a programmer's
    competency level is not just how well he knows Perl or the System.Web
    namespace; it's only about 30% of the overall employee. There are far
    more "soft skills" essential to life as a programmer, such as 
    communication, leadership, responsibility, reliability, etc. All of
    these are measurable (Google "employee performance evaluation" for a
    myriad of ways to do this) and, on a whole, the measurements are
    accurate.




    Yes and no - I think you are totally correct about competence being
    far, far more than whether they have an API memorized, but I'm not
    convinced that competence is fully measurable except by results.



    I work in a consulting environment so I've worked with a wide variety
    of developers.  The ones I call most competent are the problem
    solvers, or the "thinkers".  They think about what the
    requirements are, what makes sense and what doesn't, both from a
    technical perspective as well as from a business perspective. 
    They will push back to the business user coming up with the
    requirements if what they ask for seems contradictory to other
    requirements.  When they have an idea for how something could be
    done better than specifically asked for, they present the idea back to
    the business users, tech leads and PM as applicable in a way that shows
    the advantages and is usually acceptable.  When they are given a
    requirement they don't know how to meet right away, they do some
    research, tinker, and find a solution using the best practices they
    can.  They use basic programming and style standards so that what
    they write is easily traceable by the next person to look at the code.



    The ones I call less competent are the sheep and the rogues.  They
    will do exactly what the piece of paper says, no more, no less, whether
    it makes sense or not.  They will not take the initiative to point
    out errors or inconsistencies in their instructions, they'll just
    implement them anyway.  Or alternatively, they'll get it in their
    head that they know better than the business analyst and the architect
    and go rogue, developing something totally off-base without discussing
    it with their tech lead and the rest of the team.  They don't
    research the best ways of solving a particular problem, they reinvent
    the wheel and introduce all sorts of kludges to force a
    "solution".  They don't look at the overall architecture or style
    of an application nor do they use consistent styles themselves. 
    They favor copy-paste over writing re-usable code.  And sadly,
    even when instructed most of them don't really "get" it.



    IMO, you can take two people with the exact same level of "technical"
    knowledge in terms of a particular programming language and/or OOAD
    principles, but the competent developer will always produce a better
    functioning and more maintainable result than the sheep or the rogues.

    @Alex Papadimoulis said:


    You are exactly correct -- you put the strongest people doing the most complicated work (architecture, design, leadership, etc) and divvy the less complex work *with very specific instructions* out to the weaker people. This is the whole idea behind a detailed design document -- it's very specific instruction with trackable requirement numbers. The guy in the 40th percentile is never going to walk away from the "daily scrum" and be able to understand or work from the notes he took. But give him specific coding guidelines and specific requirement numbers to implement, and he'll do it in a predictable amount of time. The same goes for testing and the rest of the lifecycle.


    This I totally agree with.  It still takes a bit of micromanagement, unfortunately, to ensure that the 40th-percentile guy actually comprehends his instructions well enough to produce what is on paper.  (I worked with one guy recently who left fields from the requirements & design docs off of a form on his web page and put new ones in without consulting either the tech lead (me) or the business user, because 'he thought it was wrong' - and then acted surprised when the business analyst had a fit during testing that his pages bore little resemblance to the reqs or the wireframes.)  But if you have the "thinkers" mapping out the application and writing the documentation, then hopefully the application will have an underlying extensible architecture that is more easily maintained, even if some individual components aren't built with best practices.

    In an ideal world, we'd all work with competent developers.  Sadly, that is not always possible, and so a team lead needs to choose a methodology that will be most effective for the people he or she leads.  I'm no expert on XP or Agile methodology, but my own preference would be to start with a detailed reqs doc and at the very least a solid architectural document that details the framework of the application, if there is not time to design each specific piece of functionality.  Sure, the requirements will change and the design doc can be modified over time, but personally, there are very few sheep I would trust to maintain a coherent design when they only get the requirements piecemeal.  I've found that I have to assign anything that impacts the overall architecture of the application to my competent developers and assign "plug-in" type functionality to the sheep.  What usually ends up happening is that the competent developers finish their functionality early and start taking responsibility back from the sheep, at which point they have to rewrite a lot of it to mesh with the architectural standards of the application.

    As far as XP goes, personally it doesn't work for me because I cannot code and talk at the same time.  The way I'm wired, XP feels traumatic and foreign because I need to finish my thoughts, get the code on screen, and then I can stop and discuss it with someone.  I just flat out can't have the conversation while I'm working.  And being the partner not in control of the keyboard bores me to tears.  I would much rather have frequent team code reviews than try to actually code two to a keyboard. But, other people's mileage may vary.  It isn't necessarily a bad thing if a team can work effectively that way.  I know for sure I can't though.


  • Simon writes:

    >How does the development team KNOW if they're going to meet the delivery date?

    You are familiar with the so-called Iron Triangle: Quality, time, and features. When all three of those factors are locked in, the project is said to be overconstrained. It will fail regardless of methodology.

    Typically, agile projects lock in the time. The development team knows they will meet the delivery date because the delivery date is fixed. The customer (or whoever represents the customer on the project) can choose to shift features and/or quality as reality begins to make itself felt in the course of the project.

    Whether that's the best or only way to deal with these issues is an open question, but your question was specifically how an agile team would deal with it. In most cases, they deal with it by fixing the delivery date and allowing the other two parameters to change, under the control of the customer.

    IMO you should take into consideration the differences in the way agile and traditional projects operate. On an agile project, the customer has a direct voice in everything all the time. The moment a new requirement becomes known, or the moment an unexpected delay occurs in the development work, the customer knows about it and can act collaboratively to mitigate the impact on the project. Part of the idea is that continuous change is accepted as normal, rather than treated as an exception. So, it is not scary at all to realize that requirements will change in the course of the project.

    Also consider that the customer can replace or reprioritize any story that has not been started yet, even in mid-iteration. There is no cost to doing so, because no work has been done on the story yet. With traditional processes, the customer has no visibility into the project except at predefined milestones or quality gates. So, it is more difficult to accommodate changes.

    But what you have been describing has never sounded like a good fit for agile methods to me. In the context of your work environment, it's unsurprising that agile methods don't sound very applicable. There's no sense in forcing a methodology into a context where it won't work. But you might still be able to gain some benefit from lean development. Check it out.

    >...a software development process applies to the first 60% of project time, while everyone's still relaxed. When the late changes start coming in, the binding delivery date is looming and the bugs just won't go away, it's the time for the GOOD developers to step in and do their thing.

    That is as succinct a definition of the CAUSE of the agile movement as I've read anywhere. Agile development is the cure for that sort of nonsense. (Ask yourself: Why is everyone "relaxed" at the start of a project? Should they be?)

    Alex writes:

    >...one key issue that I see which makes the methodology a failure: it requires people with an above-average competence level.

    Yes, it does. But how does that make the methodology a failure? It succeeds when people of high competence use it. As many of us have stated before, agile development doesn't apply to every problem. Many - maybe most - IT projects just don't require an exceptional level of expertise. One of the things management has to be careful of is not to waste their top people on routine work.

    >I think that this is a sign of rapid growth injected into an immature industry. We're still spinning from the PC revolution, the Internet, and Moore's Law; the meta-industry (trade book publishers, software tool vendors) doesn't help much with all the fads (Agile, XML, Windows DNA, etc). Once things slow down a bit, which they are starting to, I suspect we'll be closer to other professions.

    That's an interesting comment. The funny thing is, I've been at this for over 28 years now, and I know others who've been at it even longer, and it seems as if we have been just at the cusp of being like the other professions the whole time. The "cusp" keeps moving out ahead of us! I don't see it slowing down. We're not an immature industry, after all these years.

    >You are exactly correct -- you put the strongest people doing the most complicated work (architecture, design, leadership, etc) and divvy the less complex work with very specific instructions out to the weaker people.

    On a predictive project, yes. On an adaptive project...also yes, but the implications are different. Rather than having the strongest people create detailed documents, you need them working directly on building the solution so that they can adapt to changing requirements on the fly without sacrificing quality. When requirements are in flux, there's no value in writing very specific instructions since they will become obsolete before the code is finished. When requirements are predictable, the predictive approach is appropriate.

    Jörg writes:

    >Traditional big-design-doc methodologies clearly aren't working to make poor programmers produce good code.  Agile methodologies won't either.  I think it's impossible to get good code from poor programmers.

    I agree. If a programmer understands what to do, he understands it regardless of any detailed design document that might be lying around. If a programmer doesn't understand what to do, he won't understand the detailed design document, either.

    Processes don't write code, people do. Agile methodologies depend on people to get things done, and process exists to facilitate their work. Put unqualified people on an agile project, and they will fail spectacularly, because the lightweight process doesn't act as a safety net.

    Traditional methodologies depend on process to guide people to success, and the people exist only to follow the steps defined in the formal methodology. Put unqualified people on a traditional project, and they will fail - per the Standish Group, about 71% of the time - but not catastrophically. The process sees to that.

    By the same token, process-centric methodologies impose a sort of ceiling on ROI, thanks to the "heavyweight" process control overhead they require. Agile methods remove the ceiling and allow people to achieve whatever they're capable of. The question is, what are they capable of? ;-)

    It goes back to the old adage about risk and reward. If you want to mitigate the risk of project failure through strict process controls, the cost of doing that is to limit the potential earned value of the project. If you want higher earned value, you have to accept the risk of spectacular failure, too. IMO this is really one of the key reasons agile methods don't apply across the board, but only to select projects.



    So, when I say "average competence level", I'm referring to the statistical median of all programmers. I erred in using integers to refer to this -- I should have used percentiles. So, wherever I said 5/10, think of the 50th percentile.

    To rephrase my core point, Agile seems to only work above the 65th percentile (estimate) -- meaning that only the top 30% or so of programmers are able to use this methodology successfully. This is a serious problem because that is an unattainable goal for everyone.

    But you're still lumping "goodness" together with "averageness". I agree that Agile requires "goodness", that is, competency. This is often referred to as "above average", but what people really mean is "capable", or "competent". You're arguing from the rule of averages, in that requiring "above average" implies that Agile only works with an elite few, the folks who really are "above average". But I'm arguing that "above average" doesn't actually imply either competency or incompetency at all. Without an actual measure of the level of competency that Agile requires for success, your argument that Agile only works with an elite few falls apart. You're basing it on nothing, except perhaps anecdotal evidence; you're waving your hands.

    Let's try this semi-Socratically: What if the average programmer is wildly incompetent? That is, assuming competency is measured theoretically on a 1 to 10 scale, what if the average is 2? Very few people are above average, but that doesn't imply that those people are actually competent. Most of those elite few who are actually above average might only have a competency of four. Sure, they're better than average, but they still stink. The actually "good" programmers in such a theoretical scenario would be extremely rare. In such a scenario, Agile would never work at all.

    But look at the flip side. What if the average programmer is actually quite competent and capable? Yes, I realize this site is devoted to the other side of that, but let's just assume for the sake of argument. [:)] What if the average programmer has a capability of, say, 8? What if, to use your own numbers entirely inappropriately, the actual capability that Agile requires is, say, 6.5? In such a scenario, the vast majority of programmers could use Agile methods without difficulty.

    I'm not saying I disagree with you about Agile; I think Agile is a flawed process. I'm just saying you need to harden your arguments; this particular argument doesn't really hold any water.

    I also agree with your response to my Automan joke. I don't *actually* think I'm an eleven, I just couldn't resist quoting it since it pops into my head anytime anyone mentions a scale from one to ten.



  • @Dave Nicolette said:



    Processes don't write code, people
    do. Agile methodologies depend on people to get things done, and
    process exists to facilitate their work. Put unqualified people on an
    agile project, and they will fail spectacularly, because the
    lightweight process doesn't act as a safety net.

    Traditional
    methodologies depend on process to guide people to success, and the
    people exist only to follow the steps defined in the formal
    methodology. Put unqualified people on a traditional project, and they
    will fail - per the Standish Group, about 71% of the time - but not
    catastrophically. The process sees to that.





    I think unqualified people in agile methods are likely to deliver a
    piece of crap, while unqualified people in a traditional method will
    deliver nothing at all. Hmmm.... not an easy choice.



  • @Whiskey Tango Foxtrot Over said:

    Let's try this
    semi-Socratically: What if the average programmer is wildly
    incompetent? That is, assuming competency is measured
    theoretically on a 1 to 10 scale, what if the average is 2? Very few
    people are above average, but that doesn't imply that those people are
    actually competent. Most of those elite few who are actually above
    average might only have a competency of four. Sure, they're better than
    average, but they still stink. The actually "good" programmers in such
    a theoretical scenario would be extremely rare. In such a scenario,
    Agile would never work at all.





    The way Alex defines "average" (a more exact term is "median"), it is
    always 5.5, by definition. But that doesn't say anything about the
    score required to qualify for agile. In a group of world-class top
    coders, 50% of the people are still below the 50th percentile.
    Likewise, in a group of losers, 50% are better than the median. The only
    difference is that in the first group, agile would require the -3rd
    percentile, while in the second group it would require the 121st.



  • @ammoQ said:

    @Whiskey Tango Foxtrot Over said:

    Let's try this semi-Socratically: What if the average programmer is wildly incompetent? That is, assuming competency is measured theoretically on a 1 to 10 scale, what if the average is 2? Very few people are above average, but that doesn't imply that those people are actually competent. Most of those elite few who are actually above average might only have a competency of four. Sure, they're better than average, but they still stink. The actually "good" programmers in such a theoretical scenario would be extremely rare. In such a scenario, Agile would never work at all.



    The way Alex defines "average" (a more exact term is "median"), it is always 5.5, by definition. But that doesn't say anything about the score required to qualify for agile. In a group of world-class top coders, 50% of the people are still below the 50th percentile. Likewise, in a group of losers, 50% are better than the median. The only difference is that in the first group, agile would require the -3rd percentile, while in the second group it would require the 121st.

    Yes, that's exactly what I said. What I want to know is... where are those "Agile requires folks in this percentile" numbers coming from? That's my point; they're based on nothing... they're based on whatever whomever happens to think they should be. Without a precise translation from competence measurement to statistics about the competence, those numbers are absolutely meaningless.



  • To horribly misquote Dave Barry (I think), it was a highly involved and scientific study in which he wrote down numbers until one of them looked right.



  • ammoQ wrote:

    > I think unqualified people in agile methods are likely to deliver a piece of crap, while unqualified people in a traditional method will deliver nothing at all. Hmmm.... not an easy choice.

    I think this is a salient observation.

    Ask yourself why large IT shops continue to emphasize traditional methods despite the dismal track record of those methods. I think it is because heavyweight processes offer a kind of safety net against human error. To paraphrase ammoQ, "nothing at all" is better than "a piece of crap." The cumbersome administrative overhead of traditional processes places a de facto limit on the amount of value a traditional project can earn for the organization, but it also protects the organization against catastrophic failure. A failed traditional project doesn't bring the company to its knees. That's why large organizations can live with a success rate under 30%. In the aggregate, the value they get from the successful projects cancels out the waste from the failed projects.

    But there are times when a business opportunity is so compelling that a company wants to try to break through the value ceiling imposed by traditional process controls. To do that, you have to put highly qualified people on the job and then get the heck out of their way. Don't burden them with quality gates and requirements to produce deliverables other than working code. Assign people you can trust (the easy part), and then go ahead and trust them (the hard part, culturally).

    What ammoQ says here is very true, though. For the same reason that there is no ceiling on the potential earned value of an agile project, there is no safety net under it. The results depend on the capabilities of the team. If the team doesn't have the right capabilities, then they will fail. When they fail, they might fail catastrophically, since there is no "process" safety net.

    I used to believe success with agile methods was mainly a matter of mentoring people in the proper application of best practices. I used to think the difference between agile development and traditional development was more a question of mindset than of technical skill. I've come around to a different view, though. There really is a difference between well-qualified and underqualified technical people. When you take the latter away from the comfort of following a cookbook recipe for development work, they simply don't know what to do. You can pair with them and show them how to do TDD and all that jazz, and they just won't get it. They won't refactor properly because they don't understand sound software engineering practices well enough even to know what to refactor, let alone how. They can't recognize a "code smell" or understand what it means to "refactor to patterns", because they don't think in terms of patterns in the first place. They learned to write code by rote, and that's all they can do. The difference is that when they are working under the guidelines of a detailed, formal process, and coding to a set of detailed design specifications, someone or something tells them what to do and how to do it. That's how the "safety net" works in traditional methods.

    When I read participants on this forum declare that agile methods just don't work, in a blanket way, it sounds very strange to me. At our company we have a three-year track record of success with agile methods. How could that be possible if the methodology itself were flawed in some fundamental way? One success might be sheer luck, but a dozen consecutive successes and no failures, on a diverse set of projects of various sizes, carried out by different teams? Hmm. OTOH, our agile group has about 70 people, out of a total IT department of some 1300 people. Practically all our IT work is done with traditional methods. That puts it into some perspective, I think.

    I'm in no position to question other people's experiences or observations. There are reasons why some people think agile methods don't work. They may have experienced poorly-run projects. They may be thinking of things in a different frame of reference - for instance, some posters seem to be looking for a single approach that works in all cases, and they define any methodology that doesn't achieve that goal to be "flawed." It's hard to say. I know for a fact agile methods work, and they can deliver astonishingly great business value. But I also know agile methods don't work in all cases, for a variety of reasons. It's interesting to read all the different perspectives presented here.

    Why is it a "problem" that agile methods don't apply to every project, or that they can't be successfully applied by people who have poor technical skills? Is it also a "problem" that you can't tighten a slotted screw using a Phillips-head screwdriver? The average person can't build a set of kitchen cabinets; the job requires the skills of a competent carpenter. Does that mean wood is fundamentally flawed? Does the fact you can't perform brain surgery with a boomerang mean the concept of "boomerang" is fundamentally flawed?

    I suspect a lot of the opinions expressed here are the result of unrealistic expectations or inappropriate application of agile methods. As professionals, I think we should be more concerned with learning how to assess problems and choose solution approaches based on objective and quantifiable criteria than with defending a quasi-religious or emotion-based opinion about particular methodologies or tools.



  • ♿ (Parody)

    @ammoQ said:

    So how many (assumed average) programmers can you assign to one (assumed overperforming) designer?

    The question is quite vague but can be answered by going back to the percentiles. If you have a perfect process and work with a 100% spread, you will end up with an average (50%) output. Neither of these is going to happen, so the trick is to get your process as close to perfect as possible, minimize the damage done by the 20% people, and put the 80% people in the most risk-prone parts of the process.

    @ammoQ said:

    IMO creating detailed instructions is nearly as much work as implementing them, especially if they are implemented in a high-level language that doesn't require too much stupid  work.

    This is true, and oftentimes it's more work to write it in English than in C#. But this is where you put your 50% people: have the 80% business analyst do the high-level overview (let's say a scope document) and have the other BAs do the grunt work of meeting with the client to come up with detailed requirements. For development, the strong programmer does the "big picture" and the weaker programmer translates English to C#.

    Once the "big picture" is set, translation of customer requirements to English and English to C# are both rote and repetitive tasks. But they are all verifiable and accoutability is built in: if the code doesn't do what the requirement says, it's the programmer's fault. If the requirement's are vauge, then it's the BA's fault. If they are wrong, then it's the customer's fault. With Agile, it's no one's fault -- it's accepted as part of the process and it just gets fixed. And the customer still pays for all of these errors -- this is why you can't accurately estimate an Agile project.

    @Whiskey Tango Foxtrot said:

    where are those "Agile requires folks in this percentile" numbers coming from?

    ....

    Without an actual measure of the level of competency that Agile requires for success, your argument that Agile only works with an elite few falls apart. You're basing it on nothing, except perhaps anecdotal evidence; you're waving your hands.

    There are very few things that can be absolutely measured; most things in our day-to-day lives are relatively measured, including competency. Let us consider what competency means from an employment perspective: the ability to perform the expected tasks for a given job. The reason that competency is relative is that the "expected tasks" change as workers become more or less skilled. For example, in our industry, the difficulty level of the tasks has gone down over the past few decades (ever try programming on paper in COBOL?) and, as a result, more people have signed up to become programmers. This, in turn, lowered the expectations and the skill level required to be competent.

    All of the terms you used ("good", "bad", "weak", "strong") are relativistic and based solely on the median. Try defining "good" without using a definition that's effectively "better than adequate." Unfortunately, I do not have my library with me, but the requirement of "good people" (and, oftentimes, "good environment") comes directly from the literature. In the "failure" case studies I've read, the conclusion was that agile didn't work because of the people or the environment. The seminars I've gone to reiterate the "good people / good environment" rule. It's not me dictating that this process requires good people (though I will agree with that), but its proponents.

    @Dave Nicolette said:

    {Agile} succeeds when people of high competence use it. As many of us have stated before, agile development doesn't apply to every problem. Many - maybe most - IT projects just don't require an exceptional level of expertise.

    I'm not saying that Agile doesn't work with the right environment and the right people. But my argument is that it doesn't/cannot work in most environments due to this very requirement. That's why I consider it to be more of a "fad" that works wonders for a select few, but a failure for most overall.



  • @Dave Nicolette said:


    A failed traditional project doesn't bring the company to its knees. That's why large organizations can live with a success rate under 30%. In the aggregate, the value they get from the successful projects cancels out the waste from the failed projects.

    It's not always like that. Consider a successful company that reaches its limits with the old processes. They are running at 120% capacity and still have trouble fulfilling the growing load of orders. To solve that, they invest, say, 100 million USD to build a highly automated distribution center. To use it, they also need new software that can handle the new processes. They simply cannot run the new distribution center on the old software, and they cannot run it manually with paperwork. Not having the software ready when the rest of the building is finished means they cannot use the new building. Such a failure could well be fatal to the company.



  • @Alex Papadimoulis said:

    All of the terms you used ("good", "bad", "weak", "strong") are relativistic and based solely on the median. Try defining "good" without using a definition that's effectively "better than adequate." Unfortunately, I do not have my library with me, but the requirement of "good people" (and, oftentimes, "good environment") comes directly from the literature. In the "failure" case studies I've read, the conclusion was that agile didn't work because of the people or the environment. The seminars I've gone to reiterate the "good people / good environment" rule. It's not me dictating that this process requires good people (though I will agree with that), but its proponents

    I agree that the proponents are dictating that good people are required for the process, but I disagree with your definition of "good" -- I disagree that it's relativistic. I agree that it's very difficult if not impossible to actually measure, but that doesn't mean that you can only measure it by the median. Trying to force "goodness" to the median doesn't work -- it's a spurious relationship.

    Essentially, I'm saying that you're basically arguing apples while the Agile proponents are arguing squares -- one is a solid, physically real quantity, another is an abstract representation. Agile proponents don't mean "good" as in "above average"/the bell curve, they mean "good" as in "capable of doing what needs to be done". Whether or not people are "capable of doing what needs to be done" is not derived from the median capability of all programmers. The median capability of all programmers may indeed be high enough that all programmers are, indeed, "capable of doing what needs to be done" in order for Agile to succeed.

    I'm not necessarily saying that such is the case, as this site offers evidence to the contrary daily; I'm merely saying that your stating it as fact is equivocation. You're equating "good" spuriously (is that a word?) with "above average", but Agile proponents don't actually mean it that way.

