Agile != cowboy



  • Quote: I believe that it's easiest to put "agile development" into
    perspective when we refer to it as its name from yesteryear: Cowboy
    Coding. The fact of the matter is, "agile development" does not work
    and cannot work.



    Agile development is not cowboy coding.  There are plenty of cowboys
    out there calling themselves agile, and they are (of course) failing,
    but don't let that confuse you.



    (Minor nit: "agile development" is an umbrella term used to describe
    various alternatives to the classical methodology, so you have to be
    careful with generalizations.  For example, XP is agile, and Scrum is
    agile, but XP is not Scrum.)



    The classic method and the agile methods are both designed to minimize
    the cost of changes during the development process.  The classic
    method does it by proceeding carefully, so you can find the problems
    and make the changes as early as possible, when changes are
    cheap.  The agile model sacrifices some of the processes that
    minimize the rate of change, and instead uses processes that minimize
    the COST of change. The rate of change goes up, but the cost of change
    stays small, so you come out ahead in the end.



    Cowboy coding is the worst of both worlds - no planning to minimize the
    rate of change, and no practices to minimize the cost of change. 
    But don't confuse that with the agile approach - agile teams still put
    a lot of work into their process (read: things other than banging out
    code), they just put it into things like comprehensive automated tests
    rather than comprehensive design documents.
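
    (To make "comprehensive automated tests" concrete: here's a minimal
    sketch, in Java with JUnit 4, of the kind of test meant here. The
    DiscountCalculator class and its rules are hypothetical, purely for
    illustration.)

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        // Hypothetical class under test.
        class DiscountCalculator {
            double discountFor(double orderTotal) {
                return orderTotal >= 100.00 ? orderTotal * 0.10 : 0.0;
            }
        }

        public class DiscountCalculatorTest {
            private final DiscountCalculator calc = new DiscountCalculator();

            // Each test pins down one business rule. Together they form the
            // safety net that keeps the cost of change low: change anything,
            // re-run the suite, and know within seconds if you broke a rule.
            @Test
            public void ordersUnderTheThresholdGetNoDiscount() {
                assertEquals(0.0, calc.discountFor(99.99), 0.001);
            }

            @Test
            public void ordersAtOrAboveTheThresholdGetTenPercent() {
                assertEquals(10.0, calc.discountFor(100.00), 0.001);
            }
        }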



  • ♿ (Parody)

    Unlike the term "agile", there is no society/association/manifesto providing a definition of "cowboy coding". It's therefore difficult to say that "agile != cowboy" because we really can't define what cowboy coding is; I say "agile" is a new word for it, you say it isn't. What I can do, however, is demonstrate that the Agile Methodology is (and will continue to be) a failure.

    First and foremost, it goes against basic common sense -- the cost of change grows exponentially with the time that has passed. Be it a skyscraper or a novel, it's ridiculous to believe that some magical process will make this go away. The *only* way we can reduce this cost is by sacrificing the quality of the product.

    Your skyscraper will need to be built with Legos and your novel will need to work with a protagonist who is either a lonely old man seeking one last adventure or a college graduate with the aspiration to take on the world. Your software will need to be generic to the point of uselessness. It all adds up to a crappy building, a crappy book, and a crappy application.

    The second main failing of Agile is that it ignores the most expensive part of software: maintenance. Sure, a barrage of automated unit tests is great and all, but three years down the line when all the original coders have left, you're going to be facing a new team of clueless coders. Not only will the new guys have no comprehensive documentation, but after the hackfest known as "initial development," the system will be too complicated to understand from the source code alone. This will lead to bugs, bugs, and more bugs, and you'll be left with a very expensive system to maintain.

    I consider this cowboy coding. Sure, it's all "cleaned up" and has a few books written on it, but the utter disregard for the quality and maintainability of the product, all while ignoring forty years of tried-and-true software development methodology sure seems pretty "yeeeeee-haaaaawww" to me.



  • @Entendre Entendre said:

    The classic method and the agile methods are both designed to minimize the cost of changes during the development process.  The classic method does it by proceeding carefully, so you can find the problems and make the changes as early as possible, when changes are cheap.  The agile model sacrifices some of the processes that minimize the rate of change, and instead uses processes that minimize the COST of change. The rate of change goes up, but the cost of change stays small, so you come out ahead in the end.

    Wrong. It minimises the cost of change in the short term by making the rate of change faster.
    In the long term, though, the cost of change goes up exponentially because of the sacrifices needed to make that rate of change possible (lack of documentation and design, lack of foresight and contemplation of possible future requirements leading to a lack of expandability, etc.).

    That's not to say agile methods can't be used as part of a development process, but they need to be tempered by process if the product is to endure over the years instead of being a short-term one-off.



  • @Alex Papadimoulis said:

    Your software will need to be
    generic to the point of uselessness. It all adds up to a crappy
    building, a crappy book, and a crappy application.





    "Generic to the point of uselessness"? You are forgetting that one of
    the main requirements of Extreme Programming is to have a
    representative of the customer on the team. In other words, you have a
    living functional specification working with you. How could a written
    document be better? And a set of transcripts from the design meetings
    might be superior to an overwritten design document that ignores the
    realities in the field. That is, provided you bother to keep records -
    but that has nothing to do with any particular methodology; it's a
    human issue.



    As for your analogies, I'd say a skyscraper is a one-shot project by
    definition (and thus it's nothing like software development), while
    many good novels are woven from disparate pieces, sometimes even
    written by different authors. And from a certain perspective, putting
    together a coherent book is harder than putting together a coherent
    piece of software - you have fewer technical restrictions to guide you.



    Then again, some good comparative case studies would be more relevant than any theoretical discussion.



    Cheers,

    Felix



  • ♿ (Parody)

    @felix said:

    You are forgetting that one of the main requirements of Extreme Programming is to have a representative of the customer on the team. In other words, you have a living functional specification working with you.

    The "on site customer" is as big of a joke as "pair programming." An individual who has the in-depth knowledge required to accurately define requirements isn't going to waste his days sitting in the "dev room" with coders. At best, you'll get a junior-level employee (or contractor) who the business can live without for N-months. Or, do you seriously think that you can force a manager or senior-level resource to sit with coders for all or part of the day?

    I suppose one advantage to having a kid define the system is that it'll be real easy to use him as a scapegoat when everything goes to hell.

    @felix said:

    How could a written document be better? And a set of transcripts from the design meetings might be superior to an overwritten design document that ignores the realities in the field.

    When you have to change or add to the system, how are you possibly going to know what the system does? Reverse-engineer it from the UI? Pray that the users can spit out what it really does? Rely on coders to translate code back into functional specifications? Or just hope that the original project team is able to get back together and work on this new project?

    And then there's the matter of accountability. If a feature is not working as desired, who's to blame? If it was in the requirements, and the business "signed off" on the requirements, then it's pretty clear. Since the concept of "sign off" doesn't exist in agile, you really can't blame one side or another. Are you really willing to take the fall for the crappy requirements defined by your "on site" junior-level customer? Or is the plan just to keep him as a scapegoat?

    ---

    The "Agile" movement is a perfect example of a publishing-industry-generated fad. Someone came up with a pretty bad idea (eXtreme Programming) and wrote a book on it. That book made lots of money. Other authors improved on the idea (how hard was that?), and the idea "evolved" into "Agile" methodologies. More and more books were written on it. Lots of money was made.

    The methodology is continuing to "evolve." The latest "bright idea" is the concept of Prefactoring[1] -- i.e., doing some up-front design before starting to work. I suspect this will take off and continue to "evolve" until we are left with the classic, spiral methodology defined in the 60's and perfected in the 70's.

    [1] see http://www.oreilly.com/catalog/prefactoring/



  • @Alex Papadimoulis said:

    @felix said:

    You are forgetting that one of the main requirements of Extreme Programming is to have a representative of the customer on the team. In other words, you have a living functional specification working with you.

    The "on site customer" is as big of a joke as "pair programming." An individual who has the in-depth knowledge required to accurately define requirements isn't going to waste his days sitting in the "dev room" with coders. At best, you'll get a junior-level employee (or contractor) who the business can live without for N-months. Or, do you seriously think that you can force a manager or senior-level resource to sit with coders for all or part of the day?

    Someday I'd like to work on a project that is so small that a single representative of the customer can provide the requirements. The specs I've written and worked from have all taken the input of at least four of the customer's people to adequately define the system.

    @Alex Papadimoulis said:

    @felix said:
    How could a written document be better? And a set of transcripts from the design meetings might be superior to an overwritten design document that ignores the realities in the field.

    When you have to change or add to the system, how are you possibly going to know what the system does? Reverse-engineer it from the UI? Pray that the users can spit out what it really does? Rely on coders to translate code back into functional specifications? Or just hope that the original project team is able to get back together and work on this new project?

    Not all specs are "overwritten", nor do they "ignore the realities of the field". Good specs are concise and complete, and draw from a variety of sources, including people "in the field". So the answer to the question is that transcripts would be superior to a poorly-written spec. But a well-written spec is better still. And I for one would hate to have to review the transcripts of design meetings to figure out what the hell the system is supposed to do - in meeting 1, the customer says A, in meeting 3 he says B, so you code B only to discover later that buried in the transcript of meeting 4 it went back to A.

    Not that I'm necessarily against agile programming - I'm not familiar enough with it - but these arguments just got to me.



  • @Alex Papadimoulis said:

    The "on site customer" is as big of
    a joke as "pair programming." An individual who has the in-depth
    knowledge required to accurately define requirements isn't going to
    waste his days sitting in the "dev room" with coders. At best, you'll
    get a junior-level employee (or contractor) who the business can live
    without for N-months. Or, do you seriously think that you can force a
    manager or senior-level resource to sit with coders for all or part of
    the day?





    All right, you do have a point. But in my experience, the officially
    approved spec is always insufficient, and the customer's contact person
    is never there to give clarifications, not even by e-mail. Not even
    when that's his/her assignment. I wish I had them sitting next to me so
    I could show them how poorly thought out their requirements were.



    Oh well, perhaps I was just unlucky.



  • ♿ (Parody)

    @felix said:

    in my experience, the officially approved spec is always insufficient, and the customer's contact person is never there to give clarifications, not even by e-mail. Not even when that's his/her assignment. I wish I had them sitting next to me so I could show them how poorly thought out their requirements were.

    Oh well, perhaps I was just unlucky.

    I can see where you're coming from; it is a sign of an immature organization. Development (via the project's lead programmer) always needs to sign off on the spec before delivery to the client for sign-off. This is essential for reasons beyond good requirements -- what if your analyst (or whoever writes the spec) promises that the system will have a mind-reading interface, or something equally impossible?

    Some organizations (especially smaller ones) believe that this is not possible to do; how can you sell a software system if it takes 30% just to figure out how much it will cost? I consult for these organizations to demonstrate that it really is possible, and not as bad as they thought. The trick is following a few simple rules and providing stepped estimates, starting with the scope overview and ending with the final spec.



  • Funny you should start this topic. I'm a dev manager. Dealing with the fine line between agile and cowboy is my daily task.


    Cowboy coding as a modus operandi makes for unreleasable projects. I'd de-hire people if that was their only way of working.



    We try to be agile, and indeed most of the time we are. It gets ugly
    when one of the senior guys gets an Idea. Ideas scare the shit out of
    me when it's close to the release date. Cowboy coding always
    starts from an Idea.



    Ideas are great during the spec phase. I have the luxury of setting
    "Technical Debt" as a clear task on my schedule, and that's when we do
    most of our great Ideas.





    We've developed a strategy to reduce cowboying: when an IDEA shows up late in a release, I "force"... er, strongly encourage pair coding. It's not perfect, but it does help.




    Pair coding itself has only a few advantages: a dramatically lower error rate and usually better-documented code: they keep each other honest. Debugged lines per hour of coding are likely higher, but you have to divide them across the two people producing them. Pair coding is a loss if your team is mostly careful people... and why would you hire careless people?




    This lack of preview is annoying




    BTW, first post for me





  • It's my understanding that Yahoo is effectively transitioning from Cowboy to Agile.

    Granted, that doesn't prove anything -- but clearly, some people with a lot of money behind them seem to think there are some benefits to adopting agile methodologies.

    I think it's also worth considering that different methodologies may be better fits for different projects. For example, web apps tend to be evolved products -- not only is the technology powering them changing constantly, but there's no single customer, and hence no single set of requirements.  The demand for certain features may not be discovered (or even exist) until long after the initial release.



  • @Alex Papadimoulis said:

    I can see where you're coming from; it is a sign of an immature organization. [...]

    Some organizations (especially smaller ones) believe that this is not possible to do; how can you sell a software system if it takes 30% just to figure out how much it will cost?


    You are sooo right. That's exactly what happens. Add to this the always-too-short deadlines (longer ones would drive costs beyond what most customers are willing to pay) and you get an environment where agile (at best) development is the only way to go. At least I get to organize my code in a sane way. The stories I hear from some of my friends...

    @Alex Papadimoulis said:


    I consult for these organizations to demonstrate that it really is possible, and not as bad as they thought. The trick is following a few simple rules and providing stepped estimates, starting with the scope overview and ending with the final spec.



    Too bad we live on different continents, my boss could use your expertise :-)



  • @Alex Papadimoulis said:

    @felix said:

    in my
    experience, the officially approved spec is always insufficient, and
    the customer's contact person is never there to give clarifications,
    not even by e-mail. Not even when that's his/her assignment. I wish I
    had them sitting next to me so I could show them how poorly thought
    out their requirements were.

    Oh well, perhaps I was just unlucky.

    I can see where you're coming from; it is a sign of an immature organization. Development (via the project's lead programmer) always needs to sign off on the spec before delivery to the client for sign-off. This is essential for reasons beyond good requirements -- what if your analyst (or whoever writes the spec) promises that the system will have a mind-reading interface, or something equally impossible?

    Some organizations (especially smaller ones) believe that this is not possible to do; how can you sell a software system if it takes 30% just to figure out how much it will cost? I consult for these organizations to demonstrate that it really is possible, and not as bad as they thought. The trick is following a few simple rules and providing stepped estimates, starting with the scope overview and ending with the final spec.



    Agile programming is what you get when you expect under-resourced and overworked developers to fit 6 months of work into 6 weeks, because the salesman / PM / boss decided it. My unfortunate experience has been one of continually chasing our tails, never having enough time to 'do things right' or document things properly, and compromising my principles over and over again.

    Agile programming is simply management's new term for screwing over developers with little pay for too much effort.



  • The "on-site customer" does wonders for scope cre... ahm "the natural process by which clients discover what they really want."

    nonDev



    it is a sign of an immature organization.


    True.

    It is what I'm experiencing at the company I work for. We've been growing rapidly, and more and more, we're seeing the need to really be as strict as possible, to get the spec signed by the client and live by that spec. If the customer gets Ideas, they're filed under Extraspecular Activities and billed separately.


  • @dhromed said:

    If the customer gets Ideas, they're filed under Extraspecular Activities and billed separately.




    We have that practice, too, but it doesn't solve everything. With the
    specs sometimes being vague - for reasons explained above - arguments
    do occur ("But we thought that was implied by the original spec!").



    You wouldn't believe the level of modularity and extensibility I've learned to embed in my designs :D



  • ♿ (Parody)

    @Quinnum said:

    Agile programming is simply management's new term for screwing over developers with little pay for too much effort.

    I've spotted a similar trend as well at large organizations. Executive IT management will have defined a minimal set of guidelines to follow for software development projects. These guidelines will undoubtedly be a more "traditional" project lifecycle, either because the C_O-level folks see right through the "agile" fad, or are simply too slow to change things. Either way, the rules stand: projects go through the "traditional" lifecycle.

    Middle managers, fully aware of the rules, will declare that their group is an "agile" group. Fad-crazed developers (generally the type that don't work at large organizations) will flock to this group. As part of the "agile" methodology, the entire team will be "de-cubicled" and shoved into a conference room. A business analyst will be declared the "on-site" customer. Days will be long because "hey, this is agile -- we need to prove it will work."

    Granted, the entire project will still follow the guidelines: specs will be declared up front, signoff will be required, changes must be managed, etc. But, the whole thing will feel "agile" so no one on the team will notice.

    The end result is an "agile" success story. The manager was able to deliver the same product in less time while consuming fewer resources (both space and personnel).

    Props to the middle-manager for suckering the team to work long hours cramped in a conference room by calling it "agile."


  • ♿ (Parody)

    @felix said:

    With the specs sometimes being vague - for reasons explained above - arguments do occur ("But we thought that was implied by the original spec!").

    I have to share the most vague spec I've ever come across. The project happened to be the largest that the development company had acquired ($50,000 budget) and the entire spec document was about three pages. This part ended up being a very expensive piece to implement ...

    Administrative Functions
    Administrative functionality will be provided to administer and maintain all functionality within the product.

    ... especially after the fact when the client demanded that they be able to admin everything -- not just the products, users and their subscriptions.



  • @dhromed said:

    If the customer gets Ideas, they're filed under Extraspecular Activities and billed separately.

    In my experience, it's the Ideas from the devs that need to be managed carefully...  [;)]  Honestly, I've seen guys (senior guys who SHOULD know better) decide to throw "cool" ideas into production code at the last minute - not only missing their original task deadlines but also introducing major new bugs that had to be rolled out of the code at delivery time two days later.  I've come up with some "cool" ideas from time to time, but at least I've been smart enough (knock on wood) to put them on a future roadmap rather than try to throw them in piecemeal.



  • @GalacticCowboy said:

    @dhromed said:

    If the customer gets Ideas, they're filed under Extraspecular Activities and billed separately.

    In my experience, it's the Ideas from the devs that need to be managed carefully...  [;)]  Honestly, I've seen guys (senior guys who SHOULD know better) decide to throw "cool" ideas into production code at the last minute - not only missing their original task deadlines but also introducing major new bugs that had to be rolled out of the code at delivery time two days later.  I've come up with some "cool" ideas from time to time, but at least I've been smart enough (knock on wood) to put them on a future roadmap rather than try to throw them in piecemeal.



    We ideate to help the customer formulate his desires.

    We do not invent new stuff just before the deadline, "just cuz". That's insane. We have work to do, you know.


  • @Alex Papadimoulis said:

    @felix said:

    With the specs sometimes being
    vague - for reasons explained above - arguments do occur
    ("But we thought that was implied by the original spec!").

    I have to share the most vague spec I've ever come across. The project happened to be the largest that the development company had acquired ($50,000 budget) and the entire spec document was about three pages. This part ended up being a very expensive piece to implement ...

    Administrative Functions
    Administrative functionality will be provided to administer and maintain all functionality within the product.

    ... especially after the fact when the client demanded that they be able to admin everything -- not just the products, users and their subscriptions.


    I've hung myself on several occasions by having to rush out a spec (again thanks to no time given!) and put in a careless sentence.

    Because the system I worked on had become a big ball of mud (a lot of which I'm sure I contributed myself over my seven years there - I never got a chance to rewrite it from the ground up like I wanted to!), the careless sentence - on examination - suddenly amounted to about 2 weeks of extra work. Damned lack of code re-use and too much copying and pasting. Granted, the language was really dated anyway...



  • @dhromed said:

    @GalacticCowboy said:

    @dhromed said:

    If the customer gets Ideas, they're filed under Extraspecular Activities and billed separately.

    In my experience, it's the Ideas from the devs that need to be managed carefully...  [;)]  Honestly, I've seen guys (senior guys who SHOULD know better) decide to throw "cool" ideas into production code at the last minute - not only missing their original task deadlines but also introducing major new bugs that had to be rolled out of the code at delivery time two days later.  I've come up with some "cool" ideas from time to time, but at least I've been smart enough (knock on wood) to put them on a future roadmap rather than try to throw them in piecemeal.



    We ideate to help the customer formulate his desires.

    We do not invent new stuff just before the deadline, "just cuz". That's insane. We have work to do, you know.


    Go on, put it in. It'll only take a few minutes!



  • @Quinnum said:

    Go on, put it in. It'll only take a few minutes!

    Exactly...  [:D]

    To your previous comment:  One of our (previous) account managers used to put the following "boilerplate" statement in her requirements docs:

    "Anything not explicitly included in this document is implicitly excluded."

    Which may have been halfway useful if it were ever actually enforced...  However, her requirements were usually vague and she'd always respond with "well, we'll clarify that later..."



  • At least she (and you!) had something to fall back on if and when a customer came complaining about something that wasn't in the product after delivery, which is what such statements are for.



  • @Alex Papadimoulis said:

    Administrative Functions
    Administrative functionality will be provided to administer and maintain all functionality within the product.

    ... especially after the fact when the client demanded that they be able to admin everything -- not just the products, users and their subscriptions.



    Bwahahahaha! That's a great one! What did you do?



  • I just discovered this forum, and I have to say I found the discussion
    of agile software development methods to be very interesting and
    unexpected. Interesting, that so many people can have such
    strongly-held misconceptions about the subject. Unexpected, that this
    would be true in the year 2006.



    Some state flatly that agile methods do not and cannot work. Hmm. Zone or Limits? Or just April Fool's, maybe?



    Some say they are afraid of ideas. Okay, whatever.



    Imagine how bizarre it feels to read such things after three years of
    consistent success in delivering quantifiable business value to happy
    customers using agile methods, which came after 25 years of following
    the traditional methods that have given rise to the 71%-84% software
    project failure rates reported by industry analysts such as Standish,
    Gartner, and Forrester over the past 4 decades.



    Sure, agile methods don't fit every project and aren't amenable to
    every type of person. But where they do fit, and when they're applied
    correctly, they work just fine.



    Oh, well.



  • For some projects, an agile method is the right thing and for other
    projects, it isn't. I'm not surprised that agile methods don't have
    many friends here because this forum is dominated by users of MS'
    development tools. Though things changed a bit with .net, it is my
    strong impression that MS' tools are focused on the traditional
    development process. It's only natural for MS to see it that way; the
    kind of software they make could hardly be made with agile methods.

    So when are agile methods the right way to do it?

    Well, some projects are very innovative and there is a strong
    multidirectional influence between the system to be built, the business
    processes it represents, and the environment; it's very likely that
    changes will be necessary, but it's not possible to tell what those
    changes will be until the system goes live. I'm working in the field of
    warehouse logistics, and most projects we do don't simply model the
    current business process; they are installed concurrently with new
    processes and new rules. While it is already hard to perfectly plan the
    computer system in advance, it's even harder to perfectly plan the
    logistic processes that will take place in the future. There will be
    bottlenecks, there will be congestion, and the processes will have to
    be changed. Fast. Believe me, if a warehouse is unable to deliver the
    incoming orders in time, there is no time to go through a lengthy
    requirement-change signoff process. Processes have to be adapted by
    noon, rollout during the lunch break.



  • You know what I think? I think agile is getting such a bashing here because pair programmers don't surf the net when they're supposed to be coding :)

    I don't quite get where the "agile will fail" camp is coming from. Agile can and has succeeded, sometimes spectacularly. To state your position so absolutely is simply wrong. Equally, to suggest that spiral dev was "perfected" in the 70s is a stretch. I'm not saying spiral isn't good (the sheer volume of spiral products in the wild attests to that) but it's not perfect. Spiral dev will still fail, especially if you take the approach "spiral dev is perfect, if I use spiral nothing will go wrong".

    Exactly the same thing can be said of agile methods, though. I'd agree with just about every negative comment made about agile dev in this thread, but I still use agile methods. The important thing is to recognise and avoid all the common mistakes associated with your chosen dev method.

    Some things I like about agile:

    • Getting a deliverable to the customer within a couple of weeks (as long as they understand that the functionality delivered is subject to change). Essentially you create a feedback loop between you and the customer, ensuring that at all times your spec is as close to "real world" as possible.
    • Pair programming. Maybe you're good at writing readable, intuitive, bug-free code, but I'm still not there yet. In my experience, two programmers at one computer will generate more usable code than two programmers at two computers.
    • No code ownership. The idea of self-documenting code works because the code needs to be re-understood on a daily basis. If I pick up a module and I don't know within two minutes what's going on, I'll call over the guy(s) who were working on it yesterday and ask them. Then, we re-write it so that it makes sense to an outsider (see the sketch after this list). Doesn't take long. The code is cleaner, and the coders learn a little bit more about what it is that makes code unclear.
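
    (A trivial before/after sketch of that kind of rewrite - hypothetical
    Java, just to illustrate the idea:)

        // Hypothetical before/after, illustrating the daily "rewrite it
        // so it makes sense to an outsider" step described above.
        public class InvoiceMath {

            // Before: works, but only the author knows what it means.
            static double f(double[] a) {
                double t = 0;
                for (double x : a) t += x;
                return t * 1.19;
            }

            // After: identical behavior, readable without calling anyone over.
            static final double VAT_RATE = 0.19;

            static double totalIncludingVat(double[] lineItemPrices) {
                double net = 0;
                for (double price : lineItemPrices) {
                    net += price;
                }
                return net * (1 + VAT_RATE);
            }
        }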


  • Agile methods can work, to a degree. But in the real world, most often the project requirements don't lend themselves to the rather shoddy approach to design (basically no design, just winging it) that many agile methods employ.

    If you're working on a small application that won't need to be maintained and expanded over several versions and years, that's fine; for large enterprise-level systems it's asking for disaster.
    The problem is that the consultants who prefer agile methods build such large systems and then leave, leaving people like the common denizen of this site to pick up the pieces a year or so later when something needs to be changed.

    What you're describing as strong points of agile development are the few points where it is indeed strong (though I hate pair programming, I can't work with someone looking over my shoulder constantly).
    And those are precisely the parts of "agile" development that are often left out by those same consultants to save money (after all, why send 2 people to pair program when one person can create the same amount of code in almost the same time? You'd just be undercut by the competition for the contract).



  • Interesting. I would have said that in the real world the project requirements can't be expected to remain static for several months at a stretch. But of course, every project is different.

    At first glance, agile methods look a lot like cowboy programming. So naturally, people will use the phrase to hide the fact that they're just cowboys. Which, in turn, makes agile methods look more like cowboy programming, since a lot of it just is. If someone using "agile methods" tells you that it requires basically no design, they are cowboys. Do not confuse them with dedicated agile programmers. Agile methods rely on very good design in the short term to enable good design to emerge in the long term. This is not the same as just winging it.

    And of course, there are the people who just plain do it wrong. As you say, pair programming is a practice not used often enough. People have this idea that it's just a fringe concept of agile methods. It's not. It's an integral part of the process, along with ownerless code, that ensures code remains at a standard where self-documentation is viable.

    Basically, agile methods will only work if you take them seriously.



  • It's easy to see how some people could get the wrong impression from a superficial reading about agile methods, and come away believing there's absolutely no documentation at all, or that there's no design at all, etc. To be successful with any approach you have to understand what it's all about, and not just believe in some myths about it. After that, you have to apply the approach to the right sorts of problems. The best slotted screwdriver in the world can't turn a Phillips-head screw. Finally, you have to apply the approach rigorously and knowledgeably. Fail on any of those three counts, and it's Game Over.

    Far from being like cowboy programming, agile methods actually demand quite a high level of self-discipline on the part of practitioners. Typically, your day to day activities are not tightly governed by a formal process, you're not closely supervised, and there are no third-party code reviews. If you're sloppy about the work, things fall apart pretty quickly. Agile methods demand self-managing teams. It's no mystery what will happen if the team fails to step up to the plate and manage itself.

    Culturally, this is one of the biggest obstacles to overcome in introducing agile methods to an organization. People are used to being told what to work on, and then being judged by some external authority. They're not used to being enabled to act on their own professional judgment, and then held accountable for results. Scary? Sometimes. Exhilarating and empowering? Always.

    In that general sense, I guess agile methods are no different from any other approach. If you don't understand your tools, or you use the wrong tool for the job, or you use the tool improperly, then naturally your results won't be so good. The question is, what will you do next? Blame the tools?


  • @Alex Papadimoulis said:

    The methodology is continuing to "evolve." The latest "bright idea" is the concept of Prefactoring[1] -- i.e., doing some up-front design before starting to work. I suspect this will take off and continue to "evolve" until we are left with the classic, spiral methodology defined in the 60's and perfected in the 70's.

    [1] see http://www.oreilly.com/catalog/prefactoring/

    Alex,

    Although I happen to agree with you for the most part about Agile, I'd just like to point out that I read the sample chapter for this book, and it contains many Good Ideas(tm). Not for Agile, but for design in general. It may be worth picking up just for the design aspects.

    On Agile:

    If any Agile zealot tells me (again) that the code and the test suite are the documentation for the code, or uses underhanded terms that mean the same thing, such as "the documentation always lies", and "refactor", I shall shoot them in the face with a bazooka. I've met some of the key players in the Agile movement, such as Robert Martin (so funny I'd have his manbabies) and Scott Ambler (does he *ever* take that cowboy hat off?), and some of the things they have to say make sense, but others make me want to scream. No software effort can be considered complete without (a) completed document(s) that explains how the program works and what each individual object/method/variable represents/does/means. Agile zealots tend to fail to understand that.

     

    P.S. The above is not actually a death threat, for the more litigious WTFers here.



  • Author Ken Pugh states in his summary of the Prefactoring book on
    Amazon that the idea of prefactoring resulted from a discussion of "why
    software came to need refactoring and practices that might lessen that
    need." The statement is a red flag. A person who thinks refactoring is
    inherently undesirable is overlooking something fundamental about the
    nature of software development, regardless of methodology.



    (Ironically, just as I finished typing that a colleague came to my desk
    and said the same thing. A 20 minute discussion ensued. So Pugh is not
    the only person who is put off by the idea of refactoring. Anyway...)



    Software has always needed refactoring. The problem has been that
    refactoring was impractical until relatively recently. Just consider
    any legacy application more than ten years old. How difficult is it to
    enhance the application without introducing a lot of defects? My guess
    is you would answer "very." And why is that? Isn't it because over
    time, as people make enhancements and fix bugs, the codebase gets to be
    progressively messier? It's only natural.



    The good news is that in recent years development tools have started to
    include support for refactoring - not a synonym for "any kind of
    change," but a short list of specific types of code modifications, such
    as moving a method out of a subclass and into a superclass, for
    instance. With such tools (Eclipse, IntelliJ IDEA, Microsoft VS, IBM
    WSAD, etc.) you can propagate a change through the codebase safely. If
    you mess it up, you can revert to a stable version. Before these tools
    became commonplace, people didn't make massive changes to the codebase
    because it would have been a tedious and error-prone manual effort, and
    not because codebases didn't "need" to be refactored.
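
    (For instance, the "pull up method" refactoring mentioned above turns
    duplicated subclass code into a single superclass method. A minimal
    hypothetical Java example:)

        // Before the refactoring, monthlyInterest() was copy-pasted into
        // both subclasses. "Pull up method" moves the one implementation
        // into the superclass, and the tool rewrites the subclasses and
        // every call site in a single safe step.
        abstract class Account {
            abstract double balance();
            abstract double interestRate();

            // The pulled-up method: one copy instead of two.
            double monthlyInterest() {
                return balance() * interestRate() / 12;
            }
        }

        class SavingsAccount extends Account {
            private final double balance;
            SavingsAccount(double balance) { this.balance = balance; }
            double balance() { return balance; }
            double interestRate() { return 0.04; }
        }

        class CheckingAccount extends Account {
            private final double balance;
            CheckingAccount(double balance) { this.balance = balance; }
            double balance() { return balance; }
            double interestRate() { return 0.01; }
        }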



    Alex's observation that "prefactoring" sounds like a return to earlier
    methodologies is well taken. As I see it, there has been an
    overreaction to the general failure of traditional approaches to
    software development. When you talk about agile zealots not wanting to
    document anything and not wanting to design anything, I wonder if it is
    a consequence of that overreaction. It leads to the misconception that
    agilists are totally against design and documentation. It's
    understandable.



    Those of us who successfully use agile methods try to come up with the
    "right level" of up front design and the "right amount" of formal
    documentation. Obviously that is greater than zero and, realistically,
    it must be less than the traditional norm. I put those phrases in
    quotation marks because the "right amount" depends on the particular
    project. I've mentioned before that agile development is hard to do
    because it requires such a high level of discipline. There's no way to
    avoid using professional judgment about how much up front design to do.
    There's no process "cookbook" to tell you exactly what sort of design
    "document" to produce before you can start coding.



    But there is help: Patterns. These days, we all tend to think in terms
    of patterns. It's taught in school, and the older ones among us have
    picked it up over the years. Most software development in business
    organizations deals with problems in well-known domains, and the
    solution patterns that apply to those domains are mature. In that
    environment, design work doesn't begin with a blank slate on every
    project. We already have a pretty decent head start on design because
    our solutions are based on patterns.



    Mr or Ms "Whiskey Tango Foxtrot? Over" worries that there won't be
    detailed documentation about every method and variable. I understand
    his/her concern. But the school of thought that comments and
    documentation are useless predates the agile movement by decades. I've
    heard many arguments about that subject throughout my career. A lot of
    people say you should have no comments at all in your source code.
    Others favor so-called "literate programming," in which it's hard to
    find the source statements among the comments. All I can say is
    anything that's not obvious should be documented. That's not a very
    good answer, of course; who is to say what "obvious" means?



    What about the code and test suite serving as documentation for the
    code? Not really. They won't give you an overview of the purpose of the
    application, or the specific purpose of some portion of the
    application. Who says we shouldn't write such documentation? But when
    you talk about writing documentation to explain the meaning of every
    variable, it sounds like that might be taking documentation a bit too
    far. If you can't tell that the purpose of the variable i is to control
    the for loop in which it is declared, then we have a more fundamental
    problem than the lack of documentation. If you think a variable named
    accountNumber might be a reference to a strawberry milkshake or a stack
    of firewood, then can I really expect you to understand plain English
    words in a document any better?



    As a practical matter, formal documentation that is stored separately
    from the codebase is rarely useful for production support or for future
    projects that make enhancements to the application. The people who
    might want to read it aren't going to go looking for it. You can check
    documentation files into the version control system alongside the code,
    of course. I think that's probably the most effective way to keep
    technical documentation. In my opinion, anything a developer might need
    in order to work with the code should be checked in and versioned with
    the code itself. That's not exactly what you were saying, though.



    In the past when people have said the documentation "lies" and the code
    "tells the truth", I would argue that you can tell what the code
    actually does by reading it directly, but you can't tell what the
    programmer intended it to do. For that, you have to have documentation.
    One would hope that with the rigorous application of test-driven
    development, the test suite really would serve as a reliable form of
    documentation. The test case states what the programmer intended the
    code to do, and if it runs successfully then the code actually does it.
    That's in an ideal world. In our world, it really depends on how much
    attention the developers paid to writing the test cases. All too often,
    people flag a test case as "broken" so it won't run, and they can get a
    successful build and go home for the day. That's not a problem of
    methodology, but of individual professional discipline.
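
    (A small sketch of what "the test case states what the programmer
    intended" looks like in practice - Java with JUnit 4, assuming a
    made-up account-lockout rule:)

        import static org.junit.Assert.assertFalse;
        import static org.junit.Assert.assertTrue;
        import org.junit.Test;

        // Hypothetical class under test.
        class UserAccount {
            private int failedLogins;
            void recordFailedLogin() { failedLogins++; }
            boolean isLocked() { return failedLogins >= 3; }
        }

        public class UserAccountTest {

            // The test name and body record the intent in a form that
            // can't silently drift out of date: if the code stops doing
            // this, the build goes red instead of a document going stale.
            @Test
            public void accountLocksAfterThreeFailedLogins() {
                UserAccount account = new UserAccount();
                account.recordFailedLogin();
                account.recordFailedLogin();
                assertFalse(account.isLocked()); // two strikes: still open
                account.recordFailedLogin();
                assertTrue(account.isLocked());  // third strike: locked
            }
        }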



    Mr or Ms WTF?O is mistaken in thinking that agilists eschew
    documentation. The manifesto states (in part) that we "value...working
    software over comprehensive documentation." It does not say we don't
    value documentation at all. The opposite view, represented by
    traditional methods, is that the documentation itself is actually more
    important than the working software it describes. I realize that must
    sound silly (it must have sounded pretty silly to the people who
    crafted the manifesto, too), but the behavior of people in the IT
    industry for many years suggests they really do value the documentation
    more than the actual software. Many corporations have waterfall
    methodologies that define specific documentation artifacts at every
    step of the way through a project. Quality gates consist of meetings in
    which people ensure the documents have been prepared. No one ever looks
    at the solution itself to see if it works or if it even exists. The
    whole process has become focused on producing documentation. It is a
    question of reversed priorities.



    You say some of the things the key players in the agile movement say
    make sense. Some of the things you say make sense, too. One thing that
    doesn't (quite) is the idea that the book Prefactoring has something to
    do with agile development. My review is among those posted on Amazon,
    and I think you'll agree it isn't exactly a rave. It's the one that
    starts with, "Avoid this book."








  • Wow, that was amazingly long. I may have to pause for a minute to re-read it.

     

    Ok, I'm back.

     

    I admit that Prefactoring doesn't have a bunch to do with agile, after all agile involves diving headfirst into the water and then refactoring your way back to the surface. :) I said that it had good *design* ideas, and it does. Anyone interested in actual architecture of a system (complex or simple) might enjoy some of the ideas in the book. I'm guessing that since you say "avoid this book", you're not interested in architecture or design.

    Regarding agile vs waterfall... the bottom line is that you have to design *something* upfront, whether you're a waterfaller or an agilist. With agile, it's almost always use cases and test cases, without architecture, structure, or interaction specifications. All that is in the user story!!!11one1!! But let me ask you something... how can you possibly write the code for a test case if you don't know the design of the system you're testing? You can't. You have to hook into the code somehow, and you have to know the code to do that.

    Now, suppose that you're an agilist who wishes to code up a test case, but you're not the guy who wrote the original software. How do you go about doing it? If you follow the agile method, there's no design documentation, so you have to analyze somebody else's code, which, if you've ever done it, is like intentionally gouging out your own eyeballs, then pouring sodium-fortified lemon juice in the bloody sockets; no matter how good the other code is, if you have to figure out how it's supposed to work just by reading it, it's hell. This is the purpose of properly maintained design documentation. You know already what the methods are, and how they work, what their expected inputs and outputs are.

    Now, I will openly admit that many waterfallers forget that. They are so caught up in the process (or as an architect here calls it "checking a box") that they don't care much about the finished product, as long as the process was followed. They are the reason software projects fail, not the waterfall process itself.

    Waterfall, as it was originally proposed, is itself *supposed* to be iterative. By fully analyzing the problem up front, you ensure that the implementation is more likely to go smoothly. When you have failed to analyze the problem properly, when you realize, for example, that C# 1.0 doesn't have generics and you're going to be doing too much casting during collection iteration and performance will suffer halfway through implementation, you have begun a new iteration. You go back to the drawing board, analyze how the problem will affect the design, modify the design, and return to implementation.
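
    (The same trap existed in pre-generics Java, for what it's worth; a
    rough sketch of the before/after the poster is describing:)

        import java.util.ArrayList;
        import java.util.List;

        public class CastingExample {
            public static void main(String[] args) {
                // Pre-generics style (Java 1.4 / C# 1.0): the collection
                // only knows about Object, so every read needs a cast.
                List rawTotals = new ArrayList();
                rawTotals.add(Integer.valueOf(42));
                for (Object o : rawTotals) {
                    Integer total = (Integer) o; // cast on every iteration
                    System.out.println(total);
                }

                // With generics, the element type is checked at compile
                // time and the per-element casts disappear.
                List<Integer> totals = new ArrayList<Integer>();
                totals.add(42);
                for (Integer total : totals) {
                    System.out.println(total);
                }
            }
        }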

    The problem with waterfall as agilists see it (and they're mostly right) is that sometimes people spend *too* much time on design. Sometimes, you can't know that your non-generic collection iteration performance will be an issue until you've gone into the code and tested it. That's the purpose behind agile -- to be resilient to that change. The problem is, you *have* to maintain your design so that you can analyze how you will change the design. If you don't, you're not agile. Truly agile programming should be able to bring in new programmers and have them up to speed in a matter of moments.

    The problem is, many agilists believe that "good communication" is what's necessary to bring in those new programmers. Talk to the programmers familiar with the system, and they'll get the new guy on board. Bollocks, says I. No amount of programmer interaction can possibly compare to well-maintained design documents.

    *That's* where your good communication should be focused. The design.

     

    Wow, that was a ramblingly incoherent rant. Sorry! :)



  • WTF?O,



    >Wow, that was amazingly long.



    Sorry, I tend to be a bit verbose sometimes. Well, actually, it's the other way around: Sometimes I'm not verbose.



    >I'm guessing that since you say "avoid this book", you're not interested in architecture or design.



    That wouldn't be the best possible guess, but I can see how you might reach that conclusion.



    >Regarding agile vs waterfall... the bottom line is that you have to
    design something upfront, whether you're a waterfaller or an agilist.



    Yes, absolutely. I think I said the same, too. Probably got lost in the
    mass of text. Anyway, that's where some of the participants in this
    discussion are missing the mark...they seem to believe agilists don't
    understand that point. It would be great if they could work on a real
    agile project, first hand. It would clear up so many misconceptions
    that are hard to address with words alone.



    >Now, suppose that you're an agilist who wishes to code up a test
    case, but you're not the guy who wrote the original software. How do
    you go about doing it?



    That's a very interesting and relevant question. Unfortunately, I don't
    have to "suppose" anything to relate to it. So, you and I are on a
    project to do some enhancements, then. We check out the code. At that
    point...



    >If you follow the agile method, there's no design documentation,



    ...you've repeated the myth about no documentation, so I'm going to
    have to alter your hypothetical scenario a bit. There are a couple of
    possibilities. (1) The original project did follow agile methods, or
    (2) didn't. In either case, we're at the mercy of whatever the first
    team actually did, regardless of which approach they claimed to be
    following, or believed they were following.



    In case (1) I would hope to find the design documentation and a
    comprehensive set of test cases come flowing down to my workstation on
    check-out, all nice and neat. But I would not hope too fervently, since
    I've been in this business a long time. Realistically, we might find a
    one-pager giving a cryptic overview of the application's overall
    architecture, and a set of test cases providing maybe 20% coverage.
    Okay, fine. We glean what we can from the doc, and run the test suite.
    Unless someone has mucked with the code in production, the test suite
    should run green. But we only have 20% coverage, so we can't be too
    confident about the state of the thing.



    In case (2), I would not expect anything except code to appear on
    check-out. Whether there is design documentation, and if so where it is
    located, are open questions. So, we're going to spend some time looking
    for it - or not, as we choose. It's either gathering dust in a binder
    somewhere, or buried in an archive. If we can find it at all, it will
    probably be a one-pager giving a cryptic overview of the application's
    overall architecture, plus 147 kg of paper generated to satisfy the
    formal requirements of the waterfall methodology employed by the
    company. The latter will contain no useful information. The QA group
    may or may not have a set of regression tests we can run against the
    code, so we can't be too confident about the state of the thing.



    At this point there is little practical difference between the two
    cases, as far as we're concerned. Either way, we have to become
    familiar with the code base and we have to do our best to be sure we
    have a clean starting point. There's no magic way to do that. Maybe
    that's why they call it "work."



    If we're working in an agile organization, one of our tasks is going to
    be to remediate the test suite. In good conscience we can't leave it at
    20% coverage. I've been through this sort of thing, and I can't say
    it's the most creative exercise one could imagine. But it's necessary.



    If we're not working in an agile organization, then we're not expected
    to develop tests at this point, although we probably will develop some
    tests anyway just to satisfy ourselves that our code is okay. So again,
    there is little practical difference between the two cases.



    >When you have failed to analyze the problem properly, when you realize, for example, that C# 1.0 doesn't have generics...



    With respect, that's tool knowledge, not problem analysis. One of the
    strengths of agile methods is that you would (or I would, anyway) seed
    the team with some people who knew C# well and some who didn't, on
    purpose. Pairing becomes a cheap means of staff development, and a
    powerful one. But that's a different topic. Apart from that, you're
    going to discover performance problems early on for the same reason you
    will discover any other problems early on - lots and lots of testing.



    >...caught up in the process (or as an architect here calls it "checking a box")...



    Here you've expressed the doppelganger of your criticism of agile
    methods. Agilists complain that traditional developers are guilty of
    just that. The fact that some people fall into the habit of checking
    boxes doesn't say anything one way or the other about methodology. The
    same is true of many criticisms of agile methods that I've read here.
    The criticisms are really about the professional discipline of
    individual developers rather than about the methodology or tools they
    are using.   



    >They are the reason software projects fail, not the waterfall process itself.



    I agree completely...and with the corresponding observation about agile
    methods, too. Every methodology is crafted in a certain way for
    specific reasons. People who don't understand the rationale for a
    methodology are likely to cut corners and not follow the best
    practices. The results will be predictable.



    >The problem is, you have to maintain your design so that you can analyze how you will change the design.



    Of course. Isn't it obvious? With one approach, the design consists of
    documents that describe the design. In the other, the design consists
    of executable artifacts. But in both cases, when changes occur you must
    update the design. Who could argue against that?



    There are some development tools that generate code based on models.
    The "right" way to implement a change using those tools is to change
    the model and regenerate the code. They used to call those things CASE
    tools. You can do it with tools like Rational Rose, too. But people
    customize the generated code, so they don't actually go back to the
    model when they need to change it.



    Other development tools support refactoring, not to compensate for a
    lack of up front design as some people like to claim, but to facilitate
    in-flight changes to the design. It's a different approach, but doesn't
    change the fact that changes have to be reflected in the design. People
    get lazy with those tools, too. More commonly, I hear developers
    express fear of collisions with others on the team who may be modifying
    the same classes. Any usable version control system deals with that, so
    there is no need for fear, but some people still have difficulty
    embracing that way of working. So they just glom changes onto the
    existing code.



    There's one project going on where I work now where they are so afraid
    of refactoring that they just make a list of what they want to refactor
    and then work through the weekends to do it all. Sort of defeats the
    purpose of the agile principle of "sustainable pace."



    Now, someone will say agile doesn't work because "sustainable pace"
    isn't real. But the real problem is the way that team is working, not
    the approach itself. You can drive a nail using the handle of a
    screwdriver, but don't blame screwdrivers as a class for the quality of
    the results.



    >Truly agile programming should be able to bring in new programmers and have them up to speed in a matter of moments.



    Well, there's that perfect world again. Sounds like a great place.
    There are a couple of points to make here. First, it's a best practice
    not to change team composition in mid-project. Granted, that's not
    always possible, but it's the exception rather than the rule. So
    on-boarding new programmers is a relatively minor issue. Secondly, does
    anyone really expect to get a new programmer up to speed in a matter of
    moments? Which existing, traditional methodology can do that? If none,
    then what is the benchmark against which you are measuring agile
    methods on this particular point?



    I think it's good to remember that agile practices don't operate in
    isolation. There's a practice called "pairing" that helps alleviate the
    challenges of bringing new team members up to speed. I don't know about
    you, but I'd rather sit down and pair with a veteran team member than
    read documentation. I can get up to speed much faster that way.
    Remember some of the earlier comments in the thread...in a corporate IT
    environment we're working with known problem domains, familiar
    architectural and design patterns, and a handful of widely-used
    application frameworks, application servers, and industry standard
    specifications. Getting up to speed isn't what it used to be. I realize
    that's an oversimplification, but in general it's true enough. I've
    been through it and it isn't as painful as you might imagine.



    >No amount of programmer interaction can possibly compare to well-maintained design documents.



    Here we must agree to disagree, then. But that's okay. Each approach
    has its appropriate uses, and each also calls for a different "culture"
    and mindset. So there's plenty of work for both of us.



    Thanks for your well-reasoned response. It makes the discussion worthwhile.




  • @Alex Papadimoulis said:

    Some organizations (especially
    smaller ones) believe that this is not possible to do; how can you
    sell a software system if it takes 30% of the effort just to figure
    out how much it will cost? I consult for these organizations to
    demonstrate that it really is possible, and not as bad as they
    thought. The trick is following a few simple rules and providing
    stepped estimates, starting with the scope overview and ending with
    the final spec.





    I am always interested in people's real-world experience and their
    recommendations for increasing one's knowledge base.  What books
    would you recommend to get a good grasp of the "few simple rules" and
    such?  I am always worried an author will have great methodology
    suggestions they cooked up in academia without ever actually
    field-testing them (some books claiming Java's AWT would be the way
    to build super-professional-grade GUIs back in the v1.0.2 days come
    to mind).






  • I've been reading these threads with interest since I work in an organization that, in my 9 years with the company, has wandered all over the map in terms of methodology.  Over the past almost 2 years, we've transformed ourselves into a fairly "agile" group.  I find it interesting (given the ongoing argument about how agile != documentation) that I have written more true documentation - in code, XMLDoc (NDoc) and "traditional" documents - in this time period than I did in the previous 5 years under some form of waterfall.



  • Galactic Cowboy,



    You said "true documentation". That's it in a nutshell, isn't it? With
    waterfall methods people generate a lot of documents, but how much of
    the material is actually useful?



  • I've seen guys who could go on for 5 or 6 pages without saying a thing...  :)  And they're not even lawyers...

    We also use SharePoint quite heavily as a document repository and planning tool, and have an on-site message board for group-wide collaboration.

    There is a lot of "agile" stuff that I don't really subscribe to, and I'm not really a big TDD fan, in part because TDD works best in a non-UI scenario.  However, having enough discipline to keep a suite of automated unit tests has saved me HUNDREDS of hours of laborious debug-and-fix time - change the code, run the tests, fix the problems.  On one project our automated unit tests hit around 90% coverage on the back end, and our SQA dept. wrote scripts to automate a significant portion of the front end.  (SQA uses Rational Robot and WinBatch, neither of which is really worth the $$$$ but it doesn't come out of my budget....)  "Pair" programming isn't really an official part of the culture, but a lot of it goes on naturally.  And I've seen people avoid some serious flaws simply by having another individual watching over their shoulder.
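
    For the record, the tests I mean are nothing exotic - here's a
    minimal NUnit sketch (the class under test is invented):

        using NUnit.Framework;

        // Hypothetical class under test
        public class TaxCalculator
        {
            public decimal Tax(decimal amount, bool exempt)
            {
                return exempt ? 0m : amount * 0.0825m;  // illustrative rate
            }
        }

        [TestFixture]
        public class TaxCalculatorTests
        {
            [Test]
            public void StandardRateAppliesToTaxableAmounts()
            {
                Assert.AreEqual(8.25m, new TaxCalculator().Tax(100m, false));
            }

            [Test]
            public void ExemptAmountsPayNoTax()
            {
                Assert.AreEqual(0m, new TaxCalculator().Tax(100m, true));
            }
        }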

    Probably the biggest shift was moving to daily builds.  This varies based on the phase of the project.  At the beginning we may only do weekly, and by release time we may build 2 or 3 times per day.  Unfortunately, we never identified a useful deterrent (other than public shame, which really doesn't work well) for breaking the build.  Part of the build process is a series of build verification tests (both automated and manual) that run a sanity check on the build.  If the build passes BVT, it is deployed to our testing lab.

    We generally start designing a product as soon as we have a requirements draft.  The assumption (which is generally correct) is that this is about 75% accurate and we can fine-tune it later.  Some customers really drag out the process, so it can take time to get them to sign off on the requirements even if they don't change a thing.

    Back to documentation:  Here in a nutshell is the outline that we fill in for every major feature or new product:  Introduction (executive summary, business case), High-Level Design, Detailed Design, Dependencies, Level of Effort (broken down into granular items).  We use a customized coding standard based on the iDesign C# coding standards.  XMLDoc method/parameter documentation is encouraged but not required as long as SOME level of documentation is used.  I am not aware of anyone here who really buys into the myth of "self-documenting code".  There are numerous automated tools out there that can be used to generate code-level documentation as well as to profile and test various aspects of the system.  These tools can even be used by people in other methodologies - the tools are not "agile" per se.
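
    To illustrate the XMLDoc style (the method itself is invented):

        /// <summary>
        /// Calculates the shipping charge for an order.
        /// </summary>
        /// <param name="weightInPounds">Total shipping weight.</param>
        /// <param name="expedited">True for next-day delivery.</param>
        /// <returns>The shipping charge in US dollars.</returns>
        public decimal ShippingCharge(decimal weightInPounds, bool expedited)
        {
            decimal baseRate = weightInPounds * 0.50m;   // illustrative rate
            return expedited ? baseRate * 2 : baseRate;
        }

    NDoc picks up the summary, param and returns tags and turns them into MSDN-style reference pages.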



  • I should also mention, on the daily builds...  We already had a separate "build" machine; as part of this process we scheduled the builds to run automatically.  This has been an ongoing area of refinement - when it started out we had about 10 different individual builds but have since consolidated them down to 3 or 4.  Each build generates a detailed build report that is then automatically e-mailed to the team.  The build report indicates the project name, build number and success in the subject line, for easy management.  I have a rule set up in Outlook so that all build reports are copied to a subfolder and successful builds are marked as read.  As a result, I only "see" the failed reports.  I also set up autoarchive on that folder so that I only keep 2 weeks' worth of build reports.



  • GalacticCowboy, may I ask how large your project is? LOC and/or megabytes of the source?



  • Galactic Cowboy,



    It sounds like you're doing a lot of things in an agile way. Also
    sounds like your organization is moving incrementally in that
    direction. That's probably a very good approach. There is so much deep
    culture change involved in transforming an organization from a
    traditional one to an agile one that when companies try to do it all at
    once they usually fail.



    Your comment about going from a weekly to a daily build is a case in
    point. If you had tried to go immediately from a traditional build
    schedule to continuous builds with frequent check-ins, which is what
    fully agile teams routinely do, it may have been too big a change.
    Eventually you may get there. And you're scratching the surface of the
    value of pairing already, even without making any sort of formal
    commitment to pairing as a standard practice. Once your staff begins to
    experience the value of pairing, I'll bet they do more and more of it.



    TDD at the UI level is very challenging. We've been trying to extend
    TDD to UI development as well as acceptance tests. It's possible to do
    at least some TDD at the UI level depending on the technologies used in
    a given solution. You can write executable UI tests before the fact
    using Microsoft .NET tools, if the UI is generated from code. For some
    webapps, you can use a table-based tool like Selenium or FitNesse. But
    a webapp that makes use of Ajax seems to be beyond the reach of
    available tools for automating UI tests at the moment. The best web
    framework for TDD purposes seems to be Ruby on Rails plus Watir, but
    there's a lot of development going on in that area just now.
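
    For what it's worth, Selenium also ships a programmatic client
    (Selenium Remote Control), so a UI test can be written up front as
    ordinary NUnit code. A sketch - it assumes a Selenium server running
    on localhost:4444, and the page and field names are invented:

        using NUnit.Framework;
        using Selenium;   // Selenium Remote Control client library

        [TestFixture]
        public class LoginPageTests
        {
            private ISelenium browser;

            [SetUp]
            public void StartBrowser()
            {
                browser = new DefaultSelenium("localhost", 4444, "*firefox",
                                              "http://localhost/myapp/");
                browser.Start();
            }

            [TearDown]
            public void StopBrowser()
            {
                browser.Stop();
            }

            [Test]
            public void ValidLoginShowsWelcomePage()
            {
                browser.Open("/login.aspx");
                browser.Type("userName", "testuser");
                browser.Type("password", "secret");
                browser.Click("loginButton");
                browser.WaitForPageToLoad("30000");
                Assert.IsTrue(browser.IsTextPresent("Welcome, testuser"));
            }
        }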



    Acceptance test driven development is an even more compelling idea. If
    you could drive development with an executable "specification", you
    could eliminate a lot of opportunity for defects altogether and reduce
    dependency on formal hand-offs and documentation a bit more. But it's
    hard to do and we haven't worked it out well enough yet. Interesting
    challenge. One of the agile concepts, "cross-functional team," has been
    helpful in this area. An analyst might pair with a developer, or a
    developer with a tester, to write executable acceptance tests.
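
    Concretely, with FIT the analyst writes a table of example rows
    (amount, preferred?, expected discount), and each row is fed through
    a fixture class like the one below. The calculator it exercises is
    invented for the example:

        using fit;

        // Hypothetical production class exercised by the fixture
        public class DiscountCalculator
        {
            public decimal DiscountFor(decimal amount, bool preferred)
            {
                return preferred ? amount * 0.05m : 0m;
            }
        }

        // FIT maps the table's input columns to the public fields and
        // checks the value returned by Discount() against the expected
        // column.
        public class DiscountFixture : ColumnFixture
        {
            public decimal Amount;
            public bool Preferred;

            public decimal Discount()
            {
                return new DiscountCalculator().DiscountFor(Amount, Preferred);
            }
        }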



    Over time, the different specialists level each other up a bit in the
    other skillsets. Eventually that could conceivably result in teams
    that can smoothly handle all the different types of work involved. It
    would require people that Scott Ambler calls "generalizing
    specialists." Most IT professionals don't see their career development
    that way, though; most try to perfect their skills in one or two
    specialized areas.



    I've seen another idea, behavior-driven development or BDD, that sounds
    promising. Could be a way to drive development from the functional test
    level. That would be one notch better than driving development at the
    unit test level. Remains to be seen how practical the idea will be.
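
    From what I've seen so far, the first visible change BDD makes is in
    naming: each test is a sentence about behavior, so the suite reads
    like a functional spec. A tiny sketch, with an invented Account class:

        using NUnit.Framework;

        public class Account
        {
            public decimal Balance;
            public Account(decimal opening) { Balance = opening; }

            public bool Withdraw(decimal amount)
            {
                if (amount > Balance) return false;
                Balance -= amount;
                return true;
            }
        }

        [TestFixture]
        public class AccountBehavior
        {
            [Test]
            public void ShouldRejectWithdrawalsThatExceedTheBalance()
            {
                Account account = new Account(100m);
                Assert.IsFalse(account.Withdraw(150m));
                Assert.AreEqual(100m, account.Balance);
            }
        }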



    Anyway, good luck with your agile efforts.



  • Dave:  We tried a "cross-functional team" once, but the people who did it really didn't understand what they were doing...  Let's just say, things really didn't work out.  I think there's a strong resistance now to anything called "cross-functional team", even if it were to be properly implemented.

    That's part of the problem, isn't it...  [:)]  People get fixated on a name for something and think they understand what the name means - so if the thing associated with that name fails, the name itself is soiled.

    ammoQ:  I don't have an exact LOC count, but roughly 1M C++ (recently converted to VS2005), broken up into about 15 modules; another 1.5M in C#/ASP.NET (still in VS2003; we'll convert in about 2 months - after our next SP release) across 2 projects as well as 10 assemblies that are used system-wide.  Plus three major database schemas (SQL Server) - 50 tables/50 SPs in the first, 50 tables/100 SPs in the next, and 100 tables/250 SPs in the last.



  • GalacticCowboy, excellent point about people getting fixated on the names of things.

