Bird's Nest



  • I recently posted about an offer to review lots of code for MegaCorp. Thanks to all who responded. Since I was already reviewing one of their systems anyway, I decided to see what was in store should I accept the mission. Given what follows, you can understand why I'm going to pass.

    By comparison, this was minor. So was this.

    This particular system was originally written using Sybase. Later, new stuff was added in Oracle. Since it's all the same logical system, naturally, this leads to lots of two-phase commits. Both databases support this, and although it leads to a lot of complex code, it is not the WTF.
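For anyone unfamiliar with two-phase commit: a coordinator first asks every participating resource to "prepare" (vote), and only commits if all of them vote yes. A toy sketch of that protocol, under the assumption that it helps picture what the Sybase/Oracle code is doing — the real XA machinery in either vendor's driver is far more involved, and `Resource` here is a hypothetical stand-in, not any vendor API:

```java
import java.util.ArrayList;
import java.util.List;

public class TwoPhaseDemo {
    interface Resource {
        boolean prepare();   // phase 1: can you commit?
        void commit();       // phase 2a: everyone voted yes
        void rollback();     // phase 2b: someone voted no
    }

    static boolean runTransaction(List<Resource> resources) {
        List<Resource> prepared = new ArrayList<>();
        for (Resource r : resources) {
            if (!r.prepare()) {                    // any "no" vote aborts the lot
                prepared.forEach(Resource::rollback);
                return false;
            }
            prepared.add(r);
        }
        resources.forEach(Resource::commit);       // unanimous yes: commit everywhere
        return true;
    }

    // Hypothetical resource that records what happens to it in a shared log.
    static Resource resource(String name, boolean canCommit, List<String> log) {
        return new Resource() {
            public boolean prepare()  { log.add(name + ":prepare"); return canCommit; }
            public void commit()      { log.add(name + ":commit"); }
            public void rollback()    { log.add(name + ":rollback"); }
        };
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        boolean ok = runTransaction(List.of(
                resource("sybase", true, log),
                resource("oracle", false, log)));  // "oracle" votes no
        System.out.println(ok + " " + log);
    }
}
```

Note that a single "no" vote (or exception) rolls back every resource that already prepared — which is exactly why an exception bubbling out of a web service mid-transaction unwinds both databases.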

    Then I discovered that in the middle of this two-phase commit transaction, the code calls a web service - synchronously - to get some information. Ok, WTF-ish, but if the web service throws an exception, the two-phase transaction gets rolled back. This too, is not the WTF.

    Then I discovered that the web service calls yet another web service, repeatedly, via an ESB. This second web service implements a long-running process. As such, there are numerous db queries, crunching and db-saves going on against the (third) database used by the second web service. Any one of these things can fail in any number of ways at any number of junctions, which results in an exception being thrown back to the first web service, which throws an exception back to the code doing the two-phase commit, which rolls back <whatever> in both Sybase and Oracle. This is still not the WTF.

    BTW: no other system uses either of these web services or the underlying long running process.

    Are you ready?

    Both Sybase and Oracle schemas (and everything therein), both web services, the long running process code and ESB handlers were ALL designed, developed, tested, deployed and supported by the same people on the same team.

    That's right; instead of migrating things forward into one database, and using consistent technology to access the db (why wrap what is essentially an internal task with nested web services that nobody else uses?), they chose to keep grafting on different tools and technologies to create a bird's nest of WTF.

    And that's not even the WTF!

    When third level managers found out about this, they went bonkers, not because it's a needlessly complex, unsupportable, inefficient steaming pile-o-WTF, but because they were told that they needed to buy, license and support multiple database servers, an ESB and a myriad of other stuff, at a time when everyone knows we're supposed to be consolidating stuff to save costs.

    The technologies they purchased were already in widespread use throughout the company, so I imagine a few more licenses were comparatively minimal in cost.

    And these are the people who want me to dig up and report on the stupidity committed by developers?



  • If one were to assume that the devs/architects were competent (a stretch, I know), then I could see doing web-service calls, as things are then easy to reuse in other places. Of course, without knowing how likely these bits are to be reused, we can't know how much of a WTF building for that would be.



  • @snoofle said:

    This particular system was originally written using Sybase. Later, new stuff was added in Oracle. Since it's all the same logical system, naturally, this leads to lots of two-phase commits. Both databases support this, and although it leads to a lot of complex code, it is not the WTF.
     

    What exactly does this mean? That some of the application's data was stored in Sybase, while other data for the same application was stored in Oracle?



  • @locallunatic said:

    without knowing how likely these bits are to be reused
    No reuse - even internally - at all.



  • @mt@ilovefactory.com said:

    What exactly does this mean
    Some data and business logic is in stored procedures in Oracle, some is in data and stored procedures in Sybase, and some is in Java that wraps the calls to both databases. The logic goes something like this:

      try {
          begin two phase transaction
            do non-transactional stuff
            call sybase (several times)
            call web service (could be done before transaction begins)
            crunch results of web service (also, could be done before transaction begins)
            call oracle (several times)
            do unrelated crunching (code just doesn't belong in here)
            call both sybase and oracle several more times
          commit everything
      } catch (Exception_1 e) { ... }
        catch (Exception_2 e) { ... }
        catch (Exception_n e) { ... }
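    The inline comments in the pseudocode above point at the obvious restructuring: the web-service call and the crunching have no transactional side effects, so they can be hoisted out of the two-phase-commit window, leaving the transaction to bracket only database work. A hedged sketch of that shape — every method name here (callWebService, callSybase, etc.) is a hypothetical stub that just records the call order, not anything from the real system:

```java
import java.util.ArrayList;
import java.util.List;

public class HoistedTxn {
    static final List<String> log = new ArrayList<>();

    static String callWebService()     { log.add("ws");     return "payload"; }
    static String crunch(String s)     { log.add("crunch"); return s.toUpperCase(); }
    static void   callSybase(String s) { log.add("sybase"); }
    static void   callOracle(String s) { log.add("oracle"); }
    static void   begin()              { log.add("begin"); }
    static void   commit()             { log.add("commit"); }
    static void   rollback()           { log.add("rollback"); }

    public static void main(String[] args) {
        String crunched = crunch(callWebService()); // slow I/O and CPU work first
        begin();                                    // the txn window opens late...
        try {
            callSybase(crunched);
            callOracle(crunched);
            commit();                               // ...and closes quickly
        } catch (RuntimeException e) {
            rollback();                             // unwind both DBs on failure
            throw e;
        }
        System.out.println(String.join(",", log));
    }
}
```

The point is simply that the transaction no longer sits open across a synchronous network call, which shrinks both the lock-hold time and the window in which a flaky web service can force a distributed rollback.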
    


  • @snoofle said:

    Some data and business logic is in stored procedures in Oracle, some is in data and stored procedures in Sybase, and some is in Java that wraps the calls to both databases. The logic goes something like this:

    I really think we found the true WTF here.

    How the hell do you end up with such a system? A database migration from Sybase to Oracle that was stopped halfway, or just a new developer who prefers Oracle but doesn't want to rewrite the existing code that uses Sybase?

    And I don't even want to think about how you back up this solution. You need to start the backup of both databases at exactly the same time, or you will end up with unsynchronized data when you restore your backup.

     



  • @snoofle said:

    And these are the people who want me to dig up and report on the stupidity committed by developers?
    I think that you need to seriously consider the idea that you have actually died and are being tortured in your own personal hell.



  • @OzPeter said:

    @snoofle said:
    And these are the people who want me to dig up and report on the stupidity committed by developers?
    I think that you need to seriously consider the idea that you have actually died and are being tortured in your own personal hell.
    You say hell, I say goldmine.

    The "stupidity" committed by developers consists of screwing over the people who hired them.  I say you document just how much this whole bird's nest is costing the company and tell them your bill (in the form of a finder's fee) is a flat 15% of that amount.



  • @mt@ilovefactory.com said:

    how you backup this solution
    I didn't even go that far. Once I figured out what was really going on, and for how long (seven years since the system was first written), I decided that I'd seen enough. I could imagine the high level managers demanding the thing be rewritten sensibly, and the users saying that there just wasn't time, and that it'd have to be done "another day", which would never come.

    I don't mind being the instrument for positive change; in part that's what I do. However, uncovering WTF just to create a pissing match (between higher managers and development) and having nothing good come from it is something I can easily do without. I told the managers who asked me to do this code review what I found, in detail, and what should be done. They asked if my review was complete. I pointed out that given this mess, there was no point in continuing, as everything else would be dwarfed by this.

    As such, I extricated myself from doing any more code reviews for these people, and going forward, I expect that I simply won't have the time to accommodate any future requests.



  • @All: I don't consider finding (bad) mistakes or really bad design decisions by others to be Hell. After all, if everyone did everything properly, I wouldn't be able to find a job - anywhere. For that matter, neither would most folks who read this forum.

    When I first got out of school, everything was new, and for years, I struggled to master the business in which I used my craft. Over the past decade, I've moved beyond that and have been spending a large portion of my time as an expert who's been there and done that. One of the biggest selling points of what I can offer is that I've already been through these sorts of projects several times; I know several ways to succeed, but equally important, I know most of the ways that will wind up being dead ends. It can save a lot of wasted time and effort. In a business where time-to-market can make or break you, that's a big selling point.

    However, just because I can find (and if asked, fix) stupidity doesn't mean I need to put myself in the firing line. If the customer is not going to act on your advice, there's no point in giving it.

    Personally, I prefer to work in a place where I can make a difference at some level. At WTF-Inc, I can't make a change at the company-wide level - politically speaking, it's just not going to happen. However, I've found a niche on a team where the manager gives me latitude to use my discretion to fix what I can in the time available.

    You folks hear about the stupidity in its raw form. What you don't hear about is the cleaned up version after I'm finished ("After the WTF"?). That's why I stay. Incremental progress. A few big hits, many small ones, and a modest amount of satisfaction for having improved things in my little corner of the world.



  • @mt@ilovefactory.com said:

     

    How the hell do you end up with such a system?

    I think this would be caused by Developers only working forward. Meeting deadlines and new (important) features are more important than consolidating two systems into one. They may have been forced to work on other things by Management... This doesn't excuse leaving DB connections open while the app is doing processing that could have been done before the connection needed to be opened, however.

    Still, if it works, Management has every right to tell a Developer to press forward with the vital features, and to not bother cleaning things up. But in most cases it's either lazy Developers or incompetent Management...

     

