Three layers of WTF



  • Delurk post for me, I need to vent...

     I'm a sysadmin stuck with keeping a cruddy hospital patient system reasonably alive and preferably consistent. The vendor decided last year that they didn't want to support the product any more to "Focus on their core market of primary care" instead of specialist hospital systems. This of course caught the powers that be totally unawares (they were warned by me repeatedly) and so we have no replacement. Yes, we have to run an unsupported patient system.

    Recently users began to complain about Word docs attached to patient records going missing and being replaced by empty documents. The ensuing investigation revealed WTFs layered on WTFs.

     An individual entry in a patient journal can have one and only one document attached. Despite this there are naturally no constraints in the database. No unique, no foreign keys, nothing. There is also a bug somewhere in the application layer that results in additional attachments being linked to the same journal entry. The vendor must have realized that something weird was happening, because they've implemented a check for this when you fetch a document. SQL Profiler revealed the following:

    SELECT COUNT(*) FROM documents WHERE journal_entry = <whatever>

    Immediately followed by:

    SELECT MAX(document_id) FROM documents WHERE journal_entry = <whatever>

    So first they check for a condition that should never happen and could be prevented by decent DB design, then they ignore the result of said check (hey, it should never happen anyway...) and assume that the document with the highest ID is the one we want. A bug also sometimes results in blank documents being written to the database when users attempt to edit an existing doc. This new blank doc has a higher ID than the original. I can't fix the bug because I don't have the source, and the vendor refuses to have anything to do with the product anymore, so I had to try for a workaround.

    OK then, find the stored procedure responsible and fix it to get the original document instead, never mind all the erroneous documents being created. Exchanging MAX for MIN in the query above should do it. But NOOOOO, they didn't use a stored procedure for this. There are a hundred or so SPs in the DB, so they knew what they were, but "get the document attached to a journal entry" they decided to do straight from the application. So the fix is waiting for the problem to occur and then manually deleting all the erroneous docs associated with a particular journal entry. By running a DELETE FROM documents WHERE <blah blah> on a live patient system.

    TL;DR: No DB constraints, serious bugs, checks that do nothing, inconsistent use of stored procs. And these people want to "focus on primary care systems". Yay.
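    [Editor's note: the vendor's fetch pattern is easy to reproduce against a toy schema. The table and column names below are guessed from the profiler trace, and SQLite stands in for the real (presumably SQL Server) database.]

    ```python
    import sqlite3

    # Toy schema guessed from the profiler trace; no constraints, just like the real DB.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE documents (document_id INTEGER, journal_entry INTEGER, body TEXT)")

    # The original document, then a blank duplicate created by the edit bug.
    con.execute("INSERT INTO documents VALUES (1, 42, 'real patient notes')")
    con.execute("INSERT INTO documents VALUES (2, 42, '')")

    # The vendor's logic: count the rows (and ignore the result), then take MAX.
    count = con.execute("SELECT COUNT(*) FROM documents WHERE journal_entry = 42").fetchone()[0]
    vendor_pick = con.execute(
        "SELECT body FROM documents WHERE document_id = "
        "(SELECT MAX(document_id) FROM documents WHERE journal_entry = 42)").fetchone()[0]

    # Swapping MAX for MIN would return the original instead.
    min_pick = con.execute(
        "SELECT body FROM documents WHERE document_id = "
        "(SELECT MIN(document_id) FROM documents WHERE journal_entry = 42)").fetchone()[0]

    print(count, repr(vendor_pick), repr(min_pick))  # 2 '' 'real patient notes'
    ```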



  • @Pilsner said:

    And these people want to "focus on primary care systems". Yay.
     

    Perhaps they should focus on disbanding the company.



  • @dhromed said:

    Filed under: Yes I do

     



  •  I'm not a DBA or anything, but why not just put a trigger on insert of the DOCUMENTS table that prevents a 2nd row from ever getting committed?
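    [Editor's note: a minimal sketch of that idea. SQLite syntax is used here so the example is self-contained; on the real system this would presumably be a T-SQL trigger, and the schema is hypothetical.]

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE documents (document_id INTEGER, journal_entry INTEGER, body TEXT)")

    # Reject any insert for a journal entry that already has a document.
    con.execute("""
        CREATE TRIGGER one_doc_per_entry BEFORE INSERT ON documents
        WHEN (SELECT COUNT(*) FROM documents WHERE journal_entry = NEW.journal_entry) > 0
        BEGIN
            SELECT RAISE(ABORT, 'journal entry already has a document');
        END""")

    con.execute("INSERT INTO documents VALUES (1, 42, 'original')")
    try:
        con.execute("INSERT INTO documents VALUES (2, 42, '')")  # the blank duplicate
    except sqlite3.IntegrityError as e:
        print(e)  # journal entry already has a document
    ```

    The duplicate never lands; whether the application copes with the rejected insert is another question.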



  • @TwoScoopsOfHot said:

     I'm not a DBA or anything, but why not just put a trigger on insert of the DOCUMENTS table that prevents a 2nd row from ever getting committed?

     

    You're hired!



  • @Pilsner said:

    The vendor decided last year that they didn't want to support the product any more to "Focus on their core market of primary care" instead of specialist hospital systems.

    Based on your description of their specialist hospital system, getting as far as possible away from that business seems like a good idea.

     

     



  • @Pilsner said:

    So the fix is waiting for the problem to occur and then manually deleting all the erroneous docs associated with a particular journal entry. By running a DELETE FROM documents WHERE <blah blah> on a live patient system.

    Why not put a trigger on the table - when the app attempts to insert a blank document just "ignore" the request (or failing that, attach it to a dummy patient record).



  • @Auction_God said:

    @Pilsner said:

    So the fix is waiting for the problem to occur and then manually deleting all the erroneous docs associated with a particular journal entry. By running a DELETE FROM documents WHERE <blah blah> on a live patient system.

    Why not put a trigger on the table - when the app attempts to insert a blank document just "ignore" the request (or failing that, attach it to a dummy patient record).

    Why not just realize all IT in the medical industry is crap, and run (don't walk) to a job in another industry?



  • @TwoScoopsOfHot said:

     I'm not a DBA or anything, but why not just put a trigger on insert of the DOCUMENTS table that prevents a 2nd row from ever getting committed?

    Yup that's what I would do in this situation. Fix the database, let the clients throw errors.



  • @Pilsner said:

    I'm a sysadmin stuck with keeping a cruddy hospital patient system reasonably alive and preferably consistent. The vendor decided last year that they didn't want to support the product any more to "Focus on their core market of primary care" instead of specialist hospital systems. This of course caught the powers that be totally unawares (they were warned by me repeatedly) and so we have no replacement. Yes, we have to run an unsupported patient system.
     

     

    Until this point I swore that you worked at my company.  We had a huge system that was sunsetted by the vendor over two years ago finally die.  The system was so old that only one person at the company actually knows anything about it.  We've been telling the unit that they needed to replace it for two years, and it was supposed to go live this past November.  But right as we were supposed to sign the contracts, they decided to shop around for a cheaper option.  So now the nurses have all had to revert to paper charting, and the replacement doesn't go live until the end of the summer.  I just chuckle to myself whenever they bring it up in our twice-monthly project meetings.



  • @Pilsner said:

    I can't fix the bug because I don't have the source and the vendor refuses to have anything to do with the product anymore so I had try for a workaround. 
    Just wondering: why not contact the vendor and try to buy the source code? Or check if you have a [url="http://en.wikipedia.org/wiki/Source_code_escrow"]source code escrow[/url] agreement.



  • @bjolling said:

    Or check if you have a source code escrow agreement
    Judging from the response I got on here to escrow, I wouldn't put too much faith in that either
    1. being in place, or
    2. being of much use if it were,
    since very few developers on here have had any experience with it.



  • @blakeyrat said:

    Why not just realize all IT in the medical industry is crap.

    HEY! I work at a hospital!

    Though I can't disagree with IT being shit here... The IT department just can't seem to get some basic things right in here (like a halfway-decent Active Directory).



  • Simple answer.. I'm not a DBA either and didn't know about the "inserted" virtual table accessible to triggers until right now. :P

    I might just throw that on the test server and see what happens. Thanks!



  • Seems like hospital IT is lagging 5-10 years behind in general.

    Security and complexity are a real bitch, but c'mon! There are so many benefits to be had from investing more in IT. We're still rolling Windows XP because they're too timid to "risk" a migration to 7. But there are rumors of an upgrade from Office 2000 to Office 2007 (gasp)...



  • We're on XP too. Office is 2000, except Outlook, which is 2007. All mailing options from Word and Excel (you only get Access on request, and wtf is this PowerPoint you speak of...) are thus broken unless you report they're broken, in which case they'll send you a fix.

    Also, they're so worried about having redundancy in case something fails that they refuse to see the data warehouse I administer has completely different demands for storage / usage / back-ups than their patient registration system and other operational applications.



  • @TwoScoopsOfHot said:

     I'm not a DBA or anything, but why not just put a trigger on insert of the DOCUMENTS table that prevents a 2nd row from ever getting committed?

     Update: I just tried this in test and it fails miserably. Some more SQL profiling revealed why.

    The application handles an edited document by writing an entirely new document to the DB (incrementing the document_id), committing the transaction, and then deleting the original in a separate transaction... So a trigger after insert will roll back the insert, and then the deletion of the original goes through, resulting in the complete disappearance of the document. This explains the duplicates to some degree, and the "always get the doc with the highest ID" logic. Duplicates will happen if the delete fails (I'm guessing locking by another user, for instance). And conversely, if the insert fails the application won't notice and will happily delete the only existing version. This also explains the users who complain of getting a "There are no records available to process" error when they try to open a document...

    I guess an UPDATE would just be too easy... At least now I know why there's no UNIQUE constraint. This thing is written in Clarion so even with the source it would probably be a nightmare of generated code to wade through. Luckily outside factors are forcing a complete replacement so I have some hope that we'll get something within a year or so. Oh well.
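    [Editor's note: the failure mode described above is easy to simulate. The app writes the edited copy as a brand-new row, commits, then deletes the original in a second transaction; with a "no second row" trigger in place, step one silently fails (the app never checks), step two still runs, and the entry ends up with no document at all. SQLite and the schema below are stand-ins.]

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE documents (document_id INTEGER, journal_entry INTEGER, body TEXT)")

    # The well-intentioned workaround trigger: refuse a second row per journal entry.
    con.execute("""
        CREATE TRIGGER one_doc_per_entry BEFORE INSERT ON documents
        WHEN (SELECT COUNT(*) FROM documents WHERE journal_entry = NEW.journal_entry) > 0
        BEGIN SELECT RAISE(ABORT, 'duplicate'); END""")

    con.execute("INSERT INTO documents VALUES (1, 42, 'original')")

    # Step 1: the app inserts the edited version; the trigger rejects it,
    # but the application swallows the error without noticing.
    try:
        con.execute("INSERT INTO documents VALUES (2, 42, 'edited')")
    except sqlite3.IntegrityError:
        pass

    # Step 2: the app deletes the "old" row in a separate transaction anyway.
    con.execute("DELETE FROM documents WHERE document_id = 1")

    print(con.execute(
        "SELECT COUNT(*) FROM documents WHERE journal_entry = 42").fetchone()[0])  # 0
    ```

    The journal entry now has no document left — exactly the "There are no records available to process" scenario.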



  • @Pilsner said:

    So a trigger after insert will roll back the insert, and then the deletion of the original goes through, resulting in the complete disappearance of the document. This explains the duplicates to some degree, and the "always get the doc with the highest ID" logic. Duplicates will happen if the delete fails (I'm guessing locking by another user, for instance). And conversely, if the insert fails the application won't notice and will happily delete the only existing version. This also explains the users who complain of getting a "There are no records available to process" error when they try to open a document...

    I'm not a DBA either, but can't you copy the good document (A) to a new row (thus becoming document B) and let the software delete A?

    You should also deny the original write, of course.



    Should be possible if your DB allows triggers to call stored procedures.



  • @blakeyrat said:

    @Auction_God said:

    @Pilsner said:

    So the fix is waiting for the problem to occur and the manually deleting all the erroneous docs assosciated with a particular journal entry. By running a DELETE FROM documents WHERE <blah blah> on a live patient system.

    Why not put a trigger on the table - when the app attempts to insert a blank document just "ignore" the request (or failing that, attach it to a dummy patient record).

    Why not just realize all IT in the medical industry is crap, and run (don't walk) to a job in another industry?

     

    Seconded. Just don't change to the banking industry. It's just as bad, if not worse, although you rarely hear about a banking software provider abandoning a product for a new one. They just "update" it over and over, until you're left with a product that was created before Windows 95 and has many layers of spaghetti code and kludges stacked on it.



  • @Buffalo said:

     They just "update" it over and over, until you're left with a product that was created before Windows 95 and has many layers of spaghetti code and kludges stacked on it.

     

    That's the case for my company's product; the core code is over thirty years old.  While the old code is terrible by current coding standards (largely an artifact of the time it was made -- at the time the only platforms available were interpreted, and the time it took to lexically parse the lines made a significant performance difference, so code heavily favored speed over readability), that's not usually the big issue.

     The one thing that I most dislike is how backwards compatibility dominates so many of our design decisions.  Anytime an existing workflow or functionality was changed, the default configuration setting had to be to do the legacy behavior -- even if 95% of our customers will want to always choose the newer behavior, it defaults to off simply because we were dead-set on making sure that existing configuration records would seamlessly transition to the newer versions.  So oftentimes the system defaults to the exact opposite of what you want it to do, meaning that users first configuring the system today have to make sure all these obscure settings are specified because fifteen years ago we didn't want to upset our clients by changing the default behavior to something better.


  • Discourse touched me in a no-no place

    @Cat said:

     The one thing that I most dislike is how backwards compatibility dominates so many of our design decisions.  Anytime an existing workflow or functionality was changed, the default configuration setting had to be to do the legacy behavior -- even if 95% of our customers will want to always choose the newer behavior, it defaults to off simply because we were dead-set on making sure that existing configuration records would seamlessly transition to the newer versions.  So oftentimes the system defaults to the exact opposite of what you want it to do, meaning that users first configuring the system today have to make sure all these obscure settings are specified because fifteen years ago we didn't want to upset our clients by changing the default behavior to something better.
    And you can't throw a version number in the configuration file? All config files with no version number are loaded with the "safe" defaults. All config files with one are loaded with the appropriate-for-that-era "popular" defaults? And of course, all new configuration files are created with the current version number and all the sane defaults.
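    [Editor's note: a minimal sketch of the version-stamped defaults Weng describes, with a missing stamp treated as the legacy era. All names and version numbers here are hypothetical.]

    ```python
    # Default sets keyed by the era in which a configuration record was created.
    DEFAULTS = {
        0: {"new_workflow": False},   # no version stamp: legacy behavior
        2: {"new_workflow": True},    # current version: the sane defaults
    }

    def effective_settings(config: dict) -> dict:
        """Merge a stored config record over the defaults for its era."""
        version = config.get("version", 0)
        # Pick the newest default set at or below the record's version.
        era = max(v for v in DEFAULTS if v <= version)
        settings = dict(DEFAULTS[era])
        settings.update({k: v for k, v in config.items() if k != "version"})
        return settings

    print(effective_settings({}))              # {'new_workflow': False} — legacy record
    print(effective_settings({"version": 2}))  # {'new_workflow': True}  — new record
    print(effective_settings({"version": 2, "new_workflow": False}))  # explicit override wins
    ```

    Old records keep their old behavior, new records get the current defaults, and explicit settings always win — no client is surprised by an upgrade.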



  • @Weng said:

    And you can't throw a version number in the configuration file? All config files with no version number are loaded with the "safe" defaults. All config files with one are loaded with the appropriate-for-that-era "popular" defaults? And of course, all new configuration files are created with the current version number and all the sane defaults.

     

    Configuration isn't really files per se, it's thousands of records across scores of database 'tables' (I say tables for familiarity, but the actual database is nonrelational).  And there's no guarantee that every piece of code that reads a particular setting is using a single accessor method, so the default value may be assumed in many places throughout the code.

    The way we deal with this issue is that nobody really starts from scratch; we provide reference records with examples of all the common configurations and duplicate and modify those as necessary.  And there's been at least some traction on getting people to create the "right" default and simply provide tools to run during the upgrade to preserve the legacy behavior if desired.



  • @Cat said:

    The one thing that I most dislike is how backwards compatibility dominates so many of our design decisions.  Anytime an existing workflow or functionality was changed, the default configuration setting had to be to do the legacy behavior -- even if 95% of our customers will want to always choose the newer behavior, it defaults to off simply because we were dead-set on making sure that existing configuration records would seamlessly transition to the newer versions.  So oftentimes the system defaults to the exact opposite of what you want it to do, meaning that users first configuring the system today have to make sure all these obscure settings are specified because fifteen years ago we didn't want to upset our clients by changing the default behavior to something better.

     

    That's one of those situations where you're damned if you do and damned if you don't. From the perspective of someone dealing with an established system, with automation and procedures in place, few things are more frustrating than installing an update and finding something doesn't work right. Banks - especially smaller ones - are quite resistant to change, so 15 years ago that was probably the sane thing to do, and because of the ridiculously long software lifecycle you're still stuck with that decision today. Really, the whole situation is a giant pile of WTF, and the software providers don't deserve all the blame. How many banks are going to risk being a guinea pig to a brand new piece of software when there's umpteen 20- or 30-year-old "stable" software solutions available?



  • @Buffalo said:

    That's one of those situations where you're damned if you do and damned if you don't. From the perspective of someone dealing with an established system, with automation and procedures in place, few things are more frustrating than installing an update and finding something doesn't work right. Banks - especially smaller ones - are quite resistant to change, so 15 years ago that was probably the sane thing to do, and because of the ridiculously long software lifecycle you're still stuck with that decision today. Really, the whole situation is a giant pile of WTF, and the software providers don't deserve all the blame. How many banks are going to risk being a guinea pig to a brand new piece of software when there's umpteen 20- or 30-year-old "stable" software solutions available?

     

     Actually, I'm in medical software, not banking, but yes - it probably was the smart thing to do at the time; it just had unfortunate effects down the road.  That's true in many areas, not just configuration - often in the past we had choices where we could develop a complex but robust solution that would apply in general, or develop simpler solutions that would satisfy the majority of users but end up causing problems in the long term.

     At the time, the simple solutions were best - if we could easily satisfy the needs of 90% of our potential end users, at the expense of making the remaining 10% even harder to satisfy down the road, we did it. Now we've spent so much time working to undo some of those decisions and slowly morph the solutions we created (plus the stopgap fixes that sort-of-mostly work) into the robust solution we initially envisioned but decided was too complex.

    Even that made sense at the time, though - we're about six times larger and with well over six times the market share compared to when some of these "bad" decisions were made, so deferring the complexity until later did help us.  It's just not fun when you are the one who needs to pay back that saved time with interest.



  • @Cat said:

    the core code is over thirty years old.

    @Cat said:
    at the time the only platforms available were interpreted

    @Cat said:
    Configuration isn't really files per se, it's thousands of records across scores of database 'tables' (I say tables for familiarity, but the actual database is nonrelational).

    @Cat said:
    Actually, I'm in medical software, not banking

    Sounds like a typical MUMPS system to me.



  • @bannedfromcoding said:

    Sounds like a typical MUMPS system to me.

     

    Indeed it is.



  • @Cat said:

    Indeed it is.

    My condolences.



  • @bannedfromcoding said:

    @Cat said:

    Indeed it is.

    My condolences.

    Until recently I always thought MUMPS was just thought up by Alex as a general pointer to the WTF'yness (by today's standards) of old FORTRAN, COBOL, etc. languages and the problems when dealing with those old systems.

    I cried for the poor developers who support those systems when I found out it actually existed.



  • @dtech said:


    Until recently I always thought MUMPS was just thought up by Alex as general pointer to the WTF'yness (by today's standards) of old FORTRAN, COBOL etc. languages and the problems when dealing with those old system.

    I cried for the poor developers who support those systems when I found out it actually existed.

     

     Eh, it's actually nowhere near as bad as you might expect.  The obfuscated-beyond-recognition coding style that was associated with old MUMPS is long gone (though yes, I will admit if you look deep enough into the bowels of our code you will find some... I am lucky in that my area of work is one of our newer modules and it's actually decently well written.  I do pity the fools who maintain our core modules, because some of that code gives me nightmares)

     The articles tend to exaggerate or tell half-truths about the language.  It's really no harder to read or write than any other procedural programming language, e.g. C, once sustainability began to trump speed and coding standards evolved to reflect that.  There are some genuine WTFs but the articles don't really get into the actual ones - things like variable scope and parameter passing are my main dislikes.  Scoping is strange because loops, do-blocks, and procedure calls are all just a new stack level, and every stack level is able to access any variable declared at its level or higher, unless that variable is hidden by declaring a new variable of the same name.  That makes a lot of sense for loops and do-blocks, but not so much for procedure calls.  So when you call a procedure, it's the called procedure that is responsible for making sure it doesn't overwrite its caller's variables.  Now we do have automated tools to detect that, and that tool is run three times by three different people as part of the peer QA process, but it's still a bit strange of a language design choice.

    Parameter passing is a bit odd as well in that whether something is passed by reference or passed by value is decided by the *caller*, not by the called procedure.  Sometimes that can actually be very useful, though - but it's a bit odd of a paradigm shift.



  • @Cat said:

    @dtech said:


    Until recently I always thought MUMPS was just thought up by Alex as general pointer to the WTF'yness (by today's standards) of old FORTRAN, COBOL etc. languages and the problems when dealing with those old system.

    I cried for the poor developers who support those systems when I found out it actually existed.

     

     Eh, it's actually nowhere near as bad as you might expect.  The obfuscated-beyond-recognition coding style that was associated with old MUMPS is long gone (though yes, I will admit if you look deep enough into the bowels of our code you will find some... I am lucky in that my area of work is one of our newer modules and it's actually decently well written.  I do pity the fools who maintain our core modules, because some of that code gives me nightmares)

     The articles tend to exaggerate or tell half-truths about the language.  It's really no harder to read or write than any other procedural programming language, e.g. C, once sustainability began to trump speed and coding standards evolved to reflect that.  There are some genuine WTFs but the articles don't really get into the actual ones - things like variable scope and parameter passing are my main dislikes.  Scoping is strange because loops, do-blocks, and procedure calls are all just a new stack level, and every stack level is able to access any variable declared at its level or higher, unless that variable is hidden by declaring a new variable of the same name.  That makes a lot of sense for loops and do-blocks, but not so much for procedure calls.  So when you call a procedure, it's the called procedure that is responsible for making sure it doesn't overwrite its caller's variables.  Now we do have automated tools to detect that, and that tool is run three times by three different people as part of the peer QA process, but it's still a bit strange of a language design choice.

    Parameter passing is a bit odd as well in that whether something is passed by reference or passed by value is decided by the *caller*, not by the called procedure.  Sometimes that can actually be very useful, though - but it's a bit odd of a paradigm shift.

     

    I've only been exposed to it in passing, but my 50,000-foot view of MUMPS is that it's for people who have heard of RPG but decided to implement it themselves rather than acquire the existing language.

