EPIC Audit Trail Fail



  • In a system I'm working on, we have an audit trail strategy that involves an update, a delete and an insert trigger on every table in the database, the purpose of which is to... you guessed it... audit all updates, deletes and inserts. It does this by inserting a row into a table called 'Audit' for every column of the row that is updated, inserted, etc.

    The trouble is it does this per column, so inserting a new record into a table, say 'PatientRecord', that has 20 columns would insert 20 new rows into this Audit table.

    Id  | RowId | Table         | Column | Value               | Username | ActionType | Date
    123 | 1     | PatientRecord | Name   | John Smith          | Admin    | INSERT     | 2011-06-20 16:52:31.823
    124 | 1     | PatientRecord | DOB    | 1981-07-15 00:00:00 | Admin    | INSERT     | 2011-06-20 16:52:31.823

    ...repeat another 18 times.

    Now it gets better (or worse).

    Every table has InsertedBy, InsertedDate, UpdatedBy and UpdatedDate columns, which are kept up to date by more triggers. So when a new record is inserted, another trigger automatically updates the InsertedBy and InsertedDate columns. To do this it needs to 'update' the PatientRecord table, and this update fires the Audit trigger, which then proceeds to insert another 20 rows into the Audit table.

    Wait, there's more.

    Because we've 'updated' the PatientRecord table, an update trigger fires which updates the UpdatedBy and UpdatedDate columns in PatientRecord. But that's another update so, yep, you've guessed it, the Audit trigger kicks in and inserts yet another 20 rows into the Audit table.

    So to sum up, inserting a single row into the 'PatientRecord' table inserts 60 rows into the Audit table, and performing an update inserts 40 rows. And these triggers are on EVERY table in the database (even crappy many-to-many linking tables). And the client is wondering why the Audit table is currently sitting at 80GB.

    The guy who wrote this must have been drinking from the incompetence fountain.
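    For anyone who hasn't seen the pattern, here's a minimal sketch of what such a per-column trigger might look like (all names are invented for illustration; the real system presumably has a generated trigger like this on every table):

    ```sql
    -- Hypothetical per-column audit trigger: one Audit row per column.
    CREATE TRIGGER trg_PatientRecord_Insert ON PatientRecord
    AFTER INSERT
    AS
    BEGIN
        INSERT INTO Audit (RowId, [Table], [Column], Value, Username, ActionType, [Date])
        SELECT Id, 'PatientRecord', 'Name', Name, SUSER_SNAME(), 'INSERT', GETDATE() FROM inserted
        UNION ALL
        SELECT Id, 'PatientRecord', 'DOB', CONVERT(nvarchar(50), DOB, 120), SUSER_SNAME(), 'INSERT', GETDATE() FROM inserted;
        -- ...and so on for the remaining 18 columns.

        -- The 'stamping' trigger then runs an UPDATE like this, which fires
        -- the update audit trigger and kicks off the whole cascade again:
        UPDATE PatientRecord
        SET InsertedBy = SUSER_SNAME(), InsertedDate = GETDATE()
        WHERE Id IN (SELECT Id FROM inserted);
    END
    ```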



  • You just know that he probably had an audit trigger on the audit table, but after crashing $SQL_SERVER 9 times in a row, he thought maybe he shouldn't do that.



     Well, it's not ENTIRELY terrible design for the audit table itself. Perhaps not the best, but meh.

     

    The trigger architecture, though. That's quite half-assed.



  • @Weng said:

    Well, it's not ENTIRELY terrible design for the audit table itself. Perhaps not the best, but meh.

     

    Perhaps one Audit row per update/insert etc. And only include details of what was actually changed, not every single row.
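    Something like this, roughly (types and names are just a sketch, not the original schema):

    ```sql
    -- One Audit row per statement, with only the changed columns serialized
    -- into a single payload column instead of a row per column.
    CREATE TABLE Audit (
        Id         bigint IDENTITY PRIMARY KEY,
        TableName  sysname       NOT NULL,
        RowId      int           NOT NULL,
        ActionType char(6)       NOT NULL,  -- INSERT / UPDATE / DELETE
        Changes    nvarchar(max) NULL,      -- serialized changed-column values
        Username   sysname       NOT NULL,
        AuditDate  datetime      NOT NULL DEFAULT GETDATE()
    );
    ```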

     



  • @Sunday Ironfoot said:

    And only include details of what was actually changed, not every single row.
     

    Yeah, but how would you do that? A fat nvarchar or XML field that stores the changes in some freeform, destructured, serialized way?

     

    That might even gobble up more data than a few records.



  • If you want to throw a big wrench into this, just run a bulk insert, which won't fire triggers. Lots of fun doing recon later!



  • @dhromed said:

    @Sunday Ironfoot said:

    And only include details of what was actually changed, not every single row.
     

    Yeah, but how would you do that? A fat nvarchar or XML field that stores the changes in some freeform, destructured, serialized way?

     

    That might even gobble up more data than a few records.

     

    I proposed using JSON; it's more compact than XML. Also, we'd only be storing the columns that get updated, not the entire row. To compare before and after, we'd just look at the previous Audit entry.
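    As a rough sketch of the "only changed columns" part (column names invented, and the JSON is hand-rolled here since SQL Server of that era has no native JSON support, so real code would need proper escaping):

    ```sql
    -- Inside an AFTER UPDATE trigger: join inserted to deleted on the key
    -- and emit a value only for columns that actually changed.
    INSERT INTO Audit (TableName, RowId, ActionType, Changes, Username, AuditDate)
    SELECT 'PatientRecord', i.Id, 'UPDATE',
           '{'
           + CASE WHEN i.Name <> d.Name
                  THEN '"Name":"' + i.Name + '",' ELSE '' END
           + CASE WHEN i.DOB <> d.DOB
                  THEN '"DOB":"' + CONVERT(varchar(23), i.DOB, 126) + '",' ELSE '' END
           + '}',
           SUSER_SNAME(), GETDATE()
    FROM inserted i
    JOIN deleted d ON d.Id = i.Id;
    ```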

     



  • @dhromed said:

    @Sunday Ironfoot said:

    And only include details of what was actually changed, not every single row.

    That might even gobble up more data than a few records.

    Maybe, but I'm betting the Audit table is indexed (if it isn't, that's another WTF) and the indexes are taking up a large portion of that 80GB. Not to mention that every time a new row (or 40) gets added, the indexes have to be updated... sooner or later the index maintenance will take longer than the actual insert.



  • Welcome to the shitty world of auditing in RDBMS servers. What you are describing is the only way to audit changes in a SQL Server Database Engine database, if you want to capture who, when and what all in one place.

    The only failure I can see in what you describe is that there is no process to periodically archive out the audit table.

     

     



  • @thistooshallpass said:

    If you want to throw a big wrench into this, just run a bulk insert, which won't fire triggers. Lots of fun doing recon later!

    What DBMS do you use? My experience with MS SQL Server is that the trigger is run but then, because the person who wrote it didn't realise more than one row might be inserted at a time, it breaks.



  • @pjt33 said:

    @thistooshallpass said:
    If you want to throw a big wrench into this, just run a bulk insert, which won't fire triggers. Lots of fun doing recon later!

    What DBMS do you use? My experience with MS SQL Server is that the trigger is run but then, because the person who wrote it didn't realise more than one row might be inserted at a time, it breaks.

    I've seen triggers like this written by programmers who don't fully "get" SQL. They'll happily write a script that does something like:

    UPDATE mytable
    SET colA = 'Y'
    WHERE colB >= 27

    But when you look at the trigger code they've written, it has a cursor that steps through each row in INSERTED and performs some action on it. Suddenly the idea of the database manipulating a set of records flies out of the window.
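    The set-based equivalent handles one row or a million rows with the same statement. For example, an update trigger body (the Audit columns here are assumed, not from the system above) that needs no cursor at all:

    ```sql
    -- Cursor-free trigger: INSERTED may hold 0, 1, or many rows,
    -- and one set-based statement covers them all.
    CREATE TRIGGER trg_mytable_update ON mytable
    AFTER UPDATE
    AS
    BEGIN
        INSERT INTO Audit (TableName, RowId, ActionType, Username, AuditDate)
        SELECT 'mytable', i.id, 'UPDATE', SUSER_SNAME(), GETDATE()
        FROM inserted i;
    END
    ```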



  • @pjt33 said:

    @thistooshallpass said:
    If you want to throw a big wrench into this, just run a bulk insert, which won't fire triggers. Lots of fun doing recon later!
    What DBMS do you use? My experience with MS SQL Server is that the trigger is run but then, because the person who wrote it didn't realise more than one row might be inserted at a time, it breaks.

    On SQL Server, triggers don't fire on bulk inserts. This is by design, and has been since forever. Note that bulk inserts are not to be confused with "multiple inserts"; they are a specific operation done either with the BULK INSERT statement in T-SQL, or via ETL tools (bcp, SSIS). If you do multiple regular inserts (like INSERT TABLE1 SELECT * FROM TABLE2), this does not qualify as a bulk insert and the triggers will fire.
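    For completeness, BULK INSERT can be told to fire triggers with the FIRE_TRIGGERS option; it's just off by default, which is exactly the gap described above (file path and format invented for illustration):

    ```sql
    -- Default: no insert triggers fire on the target table.
    BULK INSERT PatientRecord
    FROM 'C:\data\patients.dat'
    WITH (FIELDTERMINATOR = ',');

    -- Opting in explicitly:
    BULK INSERT PatientRecord
    FROM 'C:\data\patients.dat'
    WITH (FIELDTERMINATOR = ',', FIRE_TRIGGERS);
    ```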

    And yes, some people don't understand that a trigger executes once per operation, not once per record, so there are a lot of faulty insert, update and delete triggers out there. Triggers are tricky and should always be a last resort because they add a lot of complexity and can cause nasty side effects.



  • @havokk said:

    Welcome to the shitty world of auditing in RDBMS servers.

    What you mean is: welcome to the shitty world of people who don't know how to audit an RDBMS.

     @havokk said:

    What you are describing is the only way to audit changes in a SQL Server Database Engine database, if you want to capture who, when and what all in one place.

    It is not the only way at all. On SQL Server it is possible to use Service Broker to offload the audit operation so it is done asynchronously while keeping the queue order. It is also possible to intercept operations at the database engine level, outside of the query scope, and to log them to different media; some COTS software, like IDERA Compliance Manager, is built around these features.

    @havokk said:

    The only failure I can see in what you describe is that there is no process to periodically archive out the audit table.

    Again you could not be more wrong. This is 2011. Housekeeping on audit tables is obsolete and actually counterproductive because it drains I/O resources for no gain at all.

    All you need to do is use tiered storage (ideally sub-volume tiering; nowadays even Dell has this technology) and partition the audit tables (by month, year, etc.) so the older data is stored on the less expensive storage. Once this is set up properly, you stop worrying and wasting I/O. If you do auditing properly (i.e. not within triggers) you can store the audit data on shitty storage like a MAID, because you have nothing to gain by flooding your fast disks with sequential writes. Using a NAS is also a good way to segregate your I/O traffic, leaving the precious HBA queue (or RAID controller) for the important stuff.
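    The partitioning half of that looks roughly like this on SQL Server (filegroup names invented; each filegroup would live on the appropriate storage tier):

    ```sql
    -- Partition the audit table by month so old partitions sit on cheap storage.
    CREATE PARTITION FUNCTION pf_AuditByMonth (datetime)
    AS RANGE RIGHT FOR VALUES ('2011-05-01', '2011-06-01', '2011-07-01');

    CREATE PARTITION SCHEME ps_AuditByMonth
    AS PARTITION pf_AuditByMonth
    TO (fg_archive, fg_archive, fg_recent, fg_recent);  -- filegroups = tiers

    CREATE TABLE Audit (
        Id        bigint IDENTITY,
        AuditDate datetime NOT NULL
        -- ...remaining audit columns...
    ) ON ps_AuditByMonth (AuditDate);
    ```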

    Of course, if you do all your auditing inside a trigger, then don't bother with proper storage, since you are strangling the database anyway by keeping transactions open longer than necessary, causing locks and exponential contention.

     

