DeLorean required



  • A while back, it was decided that some data that was published by another team would now be published by our team. For unrelated business reasons, the subscription mechanism would not be durable.

    Since there are numerous consumers of said data, we decided to start publishing the data in parallel with the other team so as to give the downstream folks time to test and cut over. Only then would the other team cease publication. This way we don't need to have all of these systems deployed simultaneously, and also avoid the everyone-has-to-roll-back-because-system-x-got-it-wrong situation.

    Everyone knew the particulars of how, what, when and where to subscribe to the data well in advance of the test period. Ok, I start publishing the data. The next day, our project manager sends out an email:

We started publishing the data from our team *yesterday* (emphasis mine). Please subscribe so that you can get and test with that data.

    I guess it never occurred to him to send out that email the day before the test.




  • Huh? I don't see anything wrong with that. Unless you stopped the new system after one day, in which case why?



  • @Timmmm: The data is different each day. In particular, there were several different data file formats we were to cycle through, one per day. If you miss a day, then you didn't test with that file format. The testing is only for a few cycles of all the formats. If you don't get it right the first time, you have a few days to fix your code and try again the next time that file format is published. If you don't start until the last minute and don't get it right the first time, you're screwed.



  • I don't really see a problem either.

    It's like, "we migrated the servers yesterday. Please update your bookmarks."

    Goddamn synchronized posting. 😠



  • Context would have been really handy in the first post; I was lost too. But:

    @snoofle said:

    @Timmmm: The data is different each day. In particular, there were several different data file formats we were to cycle through, one per day. If you miss a day, then you didn't test with that file format. The testing is only for a few cycles of all the formats. If you don't get it right the first time, you have a few days to fix your code and try again the next time that file format is published. If you don't start until the last minute and don't get it right the first time, you're screwed.


    This whole process sounds like a giant pile of WTF. Is there a sensible reason things are done this way, or are people just trying not to jam up the tubes with too many different formats?



  • @snoofle said:

    In particular, there were several different data file formats we were to cycle through, one per day. If you miss a day, then you didn't test with that file format. The testing is only for a few cycles of all the formats.
    ... is there a good, non-obvious reason for this or is it as much of a wtf as it seems? I mean it sounds like it came out of a hollywood script:

    "We only have 2 minutes to access the data. After that it shifts and we wont have access for days" 

    "The suspect will be long gone by then!"

    Cue frenzied typing by sweating geeky sidekick while flashy graphics swirl around the screen next to a timer counting down.



  • @Justice said:

    Is there a sensible reason things are done this way...?

    The files are *huge*, and our QA and pre-prod environments share the same network - it would impact performance in production, so we were told: one file format once per day; do it over several weeks to minimize the load on prod.

    And yes, it is a giant pile of wtf. $DEITY forbid we just use real-time messaging instead of this ftp-wannabe crap.



  • @snoofle said:

    @Justice said:

    Is there a sensible reason things are done this way...?

    The files are *huge*, and our QA and pre-prod environments share the same network - it would impact performance in production, so we were told: one file format once per day; do it over several weeks to minimize the load on prod.

    And yes, it is a giant pile of wtf. $DEITY forbid we just use real-time messaging instead of this ftp-wannabe crap.

    It's Stephen King's proofs. New book each day. 800 pages at least.



  • Does King still write a lot?



  • It's still better than what I have to deal with. A few weeks in advance we get told, "this file is being changed on this date". Sometimes we will get told what it is being changed for, but never specifics on the actual changes.



  • @DOA said:

    Cue frenzied typing by sweating geeky sidekick while flashy graphics swirl around the screen next to a timer counting down.


    Right about then Hugh Jackman drops a logic bomb through the trap door.



  • @snoofle said:

    $DEITY forbid

    NICE!



  • @smbarbour said:

    It's still better than what I have to deal with.  A few weeks in advance we get told, "this file is being changed on this date".  Sometimes we will get told what it is being changed for, but never specifics on the actual changes.

    I fucking hate it when they do that! You basically have to stop every business process that is dependent upon that file being consumable by your system. Then you get to do some "realtime development".

    Although in my case, the outside partner sends a vague hint that they're thinking about changing a format, and then just does it without telling us. So we get to see the system fail as our first indicator that a change has occurred. The more you think about it, the creepier the possibilities of just how the system will fail become.



  • @hoodaticus said:

    @snoofle said:

    $DEITY forbid

    NICE!

    Except that $DEITY hasn't been initialized, so that's " forbid", which doesn't make much sense.
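    The quip is literally how POSIX shells behave: an unset variable expands to the empty string unless you ask for stricter handling. A minimal sketch (the "Cthulhu" default is just an illustrative placeholder, not anything from the thread):

    ```shell
    # An unset variable expands to the empty string,
    # so "$DEITY forbid" really does print " forbid".
    unset DEITY
    echo "$DEITY forbid"             # prints " forbid", leading space and all

    # ${VAR:-default} supplies a fallback when VAR is unset or empty.
    echo "${DEITY:-Cthulhu} forbid"  # prints "Cthulhu forbid"

    # With `set -u` (nounset) the shell refuses to expand unset variables:
    ( set -u; echo "$DEITY forbid" ) 2>/dev/null || echo "error: DEITY unset"
    ```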

