We can always deal with it manually



  • This company enriches some commercially available data and resells it. Our analysts play with what-if scenarios on the data until they get the numbers they like, and then mark the data set for subsequent recall. At the end of the day, they pick the data set they like and ship it out.

    The solution that was thrust upon us by our new team lead was to make a copy of all of the data tables to represent each marked result set. Then, when they chose a data set to ship, we would (in the middle of the production day), do the following:

    • Kick all the users off the system
    • Kill the app server
    • Repeat the following for EACH of about 50 data tables - in production:
      - drop or disable all constraints, triggers and indices on "table"
      - rename the active "table" to "table_save"
      - rename the chosen "table_mark_n" to "table"
      - add/enable the same set of constraints, triggers and indices on the now-replaced data table
    • Restart the app server
    • Run the package/ship routine to finalize the data and ship it
    • Stop the app server
    • Repeat the following for these same 50-ish tables:
      - drop or disable all constraints, triggers and indices on "table"
      - rename "table" back to "table_mark_n"
      - rename "table_save" back to "table"
      - add/enable all the constraints, triggers and indices on the original table
    • Restart the app server
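    One iteration of that per-table shuffle might look something like this in Oracle (table, constraint and index names are invented for illustration — and note that Oracle DDL like this cannot be rolled back):

```sql
-- Hypothetical swap for ONE of the ~50 tables; all names are made up.

-- Drop constraints, triggers and indices on the active table
ALTER TABLE orders DROP CONSTRAINT orders_pk CASCADE; -- CASCADE also drops FKs referencing it
DROP TRIGGER orders_audit_trg;
DROP INDEX orders_cust_idx;

-- Swap the active table for the chosen marked copy
ALTER TABLE orders RENAME TO orders_save;
ALTER TABLE orders_mark_3 RENAME TO orders;

-- Recreate the same set on the now-active table
ALTER TABLE orders ADD CONSTRAINT orders_pk PRIMARY KEY (order_id);
CREATE INDEX orders_cust_idx ON orders (customer_id);
-- (plus recreating each trigger and every FK that the CASCADE removed)
```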

    Of course, the moment you disable or drop constraints (think PKs/FKs), anything that references them is automatically invalidated (at least in Oracle), and you have a massive ripple effect.

    Naturally, if any of these steps fail, the main tables would consist of some subset of the original tables and some subset of the "marked" tables, with who-knows-what set of constraints, triggers and indices.

    When I suggested that there just HAD to be a better way (say adding a label field to each record, and having the DAO, business and GUI layers work with labeled record sets), I was told not to worry because we could always just deal with it manually.

     



  • @snoofle said:

    we could always just deal with it manually.
     

    You can always deal with anything manually, but that is not the fucking point.



  • Can't you at least use transactions to prevent the "some subset of the original tables and some subset of the "marked" tables, with who-knows-what set of constraints, triggers and indices" problem? OK, I admit I have never tried to see whether dropping tables/triggers/indices works in a transaction with rollback, but I seem to remember that PostgreSQL implemented transaction/rollback support for DROP TABLE and DROP COLUMN.

    And please don't quit your job; the design you have to work with makes everything else look sane by comparison :}

     



  • @snoofle said:

    This company enriches some commercially available data and resells it. Our analysts play with what-if scenarios on the data until they get the numbers they like, and then mark the data set for subsequent recall. At the end of the day, they pick the data set they like and ship it out.

     

    TRWTF

     A book I read ages ago drew a link between the arrival of spreadsheet software on the desktop and the subsequent collapse of several large corporations. The author's thought was that spreadsheets, for the first time, put this sort of 'fiddle the input figures until the output figures are what you like' capability into the hands of thousands of middle managers. When all these imaginary forecasts failed, so did several corporations.



  • @Tiller: "delete from xxx" is too slow, and "truncate table xxx" can't be rolled back.

    @robbak: welcome to Wall Street - it's all random guessing with facts and figures to appear to back it up

    Oh BTW: we also thought of making a snapshot of the whole db as part of the "marking" process, and just pointing the app server at the desired snapshot (thus eliminating the risks of all the table shuffling), but the dba's said that would take 2 hours (really? I can back up my fairly full 1TB disk on my crappy home PC in under an hour and our prod db is nowhere near that much data) and our response window is 10 minutes. It's not a question of if it will fail, but when. Unfortunately our users refuse to plan for that eventuality: we need more features in the app now!

     



  • @dhromed said:

    @snoofle said:
    we could always just deal with it manually.
    You can always deal with anything manually, but that is not the fucking point.
    As long as the people who have to deal with it do not include me and I can't be blamed, I'm perfectly happy to let the muppets reap what they sow. In fact it brings a smile to my face to watch the agony as they dig into the pile of crap they've built. I think of it as a sliver of justice in an unfair world.



  • Remember the saying "A little knowledge is a dangerous thing" ? This is the kind of thing the management would do at my old job, although for different reasons. 

    They had read a document that said that large databases were slow databases, so they decided that the databases should be small and compact. Each Friday night the server (running MS SQL Server) was shut down, the database files (stored in SQL Server's 'Program Files' directory, no less) were copied to another machine that had a tape drive, and the database was reinitialised ready for Monday. If you wanted data from a previous week, the database files were temporarily overwritten with the files for that week, you got your data, and then they were overwritten again with the current set. To stop people adding data to the archive for the wrong week, whenever an old database file was in use the server's IP address was changed, and whoever wanted the old data was given the new IP.

    I'm not sure of the biggest WTF. Management, or IT. 



  • " I'm not sure of the biggest WTF. Management, or IT."

     Both...unfortunately, neither has a good grasp of the WHOLE FREAKIN' POINT.  <Okay, better now that the meds are kicking in...>

     Ahem...Kinda like the folks that have multimillion dollar IT infrastructures, yet 99% of real business is done via "print, transfer by hand, copy, edit, copy, print reports, return to database for archival".  A lot of smaller factories are like that, and if you happen to suggest another alternative, you're always met with a FUDdy sales pitch argument from 1992.

     That said, I do appreciate the effort they took to change the IP as needed...that's one I haven't seen yet...



  • Why not add a table holding an index and a meaningful name for each "DataSet", then add, for each relevant "real" table, a cross-reference table holding a primary key from that table and a matching key from the "DataSet" table, so that each record in the primary table can belong to many "DataSet"s? Then you could simply perform a query like:

    Select * From SomeTable S Inner Join DataSetXRef XR On S.Key = XR.TableKey Inner Join DataSetTable DS On XR.DataSetKey = DS.Key Where DS.FriendlyName = 'Desired DataSet'?
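    Sketched as hypothetical DDL (all names invented, matching the query above):

```sql
-- One row per marked data set
CREATE TABLE DataSetTable (
    Key          NUMBER        PRIMARY KEY,
    FriendlyName VARCHAR2(100) NOT NULL
);

-- One cross-reference table per "real" table; each record can belong
-- to many data sets, and each data set can contain many records
CREATE TABLE DataSetXRef (
    TableKey   NUMBER NOT NULL REFERENCES SomeTable (Key),
    DataSetKey NUMBER NOT NULL REFERENCES DataSetTable (Key),
    PRIMARY KEY (TableKey, DataSetKey)
);
```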



  • @snoofle said:

    Naturally, if any of these steps fail, the main tables would consist of some subset of the original tables and some subset of the "marked" tables, with who-knows-what set of constraints, triggers and indices.

    Don't tell your lead about transactions, then, or he will be even more eager to push his scheme further. And since he seems to possess the reverse Midas touch (everything he fiddles with turns into shit), he'll find a way to make it even more wicked.



  • @snoofle said:

    @robbak: welcome to Wall Street - it's all random guessing while making up facts and figures to appear to back it up
     

    FTFY



  • @snoofle said:

    and our response window is 10 minutes.

    Wait, you can do that whole manual process in 10 minutes? Even using script files and other automation, that's pretty impressive if you have 50 tables... of course scratch that, assuming whoever's doing it isn't actually QAing the data at every step.

    What about just buying another database server? Or partitioning the one you have. Of course, your idea to just add tags to the records is quicker and easier to implement anyway, but if management can't get behind it, maybe they can get behind major hardware purchases. Heh. Or Amazon AWS rentals.



  • @blakeyrat said:

    @snoofle said:
    and our response window is 10 minutes.

    Wait, you can do that whole manual process in 10 minutes? Even using script files and other automation, that's pretty impressive if you have 50 databases...

    What about just buying another database server? Or partitioning the one you have. Of course, your idea to just add tags to the records is quicker and easier to implement anyway, but if management can't get behind it, maybe they can get behind major hardware purchases. Heh. Or Amazon AWS rentals.

    You mean Amazon's Mechanical Turk? The one job market in the world that is worse and more abusive towards workers than the Chinese sweatshop job market?



  • [quote user="Renan "C#" Sousa"]@blakeyrat said:

    @snoofle said:
    and our response window is 10 minutes.

    Wait, you can do that whole manual process in 10 minutes? Even using script files and other automation, that's pretty impressive if you have 50 databases...

    What about just buying another database server? Or partitioning the one you have. Of course, your idea to just add tags to the records is quicker and easier to implement anyway, but if management can't get behind it, maybe they can get behind major hardware purchases. Heh. Or Amazon AWS rentals.

    You mean Amazon's Mechanical Turk? The one job market in the world that is worse and more abusive towards workers than the Chinese sweatshop job market?[/quote]

    No. I meant AWS. That would be why I typed "AWS."



  • @blakeyrat said:

    [quote user="Renan "C#" Sousa"]@blakeyrat said:
    @snoofle said:
    and our response window is 10 minutes.

    Wait, you can do that whole manual process in 10 minutes? Even using script files and other automation, that's pretty impressive if you have 50 databases...

    What about just buying another database server? Or partitioning the one you have. Of course, your idea to just add tags to the records is quicker and easier to implement anyway, but if management can't get behind it, maybe they can get behind major hardware purchases. Heh. Or Amazon AWS rentals.

    You mean Amazon's Mechanical Turk? The one job market in the world that is worse and more abusive towards workers than the Chinese sweatshop job market?

    No. I meant AWS. That would be why I typed "AWS."[/quote]

    My bad. The turk is a subset of the AWS, and I mistook one for the other.



  • @Mole said:

    Remember the saying "A little knowledge is a dangerous thing" ? This is the kind of thing the management would do at my old job, although for different reasons...

    Holy Jesus. I'm going to tell myself that you made that shit up (and I'm going to take an extra oxy to help me believe it).



  • @blakeyrat said:

    @snoofle said:
    and our response window is 10 minutes.
    Wait, you can do that whole manual process in 10 minutes? Even using script files and other automation, that's pretty impressive if you have 50 tables... of course scratch that, assuming whoever's doing it isn't actually QAing the data at every step.

    What about just buying another database server? Or partitioning the one you have. Of course, your idea to just add tags to the records is quicker and easier to implement anyway, but if management can't get behind it, maybe they can get behind major hardware purchases. Heh. Or Amazon AWS rentals.

    re: 10 minutes: The whole series of SQL commands is in a document (not a script), so it can be cut-n-pasted. The total data is only about 100MB, so 10 minutes is plenty of time. The person doing it is supposed to check that each step passes before proceeding to the next, but I've seen folks just cut-n-paste the whole thing in one swipe, so if one step fails, it just continues with the next one, pretty much guaranteeing disaster (e.g.: the copy of the main table to backup fails, the truncate of the main table succeeds, the marked data is copied into the main table; now the main data is gone with no backup; oops!)

    re: alternative good ideas: you are THINKING my friend. That is not allowed here!
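    Even leaving the design alone, moving that document into a SQL*Plus script with fail-fast error handling would at least eliminate the paste-past-a-failure disaster. A sketch — the statements and names are placeholders, not the real document:

```sql
-- swap.sql -- run as: sqlplus user/pass @swap.sql
-- Stop immediately, with a nonzero exit code, on the first failed statement
WHENEVER SQLERROR EXIT SQL.SQLCODE ROLLBACK
WHENEVER OSERROR EXIT FAILURE ROLLBACK

-- Placeholder steps: if the backup copy fails, the script halts here
-- instead of blindly continuing on to the truncate
CREATE TABLE orders_save AS SELECT * FROM orders;
TRUNCATE TABLE orders;
INSERT INTO orders SELECT * FROM orders_mark_3;
COMMIT;

EXIT SUCCESS
```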

     



  • [quote user="Renan "C#" Sousa"]My bad. The turk is a subset of the AWS, and I mistook one for the other.[/quote]

    If it is, then that's brand new. AWS has always been separate from Mechanical Turk.

    I am slightly curious to hear your proposal of how to use Mechanical Turk to solve this issue, though... I guess you're a lot more imaginative than I am, because I'm stumped.



  • @snoofle said:

    re: alternative good ideas: you are THINKING my friend. That is not allowed here!

    Sorry sir! Would you like another coffee sir!!

    And yes, God forbid they use a SQL script or even (gasp) something like a saved Import/Export or SSIS process! Or even a shitty product like Talend could do this for you. Unthinkable.



  • Wow, two genuine WTFs in as many days by the same person. Your "team lead" is truly a WTF factory.

    I'm sure you already know this, but CYA my friend, CYA!  With that done you can sit back and watch the disaster unfold.

    Thanks again!



  • @blakeyrat said:

    [quote user="Renan "C#" Sousa"]My bad. The turk is a subset of the AWS, and I mistook one for the other.

    If it is, then that's brand new. AWS has always been separate from Mechanical Turk.

    I am slightly curious to hear your proposal of how to use Mechanical Turk to solve this issue, though... I guess you're a lot more imaginative than I am, because I'm stumped.

    [/quote]

    Mechanical Turk has been a part of AWS for a long time.  I think you are confusing EC2 (cloud computing) with AWS (the product line containing EC2, Mechanical Turk, S3, etc.)



  • @blakeyrat said:

    [quote user="Renan "C#" Sousa"]My bad. The turk is a subset of the AWS, and I mistook one for the other.

    If it is, then that's brand new. AWS has always been separate from Mechanical Turk.

    I am slightly curious to hear your proposal of how to use Mechanical Turk to solve this issue, though... I guess you're a lot more imaginative than I am, because I'm stumped.

    [/quote]

    Simple. Give the people behind the service terminal access to the machine with the database. Then have the Mechanical Turk people do the same thing that snoofle's colleagues do. [management thinking mode]That way you can save a truckload of cash paying less than an Indian minimum wage, and if you get 10 times more people in the Mechanical Turk service working for you, you can do it in 10% of the time it takes now.[/management thinking mode]



  • You can do this live in Oracle, at least with 11g, using versioning. It makes things annoying to deploy, but you can do it live with zero downtime for users. We had Tom Kyte (the guy from the Ask Tom Oracle blog) give us a rundown of this very feature as part of a pilot we are running. Apparently you use views in your code instead of tables, and the views are version-specific: a user on version 1 automatically gets a view that shows the old columns, while version 2 users see the new columns. All the triggers etc. are versionable, as are packages. It's a really powerful way to do it, but again it's a pain to plan out.
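    What's described sounds like 11g Release 2's edition-based redefinition. A rough, hypothetical sketch (edition, view and table names invented; it assumes the schema owner has already been editions-enabled):

```sql
-- Assumes: ALTER USER app_owner ENABLE EDITIONS has already been run.
CREATE EDITION v2 AS CHILD OF ora$base;

-- Application code selects from the editioning view, not the base table
CREATE OR REPLACE EDITIONING VIEW orders AS
    SELECT order_id, customer_id, amount
    FROM orders_tab;

-- A session on the new edition sees a different shape behind the same name
ALTER SESSION SET EDITION = v2;
CREATE OR REPLACE EDITIONING VIEW orders AS
    SELECT order_id, customer_id, amount, currency
    FROM orders_tab;
```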



  • [quote user="Renan "C#" Sousa"]That way you can save a truckload of cash paying less than an indian minium wage, and if you get 10 times more people in the mechanical turk service working for you, you can do it in 10% of the time it takes now.[/quote]

    Goddamn, this boy is bright!  Somebody promote him!  And give him a huge bonus out of those bailout funds we received!



  • @snoofle said:

    @Tiller: "delete table xxx" is too slow.and "truncate table xxx" can't be rolled back.

    @robbak: welcome to Wall Street - it's all random guessing with facts and figures to appear to back it up

    Oh BTW: we also thought of making a snapshot of the whole db as part of the "marking" process, and just pointing the app server at the desired snapshot (thus eliminating the risks of all the table shuffling), but the dba's said that would take 2 hours (really? I can back up my fairly full 1TB disk on my crappy home PC in under an hour and our prod db is nowhere near that much data) and our response window is 10 minutes. It's not a question of if it will fail, but when. Unfortunately our users refuse to plan for that eventuality: we need more features in the app now!

     

    You should switch to MS SQL Server.  SQL snapshots are copy-on-write, so they are created instantaneously.
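    For reference, a SQL Server database snapshot looks roughly like this (database, logical file and path names are made up):

```sql
-- Create a copy-on-write snapshot of ProdDB; creation is near-instant
CREATE DATABASE ProdDB_mark_3
ON ( NAME = ProdDB_Data,
     FILENAME = 'C:\Snapshots\ProdDB_mark_3.ss' )
AS SNAPSHOT OF ProdDB;

-- If that marked set is later chosen, revert the whole database to it
RESTORE DATABASE ProdDB
FROM DATABASE_SNAPSHOT = 'ProdDB_mark_3';
```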


  • @morbiuswilters said:

    @blakeyrat said:

    [quote user="Renan "C#" Sousa"]My bad. The turk is a subset of the AWS, and I mistook one for the other.

    If it is, then that's brand new. AWS has always been separate from Mechanical Turk.

    I am slightly curious to hear your proposal of how to use Mechanical Turk to solve this issue, though... I guess you're a lot more imaginative than I am, because I'm stumped.

    Mechanical Turk has been a part of AWS for a long time.  I think you are confusing EC2 (cloud computing) with AWS (the product line containing EC2, Mechanical Turk, S3, etc.)

    [/quote]

    Well, maybe their salespeople just suck then. They just came and gave our company a huge overview on their entire AWS line of services, and never once mentioned Turk. Although now that I check the website, I guess Turk is listed under "Products." Plus, it's not a web service, so I don't see why it would be grouped in with "Amazon Web Services."

    In any case, if I meant Mechanical Turk, I would have fucking typed it. So that still stands.

    @morbiuswilters said:

    Mechanical Turk has been a part of AWS for a long time.  I think you are confusing EC2 (cloud computing) with AWS (the product line containing EC2, Mechanical Turk, S3, etc.)

    The reason I said "AWS" is because they have a bunch of products that could solve the problem. RDS for one. SimpleDB (potentially) for another. And of course EC2, you can just give each analyst their own instance. I apologize for not typing, "use one in a line of Amazon products called AWS (but not including Mechanical Turk), specifically EC2, RDS or possibly SimpleDB to solve your problems!!!" I'll be more careful in the future.

    Sorry, I get grumpy when people make up bullshit, then claim I said it.



  • @blakeyrat said:

    Sorry, I get grumpy when people make up bullshit, then claim I said it.  That's why I'm glad this forum software features automatic quote verification and big red letters if you misquote someone.
    QFT



  • @blakeyrat said:

    The reason I said "AWS" is because they have a bunch of products that could solve the problem. RDS for one. SimpleDB (potentially) for another. And of course EC2, you can just give each analyst their own instance. I apologize for not typing, "use one in a line of Amazon products called AWS (but not including Mechanical Turk), specifically EC2, RDS or possibly SimpleDB to solve your problems!!!" I'll be more careful in the future.

    Well, since he's using Whoracle, his only real option would be EC2.  But I understood what you meant and your point was valid.  In fact, saying AWS was probably smarter because if you had mentioned SimpleDB or RDS some pedantic dickweed would have pointed out that snoofle was using Oracle and couldn't use those.  As it is, you just confused a Mexican and triggered a mediocre flamewar.

     

    @blakeyrat said:

    Sorry, I get grumpy when I haven't had dicks in my butt for awhile.

    FTFY.



  • @Jaime said:

    @snoofle said:

    @Tiller: "delete table xxx" is too slow.and "truncate table xxx" can't be rolled back.

    @robbak: welcome to Wall Street - it's all random guessing with facts and figures to appear to back it up

    Oh BTW: we also thought of making a snapshot of the whole db as part of the "marking" process, and just pointing the app server at the desired snapshot (thus eliminating the risks of all the table shuffling), but the dba's said that would take 2 hours (really? I can back up my fairly full 1TB disk on my crappy home PC in under an hour and our prod db is nowhere near that much data) and our response window is 10 minutes. It's not a question of if it will fail, but when. Unfortunately our users refuse to plan for that eventuality: we need more features in the app now!

     

    You should switch to MS SQL Server.  SQL snapshots are copy-on-write, so they are created instantaneously.

    Or MySQL with any filesystem or device that supports snapshots.



  • @morbiuswilters said:

    As it is, you just confused a Mexican and triggered a mediocre flamewar.

    Then my work here is done.



  • @morbiuswilters said:

    @blakeyrat said:

    Sorry, I get grumpy when I haven't had dicks in my butt for awhile.

    FTFY.

     

    Hey, I just heard someone talking about dicks in butts and I thought I'd like a piece of that action.



  •  Which part were you thinking of, the giver or the taker? 



  • @Mole said:

    Which part were you thinking of, the giver or the taker?

    He's quite happy to do either.  But take it from me: you don't want him to be the giver.  He has an unsatisfying girth as well as a bad habit of crying after he finishes giving.  There's no bigger mood-killer than tears dripping down your back.



  • @morbiuswilters said:

    @Mole said:

    Which part were you thinking of, the giver or the taker?

    He's quite happy to do either.  But take it from me: you don't want him to be the giver.  He has an unsatisfying girth as well as a bad habit of crying after he finishes giving.  There's no bigger mood-killer than tears dripping down your back.

    Easily solved if you deal with him manually.



  • Mood-killer?



    He's just not doing it right. You've got to aim the tears at the weak points for massive damage.



  • I have an unsatisfying girth? I'm not sure what that means. For the record, I weigh about 15 pounds more than The Machinist.



  •  @dhromed said:

    I have an unsatisfying girth? I'm not sure what that means. For the record, I weigh about 15 pounds more than The Machinist.
    I think it means you can give it without touching the sides 😉



  •  Ah, stupid me. Normally I'm the dirtiest mind around, but this time I misinterpreted "girth".

    Yes, morbs, I know you don't like my five-inch-thick wand. You tear easily, and last time the stitches gave you a lot of trouble when we went out for Thai afterwards.

    Also, and I feel I must defend myself on this, you know I cry only because the reduced blood volume in my brain reduces me to a more bestial, emotional state. You know that and I don't like you disparaging me publicly. 😞

     



  • @dhromed said:

    Yes, morbs, I know you don't like my five-inch-thick wand. You tear easily, and last time the stitches gave you a lot of trouble when we went out for Thai afterwards.

    Goddammit, beat by my own ambiguity.

     

    Off-topic: Where the fuck is pstorer?  He can't be on vacation because he's not some pansy Frenchman Communist who gets time off from work.  Maybe he finally got his comeuppance for posting Nazi propaganda?

