It's important that you wait for the answer



  • We have a 40TB database of assorted information on, well, mostly everyone in the US. Yesterday evening, my boss asked me to run some 9-way join as soon as I got in today to get some high-priority information. Unfortunately, some of the where-clause fields were not indexed (normally not queried; special situation).

    Ok, I write and test the SQL in a small test db, and then kick off the query in the main database.

    I then go back to what I was doing before this little interruption.

    No! You must watch the SQL window while it's running in case there's an error.

    But, but, but I can watch this window while doing something else (two monitors).

    No, I want you to watch this as it's important!

    This was at 6:45 this morning. I'm still watching this little blinking cursor at 3:30. I don't think it's going to finish by the time I go home. They said it's ok to bill for the extra time I'm sitting here.

    I'd rant that I'm so, so, so bored, but after work, I'm going to buy myself a new toy with the money I earned for snoozing all day.

     



  • >.>



  • ...

    ...

    ...

    42!



  • @nexekho said:

    ...

    ...

    ...

    42!

    This.



  • I'm confused. So they're going to pay you for two days' work today?

    Nice gig if you can get it.


    @snoofle said:

    I'm still watching this little blinking cursor at 3:30.
     

    No you aren't! You're posting a thread about it on TDWTF. GET BACK TO WORK, SLACKER!



  • If it wasn't for the fact that they are happy for you to bill for the extra time, I'd suspect a hazing ritual or bullying.

    I'm not a DBA and don't play one on TV, but is it the sort of query where adding some indexes, while delaying the time to start the query, would have reduced the total time to produce an answer?

     

     



  • What I want to know is what the DB is doing about the rest of the connections. I'd imagine this two day operation is consuming a lot of the server's resources so wouldn't that be slowing, or even killing, other connections?


    @Soviut said:

    What I want to know is what the DB is doing about the rest of the connections. I'd imagine this two day operation is consuming a lot of the server's resources so wouldn't that be slowing, or even killing, other connections?
    You know that DB servers, especially the kind with 40TB of data, can typically handle a buttload of concurrent connections, right?



  • @snoofle said:

    We have a 40TB database of assorted information on, well, mostly everyone in the US.
     

    I am a U.S. citizen who hasn't set foot in the United States since 1992. Haven't had a credit card all that time. Please tell me I'm not in your database.

     



  • @snoofle said:

    No! You must watch the SQL window while it's running in case there's an error.

    But, but, but I can watch this window while doing something else (two monitors).

    No, I want you to watch this as it's important!

     

    ?

    "No, I will not sit here staring at this screen. Now go away."

     



  • @dhromed said:

    @snoofle said:

    No! You must watch the SQL window while it's running in case there's an error.

    But, but, but I can watch this window while doing something else (two monitors).

    No, I want you to watch this as it's important!

     

    ?

    "No, I will not sit here staring at this screen. Now go away."

     

    "Yes, I will sit here staring at the screen.

    Or at least you won't know any better unless you stay here staring at me."

     



  • Would be fun if your session times out before the query terminates. I use TOAD with Oracle and TOAD just keeps running the clock on the query...



  • @AndyCanfield said:

    I am a U.S. citizen who hasn't set foot in the United States since 1992. Haven't had a credit card all that time. Please tell me I'm not in your database.

    Probably this "special" query has made sure that you'll never be able to set foot there again...



  •  @snoofle said:

    I then go back to what I was doing before this little interruption.

    No! You must watch the SQL window while it's running in case there's an error.

    But, but, but I can watch this window while doing something else (two monitors).

    No, I want you to watch this as it's important!

    Not as important as making sure the application doesn't lock up.  Your boss should have had you test the application to pass the time (in addition to checking on the SQL window every now and then).

    Although the application "shouldn't" lock up for the duration of the query, if part of the application grabs a needlessly strong lock while the query is running, it may conflict with even a seemingly innocent SELECT query.  I've made this mistake before.
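A minimal sketch of that failure mode, with SQLite standing in for the unnamed RDBMS (the table and schema here are invented for illustration): an open write transaction holding a strong lock makes even a seemingly innocent SELECT on another connection fail.

```python
import os
import sqlite3
import tempfile

# Hypothetical demo (SQLite as stand-in; table name invented).
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path, isolation_level=None)  # manage transactions manually
writer.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
writer.execute("INSERT INTO t VALUES (1, 'x')")

# The "application" grabs a needlessly strong lock and holds it open.
writer.execute("BEGIN EXCLUSIVE")
writer.execute("UPDATE t SET val = 'y' WHERE id = 1")

# The long-running "query" arrives on a second connection; with no
# lock-wait timeout it fails immediately instead of hanging.
reader = sqlite3.connect(path, timeout=0)
try:
    reader.execute("SELECT val FROM t").fetchall()
    blocked = False
except sqlite3.OperationalError:  # "database is locked"
    blocked = True

writer.execute("ROLLBACK")
print(blocked)
```

The same shape of conflict happens in any lock-based RDBMS; only the granularity (whole database here, rows or pages elsewhere) differs.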

     



  • @Paddles: no; the tables are already heavily indexed, but this query is using non-indexed fields because of the nature of the one-off request

    @Soviut: as Weng pointed out, the servers are pretty powerful; it's just that a 9-way join on n-TB of data kind of eats up resources at an immense scale

    @AndyCanfield: this ain't credit or financial; if you go back more than 3 years, you probably aren't in there

    @TGV: me too

    @joeyadams: the app never does an update; just queries and inserts, so it's unlikely it'd ever lock up. Though it'd be nice if we had a COPY of the prod db in some OTHER environment so we could do these sorts of things someplace other than prod

     



  • @snoofle said:

    @Paddles: no; the tables are already heavily indexed, but this query is using non-indexed fields because of the nature of the one-off request
     

    I don't think you can blanketly (is that a word?) say that. I've seen many instances where the creation of a temporary index took, say, 25% of the time that the query was taking, but reduced subsequent executions of that query by 95%.

     

    So, whatever happened? Is it still running? What would be really funny is if someone had an open transaction (forgot to run COMMIT or ROLLBACK) which was causing a block on your query. Well, OK, funny to me at least.

     

    Also: If the boss was willing to sit there and watch YOU to make sure you watched the query, why not turn the 2nd monitor to the side, have him pull up a chair, and watch the query his own damn self so you could get work done. 
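The temporary-index tradeoff above can be sketched with SQLite as a stand-in (table, column, and index names are invented): the planner switches from a full table scan to an index search once a one-off index on the WHERE-clause field exists.

```python
import sqlite3

# Hypothetical sketch; names are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, state TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO people (state, name) VALUES (?, ?)",
    [("IN", "alice"), ("NY", "bob"), ("CA", "carol")],
)

query = "SELECT name FROM people WHERE state = 'IN'"

# Before: no usable index on state, so the plan is a full scan.
before = [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + query)]

# Creating the index costs time (and disk) up front...
conn.execute("CREATE INDEX idx_people_state ON people (state)")

# ...but afterwards the same query searches the index instead.
after = [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + query)]

print(before, after)
```

Whether the up-front index build pays for itself depends on how many times the query will run, which is exactly the judgment call being argued here.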

     



  • About an hour ago I chortled at this thread. Now I've had to come down to the office (Sunday night, around 9.30pm, for those of you not in the UK timezone) to rebuild a RAID array with two drive failures.


    Guess what I'm doing? Waiting and watching a progress bar for the next....... 2 hours maybe?!


    Karma sucks.



  • @MeesterTurner said:


    Karma sucks.
    But it does pay time-and-a-half.

     



  • @DaveK said:

    @MeesterTurner said:


    Karma sucks.
    But it does pay time-and-a-half.


    Ah, if only. My overtime rate's a WTF in itself... but that's another story! At least the RAID has just finished rebuilding and let me boot into Windows, so it's hometime for me - yay!



  • @SQLDave said:


    So, whatever happened?

     

    The query finally finished around 10-10:30 Friday night. This morning (Monday), I came in only to find an email requesting that I redo the query, because the data can't possibly be in there that way if the query includes " ... where ... and xxx = 'IN' and ...". Yet my query reads:

    ... where ... and xxx = 'IN' and ...

    so methinks that, after some further analysis of the data, it's going to have to be run again.



  • @snoofle said:

    @Paddles: no; the tables are already heavily indexed, but this query is using non-indexed fields because of the nature of the one-off request

    @Soviut: as Weng pointed out, the servers are pretty powerful; it's just that a 9-way join on n-TB of data kind of eats up resources at an immense scale

     Adding indexes and then querying could reduce total wait time, but those indexes would also take up disk space. On a billion-row table comprising 112GB of data, doing this reduced the total query time, including creating the index, to about 45 minutes. In our case, however, it was a table that never should have grown to that size: a rogue process that executed every 15 minutes had run away and was making an insane number of inserts. We used the data to figure out which process it was, and then truncated the table, since we didn't really need the data otherwise. Now the process is fixed and we're back to a normal 5 inserts a day.

    The way the schema of one of our DBs is set up here, 9-way joins are the norm. I'm spending most of my days trying to tune the DB to make life more manageable.



  • @galgorah said:

    @snoofle said:

    @Paddles: no; the tables are already heavily indexed, but this query is using non-indexed fields because of the nature of the one-off request

    @Soviut: as Weng pointed out, the servers are pretty powerful; it's just that a 9-way join on n-TB of data kind of eats up resources at an immense scale

     Adding indexes and then querying could reduce total wait time, but those indexes would also take up disk space. On a billion-row table comprising 112GB of data, doing this reduced the total query time, including creating the index, to about 45 minutes. In our case, however, it was a table that never should have grown to that size: a rogue process that executed every 15 minutes had run away and was making an insane number of inserts. We used the data to figure out which process it was, and then truncated the table, since we didn't really need the data otherwise. Now the process is fixed and we're back to a normal 5 inserts a day.

    The way the schema of one of our DBs is set up here, 9-way joins are the norm. I'm spending most of my days trying to tune the DB to make life more manageable.

    Your post and approach make sense. But I don't have write access to the DB, and creating an index around here takes an act of $DEITY and enough paperwork to deforest Oregon ...


  • @snoofle said:

    Your post and approach make sense. But I don't have write access to the DB, and creating an index around here takes an act of $DEITY and enough paperwork to deforest Oregon ...
    That is a pain. I'm actually moving more towards a DBA role these days, even though my official title is lead developer. I've found that in these situations, a few drinks with a DBA can often help smooth out the process. It all depends on whether the DBA is also burdened with extreme bureaucracy, and whether you can get them to offer to create the temporary index for you. That way it's a request from their end and not yours.


  • @snoofle said:

    Your post and approach make sense. But I don't have write access to the DB, and creating an index around here takes an act of $DEITY and enough paperwork to deforest Oregon ...

    Why do companies shoot themselves in the foot like that!

    Do you have enough access to create a Table variable and shove your data in it? I'm 90% sure you can add an index to a Table variable...


    @blakeyrat said:

    @snoofle said:
    Your post and approach make sense. But I don't have write access to the DB, and creating an index around here takes an act of $DEITY and enough paperwork to deforest Oregon ...

    Why do companies shoot themselves in the foot like that!

    Do you have enough access to create a Table variable and shove your data in it? I'm 90% sure you can add an index to a Table variable...

    Didn't we just have a thread about stuff like "delete from...."? Remember, this is a production DB, and there is not another DEV / TEST copy. I guess the 40TB thing is probably a key to understanding why that's so.



  • @boomzilla said:

    @blakeyrat said:
    @snoofle said:
    Your post and approach make sense. But I don't have write access to the DB, and creating an index around here takes an act of $DEITY and enough paperwork to deforest Oregon ...

    Why do companies shoot themselves in the foot like that!

    Do you have enough access to create a Table variable and shove your data in it? I'm 90% sure you can add an index to a Table variable...

    Didn't we just have a thread about stuff like "delete from...."? Remember, this is a production DB, and there is not another DEV / TEST copy. I guess the 40TB thing is probably a key to understanding why that's so.

    I don't understand what your response has to do with my post. You don't have to delete anything to put data in a Table variable, if that's what's confusing you? You don't even have to delete the variable, you can just let it drift out of scope.

    Edit: Oh wait, I get it. You're referring to the "shoot in the foot" command.

    My point is that he ought to be able to create tables in his own schema, even if the dbo tables are locked down (and for good reason). Unless the DB server is critically out of space, there's no good reason to disallow users from using their own schema.


    @blakeyrat said:

    My point is he ought to be able to create tables in his own schema, even if the dbo tables are locked-down (and for good reason.) Unless the DB server is critically out-of-space, there's no good reason to disallow users from using their own schema.

    I agree that something like this should be possible. Of course, we don't even know which RDBMS this is, or whether he even has anything like his own schema, and it's not clear to me that it would really help him anyway. But given that this is a production DB, I doubt he'd be allowed to do it. Given the size of the DB, and assuming it were common for users to do this, I can imagine all sorts of mayhem (at least as far as continued normal operation of the production environment goes) that could result in foots getting shot without anything ever being written to the production data.

