Thrashing...



  • I've been working here for about a year now. About the only thing that is consistent from day to day is the utter lack of disk space in the database.

    Production is 20+TB and growing by about 100GB/day. We've already spent the entire 2012 budget on disk space, and that will only last us through March 1. They're still talking about getting more money approved to order more disk space (I assume they'll count backward from the order-processing/delivery/setup/install lead time and wait until the last possible second to allocate the money, so the business doesn't quite shut down).

    We have no parallel production-scale environment - neither servers nor db - so nothing we do is ever tested at the scale of the production environment; we do the best we can in dev and guesstimate whether it will work, and at what capacity, when we scale it up.

    The dev db was set up with 20GB five years ago, when there were ten developers here. Now there are >100, but they've only increased the storage available to us to 100GB. Unfortunately, a single small-scale run generates about 1GB of data. You might imagine how quickly the dev db fills up, and the frustration when someone runs a script to wipe out the test data and everyone else goes looking for their work only to find it has disappeared.

    I happen to be working on something that requires scale testing and simply can't shake it out at that level. I ran a couple of tests and blew the db space limits. They get the message and have the DBA create a parallel dev db - just for me - with 10GB of space; barely enough for one full run. Unfortunately, I need to keep both a baseline and a test run result set at the same time in order to compare the results. Arguments ensue, and the net result is that I should have the DBAs compress and store both result sets offline. Then, on an as-needed basis, I can ask them to restore the baseline and test versions of a given table so that I can compare them - one table at a time.

    Of course, when you have complex joins that span multiple tables, it gets a little more complicated. That and the fact that it takes the DBAs two days to wipe stuff out so they can restore individual tables. While this is annoying as hell, it does give me a great deal of free time. [Un]fortunately, any other work I might do would also require db resources, so I can't move on to a parallel project.

    As you might imagine, my work is proceeding at the pace of a crippled snail stuck in molasses on a cold winter's day.

    Then my boss asks me when I'm going to be finished as they need me to start another project.

    I just looked at him. Without a word from me he realized (given the resource problems) what he had just asked, and got this sheepish look on his face.

    Wheeee!

     


  • Trolleybus Mechanic

    @snoofle said:

    I've been working here for about a year now
     

    [b]Only?[/b] Have you considered self-immolation?

     



  • @Lorne Kates said:

    @snoofle said:

    I've been working here for about a year now
     

    Only? Have you considered self-immolation?

    Nah, I just spent an hour chatting with the wife (it is V-Day).

    I also wrote a tiny program to compute how much they're paying me to sit here and do nothing on an ongoing basis, though I'm having second thoughts about showing the output to anyone.
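    For the curious, a program like the one snoofle describes really can be just a few lines. Here's a minimal sketch; the rate and idle hours are made-up placeholders, since his actual figures aren't in the thread:

```python
# Minimal sketch of the "what are they paying me to do nothing" calculator.
# HOURLY_RATE and IDLE_HOURS_PER_DAY are invented placeholders, not
# snoofle's real numbers.

HOURLY_RATE = 75.0       # hypothetical fully-loaded rate, $/hour
IDLE_HOURS_PER_DAY = 6   # hypothetical hours/day spent waiting on restores

def idle_cost(days_blocked: int) -> float:
    """Total cost of paid-but-idle time over a stretch of blocked days."""
    return days_blocked * IDLE_HOURS_PER_DAY * HOURLY_RATE

if __name__ == "__main__":
    for days in (1, 5, 20):
        print(f"{days:3d} day(s) blocked: ${idle_cost(days):,.2f}")
```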



  • Why haven't you moved your development work to a database instance on your workstation?  For a few hundred dollars and a few hours of time, you can fix your disk space and DBA bottleneck problems simultaneously.  And I can't think of a DBMS that doesn't have a free version that can be installed on a workstation.



  • Talk about penny-wise and pound-foolish!!!



  • @Jaime said:

    Replicate db to local box

    I asked if I could do that shortly after I started here. We're not allowed to do that.


  • Man, it is stories like this that make me appreciate what I have.  On my previous program I was assigned an activity similar in style: generating tons of data and comparing it with previous runs.  I was very fortunate through the whole process.  First, my test data was a set of thousands of files totaling almost 200 GB, but none of my servers had enough spare room for all of it.  So for the static data I asked my leadership for a hard drive that could store 250 GB; since none were available, they instead gave me a SAN with 1 TB of space (and I did not even have to share it with anyone), and the SAN already had RAID 5 on it, so that was one less thing for me to worry about.  For generating the data I was given 3 dedicated servers to do the runs, and I worked them to death.  Because my program was willing to give me the resources I needed, I was able to take an activity that used to take 1 day - with me twiddling my thumbs for most of it - down to a 1-hour run.  Because my runs were shorter I was able to do more runs, and as a result of that I was able to make additional improvements.  After 6 months of finding improvements, I had saved the company millions in operating costs.



  • @Anketam said:

    Man, it is stories like this that make me appreciate what I have.  On my previous program I was assigned an activity similar in style: generating tons of data and comparing it with previous runs.  I was very fortunate through the whole process.  First, my test data was a set of thousands of files totaling almost 200 GB, but none of my servers had enough spare room for all of it.  So for the static data I asked my leadership for a hard drive that could store 250 GB; since none were available, they instead gave me a SAN with 1 TB of space (and I did not even have to share it with anyone), and the SAN already had RAID 5 on it, so that was one less thing for me to worry about.  For generating the data I was given 3 dedicated servers to do the runs, and I worked them to death.  Because my program was willing to give me the resources I needed, I was able to take an activity that used to take 1 day - with me twiddling my thumbs for most of it - down to a 1-hour run.  Because my runs were shorter I was able to do more runs, and as a result of that I was able to make additional improvements.  After 6 months of finding improvements, I had saved the company millions in operating costs.
     

    nice piece of offworldly science fantasy you have there, Dan Brown. have you considered writing Digital Fortress 2 about it?



  • @SEMI-HYBRID code said:

    nice piece of offworldly science fantasy you have there, Dan Brown. have you considered writing Digital Fortress 2 about it?

    And if you think that is "offworldly science fantasy" here are some more details behind the scenes that would confirm your belief:
     - This was my first program out of college
     - The program came in on time, under budget, and above targets
     - The code base was well organized and had source control on it that all developers used (and used correctly)
     - There was a trouble ticket system that made sense
     - The management/leadership on the program was excellent, did a good job, and had a good grasp on technical matters

     The sad thing is that I thought all that was normal, until one of my coworkers showed me a PHP article on this website.  After the program finished I was assigned to a more normal project, and I started to read more and more articles here.



  •  Paying for an ungodly amount of consultant hours is cheaper than luxury items such as HDDs.

     Clearly you don't understand economics.



  • @DOA said:

     Paying for an ungodly amount of consultant hours is cheaper than luxury items such as HDDs.

     Clearly you don't understand economics.

     

    You're new here, aren't you?

     



  • @GreyWolf said:

    @DOA said:

     Paying for an ungodly amount of consultant hours is cheaper than luxury items such as HDDs.

     Clearly you don't understand economics.

     

    You're new here, aren't you?

     

    Clearly you don't understand sarcasm.


  • Have you considered cloud-based storage for your non-production DB?  When all is said and done, this can actually be much cheaper than even "cheap SATA" systems. I have one test system that needs to scale up to nearly 100TB during some specific testing. Purchasing, operating, and maintaining that scale of hardware locally would be annoying at best. As a cloud, it does a cold spin-up (dynamic population) in a few hours, testing runs for a few days, and then it is spun down. Rinse, repeat as necessary.


  • Trolleybus Mechanic

    @Sutherlands said:

    @GreyWolf said:

    @DOA said:

     Paying for an ungodly amount of consultant hours is cheaper than luxury items such as HDDs.

     Clearly you don't understand economics.

     

    You're new here, aren't you?

     

    Clearly you don't understand sarcasm.
     

    You're sarcastic here, aren't you?

     


  • ♿ (Parody)

    @TheCPUWizard said:

    Have you considered cloud-based storage for your non-production DB?  When all is said and done this can actually be much cheaper than even "cheap SATA" systems.

    Given that they're not even allowed to put it on their local development machines, I'd guess this would be shot down. IIRC, they're running Oracle, so I wouldn't doubt that the awesomeness of Oracle licensing was a major consideration here.



  • @boomzilla said:

    @TheCPUWizard said:

    Have you considered cloud-based storage for your non-production DB?  When all is said and done this can actually be much cheaper than even "cheap SATA" systems.

    Given that they're not even allowed to put it on their local development machines, I'd guess this would be shot down. IIRC, they're running Oracle, so I wouldn't doubt that the awesomeness of Oracle licensing was a major consideration here.

    Sign up for an OTN account and you are allowed to download and install all of their products for non-production use.  If the database is under 11GB, then you can install Oracle Express Edition for free - even for production use.



  • .. but that would require snoofle bringing in personal kit to run it on - it sounds like he's forbidden by policy, rather than by licensing.

    (snoofle had better confirm this - I'm just guessing)



  • @snoofle said:

    I also wrote a tiny program to compute how much they're paying me to sit here and do nothing on an ongoing basis, though I'm having second thoughts about showing the output to anyone.
     

    That's what TDWTF is for.

    C'mon, spill!



  • Just fire 50 devs and you will get enough money for storage. They just sit and watch the whole day anyway.



  • @tchize said:

    Just fire 50 devs and you will get enough money for storage. They just sit and watch the whole day anyway.

    Heck, fire ALL the devs - all they do is make bugs! With zero devs, your project will have zero bugs! It will be the envy of corporations everywhere!



  • @snoofle said:

    Production is 20+TB and growing by about 100GB/day. 
     

    Of course I have no idea what your application is, but I suspect that they are storing historical data in the database. Such stuff is easy to get at but expensive to store and slows down access to any recent data. This is what log files are for - log files on tape.
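    The roll-off AndyCanfield describes can be sketched in a few lines. Here's a toy version using Python's bundled sqlite3 as a stand-in DBMS; the `events` table and `created_at` column are invented for illustration, and a real shop would push the dumps to tape or cold storage rather than local CSV files:

```python
# Toy sketch of archiving historical rows out of the "hot" database.
# sqlite3 stands in for the real DBMS; the events/created_at schema is invented.
import csv
import os
import sqlite3

def archive_older_than(conn: sqlite3.Connection, cutoff: str, out_dir: str) -> int:
    """Dump rows older than `cutoff` (ISO date string) to a CSV file, then
    delete them from the hot table. Returns the number of rows archived."""
    os.makedirs(out_dir, exist_ok=True)
    rows = conn.execute(
        "SELECT id, created_at, payload FROM events WHERE created_at < ?",
        (cutoff,),
    ).fetchall()
    if rows:
        path = os.path.join(out_dir, f"events_before_{cutoff}.csv")
        with open(path, "w", newline="") as f:
            csv.writer(f).writerows(rows)
        conn.execute("DELETE FROM events WHERE created_at < ?", (cutoff,))
        conn.commit()
    return len(rows)
```

    In production terms this is just a partition-and-purge job: anything older than the retention window goes to cheap offline media, and the live tables stay a bounded size.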

     

     



  • @AndyCanfield said:

    log files on tape.

    This.

    Just push it to tape storage; cheap as dirt, no?  Or do they need "real-time" access to all historical data at all times...  I've had this requirement once, and business quickly changed their mind after I told them how many children of theirs they would have to sell to pay for it.  Oh, and they wanted 99.999% uptime, mirroring, and failover, too!  All for the low, low price of a bare-bones server sitting under a developer's desk.  HAHAHAHA!!  Oh, you were serious? ....  HAHAHAHA!



  • @snoofle said:

    @Jaime said:

    Replicate db to local box

    I asked if I could do that shortly after I started here. We're not allowed to do that.
     

    Here's an idea:

    1) Do it anyway; just don't tell them about it until you're finished.

    2) Expense any purchases you had to make.

    3) After they deny the expenses, pull out your little cost-estimator program and compare how much you saved the company vs. the expenses.

    If they still deny the expenses, you could take it as a tax deduction, if your tax situation allows you to take advantage of it.

     

    Another idea:

    Simply show them the output of your cost estimator and explain that with a fraction of that wasted money you could drive down to the local Fry's (or wherever), buy what you need, and be finished already.

     

    Both ideas would require someone above you to have some common sense, which sounds unlikely.



  • @esoterik said:

    @snoofle said:

    @Jaime said:

    Replicate db to local box

    I asked if I could do that shortly after I started here. We're not allowed to do that.
     

    Here's an idea:

    1) Do it anyway; just don't tell them about it until you're finished.

    2) Expense any purchases you had to make.

    3) After they deny the expenses, pull out your little cost-estimator program and compare how much you saved the company vs. the expenses.

    If they still deny the expenses, you could take it as a tax deduction, if your tax situation allows you to take advantage of it.

     

    Another idea:

    Simply show them the output of your cost estimator and explain that with a fraction of that wasted money you could drive down to the local Fry's (or wherever), buy what you need, and be finished already.

     

    Both ideas would require someone above you to have some common sense, which sounds unlikely.

    You're forgetting all the red tape, the meetings, the formal requests, etc., that he has to do... After all, it is an Enterprise(TM).
