The DB can handle it!



  • Our dev DB is somewhat lacking. In particular, it's shared by a lot of people, underpowered (CPU), and grossly under-configured with respect to the number of supported connections and temp space.

    As such, folks routinely get errors when attempting to connect or run any significant queries.

    In my case, we have an application that writes about 4K small records, each in its own request (I'm rewriting this in the near future). It usually takes about 7 seconds to complete. About 2 days ago, it started to take about 700 seconds to complete. Today, I sent yet another message to the DBA insisting that the DB is pathetically slow and it's killing us.

    He responded that there was only one person connected to the DB (me).

    I replied that I didn't care about the number of connections; I was concerned that the DB wasn't processing the requests quickly enough.

    He said the DB can handle anything I throw at it.

    I pointed out that it was currently handling one user and inserting rows at the rate of 6 rows per second, and suggested that there might be a problem.

    *dead silence* for two hours; he's looking at it.
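    The rewrite mentioned above usually comes down to batching: send the 4K rows in one statement instead of one request per row, so the per-request overhead is paid once rather than 4,000 times. A minimal sketch using Python's built-in sqlite3 (the table and column names are made up for illustration; the real app presumably talks to a networked RDBMS):

```python
import sqlite3

# In-memory stand-in for the real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER, payload TEXT)")

rows = [(i, f"payload-{i}") for i in range(4000)]

# One batched statement instead of 4000 individual requests:
# the per-request overhead (round trip, parsing) is paid once.
conn.executemany("INSERT INTO records VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(count)  # 4000
```

    On a networked database the same idea applies via the driver's bulk/array insert support, or a single multi-row INSERT.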

     



  • I know your current workplace is a Swiss cheese of WTFs... but I always thought your test environment should closely mirror your production environment so that tests are viable.

    I presume your production DB isn't this slow - but there's still a case for trying to emulate the same conditions.

    (And a DBA who has difficulty reading, interpreting and understanding written communication is a minor WTF in itself - let alone the fact that the first few emails should have triggered warning bells rather than dismissive replies.)



  • Technically it is handling anything you throw at it ... just not at the rate you would like ;)



  • A friend of mine: "Hello, I would like to report that the system is unresponsive."

    Consultant *remotes in* "Hmm, shows here everything is up. Let me see... Yes it's responding."

    Friend: "Well, it might be responding, but it takes over 15 minutes to do what I tell it to."

    Consultant: "It's responding, so there's nothing I can do. G'bye."

    It's train signals we're talking about here, on the world's second-most-crowded railway (the Dutch railway, Utrecht area).



  • Sounds like your DBA either

       a) doesn't care,

       b) doesn't really know what they're doing, or

       c) has the social skills of a basement-dwelling troll.

    A good DBA should be personable, should be able to query the DMVs to figure out the bottleneck, and should care.



  • Hmm... I didn't think Snoofle worked at my workplace, but I'm beginning to suspect otherwise...



  • I'd like to see you type in 6 rows per second for 700 seconds, with zero typos. Maybe you're holding the DB to unreasonable standards?



  • @snoofle said:

    In my case, we have an application that writes about 4K small records, each in its own request (I'm rewriting this in the near future). It usually takes about 7 seconds to complete. About 2 days ago, it started to take about 700 seconds to complete. Today, I sent yet another message to the DBA insisting that the DB is pathetically slow and it's killing us.

    Is it the DB or is it the network routing? Each record in its own request, assuming each request waits for the previous one to respond that it's completed, would depend on having a low latency connection. So if your normal intranet latency was 1ms or less and the network got borked and started routing out to the internet and back in, that could result in 100+ ms latencies. Which happens to be close to the 7s to 700s factor of change.

    Or, your dba is an idiot.

    Or both.
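    The latency arithmetic behind this theory is easy to sanity-check: with one synchronous request per record, total time is simply records multiplied by per-request time, so latency dominates everything. A quick sketch (the 4,000-record count and the 7 s / 700 s timings come from the thread; the per-request latencies are just the values they imply):

```python
RECORDS = 4000

def total_seconds(per_request_ms):
    # Each record is its own request, and each request waits for the
    # previous one, so total time scales linearly with latency.
    return RECORDS * per_request_ms / 1000.0

fast = total_seconds(1.75)  # ~1.75 ms per request -> 7.0 s (the usual case)
slow = total_seconds(175)   # ~175 ms per request  -> 700.0 s (the slow case)
print(fast, slow)
```

    A 100x jump in per-request latency produces exactly a 100x jump in total time, which is why a routing change is a plausible culprit.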



  • @steenbergh said:

    Friend: "Well, it might be responding, but it takes over 15 minutes to do what I tell it to."
     

    Bet the people sending signals to the probes on Mars would like to have that kind of response time.



  • @darkmattar said:

    Is it the DB or is it the network routing?
     

    A good DBA should be able to show evidence instantly that it's not a database problem, if they have decent real-time performance monitoring set up. And they should be jumping in whenever those systems flag a performance-related incident as it unfolds.


  • ♿ (Parody)

    @Cassidy said:

    A good DBA should be able to show evidence instantly that it's not a database problem, if they have decent real-time performance monitoring set up. And they should be jumping in whenever those systems flag a performance-related incident as it unfolds.

    Yes, but that sort of firepower is usually aimed at a production (or sometimes test) environment. Otherwise they're going to be chasing their tails whenever a programmer executes a boneheaded query in his development environment, and in a place like mine, those false positives are going to take all of his time.



  • @Cassidy said:

    @darkmattar said:

    Is it the DB or is it the network routing?
     

    A good DBA should be able to show evidence instantly that it's not a database problem, if they have decent real-time performance monitoring set up. And they should be jumping in whenever those systems flag a performance-related incident as it unfolds.

     

    If the DBA has real-time perf-monitoring systems set up, they're probably pointing at production systems, not at a dev database with one user on it.

     



  • True, but IME monitoring systems tend to be set up and trialed on dev/staging boxen before being deployed to production kit.[1]

    Also, there's no reason why dev kit can't have the same monitoring stuff deployed - it's just that the results aren't as much of a priority as they are on the live kit.

    I'm clutching at straws again, aren't I?

    [1] I only believe this because (a) I've experienced it a few times, and (b) on some boxen I help administer, I've replicated the monitoring from live onto testbeds so devs can see the impact of their code. I don't let them deploy it straight onto production just so the monitoring tools on live can show off their code's wank. YMMV...



  • @da Doctah said:

    @steenbergh said:

    Friend: "Well, it might be responding, but it takes over 15 minutes to do what I tell it to."
     

    Bet the people sending signals to the probes on Mars would like to have that kind of response time.


    Haha, yes they would, as would the people on the Voyager projects. On the Dutch railway system, though, a single piece of track can have over five different trains passing through in either direction; a 15-minute delay is quite something there.



  • @Cassidy said:

    Also, there's no reason why dev kit can't have the same monitoring stuff deployed - it's just that the results aren't as much of a priority as they are on the live kit.

    For example, check out how much a Foglight license costs (hint: it's priced per CPU of the monitored servers), and imagine your company says that this is the only allowed monitoring tool in the whole ecosystem.



  • Well, okay, to clarify: there's no reason why they can't use this product. Their policy decision to use only that tool, combined with the licensing costs, may mean that they won't add this type of monitoring to their testbed.

    It also means the performance impact of any changes will remain uncertain until those changes are deployed to live. Someone has had to balance the cost of their chosen monitoring tool with the risk of not using it in a staging environment.

    (Also going to have to point out that many RDBMSes include some DB monitoring and analysis tools - some free, others at a cost. Yes, Oracle, I'm looking at you for the last one.)



  • Even Oracle includes a free one if you look hard enough. Except some versions of the database have a bug that causes it to segfault when it runs. Been there.



  • @darkmattar said:

    @snoofle said:
    In my case, we have an application that writes about 4K small records, each in its own request (I'm rewriting this in the near future). It usually takes about 7 seconds to complete. About 2 days ago, it started to take about 700 seconds to complete. Today, I sent yet another message to the DBA insisting that the DB is pathetically slow and it's killing us.

    Is it the DB or is it the network routing? Each record in its own request, assuming each request waits for the previous one to respond that it's completed, would depend on having a low latency connection. So if your normal intranet latency was 1ms or less and the network got borked and started routing out to the internet and back in, that could result in 100+ ms latencies. Which happens to be close to the 7s to 700s factor of change.

    Or, your dba is an idiot.

    Or both.


    It had better be the DBA. The first rule of external network connectivity is that you don't allow your own addresses back in through your external firewall: you deny your public address space that's used internally, and you deny RFC 1918 private addressing. That should have blocked the routing anomaly you describe - the result being no connection at all. And even failing that, your provider should kick your own addresses straight back to you. Unless you're looking at maybe a frame relay or T1 circuit, going out to your provider and back isn't likely to cause 100 ms round trips - 20, maybe 30, a little higher perhaps, but 100 ms sounds too high for the scenario you put forth.
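    The anti-spoofing rule described above amounts to a source-address filter: drop anything arriving on the external interface whose source is RFC 1918 space or your own public range. A hedged illustration using Python's stdlib ipaddress module (the 203.0.113.0/24 "own public" prefix is a documentation range standing in for a real allocation):

```python
import ipaddress

# Hypothetical: the organisation's own public prefix (TEST-NET-3 here).
OWN_PUBLIC = ipaddress.ip_network("203.0.113.0/24")

# RFC 1918 private address space.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def drop_inbound(source_ip: str) -> bool:
    """Anti-spoofing check for packets arriving from outside: deny
    RFC 1918 sources, and deny your own address space coming back in."""
    addr = ipaddress.ip_address(source_ip)
    if addr in OWN_PUBLIC:
        return True
    return any(addr in net for net in RFC1918)

print(drop_inbound("192.168.1.5"))   # True  - private source from outside
print(drop_inbound("203.0.113.7"))   # True  - our own space from outside
print(drop_inbound("198.51.100.9"))  # False - ordinary external source
```

    A real firewall expresses this as ingress ACLs rather than application code, but the membership test is the same.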



  • @bannedfromcoding said:

    @Cassidy said:

    Also, there's no reason why dev kit can't have the same monitoring stuff deployed - it's just that the results aren't as much of a priority as they are on the live kit.

    For example, check out how much a Foglight license costs (hint: it's priced per CPU of the monitored servers), and imagine your company says that this is the only allowed monitoring tool in the whole ecosystem.

    Huh. Now I'm wondering if you work at the same place I do. Ye gods, Foglight sucks.



  • They installed an anti-virus and have it configured to do a full scan on every change.


  • Discourse touched me in a no-no place

    @JoeCool said:

    They installed an anti-virus and have it configured to do a full scan on every change.

    I have a friend whose corporate IT have decided that running a full scan on every open is a wise idea. Including if it's a read-only open. Great idea if your job involves writing code to do high-performance analysis of *very* large images. My friend is a very mild-mannered person, but not when it comes to corpIT, who seem to be able to teach Mordac new tricks.

