But it works
I am building something that will send information from our system to that of another team. Once received, the other system needs to look up some data for each record we send. For performance reasons, I batch up a thousand records per message. After it's deployed, the users start griping that it's slow. I check out my end, and it's running zippity quick. I sigh because...
The other team has an existing routine to look up the data for a single record. Naturally, the developer of the new functionality decided to leverage that existing code, and so looped through each record in my message, calling the single-row db-lookup routine once per record, instead of issuing one select ... where id in (...) for the whole batch.
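The difference between the two approaches is round trips, not answers. A sketch in Python, with an in-memory sqlite3 table standing in for the other team's lookup data (the table and column names here are invented for illustration):

```python
import sqlite3

# Throwaway in-memory table standing in for the other team's lookup data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lookup (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany("INSERT INTO lookup VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(500)])

ids = list(range(500))

# What the other team's code did: one query (one round trip) per record.
slow = [conn.execute("SELECT data FROM lookup WHERE id = ?", (i,)).fetchone()[0]
        for i in ids]

# The batched version: one query for the whole message via an in-clause.
placeholders = ",".join("?" * len(ids))
by_id = dict(conn.execute(
    f"SELECT id, data FROM lookup WHERE id IN ({placeholders})", ids))
fast = [by_id[i] for i in ids]

assert fast == slow  # identical results, a fraction of the round trips
```

Same answers either way; the per-row version just pays network and query overhead once per record instead of once per message.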
Why are you doing it like this? This is slow as ****! Use an in-clause (we use Oracle, which allows up to 1000 items in an in-clause).
But it works.
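Since Oracle caps the in-clause at 1,000 items, any message bigger than that has to be split into chunks and queried per chunk. A minimal sketch of the splitting (the default of 1000 mirrors Oracle's limit):

```python
def chunks(ids, size=1000):
    """Yield successive slices no larger than the in-clause cap."""
    for start in range(0, len(ids), size):
        yield ids[start:start + size]

# 2500 ids split into batches of 1000, 1000, and 500.
batches = list(chunks(list(range(2500))))
```

Each batch then becomes one in-clause query: still three round trips instead of 2,500.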
I told my boss who told his boss who told their boss who crushed the other hierarchy into fixing it.
Tell him to drive home with only one of his four brakes in working condition... What? It (technically) works!
I suppose building it out of existing pieces could be understandable if it was a time crunch to get it done and it was marked with a FIX LATER, but based on your stories I doubt that.
(we use Oracle, which allows up to 1000 items in an in-clause)
Well, in my RDBMS you can pass in a table-valued parameter so there's no limit on how many IDs you can specify. (More importantly, it's not a string concatenation operation to build the query.)
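If that RDBMS is SQL Server (table-valued parameters are its term; an assumption here), the ID list arrives as a real table the query can join against: no item cap, and no string concatenation. The closest portable equivalent is loading the ids into a temp table and joining, sketched here with sqlite3 standing in and invented names throughout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lookup (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany("INSERT INTO lookup VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(5000)])

# Load the wanted ids into a temp table: no in-clause cap, and every id is
# a bound value, never concatenated into the SQL text.
conn.execute("CREATE TEMP TABLE wanted (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO wanted VALUES (?)", [(i,) for i in range(3000)])

rows = conn.execute(
    "SELECT l.id, l.data FROM lookup AS l JOIN wanted AS w ON w.id = l.id"
).fetchall()
```

The join shape is also friendlier to the query planner than a giant generated literal list, since the SQL text stays constant from call to call.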
if it was a time crunch to get it done and it was marked with a FIX LATER
To my mind, that's not a fix. It's something that's delaying a fix... and it has the effect of magnifying cost and effort with each delay.
I see it as two different versions of the software: the first being the Standard Edition, and the Zippity Quick version being the Professional Edition.