While investigating where all the time was going in some dog-slow code I have the misfortune to have to debug, I encountered this gem. Who can offer a more CPU-intensive way of pausing a process?
@boog said:
Even more disturbing are the scared-shitless developers who work with databases...
Not just databases. Elsewhere in this same abomination was some XML parsing (possibly by the same author) which did something like this:
(in no particular language)
foreach node in allNodes
{
    if (node.isInteresting)
    {
        foreach childNode in allNodes
        {
            if (childNode.parent == node)
            {
                // Compute something
            }
            // else carry on checking all the other nodes anyway
        }
    }
}
The XML was such that each node of interest only had one child, so once the child was found there was no need to consider any others. XPath queries and DOM navigation methods (other than asking a node for its parent) weren't familiar to the author. Fair enough if it was their first foray into XML (but I don't think it was), but who in their right mind would think that checking the relationship between every pair of nodes was the right way to go about it?
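For the record, something like this is all it needed (a minimal sketch with made-up node types, since I don't know which XML library was in use; the point is only that a parent can hand over its children directly):

#include <vector>

struct Node
{
    bool interesting = false;
    std::vector<Node*> children;  // DOM navigation: a parent knows its own children
};

void process(std::vector<Node>& allNodes)
{
    for (Node& node : allNodes)
    {
        // Each interesting node has exactly one child, so take it directly
        // instead of testing every node in the document against it.
        if (node.interesting && !node.children.empty())
        {
            Node& childNode = *node.children.front();
            (void) childNode;  // placeholder for the real computation
        }
    }
}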
Some code I have the misfortune to occasionally have to read in order to work out why some new and spectacularly stupid behaviour is occurring has an... unconventional... way of keeping the size of its log database under control.
This particular log records events that are generated as a consequence of external influences (X switched on, Y switched off, Z requested a reset etc). They're stored in a MySQL database (sledgehammer/nut), so it should be easy for the user to ask questions such as "tell me the last time Y restarted" or "how many times has Z restarted in the last 48 hours". It isn't, but that's far from the worst aspect.
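For instance, "the last time Y restarted" ought to be a one-liner. A sketch using the same API as the code further down; the events table and its thedate column are real, but the description column is my guess at the schema:

Query query = db->query();
query << "select max(thedate) from events "
    "where description = \"Y restarted\"";
StoreQueryResult result = query.store();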
Due to a certain lack of developer confidence, the details of an event that is sent to the database are also written to a flat file, in a more verbose form. When the event reaches the database, it is written to another flat file. When an event is deleted (we'll come to this in a moment), that fact is also written to a flat file.
Assuming (hoping!) that horror will go away someday, that still leaves the usage of the database itself. To keep the database size under control, somebody has decided that only the most recent 400 entries should be preserved. This, on a disk with ~4GB of space, recording on the order of a few hundred events of maybe 64 bytes each per day. Since the deletion is itself logged, saving ~64 bytes of database consumes ~64 bytes of log file. Genius.
At some reasonable interval (probably daily), I might have chosen to run a selective delete query to remove all entries over 7 days old (by that age, the information has zero value). Unfortunately, that's not the "solution" that's been implemented. Prior to *every* insert, this call is made:
// trim the table size to <1000 entries if needed
deleteOldEvents();
Despite the comment, the magic threshold is 400, but the way it's done is not what you might expect...
Query query = db->query();
query << "select * from events order by thedate";
StoreQueryResult result = query.store();
if (result.num_rows() < MAX_ALLOWED_ROWS) {
    // We still have room to spare
    return;
}
// too many rows in the DB -- delete some
try
{
    StoreQueryResult::iterator i = result.begin();
    Row row = *i;
    int dateTime = atoi(row["thedate"]);
    char* tPtr = ctime((const time_t *)&dateTime);
    char eventTime[32];
    (void) memset(eventTime, 0, sizeof (eventTime));
    (void) strncpy(eventTime, tPtr, strlen(tPtr) - 1);
    LogThis("Deleting all entries logged before %s", eventTime);
    query << "Delete from events where thedate=\"" <<
        atoi(row["thedate"]) << "\"";
    query.execute();
}
catch (Exception& e)
{
    LogThis("%s", e.what());
    // Carry on regardless
}
I'm an SQL novice and would never even have contemplated that that might be the right approach. Note that it hauls the entire table over just to learn the oldest timestamp, then deletes only the rows sharing that one timestamp, and it does all this before every single insert. It's the wrong solution, implemented badly; the wrong solution implemented less badly would have been less offensive. Even something along these lines would have been an improvement (my sketch, not their code; it keeps the magic MAX_ALLOWED_ROWS and the wrong trim-on-insert idea, but does the whole job in one statement):
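// Still the wrong solution, just less badly: one statement, no SELECT *
// round trip. The inner derived table is what lets MySQL accept the LIMIT.
Query query = db->query();
query << "delete from events where thedate < "
    "(select min(thedate) from "
    "(select thedate from events order by thedate desc "
    "limit " << MAX_ALLOWED_ROWS << ") as newest)";
query.execute();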
There are many more horrors contained within this code, but I can only cope with brief exposures to it. One pattern that recurs, on consecutive lines, is this:
#define SOME_VALUE 123
const unsigned int mySomeValue = SOME_VALUE;
someFunction( mySomeValue, someOtherParam );
Groan...
Ah, DHCP not enabled - that old chestnut. It took a disappointingly long time for many of us to convince a stubborn higher power here that shipping with 192.168.0.10 as a static address was more likely to be problematic than shipping with DHCP enabled.
Engineer : "What happens if we choose our own static address and it clashes with something else on the customer's network?"
PHB : "Errr... But what happens if we ship with DHCP enabled and the network's not got a DHCP server on it? That would be worse!"
Engineer : "No, it wouldn't. Consult the network admin, get a static address allocated and set it. No harm done"
Many variations of that conversation took place over a period of months. DHCP is now enabled by default.