Time intervals and performance brags
We're relay-running some data from system to system because everybody needs to know it, and we as an organization suck at architecture.
System A generates the data and hands it to System B, which hands it to System C, which hands it to System D.
I am System C.
System A generates these things at a pretty quick rate: bursts of thousands within a second, then long periods of silence. System A drops the data into an MQ for System B; System B reformats the fixed-width data to XML and passes it on to System C.
As such, System B hands the data to us via web requests; our web service stuffs them into a queue, and a separate data loader reads them sequentially from that queue, massages them a bit, and makes a web request to System D.
System C's inbound web service clocks in at thousands of requests per second. Since it's just dropping shit in an MQ, it's really only limited by bandwidth. System C's data loader, however, runs at what I consider to be an absolutely abysmal 20 per second.
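The split described above, a fast enqueue-only front end feeding a slow sequential drain, can be sketched roughly like this. This is a hypothetical illustration in Python, not System C's actual code; all names are made up:

```python
import queue
import threading

# Shared in-memory queue standing in for System C's MQ.
inbound = queue.Queue()

def web_service(payloads):
    # The inbound service only enqueues, so it runs as fast as the
    # network allows (thousands/sec in the story above).
    for p in payloads:
        inbound.put(p)

def data_loader():
    # The loader drains the queue one item at a time; its throughput
    # is capped by the downstream call (~20/sec in the story), no
    # matter how fast the front end accepts requests.
    while True:
        item = inbound.get()
        if item is None:  # sentinel: no more work
            break
        # massaging + the web request to System D would go here
        inbound.task_done()

t = threading.Thread(target=data_loader)
t.start()
web_service([f"record-{i}" for i in range(100)])
inbound.put(None)
t.join()
```

The point of the design is back-pressure isolation: the web service never blocks on System D, but total throughput is still whatever the single sequential loader can manage.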
Except System B manages to push a total of just under 3 per second.
System B is a multi-node C monstrosity.
System C is a cruddy .NET app running on a single node shared with some other very resource-hungry applications.
System C reports its performance as 20 per second. This is regarded by the business as "too slow".
System B reports its performance as 10,000 per hour. This is regarded by the business as a marvel to be emulated.
System D keeps its mouth shut because a single transaction takes 4 seconds.
System B is lying to the business...
Either that, or System C should report to the business that after much optimization work and general epic programming they have increased their processing capacity from 20 requests per second to 72,000 requests per hour.
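A quick sanity check of the unit games in that comparison (plain Python, just arithmetic):

```python
def per_hour(rate_per_second):
    # Same throughput, much bigger-sounding number.
    return rate_per_second * 3600

system_c = per_hour(20)        # 20 req/s as an hourly figure
system_b_per_sec = 10_000 / 3600  # System B's "10,000/hour" as req/s

print(system_c)           # 72000
print(system_b_per_sec)   # roughly 2.78
```

So the "marvel to be emulated" is running at under a seventh of the rate the business calls "too slow".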
fbmac last edited by
I worked on a project where C was chosen because they wanted maximum performance; it degenerated into softcoding and required 4 servers to handle 1000 users. It was a web-based application.
HardwareGeek last edited by
I find it amusing that a TDWTF FP article is the sole reference for that Wikipedia article.
There's one web application at work where its main developer recommended that we procure an 8-core system with a huge amount of RAM and disk to run it. We only really plan to have 20–30 active users, and they'd be very unlikely to all hit it at the same time (it's a metadata catalog). $10k of hardware to serve 20 people? I Don't Think So. Something deeply bizarre is going on with that recommendation.
Even Dischorse doesn't need anywhere close to that sort of horsepower.
degenerated into softcoding
I'd just try to sell a scripting language as a simple runtime configuration library. It'd probably be less miserable than whatever they invented for themselves.
fbmac last edited by
I just remembered another one: a website constantly falling apart under heavy load for a very simple task. Turns out it was a component for integrating with SAP R/3, a COM+ DLL that, no matter how we scaled the server, couldn't handle more than a few hundred users at a time.
It was for an HR system, and every month, on payday, when everyone wanted to look at a detailed report of their salary, we had to spend the day monitoring and restarting the system every few minutes. We didn't have any alternative to that DLL at the time.