Posts made by KarenM
-
RE: Security by...user agent?
So to keep the site more secure they insist on IE?
Hmmmm.....
-
RE: IOS7 BSOD
How long before Microsoft sues Apple for infringing the BSOD patent?
-
RE: Fractal Reporting
Hudson is the automated build tool we use for the nightly builds at WTF Inc. And yes, it's what people use when they can't afford a REAL build engine.
-
RE: Fractal Reporting
@morbiuswilters said:
@blakeyrat said:
So in summary:
@summary said:
AJAX, Selenium RC, SeRC, Maven, ANT, Hudson, Se1.0, JUnit, XML, SeRC, Hudson, ANT, Java, Make, Maven, Java. Hudson, VM. ANT, klaxon, SQL, ANT, Hudson, JUnit, XML, flyweights.
- Selenium, DOM.
- DOM, Selenium, Selenium executor.
*ML, DOM, DOM, STAX, Selenium, Hudson. STAX, SeRC, Hudson, SVN Blame, jar, classpath, XSLT, XSLT, XML. XSLT, pseudocode. XML, HTML. JUnit XML, XML, STAX, XML, XML, XSLT, STAX, Selenium.
Hudson? PNGs.
It's WTF salad!
Worse, it's only the entrée salad. Feel lucky that you don't have to examine the main course menu....
-
Fractal Reporting
TLDR: The symptoms, diagnoses and remedies for a Selenium build.
At WTF Inc, we test our AJAX application via Selenium RC. The SeRC run lives in a Maven project with some custom ANT tasks that let Hudson execute the Se1.0 tests, write out some JUnit-style report XML, and then transmute that into pretty reports. The Pointy-Haired Boss ordained that we were to use SeRC, that it was to integrate into our existing Hudson builds, and that a pretty report would be available for him each morning -- or rather, lunchtime, since the execution would take about six hours (it was a VERY complicated webapp). None of this is TRWTF. (In short: ANT is Java's version of Make, Maven is Java's dependency manager.)
The Senior Developer who was responsible for making this work is on the record as saying "I'm too senior to write test harness code", although he has since vanished into the layoffs. Since I was responsible for the tests, I also became responsible for maintaining the Hudson harness. I soon discovered that the Senior Developer had implemented an "execute these named files" strategy for discovering the test files. I quickly changed the strategy to "all files in these directories", since the number of test scripts was increasing faster than the rabbit population. After this modification the engine ran fine for several weeks, until the massive increase in scripts inevitably resulted in an OutOfMemoryError. A quick increase to the VM's memory and everything was fine. But only two weeks later the OutOfMemoryErrors were back, and we realised we'd have to study the inner workings of the beast.
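For the curious, the change amounted to something like the sketch below. It's a minimal illustration, not the real harness code: the names (testRoot, the .html suffix for Selenese suites) are my assumptions.

    import java.io.File;

    // Hypothetical: discover every Selenese suite in the directory instead of
    // using a hard-coded list, so new test scripts need no harness change.
    static File[] discoverTests(File testRoot) {
        File[] tests = testRoot.listFiles((dir, name) -> name.endsWith(".html"));
        return tests != null ? tests : new File[0];
    }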
I started with the ANT tasks that were executed. The first warning klaxon went off when I realised that the Senior Developer had effectively written the SQL task from scratch: rather than use ANT's built-in SQL implementation for the data injection, he had rolled his own. I quickly culled his custom code out of the Hudson harness and replaced it with the standard implementation. I then saw what I thought was the cause of all these memory problems: the JUnit XML report format demands that the duration of the test suite be written before the results of the individual test steps. Could it be that the system was holding onto the flyweights in memory for too long? I was reluctant to rewrite the report generation code, so I carefully examined the entire system even more closely.
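To see why that format is memory-hostile, consider this sketch. It is not the harness's actual code -- TestFile, TestResult, run() and out are invented stand-ins -- but it shows the trap: the opening testsuite tag needs the total duration, so nothing can be flushed until every test has finished.

    // Every result is buffered for the whole six-hour run, just so the
    // suite duration can be written into the opening tag.
    List<TestResult> buffered = new ArrayList<>();
    for (TestFile f : suite) {
        buffered.add(run(f));
    }
    long totalMillis = 0;
    for (TestResult r : buffered) {
        totalMillis += r.getMillis();
    }
    // Only now can the opening tag be written -- the total time goes first.
    out.write("<testsuite time=\"" + (totalMillis / 1000.0) + "\">");
    for (TestResult r : buffered) {
        out.write(r.toXml());
    }
    out.write("</testsuite>");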
The real problem made my head spin. The algorithm our Senior Developer used to execute the tests went like this:
For every directory in the path:
    For every test file in the directory:
        Read the contents of the Selenium file into an in-memory DOM.
        For each table line in the DOM, extract the Selenium command and send it to the Selenium executor.
        Write the test results for the file.
    Examine all the test results in the results output folder and make pretty reports.
Just so we're clear: reading your *ML files into a DOM carries a high memory footprint, and is generally only done when the tags are going to be edited. Since all we were doing here was reading the file, I replaced the DOM construction with a STAX reader that simply opened the file, found the appropriate characters, and then closed the thing without any additional fuss. The result: instead of performing one Selenium test step per second, our Hudson harness started performing two per second, and it didn't throw an OutOfMemoryError 5 minutes into the build.
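The replacement looked roughly like the sketch below. It assumes the Selenese test files are well-formed XHTML tables of the form <tr><td>command</td><td>target</td><td>value</td></tr>, and sendToSelenium() is an invented stand-in for the harness's executor call.

    import javax.xml.stream.*;
    import java.io.*;
    import java.util.*;

    // Stream one test file row by row; nothing is retained between rows.
    static void runTestFile(File testFile) throws Exception {
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileInputStream(testFile));
        List<String> cells = new ArrayList<>();
        while (reader.hasNext()) {
            int event = reader.next();
            if (event == XMLStreamConstants.START_ELEMENT
                    && "td".equals(reader.getLocalName())) {
                cells.add(reader.getElementText()); // pull just the text we need
            } else if (event == XMLStreamConstants.END_ELEMENT
                    && "tr".equals(reader.getLocalName())) {
                if (cells.size() == 3) {
                    sendToSelenium(cells.get(0), cells.get(1), cells.get(2));
                }
                cells.clear(); // forget the row as soon as it has been executed
            }
        }
        reader.close();
    }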
In that respect the STAX reader was a success, but the SeRC run would still fail with an OutOfMemoryError after an hour. Something was still inherently wrong with the test harness. It seemed to hang while the reports were being generated, so I took a closer look at the report generation code. The Senior Developer had placed a 3rd-party library into the Hudson harness to handle the publication of the pretty reports -- as in "downloaded the project's source code and then checked it into the harness's source control". SVN Blame confirmed that he had made no modifications to the 3rd-party code: apparently he didn't know how to put the jar on the classpath (or add it as a dependency, or any other way of bringing in 3rd-party code). This 3rd-party code was essentially an XSLT engine -- the sort of code you include when you want to develop your own XSLT engine, not when you simply want to make an XML file look pretty.
As cumbersome and inefficient as this programmatic XSLT was, he might've gotten away with it if he'd examined his algorithm more closely. Look again at the pseudocode above.
Each directory on the test path was for a specific module: /ModuleA for tests relating to Module A, /ModuleB for tests relating to Module B, you get the idea. When the harness completed all the tests for Module A, it would look in the results directory and publish the reports. It would do the same when it completed Modules B, C, etc., each time transforming the results XML into pretty HTML. But because it was blindly publishing the results of EVERYTHING in the results directory, it would re-publish existing reports without blinking. When only Module A's results were there, that was fine; when Module B was complete, the system would publish the reports for Module A (again) as well as Module B. When Module C was complete, the system would publish reports for Module A (again), Module B (again) and Module C. This continued through all 26 of our modules (each of which typically had 20+ tests), so by the end the harness had made 26 + 25 + ... + 1 = 351 publication passes where 26 would have done. No wonder it was taking 10 minutes to complete the reporting phase of some of the later modules.
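In code, the bug boils down to something like this sketch (modules, resultsDir, runAllTestsIn and publishPrettyReport are invented names, not the real harness API):

    // After each module, EVERY results file in the directory gets
    // transformed again -- Module A's reports are published 26 times.
    for (File module : modules) {
        runAllTestsIn(module); // writes results XML into resultsDir
        for (File xml : resultsDir.listFiles()) {
            publishPrettyReport(xml); // includes every previous module's results
        }
    }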
I did end up replacing the JUnit XML format with our own home-grown variety: one that only required the duration of each individual step to be recorded in the XML. I also modified the reporting engine so that it used a STAX writer to write the results into an XML file on the fly, and only performed report publication once the last test had been executed. Report publication consisted of slicing that single results.xml file into smaller XML files with an XSLT reference in the header -- the web browser did all the hard work of making them pretty. The STAX harness could do 3, perhaps 4, Selenium instructions per second, and its memory usage remained constant no matter how many tests we added.
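The "XSLT reference in the header" is just an xml-stylesheet processing instruction. A minimal sketch of the on-the-fly writer, with invented file names (results.xml, report.xsl):

    import javax.xml.stream.*;
    import java.io.*;

    XMLStreamWriter w = XMLOutputFactory.newInstance()
            .createXMLStreamWriter(new FileOutputStream("results.xml"), "UTF-8");
    w.writeStartDocument("UTF-8", "1.0");
    // the browser fetches report.xsl and renders the XML itself
    w.writeProcessingInstruction("xml-stylesheet",
            "type=\"text/xsl\" href=\"report.xsl\"");
    w.writeStartElement("results");
    // ... one element per test step, written (and flushed) as it happens ...
    w.writeEndElement();
    w.writeEndDocument();
    w.close();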
But my favourite bug from the Hudson harness? The Senior Developer directed all screenshots to be saved as 'Screenshot.jpg', even though the screenshots were always PNGs and more than one screenshot would be taken during the execution, so each new screenshot clobbered the last. I instructed the harness to use the test name and a timestamp as the filename, ensuring none of the previous screenshots were overwritten.
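The fix is essentially one line; the exact format is my assumption, and testName and screenshotDir are stand-ins (selenium here is the RC client instance):

    // Unique, collision-free screenshot names: <test>_<timestamp>.png
    String file = testName + "_"
            + new java.text.SimpleDateFormat("yyyyMMdd-HHmmss").format(new java.util.Date())
            + ".png";
    selenium.captureScreenshot(new java.io.File(screenshotDir, file).getPath());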
-
RE: Standard video capture drivers? What's that?
Moral of the story: if it can't do cool stuff in Microsoft Office, it doesn't meet requirements.
-
RE: Who's the Dummy, here, exactly?
> svn blame dummyFile
PaolaBean    DUMMY = "PaolaBean"
Stupid variable, checked in by PaolaBean. May have gotten the details slightly wrong, but you get the idea.
-
RE: Priorities
What I don't understand is why Snoofle has that beautiful picture of his boss in the targeting reticle but never pulls the trigger.
-
RE: Double Negatives
@joe.edwards said:
@morbiuswilters said:
If you're in the US: since you are doing business with Awesome Gym, the DNC registry doesn't really apply. Admittedly, telling the gym you didn't want to be contacted should have been enough, but the telemarketers aren't then going to check the register because they don't have to--they're acting on behalf of Awesome Gym.
Doesn't it become harassment when he repeatedly tells them to desist and they persist?
This whole incident happened outside the US, so I'm afraid those comments don't apply.
@shoreline said:
I think it's a bit of an assumption that they're organised enough to be using a database. Not sure how they got the current members when they should have the ex-members either.
For truly high technology, one would need a printed, photocopied list of ex-members from an Excel spreadsheet (exported from a database where it's stored as JSONed XML), photographed on a wooden table, pasted into a zipped Word document and delivered by courier to the telemarketers on a USB drive, for security reasons.
Nah, not enterprisey enough.
-
Double Negatives
I'm a member of Awesome Gym, a franchisee that is actually a nice place to work out. A while ago a telemarketer rang as part of Awesome Gym's latest promotion, saying there was now a great offer for ex-members to come back to the gym. I was surprised: I'm not an ex-member, I've told Awesome Gym never to ring me with special offers & promotions, and the number they were ringing is on a Do Not Call register (ie, telemarketers ringing my number get fined by the government). The telemarketer apologised and promised not to ring again.
Half an hour later a different operator phoned with the same story. I gave the same reply. The same thing happened the next day, and the next, and the next... for two weeks. Of course I spoke to Awesome Gym: they said the telemarketers should only have a list of ex-members (I've never let my membership lapse), that Awesome Gym already has me on their 'don't ring with special offers' list, and that of course checking the final list against the government's Department of Anti-Spamming is simply due diligence.
So let's see: the telemarketers effectively unioned the list of current members AND the list/s of people they shouldn't be ringing, in such a way that each name appeared in their call list at least 15 times. Bad SQL structure, anyone?
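For illustration only (the list names are invented; nobody has seen the telemarketers' actual schema), the difference between what should have happened and what apparently did:

    // What should have happened: start from ex-members, subtract the opt-outs.
    Set<String> callList = new HashSet<>(exMembers);
    callList.removeAll(optedOut);
    callList.removeAll(doNotCallRegister);

    // What apparently happened: union everything, duplicates and all.
    List<String> actualList = new ArrayList<>();
    actualList.addAll(exMembers);
    actualList.addAll(currentMembers);   // the wrong list...
    actualList.addAll(optedOut);         // ...plus the people who said "never"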
Awesome Gym complained twice to the telemarketers. When they kept dragging their heels, Awesome Gym pulled their 5-figure contract and warned their fellow franchisees never to use them again, due to breach of contract (ie, ringing people they were told not to ring).