But we have 98% coverage...
I was asked to write some JUnit tests for a panel someone else had just implemented. There were no specs to work from - apparently the users just waved their hands and the developer worked from that (!) - so I really had no choice but to go see what the code did. It was the usual litany of tables, checkboxes, radio buttons, text fields, combos, buttons and the related actions: a 5000-line constructor to define it all, and four getter methods to get the Action objects.
So I JMock the underlying services and db calls - about 50 calls in all - and at least get the constructor to work. But the real work that needs testing is in the actions, where the processing logic resides.
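For what it's worth, the stubbing part really is that mechanical. Here's a minimal sketch of the kind of stub I mean - a hypothetical `UserService` interface standing in for those 50-odd calls, hand-rolled with `java.lang.reflect.Proxy` rather than the actual jmock jar, so it runs with nothing but the JDK:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class EchoStubDemo {
    // Hypothetical service interface standing in for the ~50 mocked calls.
    interface UserService {
        String lookup(String id);
    }

    // Build a stub that simply returns its first argument -- the
    // "mock returns whatever I pass it" situation described above.
    @SuppressWarnings("unchecked")
    static <T> T echoStub(Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) ->
                (args != null && args.length > 0) ? args[0] : null;
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }

    public static void main(String[] args) {
        UserService stub = echoStub(UserService.class);
        // Proves only that the stub echoes its input, nothing more.
        System.out.println(stub.lookup("alice"));  // prints "alice"
    }
}
```

Which is exactly the point: a test built on nothing but stubs like this can only ever verify the stubs.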
The problem is that each action calls an underlying library that calls another library that .... and ultimately pops a dialog box on an error; something too deeply buried to be mocked away.
Since a large part of testing is to simulate failures, we can't very well have our automated test system stop in the middle of every run until someone walks over to press the button to acknowledge the forced failure. I was instructed not to test failure cases; it's enough to test that the constructor executes, because it's 98% of the code.
In other words, I have tested Java's ability to construct a widget, and JMock's ability to return the argument I pass to it, but none of the processing logic performed behind the panel.
But we have 98% coverage...
Does it compile? Ship it!
Since a large part of testing is to simulate failures, we can't very well have our automated test system stop in the middle of every run until someone walks over to press the button to acknowledge the forced failure.
I may be naïve, but I imagined that someone testing a GUI program would use a test environment with a full suite of GUI functions available to it. In other words, one which can (a) check that a button has been successfully generated, and (b) press the thing.
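Even without a dedicated GUI testing tool, plain Swing can do both (a) and (b) from a unit test, no screen required. A minimal sketch, using a hypothetical one-button panel in place of the real 5000-line monster:

```java
import java.awt.event.ActionEvent;
import java.util.concurrent.atomic.AtomicBoolean;
import javax.swing.JButton;
import javax.swing.JPanel;

public class ButtonPressDemo {
    // Hypothetical stand-in for the panel under test.
    static JPanel buildPanel(AtomicBoolean fired) {
        JPanel panel = new JPanel();
        JButton ok = new JButton("OK");
        ok.addActionListener((ActionEvent e) -> fired.set(true));
        panel.add(ok);
        return panel;
    }

    public static void main(String[] args) {
        AtomicBoolean fired = new AtomicBoolean(false);
        JPanel panel = buildPanel(fired);

        // (a) check that the button was successfully generated...
        JButton button = (JButton) panel.getComponent(0);
        if (!"OK".equals(button.getText())) throw new AssertionError("no button");

        // (b) ...and press the thing, without any window on screen.
        // doClick() drives the button's model directly, so no display is needed.
        button.doClick();
        if (!fired.get()) throw new AssertionError("action never ran");
        System.out.println("button pressed: " + fired.get());
    }
}
```

Of course this exercises the listener wiring, not the dialog buried three libraries down - but it's a lot more than testing the constructor.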
I take it you're not that lucky?
not that lucky?
Bingo; no tools whatsoever - just JUnit and some piece of crap that runs the entire test suite every 30 minutes. Of course, if it fails at 5:30 on Friday, we get 125 identical failure messages by Monday at 8 AM. We use an OLD version of Notes (really!), so there's no swipe-and-delete, and everyone auto-filters them to the bit bucket. But the coverage report goes up the chain to management.
Bingo; no tools whatsoever;
There's always java.awt.Robot. Heck, it might even complement the rest of the code.
Pardon my ignorant old-fashioned ideas, but I believed it was one of the Noble Truths of Programming, that 20% of the code does $USEFUL_STUFF, and the other 80% does error checking? How can anybody believe in 98% coverage (unless of course they fervently want to tell their boss that "nothing can possibly go wrong, go wrong, go wrong...")? What would 98% coverage mean, if it were true?
Noble Truths of Programming, that 20% of the code does $USEFUL_STUFF, and the other 80% does error checking?
I was going to say NO WAI, but then I realised that I use libraries that really compact my code, so that's where the 80% ends up, I guess. The active business code that my coworkers and I get to write & maintain from day to day may very well be that 20% you mention.
It also uses the 13KB jQuery library.