Okay, someone please tell me that I'm getting "test driven development" wrong.



  • Since I have neglected tests for some time now, I thought it was time to do something about that. (Please note: I don't get paid for this stuff, and I don't do anything mission-critical ;)

    And since my development platform of choice is Ruby on Rails, I thought it best to delve into unit tests, functional tests and the rest.

    Thus I created a new project, created some basic models, thought about the constraints those models were to live under (uniqueness, naming conventions, et al.), and wrote some tests accordingly. I expected the tests to fail, because I wrote the tests first and then implemented the actual behaviour.

    I then arrived at this: Ticket at Github

    So, answer me this: Is it really "expected behaviour" to make it look like your application just exploded? I also don't see the reason for it: the tests have run, you get a detailed list of what failed and what passed, so what is the rationale for also making it look like the task just crashed?

    By the way, this behaviour is documented nowhere (hence my creation of this ticket). It also confuses the Rubymine IDE, which only shows the results from whichever test suite happened to run first (functional/unit/integration/...). That is particularly funny, since a functional test can very well fail due to a model misbehaving, which would (hopefully) be caught by a unit test that never gets shown with this method...
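    The test-first cycle described above can be sketched in plain Ruby. The Account model and its uniqueness constraint below are made-up stand-ins for illustration, not the actual project code from the ticket:

```ruby
# A minimal sketch of the red/green cycle: write the test first, watch it
# fail, then implement the behaviour. Account is a toy stand-in for an
# ActiveRecord model with a uniqueness constraint.
class Account
  @@taken = []

  def initialize(name)
    @name = name
  end

  # Mimics ActiveRecord's #save: returns false when the constraint is violated.
  def save
    return false if @@taken.include?(@name)
    @@taken << @name
    true
  end
end

# The "test", written first. Before #save enforced uniqueness, the second
# check would have failed -- and that failure is the expected "red" step.
first  = Account.new("alice").save   # true: name is free
second = Account.new("alice").save   # false: name already taken
```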



  • eh. sounds like an IDE workflow problem.



  •  TRWTF is them trying to be helpful and explaining how their stuff works, and you talking to them like you're Blakeyrat



  • So, stupid question, stop me if I've gotten anything wrong:

    Doesn't the difference in output, and the verbose but ugly stack trace, stem from the fact that you used the '--trace' option?



  • Well, I think I'll have to explain a bit more:
    The Rubymine IDE, when starting a test, will show you both the console command (and its output) and then the parsed result in its GUI.

    In this particular instance, running "rake test" without the "--trace" option meant that I only got the functional test results plus a terse "There were errors running functional tests, unit tests". I also don't associate a "failed test" with an "error", since even rake's own test output differentiates between the two.

    Additionally, I've happened across a lot of stack traces when using Rails. Usually, a trace like this means there's an incompatibility somewhere: a gem is not the proper version, someone pushed a new gem to the repository without also updating its dependencies, or it's just a plain old bug.
    Which means that, when I expect a test to fail, I don't also expect it to error out. "Error" and "failure" are two distinct concepts in my mind. If my test has an error, I take that to mean there's, e.g., a syntax error in the test itself.

    And if I sound a bit irate like Blakeyrat, then that's grounded in the fact that I've been bashing my head into this particular behaviour for several days now, only to be told that it's intended. Without said intention being mentioned anywhere.
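    The failure/error distinction drawn above can be sketched like this. This is illustrative plain Ruby, not rake's or Test::Unit's actual implementation:

```ruby
# Sketch: a "failure" is an assertion that did not hold; an "error" is the
# test itself raising an unexpected exception. Test runners report the two
# separately, as the post says.
class AssertionFailed < StandardError; end

def assert(condition, message = "assertion did not hold")
  raise AssertionFailed, message unless condition
end

def run_test(name)
  yield
  [name, :pass]
rescue AssertionFailed
  [name, :failure]                 # the thing under test misbehaved
rescue StandardError => e
  [name, :error, e.class.name]    # the test itself blew up
end

results = [
  run_test("expected to fail") { assert(1 + 1 == 3) },
  run_test("broken test")      { nil.upcase },          # NoMethodError
  run_test("passing test")     { assert(1 + 1 == 2) },
]
```

    The middle case is the "syntax error in the test itself" situation: the test never got as far as asserting anything.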


  • ♿ (Parody)

    I've never really done anything with ruby or rake, but your complaint is that the test process exits with a non-zero exit code, which signals that it was not successful. It exits like this when at least one test failed. This sounds like reasonable behavior to me.

    The guy even gave you the deeper rationale, which is that you might have other tasks that you only want to run when the tests are successful. So those tasks use the standard way of checking the exit code of your test process.

    So, in short, it sounds like TRWTF isn't anything to do with test driven development or your understanding of it, but your understanding of processes and build systems in general.
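    That rationale can be sketched in a few lines, with throwaway child processes standing in for a real `rake test` run:

```ruby
# Sketch of the rationale above: downstream tasks check the test process's
# exit status, which is why the test task exits non-zero when anything fails.
def gate(label, test_command)
  ok = system(*test_command)   # system returns true only for exit status 0
  status = $?.exitstatus
  if ok
    "#{label}: tests green (exit #{status}), run the deploy"
  else
    "#{label}: tests red (exit #{status}), skip the deploy"
  end
end

puts gate("good build", ["ruby", "-e", "exit 0"])
puts gate("bad build",  ["ruby", "-e", "exit 7"])
```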



  • Right. And they're not able to mention this anywhere because...?

    Isn't one of the gripes on this very board the abundance of undocumented behaviour?

    Would it be so hard to include a single paragraph like "And don't worry if your tests exit with a non-zero error code. That's okay."? Is it really expected of everyone to know everything right from the start? Good grief.

    To borrow your meme: TRWTF is you expecting me to know some undocumented behaviour right from the start, and then being condescending about my "lack of understanding" when said understanding is hard to come by due to the lack of documentation.



  • @Rhywden said:

    In this particular instance, this meant, when doing a "rake test" without the "--trace" option, that I only get the functional test results and a terse "There were errors running functional tests, unit tests". I also don't associate a "failed test" with an "error", since even the rake testing differentiates between the two

    Is the expected test failure from a negative test, a TODO test, or a broken test (or a test of broken functionality)?  If it is a negative test or a TODO test, then there's a problem with the product.  However, by adding --trace and throwing that into the mix, you've distracted the dev from it.  --trace is just making it much more visible.

    If it's a test that is legitimately failing (either due to it being broken or the functionality it's testing being broken), then the problem mostly exists between your keyboard and chair.  Yeah, failure and error are different - but not by a lot in this case.

    Note that if it's a negative test, you've probably wired it wrong.  The test itself should do the negation.

    However, I've seen a test harness where a test for something properly generating a stack trace resulted in a stack trace, followed by an 'ok' result, and then a non-zero exit.  If there were no problems, this caused the overall harness to report in the end that everything was fine, and then it stack traced and exited with a non-zero exit.  (Somebody else had already reported it, however, and the fix was already in beta.  What most bothered me about it was nobody bothered to back out the known non-working version, because the version number must always rise, and the latest functionality can't be removed temporarily.  Not even when the bug causes a work outage for everyone using an older, popular feature.)
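    The "test itself should do the negation" point might look like this in plain Ruby. The parse_port helper and its inputs are hypothetical, chosen only to illustrate the wiring:

```ruby
# A negative test should assert that the bad input is rejected, rather than
# letting the rejection crash the test (which would count as an error).
def parse_port(s)
  Integer(s)   # Kernel#Integer raises ArgumentError on non-numeric input
end

# Wrong wiring: calling parse_port("oops") bare in a test raises and shows
# up as an error. Right wiring: the test captures the expected exception.
def rejects_bad_input?
  parse_port("oops")
  false               # no exception: the negative test fails
rescue ArgumentError
  true                # the expected rejection happened: the test passes
end
```

    In a real Test::Unit or minitest suite this wiring is what the assert_raises-style helpers do for you.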



  • The test being done is a test for functionality not implemented yet - in this case a test for a variable being not null/nil.

    And I resent this scheme of labelling me PEBKAC. Again, I stated clearly that I'm just starting out with test driven development, and again, this behaviour, even if it is to be expected, is mentioned nowhere. It may be clear as day to you guys, but what about those who haven't had much contact with this stuff yet? Are we supposed to figure everything out on our own, or what?

    If there were clear documentation, I'd be eating crow now. But not when stuff like this leads me in two directions at once.

    Let's be honest here: the IDE is not showing me what I expected, and at the same time the console output is reporting an error. Am I really that much of a doofus for suspecting a causal link?

    Honestly, this kind of non-documentation sometimes just makes me want to run away. Another gem (dealing with OpenID) told me on its How-To page: "For this Open-ID provider, simply add the two lines in your view and you're done."
    Needless to say, it also involved manipulating the user model and the user controller, setting up routes in a certain way, adding another controller, and providing a helper method.



  • @Rhywden said:

    Honestly, this kind of non-documentation sometimes just makes me want to run away. Another gem (dealing with OpenID) told me on its How-To page: "For this Open-ID provider, simply add the two lines in your view and you're done."

    Needless to say, it also involved manipulating the user model, the user controller, setting up routes in a certain way, adding another controller and providing a helper method.

    OpenID is Satan. I don't mean it's a minion of Satan, or a creation of Satan, but it is literally Satan.


  • ♿ (Parody)

    @Rhywden said:

    If there was a clear documentation, I'd be eating crow now. But not when stuff like this is leading me into two directions at once.

    I understand what you're saying. I think the problem is that they're relying on operating system conventions and general build-tool practice here. Situations like this remind me of Demolition Man, where everyone laughs because Stallone's character doesn't know how to use the three seashells.



  • Thank you. I'll now recognize stuff like that in the future - I was simply annoyed by that answer because I spent two days trying to figure out what I was doing wrong.

    Maybe I'll even apologize to that guy tomorrow - it's not his fault for being the bearer of bad news and all.



    As far as I understand the situation (I didn't read the ticket), rake test:unit is not meant to produce readable results, but rather to check whether all of the unit tests pass, which is useful when doing a deployment. I wouldn't recommend using rake to start the unit tests; use something like autotest instead (which has more WTFs to rant about ;).

    Also, for RoR, don't use any IDE. Build your own environment, or this kind of "undocumented behavior" will be the least of your worries in the near future. What really happened is that RubyMine calls rake test:unit so it can claim to support "TDD" or something, and left that broken stub of a feature where it is.



  • What does this rant about rake (a TOOL) have to do with (mis)understanding unit testing anyway? If you want to complain, at least blame the right thing. Otherwise you only get people correcting you instead of lighting torches.

