The Dark Art of Testing



  • Hi everyone,

    As the new guy at work, I've kinda inherited the job of testing. I can write a decent test, but testing is neglected here. I've not had to design an entire test process before; anyone got any tips?

    It's some equipment with a Web front end. I'm planning on using Selenium for testing the front end, but is there a good way of automating tests that cover both the front and back end, or should I test them separately and then use the current manual tests to bring it together?


  • :belt_onion:

    Who on earth did you inherit the job from? The VP Engineering's Canadian girlfriend?



  • @Greybeard An email forwarding rule, I think. It's a small company with no test team, and the lead dev has been more concerned with giving the directors their new shiny toys than with testing. The current process is that after the devs have given it a quick look, testing is the support team's problem - but in the past that team has been, skills-wise, an electronics expert and a couple of sysadmins, so it's been done with a giant patchy list of crappy manual end-to-end tests.
    So I have the opportunity to put something proper in place. Except I can't count on adequate unit and integration testing.



  • @Pockets Do you have a continuous integration server? That's what you want for automating your tests. We use TeamCity and have it set up to make a build and run all of our tests whenever someone checks something in.

    There are many other options out there, paid and free.



  • I've scrounged some hardware from our sysadmin, but we have nothing atm - just a tiny number of unit tests that are run locally on a unit whenever a developer feels like it. The problem I've run into when reading up is that it's not just a website; there's all the stuff behind it to test too.

    There's no CI and I don't think I have the pull to make that happen as the lead dev isn't big on anything like that (or unit testing)



  • @boomzilla I would say step one is: do you have product requirements? You can test for some things without knowing how the software is supposed to behave (for example, fuzz tests for crashes), but you can't really have a reasonable test plan without requirements first.

    I'd prioritize the areas to test. Selenium is great for testing the front-end, but if all the troublesome code is in the API that won't necessarily help you. Like Boomzilla says, a CI server is great, but if your customers are seeing constant crashes every day that you could fix for them with 45 minutes of work, then spending days or weeks setting up a CI server shouldn't be your highest priority.


  • Winner of the 2016 Presidential Election

    @Pockets said in The Dark Art of Testing:

    There's no CI and I don't think I have the pull to make that happen as the lead dev isn't big on anything like that (or unit testing)

    Setting up Jenkins to run your unit tests whenever master changes and send you an email whenever a build fails takes between 30 and 60 minutes. Just set up an instance on any machine you have access to and wait until people are convinced that it's useful.
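
    Until that happens, a poor man's version of the same loop is tiny. A sketch in Python (the suite command below is a stand-in; swap in whatever actually runs your real tests):

    ```python
    import subprocess
    import sys

    def run_suite(cmd):
        """Run a test command; return (passed, combined output).

        This is the core of what a CI server does on every push to
        master - run the suite, then nag whoever broke the build.
        """
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    # Stand-in suite: swap in e.g. ["pytest", "tests/"] for the real thing.
    ok, output = run_suite([sys.executable, "-c", "print('1 passed')"])
    print("PASS" if ok else "FAIL")  # → PASS
    ```

    Wire that into a cron job that emails on FAIL and you've got something to show people until a real Jenkins box appears.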



  • @asdf I'd estimate our unit test coverage as 10% on a good day, and major parts of the system are so spaghetti that they can't be unit tested - raw SQL in random places, etc.



  • Before we get on with recommending specific testing tools, we probably should ask what it is he's testing first. All we know so far is that it involves a web front end, but that tells us nothing about the problem domain, or about the middleware or server code, or even about how the front end itself is written. Any advice we give now is going to be worthless without a lot more details.



  • @ScholRLEA
    Fair point, I was deliberately vague to avoid being id'd; moaning to the Internet about everyone doing it wrong after a couple of months isn't a good look after all. It's some [redacted due to paranoia], basically.



  • @blakeyrat said in The Dark Art of Testing:

    I would say step one is do you have product requirements?

    Well, yeah, but I was addressing the actual question he was asking. And having that bit automated will save you a lot of time in the future.



  • @boomzilla said in The Dark Art of Testing:

    And having that bit automated will save you a lot of time in the future.

    Right; if you can justify its cost in the present. It's all about priorities.


  • I survived the hour long Uno hand

    @Pockets said in The Dark Art of Testing:

    major parts of the system are so spaghetti that they can't be - raw sql in random places, etc.

    You're going to want to tackle that early on. Make your boss aware of the abysmal coverage, and see what quick wins you can propose to increase it. A good tip is to find a big problem and say "We'd have caught that sooner if we had a test for it, but to do that I need whatever"

    @Pockets said in The Dark Art of Testing:

    is there a good way of automating tests that work on both the front and back end

    I wouldn't bother. You'll want unit and integration tests in the language you code in, and unless that happens to be Java or Python, your Selenium bindings are going to be less well documented and supported.

    @Pockets said in The Dark Art of Testing:

    There's no CI and I don't think I have the pull to make that happen

    Can you scrounge up a spare VM? Jenkins is free and very easy to set up. Showing the benefits is often easier than explaining the paradigm shift. "I just want a central way to run tests so anyone can click a button and get results" is a great place to start.


  • I survived the hour long Uno hand

    Also: Hi, are you me two years ago? If so, do not agree to rework the release process.



  • @Yamikuronue +1 funny


  • Winner of the 2016 Presidential Election

    @Pockets said in The Dark Art of Testing:

    I'd estimate our unit test coverage as 10% on a good day,

    Still better than nothing, and even those unit tests are worthless if you don't run them automatically.

    Side note: 100% coverage is a goal that you'll most likely never achieve anyway. And if you only look at coverage metrics, you'll end up writing extremely stupid and useless tests which call all getters and setters. Just start somewhere, preferably with the critical parts of the system. Or at least add a test whenever you fix a bug or regression. Over time, your tests will start to cover everything that actually matters.

    @Pockets said in The Dark Art of Testing:

    major parts of the system are so spaghetti

    Sounds like you need some refactoring, which is one more reason to write some tests. If you can't mock the DB, at least write some basic integration tests. Then start un-spaghettifying the mess.
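
    A minimal sketch of what mocking the DB looks like, Python assumed - `get_discount` and its `fetch_customer` call are made-up names standing in for one of your business rules:

    ```python
    from unittest.mock import Mock

    # Made-up business rule standing in for code you'd pull out of the
    # spaghetti: gold-tier customers get 10% off, everyone else nothing.
    def get_discount(customer_id, db):
        row = db.fetch_customer(customer_id)
        return 0.10 if row["tier"] == "gold" else 0.0

    # The Mock stands in for the real DB connection, so the test needs
    # no database at all - which is the whole point.
    db = Mock()
    db.fetch_customer.return_value = {"tier": "gold"}
    assert get_discount(42, db) == 0.10

    db.fetch_customer.return_value = {"tier": "basic"}
    assert get_discount(7, db) == 0.0
    ```

    Once the rule is behind a function that takes its DB as an argument instead of reaching for a global connection, it becomes testable - and that small refactor is usually the first un-spaghettifying step anyway.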

    @Yamikuronue said in The Dark Art of Testing:

    do not agree to rework the release process.

    +128

    I made the same mistake.


  • :belt_onion:

    I read his question as "how do I share tests of the front end in isolation with tests of the integrated system?"

    My advice, not that it would be taken, would be "hire someone who has a clue". Design mistakes made at this phase are likely to stay with the product as long as it exists. They certainly have where I work.



  • Yeah, there's a degree of that. I've seen a problem where the front end thinks it's made a change but it actually failed. I guess the proper way is two tests, but I don't think I can get the devs doing integration tests enough to cover that.


  • Discourse touched me in a no-no place

    @Yamikuronue said in The Dark Art of Testing:

    do not agree to rework the release process.

    +1 for a bullet I dodged 15 years ago.



  • @blakeyrat said in The Dark Art of Testing:

    @boomzilla said in The Dark Art of Testing:

    And having that bit automated will save you a lot of time in the future.

    Right; if you can justify its cost in the present. It's all about priorities.

    Remember: if the building is on fire, STOP TESTING AND EXIT THE BUILDING. If the cabin loses pressure, put your oxygen mask on before you continue testing.


  • SockDev

    @boomzilla said in The Dark Art of Testing:

    If the cabin loses pressure, put your oxygen mask on before you continue testing.

    And make sure you put on your own mask before putting them on your children


  • I survived the hour long Uno hand

    @Pockets said in The Dark Art of Testing:

    I don't think I can get the devs doing integration tests enough to cover that.

    Actually, with spaghetti code and low coverage, integration tests are easier to get them to write. A test that simulates an AJAX call is a lot faster to run than starting up the app and running a WebDriver functional test to click the buttons, while getting the same coverage of that endpoint. Plus, the closer you are to the code, the easier it is to simulate error conditions.
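
    Something like this, Python assumed - the `/status` endpoint here is a made-up stand-in, served by a toy stdlib server so the sketch is self-contained; in real life you'd point the request at your actual backend:

    ```python
    import json
    import threading
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Toy stand-in for the backend the front end's AJAX code talks to.
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/status":
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(json.dumps({"ok": True}).encode())
            else:
                self.send_response(404)
                self.end_headers()

        def log_message(self, *args):  # keep test output quiet
            pass

    server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The "integration test": the same request the AJAX code makes,
    # with no browser and no WebDriver involved.
    url = f"http://127.0.0.1:{server.server_port}/status"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)

    server.shutdown()
    print(payload)  # → {'ok': True}
    ```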

    That said, depending on why the front end thinks it succeeded, you might not root out the problem with that sort of integration test, so grain of salt there.



  • @Yamikuronue Yeah, easier than unit tests, but any kind of extra testing by the dev team is likely to be blood from a stone for the most part - I get the feeling I've got so much goodwill around it at the minute because they don't actually expect anyone to be asking for more tests, just that the WebDriver stuff makes it all 'not my problem'.


  • :belt_onion:

    Yeah, sounds like you're at an earlier point in the path we took. Unfortunately, I don't have an answer for you--our tests aren't good at testing things in isolation.

    Of the low-level suites kicked off by CI, the shorter one takes about three and a half hours to complete. Few developers pay attention to the results.

    The UI tests are developed and run by an offshore team.


  • I survived the hour long Uno hand

    @Pockets You'll probably get more mileage doing integration tests yourself than doing the WebDriver tests. They're fragile enough to burn up the little goodwill you have with spurious failures.



  • I suppose you could do integration tests of the backend by sending the same web requests the frontend was sending.

    For 1000-yard tests like that, I like to keep both the input data and the expected result in the repo, and (if practical) fail the test any time the result doesn't match what was expected. The idea is to keep track of any changes in system behavior, even if you're not familiar enough with the system to know which slight changes are going to cause bugs.
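
    A sketch of that pattern in Python (the field names are hypothetical; the first run records the golden file, later runs diff against it):

    ```python
    import json
    import pathlib

    def normalize(response: dict) -> dict:
        """Strip fields that legitimately vary (timestamps, request ids)
        so the comparison only fails on real behavior changes."""
        return {k: v for k, v in response.items()
                if k not in ("timestamp", "request_id")}

    def check_against_golden(response: dict, golden_path: pathlib.Path) -> bool:
        """Compare a system response to the expected result kept in the repo.

        First run: record the golden file. Later runs: fail on any diff,
        so every behavior change has to be reviewed and re-blessed.
        """
        actual = normalize(response)
        if not golden_path.exists():
            golden_path.write_text(json.dumps(actual, indent=2, sort_keys=True))
            return True
        expected = json.loads(golden_path.read_text())
        return actual == expected
    ```

    The golden files go in version control next to the inputs, so a diff in a pull request shows exactly which behavior changed.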



  • @Yamikuronue said in The Dark Art of Testing:

    spurious failures.

    Oh yes. Tangentially related, but I recently added a stateful-inspection firewall with IDS so we could ensure PCI compliance. As it was very powerful and needed tweaking, I thought (stupidly) the early wins I could demonstrate would be great. As it is, I just squeaked through even keeping the damn thing. Even one false positive got me a meeting, and its demonstrable performance in murdering a cryptolocker download-and-execute was the only thing that let me keep it.
    To the rest of the business a false positive is a huge disaster, at least in my company. I hadn't really appreciated that until now.


  • Discourse touched me in a no-no place

    @Cursorkeys said in The Dark Art of Testing:

    To the rest of the business a false positive is a huge disaster, at least in my company.

    If your little bit of extra security prevents the business from actually earning money, you'd better believe it's a big deal. You'd be protecting against a theoretical threat (are things under direct attack immediately?) with something that is a direct immediate threat (preventing employed people from actually doing their work). When faced with this, the sums tend to add up against you pretty rapidly.

    This is why it is really useful to partition systems so that as little handles confidential data as possible, and to try to stop systems from having access to everything. (I'm also waiting for when someone figures out a hack against the IDSs that are common and the excrement hits the ventilator big time… :smiling_imp: Don't trust them without actually verifying from time to time.)



  • @Pockets said in The Dark Art of Testing:

    I get the feeling that I've got such a good amount of goodwill around it at the minute because they don't actually expect anyone to be asking for more tests, just that the WebDriver stuff makes it all 'not my problem'.

    Make 100% sure that any bug the WebDriver stuff "finds" is 100% verified by an actual human being before you bring it to the developers' attention. Because with WebDriver tests, the tests break at least as often as the product, and you'll burn up goodwill fast if your developers keep getting presented with non-issues flagged with big red X's in the email.

    Like Yami says, I'd start with your REST API tests. They're quicker and easier to write, quick to spec (just run the app, do the operation you want to test, and log all the REST API input/output with Fiddler or your browser's debug tools) and will find more fundamental problems.

    EDIT: also keep in mind, there are whole (huge!) classes of bugs that automated testing will never find, no matter how sophisticated the automated tests are. "When you click Users, then Active, the font's really too small to be comfortably read" is not something Selenium is going to tell you.


  • Discourse touched me in a no-no place

    @blakeyrat said in The Dark Art of Testing:

    EDIT: also keep in mind, there are whole (huge!) classes of bugs that automated testing will never find, no matter how sophisticated the automated tests are. "When you click Users, then Active, the font's really too small to be comfortably read" is not something Selenium is going to tell you.

    I'd like to put this on a big poster in every developer's office in the Western Hemisphere.



  • @dkf said in The Dark Art of Testing:

    @blakeyrat said in The Dark Art of Testing:

    EDIT: also keep in mind, there are whole (huge!) classes of bugs that automated testing will never find, no matter how sophisticated the automated tests are. "When you click Users, then Active, the font's really too small to be comfortably read" is not something Selenium is going to tell you.

    I'd like to put this on a big poster in every developer's office in the Western Hemisphere.

    Yes. Very much so. Testing is not my job, but I still submit about one defect a month on something QE missed. A recent example was "The columns in this grid are labeled X, X, and X; they should be labeled X, Y, and Z." And it had been that way for at least four years; there was a four-year-old screen shot in the documentation showing the columns all labeled the same.



  • @dkf said in The Dark Art of Testing:

    I'd like to put this on a big poster in every developer's office in the ~~Western~~ Northern Hemisphere.


  • Discourse touched me in a no-no place

    @devjoe That's more testable (assuming that there's some relationship between the labels in the model and the labels on the screen). Real actual usability is not fully subject to automated testing; stuff like “the buttons are confusingly labelled” requires human input to find out because it is exactly about how people interact with the system.

    Computing isn't just mathematics and engineering. It's some psychology too, especially with user interfaces.



  • @Pockets said in The Dark Art of Testing:

    raw sql in random places

    What you're saying is you're 99% sure you're vulnerable to SQL injection.
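
    For anyone who hasn't seen it demonstrated, here's the difference in miniature - Python's sqlite3 assumed, table and payload made up:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    user_input = "alice' OR '1'='1"  # classic injection payload

    # BAD: string concatenation - the payload rewrites the query
    # and matches every row in the table.
    injected = conn.execute(
        "SELECT COUNT(*) FROM users WHERE name = '" + user_input + "'"
    ).fetchone()[0]

    # GOOD: parameterized query - the payload is treated as a literal
    # name, which matches nothing.
    safe = conn.execute(
        "SELECT COUNT(*) FROM users WHERE name = ?", (user_input,)
    ).fetchone()[0]

    print(injected, safe)  # → 1 0
    ```

    Hunting down every concatenated query string is tedious, but it's also exactly the kind of mechanical sweep a new tester can do without stepping on the lead dev's toes.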



  • You got lots of good advice already, but here's how I would approach this task:

    1. Set up a testing harness and some easy-to-use testing scripts, or a CI server if you want. (You'll want a CI server eventually anyway, but don't burn political capital to get it until testing has proven its worth)
    2. Use version control. Make a branch for your tests.
    3. Fix all the bugs in your bug tracker, issue by issue. Make sure you write tests for each fix in your tests branch. "All" the test has to do is set inputs to a unit of code, and verify that the code outputs the expected thing. You want to cover as many cases as you can, but basically any non-trivial test is an improvement over not having any tests. Write tests for all the "business rules" and "business logic."
    4. Run your tests before every commit.
    5. Somebody, somewhere, will eventually break one of your tests erroneously (i.e., because they deviated from the spec). So talk with them, telling them something along the lines of "HEY ASSHOLE, YOUR CODE IS BUGGY." Show them how and why. What business rules is their code breaking? Write a test for their bug fix, too.
    6. When they get tired of you, they'll fire you.
    7. You can put unit testing experience on your resume.


  • @Pockets The lead dev doesn't like testing or deploying anything?

