This will turn out well...



  • Last week, we had a big briefing from the team that maintains our company-wide ERP system. For the past six-plus months, they have been planning a huge upgrade of this mission-critical system - testing code, rolling out features, and planning implementation details. They have even done three "practice migrations" where they tested the system with a full copy of the production data. Once this is deployed, pretty much every screen and input form will be changed.



    Which brings up the obvious question: when is user training?



    Answer: we will start training people two weeks after deployment


    Glad I don't work in finance.



  • When I worked on a till to pay for my student lifestyle, the retailer changed their tills for brand-new versions of exactly the same model we already had. Even though they were 100% identical (apart from being shiny and clean), we still all had to be retrained on them; yet when they rolled out new features, we were left to fend for ourselves... I think that should've been the other way around, somehow.



  • @Ex-Navy Dude said:

    Which brings up the obvious question: when is user training?
    Answer: we will start training people two weeks after deployment

    Did they give a reason for this?  It could be that the first two weeks will be devoted to operator training, early-life support and skilling up the service desk so that they're prepared for when the great unwashed users are let loose.

    Alternatively, it could be that users are expected to sit around and not touch anything for two weeks. Or that users are expected to still work in those two weeks and flounder badly, prior to training.

    I suppose you ought to be grateful that end-user training has actually been considered and planned, even if the scheduling is somewhat suspect - I know of many places where it is assumed their staff are automatically blessed with this information upon release.



  • Eh, training is only for applications with user interfaces so bad that the users can't understand them on their own.

    In other words, they are doomed.



  • @Cassidy said:

    @Ex-Navy Dude said:

    Which brings up the obvious question: when is user training?

    Answer: we will start training people two weeks after deployment

    Did they give a reason for this?  It could be that the first two weeks will be devoted to operator training, early-life support and skilling up the service desk so that they're prepared for when the great unwashed users are let loose.

    Alternatively, it could be that users are expected to sit around and not touch anything for two weeks. Or that users are expected to still work in those two weeks and flounder badly, prior to training.

    I suppose you ought to be grateful that end-user training has actually been considered and planned, even if the scheduling is somewhat suspect - I know of many places where it is assumed their staff are automatically blessed with this information upon release.

    "Lack of resources" was the excuse. And they are taking the "everyone will just have to wing it for two weeks" angle. The official deployment date is after a two weeks black out period.

    We had to tell our vendors and suppliers that they won't get paid for a month because our system is being upgraded. All the finance guys will be looking at about 4 to 6 weeks of backed-up work, with everyone in the company pushing them to hurry. No sales orders being closed, no software keys being delivered, and no cash flowing in makes people uncomfortable.



  • @Ex-Navy Dude said:

    "Lack of resources" was the excuse. And they are taking the "everyone will just have to wing it for two weeks" angle.

    I guess they are about to get a lesson in what happens when you don't do a CBA (cost-benefit analysis).
    Not willing to spend $X on training up front --> $X*Y cost later due to lost productivity, unhappy staff, annoyed customers, suppliers adding late-payment penalties, etc. (where Y is much greater than 1 - see the sketch at the end of this post).

    Seriously, make sure your CV is up to date. If a company doesn't care about their vendors and suppliers, then they certainly don't care about their staff.
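
    A back-of-envelope sketch of that CBA, with every number below completely made up (headcount, rates, all of it), just to show how quickly Y outruns 1:

        # All figures are hypothetical placeholders, not from this thread.
        STAFF = 200                  # affected ERP users
        TRAINING_PER_HEAD = 150      # cost of one training session, say
        HOURLY_RATE = 40             # loaded staff cost per hour
        LOST_HOURS_PER_DAY = 2       # time wasted fumbling through unfamiliar screens
        WING_IT_DAYS = 10            # the two-week "wing it" window

        train_first = STAFF * TRAINING_PER_HEAD
        wing_it = STAFF * HOURLY_RATE * LOST_HOURS_PER_DAY * WING_IT_DAYS

        print(f"Train first: ${train_first:,}")    # $30,000
        print(f"Wing it:     ${wing_it:,}")        # $160,000
        print(f"Y = {wing_it / train_first:.1f}")  # ~5.3, before late-payment penalties

    And that's before you price in the annoyed customers and the suppliers' late-payment penalties.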



  • @Ex-Navy Dude said:

    And they are taking the "everyone will just have to wing it for two weeks" angle.

    "Oh, look - I've just entered all this data wrong! Still, it's not gone live yet so there can't be any ramifications...."

    @Ex-Navy Dude said:

    "Lack of resources" was the excuse.

    .. and "lack of proper training " is my reason for not being productive on this wonderful new recently-introduced system. I'd answer any query with "and how do I do that in the new system" for a bit. But that's me...

    It seems strange that the resources are there, just that they've been diverted into doing stage 2 (deployment), leaving none available for stage 1 (training). Hasn't anyone pointed out to the project manager what's gonna happen when activities are performed in the wrong order?  



  • @Cassidy said:

    It seems strange that the resources are there, just that they've been diverted into doing stage 2 (deployment), leaving none available for stage 1 (training). Hasn't anyone pointed out to the project manager what's gonna happen when activities are performed in the wrong order?  
     

    "Pointing the problem out to the project manager" is scheduled for four months from now.



  • @da Doctah said:

    "Pointing the problem out to the project manager" is scheduled for four months from now.

    I feel a Dilbert "MFU2" cartoon moment approaching.



  • @da Doctah said:

    @Cassidy said:

    It seems strange that the resources are there, just that they've been diverted into doing stage 2 (deployment), leaving none available for stage 1 (training). Hasn't anyone pointed out to the project manager what's gonna happen when activities are performed in the wrong order?  
     

    "Pointing the problem out to the project manager" is scheduled for four months from now.


    Project manager? You think we are organized enough to have a single person in charge? The whole thing seems to be run by committee: this guy in charge of new features, this guy in charge of migrating the back end, this guy in charge of business logic definition, etc.

    I'm sure the blame-game will start in a week or two, though.



  • @Cassidy said:

    @da Doctah said:

    "Pointing the problem out to the project manager" is scheduled for four months from now.

    I feel a Dilbert "MFU2" cartoon moment approaching.


    Ooooo... I have extra fun to report. Just hours before the end of the blackout window, they found a browser compatibility issue with Internet Explorer. It appears that the testing group only used Firefox. Forget Dilbert comics, this is sounding more like a Hollywood horror flick.



  • @Ex-Navy Dude said:

    Project manager? You think we are organized enough to have a single person in charge?

    "WHO'S ACCOUNTABLE FOR THIS FAILURE?"

    "the project manager"

    "WE DON'T HAVE A PROJECT MANAGER"

    *ting!*

    @Ex-Navy Dude said:

    Ooooo... I have extra fun to report. Just hours before the end of the blackout window, they found a browser compatibility issue with Internet Explorer. It appears that the testing group only used Firefox.

    Who tests the testers?



  • @Ex-Navy Dude said:

    Just hours before the end of the blackout window, they found a browser compatibility issue with Internet Explorer. It appears that the testing group only used Firefox.


    I wouldn't blame the testers for this unless the test specs specified which browser to use and they ignored the instruction.  The browsers that needed to be tested should have been a requirement somewhere (likely Business Requirements; possibly Functional Requirements) and should then have been specifically tested.

    Given the other things you've said about this project, I can probably guess, but... what do you have in the way of documentation?

     [ ] Business Requirements
     [ ] Functional Requirements
     [ ] Test Plan
     [ ] Test Specifications



  • @RTapeLoadingError said:

    I wouldn't blame the testers for this unless the test specs specified which browser to use and they ignored the instruction.  The browsers that needed to be tested should have been a requirement somewhere (likely Business Requirements; possibly Functional Requirements) and should then have been specifically tested.

    I'd certainly start with whoever formulated the test plan and reviewed the test basis. This could be a tester, a project manager (missing, in this case), a BA, or the janitor.

    But yah, you're right - "what did the specs say?" is what I'd ask.



  • @RTapeLoadingError said:

    I wouldn't blame the testers for this unless the test specs specified which browser to use and they ignored the instruction.

    I blame them. Anyone who's done browser testing even once should know that testing in only Firefox is not acceptable. Now, it would be nice if the spec had said which browsers to test in, but it's also one of the first things the testers should have noticed as missing. Either they're: 1) too incompetent to know that you should test a range of browsers; or 2) they saw that the spec didn't specify which browsers to test on and said "Cool, since they didn't tell us to test them all we'll just test Firefox!" in which case they're deliberately shirking their duties. This whole "if it's not in the spec we can't expect a reasonable person to engage their brain" mentality is silly and self-destructive.
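
    Even a bare-bones parameterised smoke test would have caught it. A sketch (the URL and element ID are invented, and each browser needs its driver binary on the PATH):

        # Run the same smoke test once per browser with pytest + Selenium.
        import pytest
        from selenium import webdriver

        BROWSERS = {
            "firefox": webdriver.Firefox,
            "chrome": webdriver.Chrome,
            "ie": webdriver.Ie,
        }

        @pytest.fixture(params=sorted(BROWSERS))
        def driver(request):
            drv = BROWSERS[request.param]()  # launches the browser under test
            yield drv
            drv.quit()

        def test_login_page_renders(driver):
            driver.get("https://erp.example.com/login")    # placeholder URL
            assert driver.find_element("id", "username")   # fails fast on a broken page

    The whole point is that "we only tested Firefox" becomes a deliberate one-line edit rather than something that happens by accident.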



  • I agree with morbs.



  • I don't.

    I get the competency thing, but we're making assumptions that the testers have been permitted project time to design and implement such tests. The original post suggests to me there's a lot of silo mentality going on, and it's perfectly possible that testers have been directed - for reasons of time constraints - to do the best testing possible in FF prior to the deployment, despite their objections and intentions.

    Of course, it's perfectly possible that testers were given free rein to do the lot and took the lazy way out, but without further information I ain't gonna point fingers.

    Oh, wait.. I have already. Erm..



  • So your conclusion is that

    1) The testers don't have enough time to do tests

    2) The testers were explicitly told not to use the most popular browser to do their testing

    ?



  • No.

    My observation is that there could be many reasons why the product was only tested in FF and this bug was missed.

    My current thoughts are that immediately apportioning blame to the testers for not performing proper testing could be premature without more information forthcoming.

    I can theorise all I like, but at this juncture it's only supposition.

    TL;DR: I'm not jumping to any conclusions until I get more facts.



  • @Cassidy said:

    No.

    My observation is that there could be many reasons why the product was only tested in FF and this bug was missed.

    My current thoughts are that immediately apportioning blame to the testers for not performing proper testing could be premature without more information forthcoming.

    I can theorise all I like, but at this juncture it's only supposition.

    TL;DR: I'm not jumping to any conclusions until I get more facts.

    In the real world, QA is the butt-monkey of any development shop. Whenever a bug is found in production, every finger points at them for not finding it. They are often an afterthought, so they have limited time and a limited budget. It's a thankless job, testing software, and few bother to tell them what's going on until minutes before they're expected to start.

    It could very well be that they had the best intentions, but they simply weren't given enough time. They gambled that by testing one browser thoroughly, they would catch the bugs that affected all browsers, hoping that any browser-specific bugs would be minor. Unfortunately, in this case, they lost that gamble. And, now, who's to blame? Obviously, QA, so as punishment, their budget will be slashed.

    So, next time you see someone who works in QA, be nice to them: Pat them on the head, give them a cookie, let them know you care. After all, it's not their fault they weren't smart enough to become programmers instead.



  • @morbiuswilters said:

    @RTapeLoadingError said:
    I wouldn't blame the testers for this unless the test specs specified which browser to use and they ignored the instruction.

    I blame them. Anyone who's done browser testing even once should know that testing in only Firefox is not acceptable. Now, it would be nice if the spec had said which browsers to test in, but it's also one of the first things the testers should have noticed as missing. Either they're: 1) too incompetent to know that you should test a range of browsers; or 2) they saw that the spec didn't specify which browsers to test on and said "Cool, since they didn't tell us to test them all we'll just test Firefox!" in which case they're deliberately shirking their duties. This whole "if it's not in the spec we can't expect a reasonable person to engage their brain" mentality is silly and self-destructive.


    This is the same dev team that developed a plug-in for this one section of the ERP system that requires Chrome. Now all our admin types and non-techie management people get to have three browsers. The help desk loves these guys.

    I don't think there are any "reasonable people" with that pesky "common sense" available on that team.



  •  @morbiuswilters said:

    @RTapeLoadingError said:
    I wouldn't blame the testers for this unless the test specs specified which browser to use and they ignored the instruction.


    I blame them. Anyone who's done browser testing even once should know that testing in only Firefox is not acceptable. Now, it would be nice if the spec had said which browsers to test in, but it's also one of the first things the testers should have noticed as missing.


    I can see what you're saying - you'd hope that your testers would realise that if multiple browsers are part of the SOE, they should be tested.  That said, I still believe that the responsibility to specify this lies elsewhere.  One of the reasons for having documented requirements is so that you know what you should (and should not) be testing.

     

    @morbiuswilters said:

    This whole "if it's not in the spec we can't expect a reasonable person to engage their brain" mentality is silly and self-destructive.

    As others have said, the testing portion of a project is often squeezed due to it (a) being towards the end where there's no-one else's time they can steal, and (b) being seen as not that necessary by some management types.  (Without more info we can only guess if this was the case here.) 

    Now, if the testers were indeed pushed for time, you can imagine the response from above when they start not only testing unspecified areas of functionality, but raising incidents against them too.  Presumably testing in IE as well as FF would roughly double the time required for testing the browser component, and it is hard to imagine that the testing part of the project had room for this.



  • @RTapeLoadingError said:

    I can see what you're saying - you'd hope that your testers would realise that if multiple browsers are part of the SOE, they should be tested.  That said, I still believe that the responsibility to specify this lies elsewhere.  One of the reasons for having documented requirements is so that you know what you should (and should not) be testing.

    I'm fine with it being specified, but common sense would dictate that even if it weren't, somebody would add it to the spec rather than proceeding with obviously-deficient testing.



  • @pkmnfrk said:

    In the real world [...] And, now, who's to blame? Obviously, QA

    img: ur_doing_it_rong.gif

    @pkmnfrk said:

    After all, it's not their fault they weren't smart enough to become programmers instead.

    Oh, I'm not rising to that one. Too obvious!

    @Ex-Navy Dude said:

    The help desk loves these guys.

    So why hasn't someone (perhaps heading up Incident Management) graphed trends to illustrate the relationship between a new release and an increase in calls to the service desk?

    Call centres get a slating for not resolving incidents quickly and thus are blamed for downtime: by identifying the causes of the service interruptions, you're beginning the road to incident prevention (and thus increasing the profitability of the organisation by eliminating - rather than merely reducing - downtime).
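
    Something along these lines would do it - the CSV layout, column name and release dates are all invented for the sketch:

        # Plot daily service-desk ticket volume with release dates overlaid.
        import pandas as pd
        import matplotlib.pyplot as plt

        tickets = pd.read_csv("tickets.csv", parse_dates=["opened"])  # one row per incident
        per_day = tickets.set_index("opened").resample("D").size()    # tickets per day

        releases = pd.to_datetime(["2012-03-01", "2012-06-15"])       # hypothetical handover dates

        ax = per_day.plot(figsize=(12, 4), title="Service desk tickets per day")
        for release in releases:
            ax.axvline(release, color="red", linestyle="--")          # one marker per release
        ax.set_ylabel("tickets")
        plt.tight_layout()
        plt.savefig("tickets_vs_releases.png")

    Once the red lines keep landing on the spikes, the graph argues the case for you.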



  • @Cassidy said:

    @pkmnfrk said:

    In the real world [...] And, now, who's to blame? Obviously, QA

    img: ur_doing_it_rong.gif

    @pkmnfrk said:

    After all, it's not their fault they weren't smart enough to become programmers instead.

    Oh, I'm not rising to that one. Too obvious!

    I just calls 'em as I sees 'em.



  • @Cassidy said:

    "WHO'S ACCOUNTABLE FOR THIS FAILURE?"

    "the project manager"

    "WE DON'T HAVE A PROJECT MANAGER"

    *ting!*


    The last line should perhaps be amended to

    The person who should have appointed a PM and didn't.



  • @pkmnfrk said:

    I just calls 'em as I sees 'em.

    Yah, I know. I'm describing what should happen; you're describing what actually happens. In your environment, anyway.

    .. and many other environments, IME. It's as if testing is not viewed as a particularly valuable activity, one that requires dedicated effort and proper plans/procedures. It's an afterthought. And when something goes wrong...

    Rinse, repeat, redo.

    @pjt33 said:

    The person who should have appointed a PM and didn't.

    Yah, that's kinda what I was alluding to. There should be someone accountable for the entire SDLC, the design-build-test-rollout-handover process.

    Ex-Navy Dude's posts suggest there's no real overall control, just silos that stick to their own demarcation without considering the impact upon other stages.


  • @Cassidy said:

    @Ex-Navy Dude said:

    The help desk loves these guys.

    So why hasn't someone (perhaps heading up Incident Management) graphed trends to illustrate the relationship between a new release and an increase in calls to the service desk?

    Call centres get a slating for not resolving incidents quickly and thus are blamed for downtime: by identifying the causes of the service interruptions, you're beginning the road to incident prevention (and thus increasing the profitability of the organisation by eliminating - rather than merely reducing - downtime).


    They have the stats, they publish the stats, and they try to talk about the stats but the two groups are in silos that reach all the way to the CEO level. If the manager in charge of support would like to have the manager in charge of ERP dev change what they are doing, it has to be escalated all the way to the big boss.

    So nothing gets changed, obviously...



  • @Ex-Navy Dude said:

    They have the stats, they publish the stats, and they try to talk about the stats but the two groups are in silos that reach all the way to the CEO level.

    LA LA LA CAN'T HEAR YOU LA LA LA!

    If you're in contact with the service desk, get them to also graph trends against a long dateline on a huge fuckoff graph up on the wall of their office.
    Sooner or later, people will spot that spikes in that graph seem to occur at project handover dates and begin to ask questions.

