manually defined automated testing



  • So I've been working some automated testing and documentation into my core DLL and I've reached an indecision point.

    I use Visual Studio 2008 right now. I wanted to adopt a feature from Visual Studio 2010 or 2013. Those versions had a template for a "test project." You just defined a list of assert statements and the build ran them and reported a list of success/failure. That's probably a little vague, but I last saw it back in the middle of 2014. My version looks sort of like:

    function Run ()
    {
      System.Data.DataTable D = new System.Data.DataTable();
      D.Columns.Add("Function",typeof(System.String));
      D.Columns.Add("Status",typeof(System.Boolean));
      Adapters(ref D);
      return D;
    }
    function Adapters (ref System.Data.DataTable D)
    {
      Adapters_CentimetersToInches(ref D);
      Adapters_JoulesToCalories(ref D);
    }
    function Adapters_CentimetersToInches (ref System.Data.DataTable D)
    {
      D.Rows.Add("CentimetersToInches(2.54)=1",(CentimetersToInches(2.54)==1));
      //there would be more tests for a less straightforward function
    }
    function Adapters_JoulesToCalories (ref System.Data.DataTable D) {}
    

    I also have a function that just prints a formatted list of all exported functions. That's just basic reflection and System.String.Format() calls.
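
    It's roughly this shape (just a sketch with made-up names; the real thing has more formatting in it):

    static System.String ListExports()
    {
      System.Text.StringBuilder SB = new System.Text.StringBuilder();
      System.Reflection.Assembly A = System.Reflection.Assembly.GetExecutingAssembly();
      foreach (System.Type T in A.GetExportedTypes())
      {
        foreach (System.Reflection.MethodInfo M in T.GetMethods())
        {
          SB.AppendLine(System.String.Format("{0}.{1} : {2}",T.FullName,M.Name,M.ReturnType.Name));
        }
      }
      return SB.ToString();
    }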

    So right now, running ZENITH.EXE verify prints either that text list or a dialog like this, depending on what I have commented out in Main():

    [screenshot of the current results dialog]

    I want to change that to output something more like two tables at once. In the first table, I want the name of the function, how many tests were run, how many succeeded, and how many failed. In the second table, I want the list of actual tests that were run.

    I'm not sure how it makes the most sense to display this:

    1. one window for each list
    2. one window with two full lists
    3. one window with a master-detail sort of view... which I have no idea how to do; I actually failed a job interview years ago because I couldn't get the detail table to change when the selected master row changed

    I'm also not sure if a DataSet with two DataTables and a DataRelation is a better choice than something more specific (two arrays of structs, or an array of structs with sub-item structs). I started with just separate DataTables, but I liked the idea of a DataSet because I could aggregate some stuff based on a child relationship. Then I figured out I'd need several computed columns in the child, because aggregates can only operate on bare columns (in other words, SUM(IIF(Status,1,0)) can't be used to count success/failure if Status is a boolean).
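
    For concreteness, the shape I'm kicking around is something like this (just a sketch; the column and relation names are placeholders, and the Pass/Fail columns only exist because the aggregates won't accept IIF()):

    System.Data.DataSet DS = new System.Data.DataSet("Verification");
    System.Data.DataTable Parent = DS.Tables.Add("Functions");
    Parent.Columns.Add("Function",typeof(System.String));
    System.Data.DataTable Child = DS.Tables.Add("Tests");
    Child.Columns.Add("Function",typeof(System.String));
    Child.Columns.Add("Description",typeof(System.String));
    Child.Columns.Add("Status",typeof(System.Boolean));
    //helper columns purely so the parent aggregates have a bare column to SUM()
    Child.Columns.Add("Pass",typeof(System.Int32),"IIF(Status,1,0)");
    Child.Columns.Add("Fail",typeof(System.Int32),"IIF(Status,0,1)");
    DS.Relations.Add("FunctionTests",Parent.Columns["Function"],Child.Columns["Function"]);
    //summary columns aggregate over the child relation
    Parent.Columns.Add("Run",typeof(System.Int32),"COUNT(Child(FunctionTests).Status)");
    Parent.Columns.Add("Passed",typeof(System.Int32),"SUM(Child(FunctionTests).Pass)");
    Parent.Columns.Add("Failed",typeof(System.Int32),"SUM(Child(FunctionTests).Fail)");
    //option 3 (master-detail) would then mostly be two BindingSources:
    //  new System.Windows.Forms.BindingSource(DS,"Functions") for the master grid
    //  new System.Windows.Forms.BindingSource(masterBindingSource,"FunctionTests") for the detail grid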


  • Considered Harmful

    What language is that?



  • @pie_flavor C#...mostly. Some JavaScript creeping in for the benefit of the forum's code formatter.


  • Discourse touched me in a no-no place

    @Zenith said in manually defined automated testing:

    I want to change that to output something more like two tables at once.

    Unless you're very, very eager to write GUI code and so on, it's massively simpler to dump verification output as text (or write a CSV). There's so much less work required, and you can easily hook up more analysis downstream. (You could also make it work within PowerShell; that'd still be fine.) Forcing a GUI into things makes integration far more annoying.
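
    Something like this is all it takes (column names assumed from your snippet above):

    //flatten the results table to CSV; anything downstream (Excel, PowerShell, CI) can read it
    static void WriteCsv(System.Data.DataTable D, string path)
    {
      using (System.IO.StreamWriter W = new System.IO.StreamWriter(path))
      {
        W.WriteLine("Function,Status");
        foreach (System.Data.DataRow R in D.Rows)
        {
          W.WriteLine("{0},{1}",R["Function"],R["Status"]);
        }
      }
    }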



  • @dkf But could you output HTML table code that could easily be pretty-printed/parsed by something else (like the browser)?


  • :belt_onion:

    @Benjamin-Hall said in manually defined automated testing:

    @dkf But could you output HTML table code that could easily be pretty-printed/parsed by something else (like the browser)?

    I'd do XML + XSLT + launch browser. Maybe I have overcomplicator's gloves on, but it seems like the best of both worlds: parseable and pretty output without having to write GUI code.
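
    Very roughly (a sketch; results.xslt is whatever transform you feel like writing):

    //write the results DataSet as XML with a stylesheet reference, then hand it to the default browser
    static void WriteReport(System.Data.DataSet DS, string xmlPath)
    {
      using (System.Xml.XmlTextWriter W = new System.Xml.XmlTextWriter(xmlPath, System.Text.Encoding.UTF8))
      {
        W.Formatting = System.Xml.Formatting.Indented;
        W.WriteProcessingInstruction("xml-stylesheet","type=\"text/xsl\" href=\"results.xslt\"");
        DS.WriteXml(W);
      }
      System.Diagnostics.Process.Start(xmlPath);
    }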



  • @dkf I actually like writing GUI code. A quarter to a third of this library is fixing some of the idiocy of WinForms. I have some pretty cool stuff in there, if I could stabilize enough to write more applications with it. Better than the garbage peddled by Infragistics and DevExpress anyway.

    It's not really a big deal in this case though. I want the test function to output some sort of object that can be fairly easily plugged into a DataTable (or RecordSet if I feel like being sadistic) or flattened out into a text file.

    The thing I don't like about the DataSet approach is that, for the aggregation part to work, I need 2 extra columns on the child table. Right now, I have function name, test description, and status. Because the COUNT() aggregator is too stupid to use IIF(), I have to have success and failure columns and use SUM() on them.

    An alternative is ditching the DataRelations altogether and manually looping through the parent, calling Compute(expression, filter) on the child. It's an ugly solution because it's not automatic or read-only like it should be. At that point, I don't know why I'd even bother using DataTables, other than that plugging them into a DataGridView is a million times easier than a list or array.
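
    In code it's something like this (sketch; assumes the same Parent/Child tables as my earlier post, but with Run/Passed/Failed as plain Int32 columns instead of expression columns):

    foreach (System.Data.DataRow ParentRow in Parent.Rows)
    {
      System.String Filter = System.String.Format("Function='{0}'",ParentRow["Function"]);
      ParentRow["Run"] = Child.Compute("COUNT(Status)",Filter);
      ParentRow["Passed"] = Child.Compute("SUM(Pass)",Filter);
      ParentRow["Failed"] = Child.Compute("SUM(Fail)",Filter);
    }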



  • This is the strangest thing ever. You're making tests that basically build into a GUI app that shows the results.

    I'd usually want test results output in some kind of text format. In dev, my IDE can parse that format and map it back onto the codebase, so I'd see little play icons above each test and a red/green indicator of what happened when it was last run. In CI, I'd see that output in the log, or have it sent in an email, or something.

    What is the advantage of doing it this way? Is this the standard in windows app development? (the last time I did windows dev, I had no tests, or version control for that matter)



  • @cartman82 Last I checked, most people use some form of test runner and VS has built in "run all the tests and tell you the output" functionality. Pretty complex ones as well.



  • @cartman82 This library is a little different. When compiled as an EXE, it has some built-in demos. Part of it runs these tests. Another part shows off custom controls. Otherwise it's just a DLL that fixes alot of stupid/half-baked stuff in the framework.

    Is this the standard in windows app development?

    Fuck if I know. All of the jobs out here are full stack JavaScript development (cloud or otherwise).

    What is the advantage of doing it this way?

    I like to be able to run the code and get a quick view of whether or not everything's behaving as I expect it to. I don't know how you all work but YAGNI sometimes actually applies for me so I need to be sure my assumptions are correct.

    @Benjamin-Hall said in manually defined automated testing:

    @cartman82 Last I checked, most people use some form of test runner and VS has built in "run all the tests and tell you the output" functionality. Pretty complex ones as well.

    Yeah, some version newer than mine has a test project template built in. I don't think I've ever worked anywhere that actually used it though. Where I discovered it, I was suddenly walked out on a Friday afternoon, so I didn't have time to take notes with me. They couldn't have used it on their spaghetti anyway. The module I was rescuing at the time had been built by somebody who had apparently just learned what inheritance and polymorphism were and used them as much and as often as he could. He was an imbecile.


  • ♿ (Parody)

    @Zenith said in manually defined automated testing:

    I use Visual Studio 2008 right now.

    I'm having a hard time moving past this. Why?



  • @Benjamin-Hall said in manually defined automated testing:

    @cartman82 Last I checked, most people use some form of test runner and VS has built in "run all the tests and tell you the output" functionality. Pretty complex ones as well.

    That's what I mean. Test runner spits out test output as text, and optionally IDE ingests it and annotates code based on it.



  • @Zenith said in manually defined automated testing:

    @cartman82 This library is a little different. When compiled as an EXE, it has some built-in demos. Part of it runs these tests. Another part shows off custom controls. Otherwise it's just a DLL that fixes alot of stupid/half-baked stuff in the framework.

    Ok, if you need to be able to click something and show off that everything works as it should, then it makes sense. More like a demo mode for the consumer than a test suite for the developer.



  • You need a really good reason not to use NUnit or MSTest for your automated tests. You get a nice UI (these days, right in the IDE) and it also makes it easy to integrate those tests into CI or whatever.
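
    For comparison, your centimeters check as an NUnit test is roughly this (class names guessed at), and the runner does all the counting and display for you:

    using NUnit.Framework;

    [TestFixture]
    public class AdapterTests
    {
      [Test]
      public void CentimetersToInches_OneInch()
      {
        //same assertion as the hand-rolled version, but the runner aggregates the results
        Assert.AreEqual(1.0, Adapters.CentimetersToInches(2.54), 0.0001);
      }
    }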



  • @apapadimoulis

    1. I like 2008.
    2. This PC is too cluttered to put newer versions of stuff on top (the search for my Windows CD does not go well).
    3. 2017 was like running underwater.
    4. SharpDevelop was not up to par last time I looked into it.
    5. Look, it's not worth the expense to update my entire tool chain right now. Having 2019 or 2021 up and running would be worth about as much as simply saying I do. It's a better ROI to get out of debt than to clear just one of several hipster hurdles.

    I think what's holding me up is that objects get me halfway and datatables get me halfway but they're both clunky out to their logical conclusions. I just have to pick one and settle for it.


  • Banned

    @Zenith said in manually defined automated testing:

    SharpDevelop was not up to par last time I looked into it.

    Because most (all?) SharpDevelop features were integrated into the IDE proper.

    @Zenith said in manually defined automated testing:

    Look, it's not worth the expense to update my entire tool chain right now.

    You're missing out on 12 years of language development. I don't know exactly what feature set the 2008 version of C# has, but there have been many nice additions in the meantime.



  • @Gąska 2008 compiles up to .NET 3.5 so it has more than you think. 2012 put me up to .NET 4.5 without too much trouble. Once I find that damned Windows CD I will put both on the replacement PC. I've read through much of the 4.0 reference source anyway.

    Most of the changes I didn't like or need. There's alot of stuff that works in .NET 2.0 without specialized classes from later frameworks. ActiveDirectory is one of them. I also wasn't fond of LINQ, largely due to being joined at the hip to Entity Framework, and I'd written the subset I needed myself anyway. Using var instead of declaring types can go jump in a lake of fire. If I wanted to do that I'd be a full stack JavaScript duhveloper.

    WPF might've been the problem with SharpDevelop. I don't think it had a form builder at the time. I still like WinForms better anyway. WPF had some interesting ideas, but only half-baked some of them, and, to rub salt in the wound, removed stuff from WinForms that I was using.


  • Banned

    @Zenith said in manually defined automated testing:

    There's alot of stuff

    [image]

    I still like WinForms better anyway.

    I'm sorry to hear that. I don't think I can help you, though.


  • Discourse touched me in a no-no place

    @Zenith said in manually defined automated testing:

    full stack JavaScript duhveloper

    That phrase grinds my gears. The people who use it aren't writing operating systems or firmware; how can they possibly be full stack?


  • Notification Spam Recipient

    @dkf said in manually defined automated testing:

    @Zenith said in manually defined automated testing:

    full stack JavaScript duhveloper

    That phrase grinds my gears. The people who use it aren't writing operating systems or firmware; how can they possibly be full stack?

    It's a term made up by contractors so that they could do Java and JavaScript badly and then charge more for it. Then the recruiters got hold of it, and anyone who's even looked at JavaScript is now full stack, without the remuneration.



  • @dkf said in manually defined automated testing:

    That phrase grinds my gears. The people who use it aren't writing operating systems or firmware; how can they possibly be full stack?

    The OS or firmware isn't generally considered part of the stack, any more than the hardware is. It's what you run your system on. I'd call myself a full stack application developer, because I know what I'm doing from the database up to the UI. Unless you're writing very low-level things, you can take the OS as a given.

    I do still think that 'full stack JS' is generally an oxymoron because (unless it's a node.js based system I guess) there must be some back end application logic and a data store somewhere, and you can't write that all in JS.



  • @bobjanova Referring to Node, who seriously looked at JavaScript in the browser and said "you know what? wouldn't it be awesome to use this crippledick language everywhere?"



  • @Zenith Someone who only knows JS, I guess. Although to be fair, for quick prototyping or semi-professional servers, it's actually really quick and nice to develop in, and there are advantages to having the same predominant language throughout the stack.

    It is possible to write good JS, it has testing libraries and so on. I don't object to node.js as much as you obviously do.


  • Considered Harmful

    @bobjanova said in manually defined automated testing:

    Someone who only knows JS, I guess. Although to be fair, for quick prototyping or semi-professional servers, it's actually really quick and nice to develop in, and there are advantages to having the same predominant language throughout the stack.

    Professionally I use C# and Java. All my hobby projects are JS (TypeScript, technically).


  • Banned

    @bobjanova said in manually defined automated testing:

    there are advantages to having the same predominant language throughout the stack.

    Sure, but do they outweigh being constrained to one thread per process on the server?


  • Discourse touched me in a no-no place

    @bobjanova said in manually defined automated testing:

    there are advantages to having the same predominant language throughout the stack

    Sure. It means you can hire frontend devs at frontend dev prices and browbeat them into doing backend work (badly) at the same time, while labelling the whole mess “full stack”. The whole thing falls over when you sneeze on it. Or indeed when someone in a galaxy far far away sneezes in its general direction. This is claimed to be a feature. Humanity deserved 2020.



  • @bobjanova Years ago, I was hired for a C#/ASP/SQL job that turned out to be Dynamics with X++ when I showed up. X++ looked disturbingly like JavaScript, a mishmash of primitives and half-baked "objects" with about seven library functions between them. I don't like languages that do that.

    JavaScript is alright for doing some trivial stuff inside a browser (due to absolutely zero alternate choices since the demise of VBScript). But compared to C# and Visual Studio, it's a toy. Even compared to VB, it's a toy. People have been putting up with thin clients for so long they don't know or remember what they lost from thick clients IMO.


  • Discourse touched me in a no-no place

    @error said in manually defined automated testing:

    All my hobby projects are JS

    All JS projects are hobby projects 🏆


  • Discourse touched me in a no-no place

    @Zenith said in manually defined automated testing:

    People have been putting up with thin clients for so long they don't know or remember what they lost from thick clients IMO.

    What was gained was being able to deploy things really easily. Pushing a new version of an SPA to clients is trivial (provided you handle state sensibly). But what was lost along the way?


  • Considered Harmful

    @Gąska said in manually defined automated testing:

    @bobjanova said in manually defined automated testing:

    there are advantages to having the same predominant language throughout the stack.

    Sure, but do they outweigh being constrained to one thread per process on the server?

    It's a shame that they don't have multithreading.



  • @dkf said in manually defined automated testing:

    @Zenith said in manually defined automated testing:

    People have been putting up with thin clients for so long they don't know or remember what they lost from thick clients IMO.

    What was gained was being able to deploy things really easily. Pushing a new version of an SPA to clients is trivial (provided you handle state sensibly). But what was lost along the way?

    First, I question the value of this based on context. While there's an argument for, say, a service like eBay, accessed by millions all over the world, way too many internal apps are web apps. I have firsthand experience seeing/writing apps used by ~50 people in the same bureau on the same floor of the same building. Choosing web over desktop in that situation is stupid (actually, I'd like to see the shell+library pattern more, but it's rare to see anything but "huge pile of JavaScript spaghetti" and "all logic in the window/form event handlers"). It explodes the number of moving parts that have to be supported but can't because somebody else controls them. And it feeds into SaaS which you may love when it keeps paying you but not the other way around.

    What was lost along the way?

    • any controls beyond a subset of the most basic offerings of Windows 3.1 from 28 years ago
    • actual error handling beyond page execution outright stopping
      -- there's a whole class of errors developers never see because it choked on the wrong side of the wire
      -- clients don't see anything they might be able to fix themselves (especially problematic when "support" is an Indian shrugging and closing the ticket)
    • ability to do much of anything with files
    • choice of client
      -- while there's little choice of OS today, it was more of one in the past, but that's further devolved into "thou shalt use Chrome and update every day or fuck off"
    • ability to do any work offline
    • ability to just let something that works alone and be confident it'll keep working
      -- you're on Chrome's update schedule and, if they fuck something up that you relied on, you get to find out when everybody else does and run around with your hair on fire until it's fixed
      -- see also the left-pad incident that broke a ton of websites a few years ago
    • ability to do compile-time checking
      -- could be because most JS developers I've seen are Indian but the volume of code pushed out that clearly does not and cannot function suggests they never run even basic tests (a compiler would at least stop some of it from making it into a build by virtue of not being able to create a new build)
    • ability to automate interaction or share functionality (possibly more of a power user thing?)
      -- my large employer was looking to buy a browser-clicking bot to automate tasks for lack of access/existence/knowledge regarding APIs
      -- difficult to leverage parts of one program/library for another program/library when they're both websites
    • anybody using WebForms or similar patterns lost useful response time because every fucking thing is a postback dragging megabytes of VIEW_STATE with it
    • code clarity was a casualty in many monstrosities, especially when JS is used to replicate (poorly) something that's a library call server-side
    • it raises the bar for the hobbyist/learner in that they have to set up IIS/Apache to write their Hello World

    Not everything must, or should be, a virtual mountain of JavaScript on a JavaScript website.



  • @dkf said in manually defined automated testing:

    But what was lost along the way?

    Sanity?



  • @Zenith said in manually defined automated testing:

    People have been putting up with thin clients for so long they don't know or remember what they lost from thick clients IMO.

    As a consultant, I've had to put up with some thick clients, and it's not fun.



  • @DogsB said in manually defined automated testing:

    @dkf said in manually defined automated testing:

    @Zenith said in manually defined automated testing:

    full stack JavaScript duhveloper

    That phrase grinds my gears. The people who use it aren't writing operating systems or firmware; how can they possibly be full stack?

    It's a term made up by contractors so that they could do Java and JavaScript badly and then charge more for it.

    Is it? I was under the impression it was a term made up by management so they could get two distinct roles covered on the salary of one employee.


  • Considered Harmful

    @Zenith said in manually defined automated testing:

    anybody using WebForms

    :laugh-harder: also :theres-your-problem:



  • @error Single-page frameworks have the same problem, though; they've just got it for a different reason. By putting the entire application into a single JavaScript file, they cause a slow page load whenever the browser needs to fetch it - at the very least, the first time a new visitor hits your page (first impression) and every time you update any little thing.

    I quite like web UIs, actually. But I don't understand why everything has to be web based, even when you're using the browser to simulate (badly) a desktop app. And I don't understand why single page apps are a good thing, even when you're simulating (badly) the behaviour of the browser back button, navbar etc.

    What's wrong with a page constructed server-side that has <script> tags referencing the JS you actually need for that page?

    Web apps are also a pain in the arse to test - I'm currently fighting Selenium tests to test our software, and it's impossible to write a Selenium test that isn't flaky. And yes browser updates occasionally break your application, if it's doing anything remotely clever.

