Representative constant



  • Testing code requires some testing data. In our unit tests, this data is stored as constants in a consts namespace. Thanks to the "no magic values" rule, we often have things like const int num_devices_1 = 1; const int num_devices_2 = 2; etc. But that's normal. The values in the test data often don't need to make sense either, since most of our app works like "read something from this interface and pass the same thing to that interface". Sometimes it's "depending on the value, do one of several things", but that's surprisingly uncommon.

    We also have tons of enums in our codebase. Most enums are in this form:

    enum EThisParticularThingState
    {
        EThisParticularThingState_Enabled,
        EThisParticularThingState_Disabled,
    };
    

    The guy who wrote unit tests for some particular class had the rules from the first paragraph in mind, so he created an appropriate constant (forgetting that the "magic value" rule isn't enforced for enum values, because enums are constants themselves). And because that enum was one of those things we don't care about, he could initialize it with anything and it would work the same. And so he wrote:

    const EThisParticularThingState this_particular_thing_state = static_cast<EThisParticularThingState>(42);
    


  • @Gaska said:

    Thanks to the "no magic values" rule

    TRWTF is dogmatic enforcement of the no-magic-values rule, and especially enforcing it in a way that puts the constants in a separate place. In most cases it means everybody has to look somewhere else for the actual value, while bringing absolutely nothing for readability when the constant is used in only one function, as is usual in tests (once in the initialization and once in the assertion, both in the same test method).

    By the way,

    @Gaska said:

    const EThisParticularThingState = static_cast<EThisParticularThingState>(42);

    seems to be missing a variable name.



  • @Bulb said:

    TRWTF is dogmatic enforcement of the no-magic-values rule, and especially enforcing it in a way that puts the constants in a separate place.

    Sometimes it makes sense, especially if your tests are 90% identical and you refactor out the common part. But it shouldn't be a strict rule.

    @Bulb said:

    seems to be missing a variable name.

    Fixed.



  • @Gaska said:

    Especially if your tests are 90% identical and you refactor out the common part.

    I would still probably put it at the beginning of the test suite or test fixture that uses it, not one big central place.

    The other thing is that 90% identical tests likely crossed the point of diminishing returns long ago.



  • @Bulb said:

    I would still probably put it at the beginning of the test suite or test fixture that uses it, not one big central place.

    They are all local to the test suite. Each test suite begins with a consts namespace (sometimes named constant or constants instead). Code completion hates it, since many of those consts are common to half of the test suites in a given module.

    @Bulb said:

    The other thing is that 90% identical tests likely crossed the point of diminishing returns long ago.

    Our bonus depends on line coverage, and we have a lot of ifs.



  • @Gaska said:

    Our bonus depends on line coverage

    That might be TRTRWTF.



  • @Bulb said:

    That might be TRTRWTF.

    Actually, I'm glad this is the case. Other divisions don't have it, and their coverage is less than 20%, with large parts of the codebase not tested at all. Though the 92% goal seems too high, especially since gcov gives a great many false negatives and some paths of execution are outright impossible.



  • @Gaska said:

    enum EThisParticularThingState
    {
        EThisParticularThingState_Enabled,
        EThisParticularThingState_Disabled,
    };
    

    I do not care for this at all. I suppose at least it should be easy to turn into a C++11 `enum class`.

    If it was _my_ codebase, I would probably do something like this. At least now I don't need to spam `ParticularThing` over and over and over...

    namespace ParticularThing {
        enum EState {
            ENABLED,
            DISABLED,
        };
    }
    //...
    using namespace ParticularThing;
    EState state = static_cast<EState>(42); // for justice!



  • @tar said:

    C++11 enum class.

    If you are lucky enough to be able to use a C++11 compiler, that is. Some of us are stuck with Visual Studio 2008, because the later ones are missing the Windows Embedded (formerly CE) target, and with gcc 4.6, due to a bug in newer Android toolchains affecting Android < 2.3¹



  • I fully expect that C++1y will be standardized and published before C++11 gets widespread-enough compiler support to make it make sense to update all your codebase belong to it.



    C++14 exists already. It is a much smaller step than C++11 was; mainly it generalizes things introduced previously.

    The interesting things, concepts (a way of declaring what a template requires of its parameters) and modules (getting rid of headers!), were postponed for C++1z, tentatively planned for 2017, and I have not checked recently whether they are still in the game or got postponed again.

    C++11 does have support from all three compilers that remain in the game: gcc, clang and MSVC. Unfortunately, embedded (including mobile, which is not really embedded any more these days) platforms are often stuck with ancient versions of them.



  • @Bulb said:

    concepts

    So, C++35, then? :trolleybus:



  • @Gaska said:

    Though the 92% goal seems too high, especially since gcov gives a great many false negatives and some paths of execution are outright impossible.

    I'd love to see a checkin comment like this some day:

    "Removed hard-to-test error handler to improve coverage metric"


  • Discourse touched me in a no-no place

    @tar said:

    So, C++35, then?

    Or Haskell or Ada right now.



  • Or hopefully Rust soon (they released 1.0.0-alpha two weeks ago).



  • @antiquarian said:

    Haskell

    Haskell's type system seems interesting and powerful, if slightly bonkers in parts. (How do I monads?)

    @Bulb said:

    Rust

    If they can capture the "look & feel" of C++ while fixing even some of its more obnoxiously broken aspects (how do I validate a pointer?), I think Rust has a great future ahead of it.


  • Discourse touched me in a no-no place

    @tar said:

    Haskell's type system seems interesting and powerful, if slightly bonkers in parts. (How do I monads?)

    From IRC via my quote file:

    edwardkmett: Most monad tutorials are written by people who barely understand monads, if at all, but unfortunately nothing can stop someone from writing a monad tutorial. We've tried, there was blood everywhere.



  • @tar said:

    Haskell's type system seems interesting and powerful, if slightly bonkers in parts. (How do I monads?)

    Monads are not really a type-system thing. They are just a single type class (most other languages would call it an interface), Monad, plus some syntactic sugar for chaining them. The mind-bending property of monads is that they are used to express sequentiality, but the interface itself does not say anything about sequence, and in fact most collections (including lists) fit it.

    @tar said:

    @Bulb said:
    Rust

    If they can capture the "look & feel" of C++ while fixing even some of its more obnoxiously broken aspects (how do I validate a pointer?), I think Rust has a great future ahead of it.

    It looks very promising. The initial proposal was very complicated and convoluted, but they simplified the cruft out and it now looks quite elegant. It still has a bunch of smart pointers that you have to choose manually, but it statically checks that your resource handling is sound, can do everything C++ can, and adds a bunch of good ideas from other languages, especially the trait system, which is similar to Haskell type classes: you define interface implementations for types after the types themselves. That replaces function overloading in a more object-oriented and manageable way, and unlike Go's automatic implementation it remains explicit.


  • Discourse touched me in a no-no place

    @Bulb said:

    The mind-bending property of monads is that they are used to express sequentiality, but the interface itself does not say anything about sequence, and in fact most collections (including lists) fit it.

    Maybe is also a monad, which allows you to do cool things like flow a nullable value through multiple operations and have it do the right thing automatically. If any of the operations would result in Nothing, that's the result.



  • @antiquarian said:

    Maybe is also a monad

    True. Forgot about that one.

    Actually, when I was trying to comprehend Haskell code, I had more problems with the $ operator and with the abuse of currying to omit arguments from function definitions.



  • @tar said:

    I suppose at least it should be easy to turn into a C++11 enum class.

    It would be, if not for several layers of managers who decided on the shape of this interface. All those enums come from autogenerated adapters for one of several homebrew inter-process interfaces that are defined by negotiation (sometimes even primitive barter).

    @Jaime said:

    I'd love to see a checkin comment like this some day:

    "Removed hard to test error handler to improve coverage metric"


    More often it's terrible tests of the form "if I call this private method with these parameters, the internal state of this object should become like that". It makes refactoring very tiring.

    @tar said:

    If they can capture the "look & feel" of C++ while fixing even some of its more obnoxiously broken aspects (...), I think Rust has a great future ahead of it.

    It does both even now.

    @tar said:

    how do I validate a pointer?

    You don't - the compiler guarantees that pointers always point to a valid object. One does not simply make a dangling pointer: except in unsafe blocks, it's forbidden to make a pointer that isn't provably valid at all times in all code paths. Also, no two mutable pointers to the same object can exist at any given time.

    @Bulb said:

    It still has a bunch of smart pointers that you have to choose manually

    Except it's all in libraries now. The core language has only two kinds of pointers - & and &mut (and corresponding raw pointers if you're going unsafe).

    @Bulb said:

    object-oriented

    Ha, ha, ha. Rust doesn't even have constructors, let alone class inheritance!



  • @Gaska said:

    You don't - the compiler guarantees that pointers always point to a valid object.

    That was (sort of) my point, although I made it in a very obtuse and obscure way. Time to check out Rust, it seems to me...



  • @tar said:

    Time to check out Rust, it seems to me...

    Just a word of warning: the borrow checker still is fucking annoying.



  • @Gaska said:

    Except it's all in libraries now. The core language has only two kinds of pointers - & and &mut (and corresponding raw pointers if you're going unsafe).

    Like in C++ ;-). That's really the important part: things like the garbage collector are fully opt-in, so you can use them when they are convenient and avoid them when they could cause trouble. Not like Go, which built features into the core language (maps, and the way it did arrays) that simply can't exist without the collector. There are already a bootloader and some toy operating systems written in Rust, proving it is actually usable in a non-hosted environment.

    @Gaska said:

    @Bulb said:
    object-oriented

    Ha, ha, ha. Rust doesn't even have constructors, let alone class inheritance!

    That's what object-oriented evolved into. Go does it the same way (except its interfaces are implicit, matched by signature, while in Rust the traits have to be declared explicitly).

    Constructors it kind of has: I believe¹ you can force construction through a function. Inheritance is for traits only, but that's where you need it. Modern object-oriented programming frowns on inheriting classes anyway.



  • @Gaska said:

    Though the 92% goal seems too high, especially since gcov gives a great many false negatives and some paths of execution are outright impossible.

    That doesn't seem high to me, but hardware is a bit different than software; we generally can't patch stuff in the field. (If we're lucky, we might be able to work around a hardware bug in software.) Consequently, our coverage requirements are more stringent. 99.5% is the minimum coverage on the software model of the chip hardware for it to be sent to manufacturing. One of the significant activities near the end of any project is figuring out which impossible and don't-care things can legitimately be excluded from coverage.


  • Discourse touched me in a no-no place

    @HardwareGeek said:

    hardware is a bit different than software; we generally can't patch stuff in the field

    And that's one of the key reasons why hardware engineering is different from software engineering. Software engineers could do things to the same standard as hardware engineers, it's hardly impossible, but nobody is willing to pay for it (outside of certain safety-critical areas) and getting things shipped sooner makes better business sense.



  • Generally I agree, though hardware doesn't have to deal with data serialization and dozens of layers of abstraction, so it hardly ever (if ever) has impossible execution paths. Also, I bet you have coverage measuring tools that don't suck ("why is this closing brace uncovered?", "I'm using this class many, many times - why does it say the constructor is uncovered?", "assertion uncovered? Really?" etc.).



  • @Gaska said:

    it hardly ever (or even never) has impossible execution paths

    No, impossible execution paths are certainly possible, at least in the software model that is used to create the hardware - e.g., an if statement or a default case on a switch whose only purpose is to report an error because that logic's input conditions were violated, or, even worse, just to silence the lint tool's complaint that the switch has missing cases; you don't want those paths to be taken. There are pragmas the coders can use to exclude such things, but typically nobody thinks about stuff like that until the end, when we testers are under pressure because our test coverage is too low.

    @Gaska said:

    "why is this closing brace uncovered?"

    Seriously?

    @Gaska said:

    assertion uncovered?

    In our tools, assertions are checked to see whether the assertion expression has been evaluated, as well as whether it passed or failed.



  • @HardwareGeek said:

    Seriously?

    Yes, all the time. gcov is really terrible in this matter, and it's not just us - the whole internet has this problem.

    @HardwareGeek said:

    In our tools, assertions are checked to see whether the assertion expression has been evaluated, as well as whether it passed or failed.

    Same here, except that gcov doesn't always believe us that the assertion was evaluated, even if the line immediately before and the line immediately after were executed. It's really marvelous how many false negatives it reports.



  • @Gaska said:

    Same here, except that gcov doesn't always believe us that the assertion was evaluated, even if the line immediately before and the line immediately after were executed. It's really marvelous how many false negatives it reports.

    Isn't assert() commonly a macro? You're probably not touching the internal branch where the assertion fails.



  • @PleegWat said:

    Isn't assert() commonly a macro? You're probably not touching the internal branch where the assertion fails.

    It's actually the case. Still, why does it mark the macro expansion site as uncovered? Even if it somehow makes sense to the tool, considering the internal implementation details of the compiler, it's still fucking stupid.



  • @Gaska said:

    gcov is really terrible in this matter

    Fortunately, when I have to write tests in C to run on the embedded CPU, nobody gives a [bleep] about coverage in the C program, just whether it exercises interesting parts of the hardware model. Which is good, because we can't use anything like gcov anyway.

    The downside is that we also can't use anything like an IDE or even gdb to debug the test code. We can't even use printf debugging, since there's no console or file system attached to the simulated embedded CPU. We're limited to things like writing to a specific memory location and/or examining the simulated CPU's instruction pointer and registers as it executes the code.



  • @Gaska said:

    It's actually the case. Still, why does it mark the macro expansion site as uncovered? Even if it somehow makes sense to the tool, considering the internal implementation details of the compiler, it's still fucking stupid.

    If they thought about it at all, this likely works best in the majority of practical cases. TRWTF is not handling assert() as special.



  • @PleegWat said:

    If they thought about it at all, this likely works best in the majority of practical cases.

    No it doesn't. Macros can be split into three groups - compilation flags, declarations, and inline functions. Not marking them as covered makes no sense for any of these three categories.



  • I meant that they're probably tracing the branches of the ternary separately.

    Looking at my copy of assert.h, it's a ternary across 3 lines. Likely the gcov instrumentation is added after macro expansion, so there are 3 lines in the compiler's view of the code. Coverage data is collected for each separately, and when gcov displays the data it merges it all back into one line, which it marks uncovered because one of the 3 expanded lines was uncovered.



  • Probably. Still stupid.

    BTW, the reason for uncovered closing braces is probably that the nonexistent else clause hasn't been triggered.



  • It was never my intention to dispute the fact it's stupid.



  • @Gaska said:

    Our bonus depends on line coverage

    @Gaska said:

    Same here, except that gcov doesn't always believe us that the assertion was evaluated, even if the line immediately before and the line immediately after were executed. It's really marvelous how many false negatives it reports.

    Stop writing assertions in your tests. Problem solved.



  • @PleegWat said:

    Looking at my copy of assert.h, it's a ternary across 3 lines. Likely the gcov instrumentation is added after macro expansion, so there are 3 lines in the compiler's view of the code.

    Macro expansion usually happens around the same time that \-continued lines are joined together, so an expanded macro should be on a single line in the preprocessed source code (try gcc -E (I think) to get a feel for it). This is why __LINE__ can expand to the same value over a single instance of a macro.



  • @Maciejasjmj said:

    Stop writing assertions in your tests. Problem solved.

    There's also code review.

