Debugging doesn't work like I want it to!


  • ♿ (Parody)

    @tar said:

    I've used it to track down the causes of odd crashes which take hours/overnight to manifest.

    I have a program where this is often the most useful reason. Not that it necessarily takes so long as you're saying, but stepping through a zillion cases to find the one I want (no, I don't have anything reasonable to use for a conditional break point) doesn't work.



  • @boomzilla said:

    I have a program where this is often the most useful reason. Not that it necessarily takes so long as you're saying, but stepping through a zillion cases to find the one I want (no, I don't have anything reasonable to use for a conditional break point) doesn't work.

    One of my major beefs with VS is how much of a pain in the rump it is to set a conditional breakpoint on a string (std::string or MFC CStringT, take your pick) when debugging C++ code...


  • BINNED

    @tarunik said:

    What do you suggest they learn as a systems language instead? Go gets picked on all over around here, C would probably draw a similar reaction as above, Pascal and Ada's descendants are redheaded stepchildren for good reason, and Forth (yes) is just too plain odd for most people who haven't touched HP RPN calculators.

    You were so close to the correct answer (bolded above).



  • Huh, what the hell are these languages they're using out there?

    Dunno boss, your fault for trying to start a school in this town. None of them match up.

    Well, they use weird languages, so if we teach weird languages it'll probably work out fine. Remember, someone used to C++ can probably learn C# on the job if they have to.

    So find other weird languages we don't know but have books for, and teach out of that.

    That's the plan. Oh look, we have a book on Java.



  • @antiquarian said:

    You were so close to the correct answer (bolded above).

    I have some verbosity concerns about Ada -- I have used VHDL before, and it drowns programmers in a swamp of keyword-noise.



  • @tar said:

    Constexpr lets you run arbitrary C++ at compile time to produce super-optimized assembly without needing to resort to template hackery. I don't see why that's not exciting.

    Because... I don't actually know what I'd use that for.

    It's totally possible that I don't recognize opportunities to use it because I don't have it, but I really don't know what I'd do with it.

    (I guess I have seen one suggested use. It was to have a printf-like function that evaluates the format string at compile-time, which would both let it avoid runtime parsing and typecheck arguments. I will admit that's pretty cool & useful.)

    @PJH said:

    He uses a debugger to step through code? That's at least showing signs of anti-WTF

    Maybe I am TRWTF, but I will actually defend printf debugging quite a bit. With the caveat that I also spend a fair bit of time in the debugger; both have their purpose. A log generated by printf debugging has several advantages over a debugger:

    • Better scanning. I am pretty sure that if I gave you 20 numbers to put in order, you would do it faster if you see a list of all 20 at once than if I gave you 20 index cards each with a number and told you you can only have a couple visible at a time. A trace is like the former: you can quickly see an overview of things. A debugger shows you one instant in time.
    • Time travel. If you have a log, you can easily search backwards in the log and see what happened before that point. If you're in a non-time-traveling debugger, you have to restart the program, which brings about a bunch of disadvantages. If you have a time-traveling debugger... well, I'm jealous. :-) I don't think they are very widespread yet.
    • Tracing values. If you have a log showing the value of variables, you can search that log for a value that you're interested in. I don't know how to do something equivalent with a debugger at all. The closest thing I know of is a watchpoint -- but that tells you when a location is read or written, not when a read results in a particular value or a particular value is written. If you had a time-traveling debugger you could trace where a value came from with reverse watchpoints, but then we're back to the scanning thing.

    @TwelveBaud said:

    When Java 5 came out they switched from semantic versioning (Java 2 Standard Edition, version 1.5) to marketing versioning (Java 5). They still use the semantic version in all the SDK and tooling, but all the branding uses the marketing version.

    And just for added funness, when Java 1.2 was released they stuck the 2 into "Java 2 Standard Edition", and, contrary to your comment, that persisted through 1.5 until it was removed for 1.6. So Java 5 was Java 2 SE 1.5 -- which makes three different version numbers.



  • @aliceif said:

    Tell him to put breakpoints on each line and click continue execution to go to the next 🚎

    Sometimes this is the only practical thing to do in the Verilog/SystemVerilog simulator/debugger we use. Step-over is prone to breaking deep inside the framework for no apparent reason. I think it may have something to do with the way it's multithreaded (hardware description languages are massively parallel — logically, if not in actual CPU threads), but I've never figured out what's really going on. It adds an extra level of challenge to debugging.



  • @EvanED said:

    C++...optimized assembly...

    Because... I don't actually know what I'd use that for.

    I think the argument goes something like this: if you don't care about what your code is compiling to at the assembly level, you might be better off writing your code in a less, ah, *challenging* language than C++, because that way you have many fewer opportunities to shoot yourself in the lower body, plus you get platform niceties like GC and JIT optimization and what have you...



  • @HardwareGeek said:

    Sometimes this is the only practical thing to do in the Verilog/SystemVerilog simulator/debugger we use.

    For Android NDK development, AFAICT, if you want to step through your executable, you have to *compile it with a while(1); loop at the start*, and break out of that after you've connected gdb and set up your breakpoints etc.



  • @tar said:

    I think the argument goes something like this: if you don't care about what your code is compiling to at the assembly level, you might be better off writing your code in a less, ah, challenging language than C++,

    Yeah, but I mean I don't know what I'd be able to have the compiler compute that would be useful.



  • @EvanED said:

    Yeah, but I mean I don't know what I'd be able to have the compiler compute that would be useful.

    I see. You can have all your string literals converted to djb2 hashes at compile time? That's probably not the best example though...



  • You know what keys are right under your fingers when you use a keyboard? The letters. Programmers need to realize that number of characters typed is a bullshit metric. if then end if is so much easier to read and to write than any of the alternatives. I just don't get where everyone else's predilection for stretching out their right pinky finger comes from.


  • Java Dev

    I disagree. Scanning for {} in a code file is easier to read. if then end if require actual reading, rather than recognizing a character, and are only easy with code highlighting and proper formatting. In which case it's the highlighting and formatting doing the work, rather than the identifier.



  • What are the use cases for scanning for braces? The only one I can think of is finding the one that closes the current block, which is far easier to accomplish when it doesn't involve picking one identical character out of a line-up.



  • @EvanED said:

    If you have a time-traveling debugger... well, I'm jealous. I don't think they are very widespread yet.

    Visual Studio has one.



  • @EvanED said:

    If you have a time-traveling debugger... well, I'm jealous. I don't think they are very widespread yet.

    VS has done this for like... 7-8 years. Maybe more.

    @EvanED said:

    The closet I know is a watchpoint -- but that tells you when a location is read or written, not when a read results in a particular value or a particular value is written.

    Sounds like you just need a conditional breakpoint. Unless I'm misunderstanding this rambling.


  • Discourse touched me in a no-no place

    The worst I saw was with an old skool FORTRAN developer I once worked with. We were coding in Java. Everything was indented by exactly 7 spaces, no more, no less. (This was back in the days of Java 1.1 so the state of tooling was crude.)

    Fortunately I didn't have to fix the case of everything. He must've been one of the lower case FORTRAN (“fortran”?) heretics…



  • @Buddy said:

    You know what keys are right under your fingers when you use a keyboard? The letters. Programmers need to realize that number of characters typed is a bullshit metric. if then end if is so much easier to read and to write than any of the alternatives. I just don't get where everyone else's predilection for stretching out their right pinky finger comes from.

    Then why'd mathematicians develop symbolic notation and shorthand abbreviations, instead of spelling out 'add', 'subtract', 'logarithm', and 'cosine' every time?

    Besides, that's not where the Ada/VHDL verbosity complex comes from (Forth's IF is an IF-ELSE-THEN and I don't complain about it, so I don't know why you're thinking of that example) -- it's a higher level problem where the language violates DRY in several ways.


  • :belt_onion:

    I personally find

    if(foo)
    {
        someObject.doBar();
    }
    

    much more readable than

    if(foo) then
        someObject.doBar()
    end
    

    But that may just be me.

    The use case for scanning for braces is something like nested if statements... It's much easier to count the open-close braces than it is to count the if-ends (or whatever the syntactic sugar is for end-if)


  • ♿ (Parody)

    @sloosecannon said:

    The use case for scanning for braces is something like nested if statements... It's much easier to count the open-close braces than it is to count the if-ends (or whatever the syntactic sugar is for end-if)

    Stuff like end if is useful when it's around other stuff like end for or end while. Whereas braces all look alike. /bracist



  • @PleegWat said:

    {}

    I've said it before, but: a block delimiter token which is more than one character long is an abomination.



  • @tar said:

    I've said it before, but: a block delimiter token which is more than one character long is an abomination.

    Agreed. Besides, then both VIM and Visual Studio can do parenthesis matching easily ( % and ctrl+} ) without knowledge of the source code language. (probably other editors too, but those are all I use)



  • Eclipse and Emacs both do it as well -- in fact, paren-matching is pretty much as old as the hills, as it's one of the things that keeps LISPers from going completely batty.


  • :belt_onion:

    @tarunik said:

    LISPers from going completely batty.

    Didn't know that was possible


  • BINNED

    If you think Lispers are crazy, you should spend some time on comp.lang.forth. Respected members of the newsgroup seriously advocate rolling your own basic data structures instead of using a library.



  • That libraries all incomprehensible are because only is.


  • BINNED

    The nerdy jokes thread is ↩ 🔄 ↕ over there.



  • Aside: Forgetting the redundant parens around the conditional is actually the most common mistake I make that no IDE can help me with, and one of the major reasons I love Ada (VHDL) syntax.



  • @Buddy said:

    Aside: Forgetting the redundant parens around the conditional is actually the most common mistake I make that no IDE can help me with, and one of the major reasons I love Ada (VHDL) syntax.

    Interesting -- although Forth (condition before IF) and Python (if condition: ) also deal with this without parens...



  • That had by far the highest time-spent-trying-to-read to number-of-words ratio I've seen in a while. :-)

    As for parens after if/while/etc., I agree there. I've actually come to wish that more languages were like ML or Ruby and didn't require them for function calls either. :-) (Though I'm sure that if, say, Python were changed by a genie to be that way, I'd regret that statement.)


  • ♿ (Parody)

    @EvanED said:

    and didn't require [parentheses] for function calls either.

    Eek. I guess you just have to optionally put them in when you use that in an expression? I don't like optional stuff like that, because it makes me think harder when I look at something.



  • @tarunik said:

    Then why'd mathematicians develop symbolic notation and shorthand abbreviations, instead of spelling 'add' 'subtract' 'logarithm' 'cosine' out every time?

    Pen and paper. The best languages for writing mathematical notation in are *tex, where the majority of symbols can be written as a string of alphanums, and a single, relatively easy to reach, command character.

    Besides, that's not where the Ada/VHDL verbosity complex comes from (Forth's IF is an IF-ELSE-THEN and I don't complain about it, so I don't know why you're thinking of that example) -- it's a higher level problem where the language violates DRY in several ways.

    My opinion is that the redundancy in the language is actually well considered and useful, as it makes it easier to orient yourself within a file without having to scroll up. And the way that strong typing is baked into the language, in violation of Chomskyan (seriously, how can someone so smart be so wrong about everything?) surface structure vs. deep structure doctrine, makes it possible to eliminate other, unnecessary types of redundancy -- for instance, being able to use unqualified enum member names without having to worry about namespace pollution.

  • ♿ (Parody)

    @Buddy said:

    soup wrong

    Sorry, but your whole post was gibberish.

    [spoiler]
    Actually, I heartily agree with what you said here.
    [/spoiler]



  • Ruby has a really great feature (in the same way that the worst excesses of C could be considered great): you can pass an anonymous method to a function with nothing to indicate that that's what's happening.

    It allows people to easily create what Ruby fans call “Domain Specific Languages”, and everyone else refers to as “what the fuck am I even looking at here”.

    This is an rspec test, from their home page:

    # bowling_spec.rb
    require 'bowling'
    
    describe Bowling, "#score" do
      it "returns 0 for all gutter game" do
        bowling = Bowling.new
        20.times { bowling.hit(0) }
        bowling.score.should eq(0)
      end
    end
    

    My best guess at what's happening is that it is just a really poorly named function that sets up the method-not-found handler to do an assert in certain situations—such as when the should method gets called—before running the code that was passed in.



  • @Buddy said:

    Ruby fans call “Domain Specific Languages”, and everyone else refers to as “what the fuck am I even looking at here”.

    :)

    @Buddy said:

    This is an rspec test, from their home page:

    Link doesn't work. Needs more http:



  • @HardwareGeek said:

    Link doesn't work. Needs more http:

    That got me so bad in the emoji thread the other day (4 edits!), and here I am doing it again...



  • @Buddy said:

    My opinion is that the redundancy in the language is actually well considered and useful, as it makes it easier to orient yourself within a file without having to scroll up. And the way that strong typing is baked into the language, in violation of Chomskyan (seriously, how can someone so smart be so wrong about everything?) surface structure vs. deep structure doctrine, makes it possible to eliminate other, unnecessary types of redundancy -- for instance, being able to use unqualified enum member names without having to worry about namespace pollution.

    Strong typing is useful -- unfortunately, Pascal descendants get it wrong in that they were left with a stunted type algebra. If you want strong typing, you need to have the ability to operate on types flexibly in order to avoid stunting expressiveness. (See: Haskell)



  • @Buddy said:

    My best guess at what's happening is that it is just a really poorly named function that sets up the method-not-found handler to do an assert in certain situations—such as when the should method gets called—before running the code that was passed in.

    That's somewhat what's happening. (There's no method_missing in there, though. It's more that describe is a function of one argument, which is a block. That block is evaluated in a context where there is a defined function called it of two arguments: the name of the test and the body of the test.)

    But to be honest, I don't even think the it thing is all that remarkable or poorly-named; the goal is to make your test descriptions read like a sentence, and "it returns 0 for all gutter game" is at least rather closer to that than something like def returns_0():. I'm not totally sold on that goal, but I think it's at least not overtly bad.

    I think much more {remarkable,awesome,WTFy} is the monkeypatching of object (or whatever Ruby calls it) to insert the should function, which allows you to say stuff like 2.should == 2 instead of assert_equals(2,2).

    That's the kind of thing that I have really mixed feelings about. On one hand, if you know what it means it gives a pretty natural description of what is going on. It's closer to how most people would state the requirement. On the other hand, if you don't know the language or what is going on, it seems like it has the potential to be extremely opaque.

    I think this happens a lot with DSLs. DSLs are great if you're familiar with the DSL, and awful if you're not familiar with the DSL. (As a general statement.)

    rspec seems like a particularly defensible case, because if you're going to be using rspec you'll probably be using it a lot -- not just in like one small part of your program where you need a library. So a bit of learning cost is worth it.

    Keep in mind that all of this is coming from someone who hasn't programmed in Ruby. I've just watched a couple seasons of Gary Bernhardt's screencasts, which are often in Ruby and use rspec a lot. :-)

    @Buddy said:

    that you can pass an anonymous method to a function with nothing to indicate that that's what's happening.

    Oh, and I'm not sure where you get the "nothing to indicate" part of that. The do/end, along with the fact that (I think) there's no other legal syntax of that form, tell you that is what is going on. That also to me seems quite reasonable.



  • How does rspec monkeypatch object, then? My understanding was that Rails accomplishes similar tasks by handling some kind of missing method error—hence why rails enums have to use foo.bar!, because foo.bar = true would fail silently (from the noob programmer's perspective) by creating an ordinary property on the object that can't invoke any ‘rails magic’—and I assumed rspec would do it similarly.

    @EvanED said:

    no other legal syntax of that form

    I realize that this doesn't excuse my error there, but if, while, and for can look like that, which is exactly the point that I wanted to make: that rspec files (and rails routes, and god knows what else) look like an entirely different language, and even taking a file in context within a project, it's not always clear what tf is going on, or even where to start looking to figure that out.



  • @aliceif said:

    Visual Studio has one.

    @blakeyrat said:

    VS has done this for like... 7-8 years. Maybe more.

    There are a couple reasons that I didn't mention this, and I still stand by my point, which I may explain later. But I want to learn more before I say too much. The latest Ultimate version I have is 2010 though. Has much changed since then?



  • @Buddy said:

    How does rspec monkeypatch object, then?

    I don't know enough to address most of what you say. What I do know is the following. One of those screencasts I mentioned above shows how to build the basics of rspec syntax: the describe/it thing for defining tests, and enough infrastructure to allow 2.should == 2 to pass and 1.should == 2 to fail. Here is all it is:

    class Object
        def should
            DelayedAssertion.new(self)
        end
    end
    

    where DelayedAssertion is a class defining == that compares the object passed to the constructor against the other argument of ==.

    @Buddy said:

    that rspec files (and rails routes, and god knows what else) look like an entirely different language

    That much I'll give you. And like I said, I waffle on how much of a good/bad idea I think that is, and it varies by situation.



  • raise MovieTheaterOnFire unless not movie_theater[:on_fire]
    


  • @blakeyrat said:

    Why is someone learning C++ in 2015?

    Cobol was offered as an elective at my university until about 2000. I know someone who took it but I didn't.

    Learning old things isn't always bad.



  • @Zemm said:

    Learning old things isn't always bad.

    Learning COBOL is though.



  • @EvanED said:

    I've actually come to wish that more languages were like ML or Ruby and didn't require them for function calls either.

    Scala has an infix notation feature, so "object method arg" is the same as "object.method(arg)". It's another one of them very sharp double-edged swords. It can be used well to make DSLs that are easy to read, but it makes procedural code less familiar. It's best used for configurations and other things of a declarative nature.

    Scala Specs2 library example:

    import org.specs2.mutable._

    class HelloWorldSpec extends Specification {
      "The 'Hello world' string" should {
        "contain 11 characters" in {
          "Hello world" must have size(11)
        }
        "start with 'Hello'" in {
          "Hello world" must startWith("Hello")
        }
        "end with 'world'" in {
          "Hello world" must endWith("world")
        }
      }
    }

    This example shows another scary Scala feature: implicit type conversions. 'in' and 'must' and 'should' methods on strings? Where did they come from?


  • I survived the hour long Uno hand

    @ben_lubar said:

    raise MovieTheaterOnFire unless not movie_theater[:on_fire]

    That reminds me: I hate how junit assertion failure messages work. Compare:

    assertTrue("Product not added to truck",sidebar.isSKUInTruck(productSKU));
    

    With:

    assert.equal(model.get("failCount"),1, "model has the wrong number of failures");
    

    It just flows better in qUnit :/


  • Discourse touched me in a no-no place

    I'd rather write that as:

    # It's nice to give tests a name; useful for reporting
    test model-123 "model has right number of failures" -body {
        model get -failcount
    } -result 1
    

    But that's me…


  • I survived the hour long Uno hand

    1. If you don't know the difference between a test and an assertion, there's no real point having a conversation, is there?
    2. Nope, that's even less readable than either example.

  • Discourse touched me in a no-no place

    @Yamikuronue said:

    If you don't know the difference between a test and an assertion, there's no real point having a conversation, is there?

    I think you should isolate tests to the point where you're making only one assertion per test case. Sure, that can be a bit annoying to set up, but having lots of checks mulched together into one testcase is long-term worse since it's harder to work out what went wrong from reports from the field.

    It's a wonderful thing to be able to tell exactly what is wrong from the pattern of test failures without having to open up a debugger at all.


  • I survived the hour long Uno hand

    @dkf said:

    only one assertion per test case

    I can live with that philosophy, but it's clunky sometimes. My philosophy is that each test should test one logical concept. If the concept is "the output is a valid PlayingCard", having an assertion for the suit and an assertion for the number in the same case is fine to me.

    But more importantly, I showed you a syntax for an assertion and you responded with protocol about naming test cases, which I did not show. And your syntax is horrible and you should feel bad.

