Programming Language Learning Curves



  • @Maciejasjmj said:

    Sigh. Nobody uses do...while these days.

    do...while doesn't help for cases like these unless you use a goto, and then you're basically at boomzilla's version just with the condition written on the other brace.

    @tar said:

    Filed under: a = b = c = d = 0.f;

    You don't need = as an expression evaluating to a value to get the ability to do that. Python allows a = b = 0 but not (a = b) + 2, because assignment is an actual statement.

    @tar said:

    Filed under: other people can actually write C

    I think all you have to do is browse the list of CVEs at MITRE to get proof that other people can't actually write C. (Or at least proof that there aren't enough people who can write C to even work on widely-used, security-critical software.)



  • @tar said:

    Oh great, it's the "Wah I Can't Write C Brigade". Again.

    It's not whether you can or can't write C. C requires great care and discipline to do properly. The never-ending stream of buffer overflow vulnerabilities on CVE are proof that simply "being good" isn't enough. Computers are supposed to automate the boring parts of business. Extra processor cycles are much better used to drive code compiled from richer languages.



  • @boomzilla said:

    @tar said:
    better

    Markdown actually gives us a hint. More *****s means better.


    char a = 'a'; // ok
    char *b = &a; // good
    char **c = &b; // getting there
    char ***d = &c; // better


  • ♿ (Parody)

    @tar said:

    Markdown actually gives us a hint. More *s means better.

        char a = 'a'; // ok
        char *b = &a; // good
        char **c = &b; // getting there
        char ***d = &c; // better
    
    Well...now we're on the same page.


  • @Jaime said:

    The never-ending stream of buffer overflow vulnerabilities on CVE are proof that simply "being good" isn't enough.

    I think that's just an indication that a lot of C programmers are, just like, really, sloppy and lazy.





  • @tar said:

    I think that's just an indication that a lot of C programmers are, just like, really, sloppy and lazy.

    Nope. Compare overflow vulnerabilities to SQL Injection. Injection peaked in 2008, and then dropped rapidly. Tools, techniques, and exposure fixed the problem. Overflow has been an issue forever, and it's only getting worse. It's hard to argue that C programmers are lazier than PHP programmers.



  • @Jaime said:

    It's hard to argue that C programmers are lazier than PHP programmers.

    When it comes to writing mad code that does crazy stuff, I'd say the PHP programmers are vastly more industrious...



  • @Yamikuronue said:

    In your .net example, where does the data that was read go?

    using(var dr = someCommand.ExecuteReader())
    {
      while(dr.Read())
      {
        var blah = dr.GetString(dr.GetOrdinal("blah")); // GetString takes an ordinal, not a column name
      }
    }

  • Java Dev

    While I'm fine with the while( (pointer = function(...)) ) {...} pattern, if I had to use getchar() to any degree I'd probably define a helper.

    int getbyte(int *c)
    {
        *c = getchar();
        return c == EOF;
    }
    
    int main()
    {
        int c;
    
        while( getbyte(&c) )
        {
        }
    }
    


  • Out of curiosity, why the difference? In your case I would really like to see an explicit != NULL comparison in there... is the ability to omit that why you are okay with the pointer version?


  • Java Dev

    I detest the separate != NULL as code clutter, and use ! instead of == NULL


  • BINNED

    @Jaime said:

    A good language makes it harder to write bad code.

    So you prefer Haskell? Ada also makes it harder to write bad code, and you have to do twice as much typing in the process. On the other hand, you can go back to something you wrote 6 months ago and actually read it.



  • @tar said:

    C was good enough for the Win32 API, and the Mac Classic. What more do you want?

    Mac Classic was in PASCAL.



  • ...but why are you okay with if ((pointer = f())) but not if ((c = getchar()) != EOF)?


  • Java Dev

    Not sure. Too many nesting levels, I think.

    And if you have to, at least make it if (EOF != (c = getchar()))


  • FoxDev

    @PleegWat said:

    EOF != (c = getchar()))

    aaaah... Yoda Conditions


  • Java Dev

    It keeps the logic closer together if the function on the right-hand side gets arguments. Also applies if you have a right-hand function whose value you're not assigning.



  • @PleegWat said:

    I detest the separate != NULL as code clutter, and use ! instead of == NULL

    Yes, because clarity is a horrible thing. If you use ! on something other than a boolean, you are doing something not entirely clear. I know that C doesn't actually have a boolean type, but that's because it was the right language thirty years ago.



  • @PleegWat said:

    I detest the separate != NULL as code clutter, and use ! instead of == NULL

    Yah, seriously, fuck (x != NULL). I'm tempted to replace it with ((x != NULL) != false) wherever I see it, so that it the code's even 'clearer' and more 'explicit'.

    (I mean, more garbage characters per line is better, isn't it?)



  • @blakeyrat said:

    Mac Classic was in PASCAL.

    Meh. How do I facts?



  • To be fair, after about 1990 or so it's safe to say most Mac development was done in C. But Mac Classic itself was written in PASCAL (and assembly).





  • @PleegWat said:

    I detest the separate != NULL as code clutter, and use ! instead of == NULL

    @tar said:

    Yah, seriously, fuck (x != NULL).

    I have mixed feelings about that.

    On one hand, I tend to be fairly dogmatically a strong-typing fan, and as a result I dislike the fact that you can say if (p) in the first place. (As mentioned above, I like the Java treatment, where if takes an explicit Boolean type and nothing more.) So from that perspective, part of me likes including the == NULL or != NULL because it turns something that isn't a Boolean into something that actually is morally a Boolean (and in C++, it is a Boolean). (And this is why ((x != NULL) != false) is a thoroughly unconvincing analogy to me, because x != NULL already was morally Boolean but x wasn't.)

    On the other hand, I feel like at that point I'm fighting the language, and the "when in Rome" part of me really _dis_likes doing things that seem to be very unidiomatic C/C++.

    So I don't think I'm particularly consistent in what I actually do, unfortunately. I think I would almost always say if (p) or if (foo()), and usually say if (!p) and if (!foo()). But the less obvious that the thing being tested is a pointer is, and the more complex the expression, the more likely I am to be explicit. if ((p = foo())) is fairly weird, and that's why I'd like to see it there.



  • There's always the argumentum-ad-compound-expression:

    if((a && !b) || c) {
        /*...*/
    }
    

    vs

    if(((a != NULL) && (b == NULL)) || (c != NULL)) {
        /*...*/
    }
    

    @EvanED said:

    if ((p = foo())) is fairly weird, and that's why I'd like to see it there.

    I actually sometimes quite like this idiom in C++:

    if(SomeType *p = obj->get_some_type_ptr()) {
        // do stuff with p...
    }
    // p is out of scope

  • Java Dev

    I only do it where the 'falsey' case is different already. I wouldn't say if (arraylen). But I do say if(p) or if(f()) where f() returns 0 on success and the if() is handling the error case. It's kinda hard for me to quantify as I think about it though.



  • @tar said:

    There's always the argumentum-ad-compound-expression:

    I have three counter-"arguments." (Like I said, I have mixed feelings.) First, you have simple expressions that are being tested for NULLness; in those cases, I probably wouldn't use explicit == NULL as stated above. If they were more complicated, I think I would still prefer the explicit NULL in the compound statement (and then format it better to help readability). Second, I don't even find your second case less readable. Third, I don't necessarily think that some expression necessarily "needs" to be written the same way in different contexts (i.e. right under an if or as an operand to &&), though I can't think of another example where I'd do something different other than whitespace & parentheses.

    @tar said:

    I actually sometimes quite like this idiom in C++:

    I am actually pretty fine with that, for a number of reasons; that's a case where practicality definitely overrides my somewhat dogmatic strong-typing preference.

    @PleegWat said:

    I only do it where the 'falsey' case is different already. I wouldn't say if (arraylen). But I do say if(p) or if(f()) where f() returns 0 on success and the if() is handling the error case. It's kinda hard for me to quantify as I think about it though.

    This is another good point that I didn't think about when I was writing above, which is that saying that pointers aren't "morally Boolean" is a little bit incomplete, because a foo* is, in some sense, more or less a maybe<foo>, and that's kinda sorta Booleanish, which is another reason that I tend to ignore my dogma in most practical cases. Contrast to something like int len = strlen(s); if (len != 0) ...; in that case, the length is a lot less a maybe<int> and basically just a straight int, so I definitely would prefer to see it there even though len is a simple check.


  • Java Dev

    @EvanED said:

    Contrast to something like int len = strlen(s); if (len != 0) ...

    I had that in my example, then realized I'd probably write if (*s) over if (0 != strlen(s)) or if (strlen(s)). Though I would likely go for your version if I did need the length further on in the code.



  • @PleegWat said:

    I had that in my example, then realized I'd probably write if (*s) over if (0 != strlen(s)) or if (strlen(s)).

    Yes, definitely1; I was assuming that the length was used later.

    1 Well, it'd be if (s[0]) for me... interestingly now that I write that, probably not if (s[0] != '\0')


  • Java Dev

    @EvanED said:

    Well, it'd be if (s[0]) for me

    I might pick whether to do pointer arithmetic or array indexing further on, but I'd never write an empty check as if(s[0]). If I did use a (nonzero) array index I'd almost certainly do if(s[19] != '\0'), although that would probably be a compound condition.



  • package main
    
    import (
    	"bufio"
    	"fmt"
    	"os"
    )
    
    func main() {
    	scanner := bufio.NewScanner(os.Stdin)
    	for scanner.Scan() {
    		fmt.Println(scanner.Text()) // Println will add back the final '\n'
    	}
    	if err := scanner.Err(); err != nil {
    		fmt.Fprintln(os.Stderr, "reading standard input:", err)
    	}
    }
    


  • @PleegWat said:

    int getbyte(int *c)
    {
    *c = getchar();
    return c == EOF;
    }

    If the pointer to c is ever equal to -1, I'd worry about the cases where it starts not equal to that and somehow changes position during the loop.



  • You misspelled buffalo.



  • @EvanED said:

    while ((c = getchar()) != EOF);

    It's the 21st century. We have generators now. Besides, for (int c = getchar(); c != EOF; c = getchar()).



  • //import mozzarella
    import real_mozzarella
    

  • Discourse touched me in a no-no place

    @ben_lubar said:

    If the pointer to c is ever equal to -1, I'd worry about the cases where it starts not equal to that and somehow changes position during the loop.

    Clearly there's a volatile keyword missing...


  • Java Dev

    not replying is a barrier to high post counts.

    I dry-code snippets I post here. If that were real code, I'd have compiled it, and the compiler would've told me about comparing a pointer to an integer, and I'd have added a *.


  • Java Dev

    @Buddy said:

    Besides, for (int c = getchar(); c != EOF; c = getchar()).

    I've done that with strtok(), but duplicating the function call doesn't scale well with more complex function invocations.


  • Discourse touched me in a no-no place

    @PleegWat said:

    I've done that with strtok(), but it duplicating the function call doesn't scale well with more complex function invocations.

    It's pretty reasonable in my experience. Going to a for is often justified anyway though, as it's not uncommon for an iteration API to have one call to initialise and another to step.


  • Java Dev

    @dkf said:

    It's pretty reasonable in my experience. Going to a for is often justified anyway though, as it's not uncommon for an iteration API to have one call to initialise and another to step.

    True, but that's a different interface. We use linked lists a lot so I know what for() is useful for.



  • Well, as I said, I was in the third grade when I learned C. I was learning what the = sign means in class, and then saw it getting abused. Of course it was a WTF moment. By the time I was in college, I saw the simpler models of computation (finite state machines, in particular) and understood that the imperative model defines a functor on states, and that what programmers call "variables" query the state, etc. Imperative programming languages (the fragments of languages that put things in steps) are monads.

    That said, things that aren't sequences of steps shouldn't be expressed as sequences of steps. Conversely, things that are, should. A good programmer is one that knows the difference and knows both models well enough to be effective. Ignoring one model in favor of the other is just tying your own hands.



  • Made me laugh, but I think the original productivity of JavaScript is a little bit overestimated, while the pride is grossly underestimated.



  • @Captain said:

    I was learning what the = sign means in class, and then saw it getting abused.

    Or not. In C and its progeny it means "set this thing equal to that thing", and they use == to mean "compare this thing to that thing and see if they're the same".

    Earlier, in the days of Algol and its little brother Pascal, a plain = meant "compare this thing to that thing" while := was used for assignment. Never did understand why C and company had to switch it around.

    Other languages had other ways of distinguishing the two meanings of "is it equal?" and "set it equal"; Fortran used = for assignment and .EQ. for comparison; APL preferred = for comparison and a left-pointing arrow for assignment. Even the original BASIC required the "LET" keyword to ensure that you knew you were setting a value and not comparing for equality.

    Only PL/1, alone among all the languages I've worked in, used the same symbol for both and left it to context to tell them apart, hence my earlier a=b=c example.



  • Or not. In C and its progeny it means "set this thing equal to that thing", and they use == to mean "compare this thing to that thing and see if they're the same".

    And it meant something completely different for about 450 years, and still does, to everybody outside of programming (very likely including your boss). Equality is a declarative statement, not a verb or a question. Because of its declarative semantics, equality is reflexive, symmetric, and transitive. Assignment is none of these. Even == isn't symmetric or transitive, if the language has assignment.

    The irony is that using the = symbol to mean "setting" and "testing" is actually closer to every other use in the world, which doesn't confuse anybody.


  • Discourse touched me in a no-no place

    @Captain said:

    Even == isn't symmetric or transitive, if the language has assignment.

    While I understand and pretty much agree with your points, I'm a little surprised by this one. What is the basis for it?

    Sure, things can get complicated if you've got assignments on one or both of the sides, but that's because of the semantics of assignment more than anything else, and it's well-defined semantically in most languages (initially in terms of small-step operational semantics, but you can derive a denotational semantics from that). The main exceptions are C and C++, which don't define the evaluation order (so allowing extensive shenanigans and letting bad programmers make some awful mistakes).



  • The original problem I mentioned wasn't the interpretation of = as assignment or equality, but the permanency of the meaning. For example, once you know that F=ma in an inertial reference frame, you can calculate future values as long as you have two of the three.

    In math, x = 1 means that x has always and will always equal one, regardless of changes in other variables, for the current definition of x. If programs were expressed the same way math is expressed, then CursorPosition = UPPER_LEFT would mean that the cursor stays there. CursorPosition = UPPER_LEFT + t * CHARACTER_WIDTH would mean that the cursor is slowly moving right. And...
    x = 1
    y = 2
    x = y
    ... is nonsense.


  • BINNED

    Wouldn't that actually be := ?


  • Discourse touched me in a no-no place

    You should look up Static Single Assignment form. Once that transform is applied to your code, the meaning of equality is much simpler and the meaning of assignment is clearer. (It's also one of the most important steps in how modern compilers actually work.)



  • That's a compiler optimization that improves a program's memory footprint. It doesn't change the semantics of the language.



  • @Onyx said:

    Wouldn't that actually be := ?

    Maybe in higher-level formal math. High school algebra students only recognize =.

