How can we increase adoption of C/C++ alternatives?



  • @Jaloopa said:

    And then you are doomed to never again STFU about Lisp

    Lisp isn't exactly alone at the top of the language power web...


  • Discourse touched me in a no-no place

    @ben_lubar said:

    A good language is Turing-complete. Go is Turing-complete. Any questions?

    Yes. Why didn't you finish the syllogism? Is it because the obvious conclusion is a fallacious form?


  • Discourse touched me in a no-no place

    @Gaska said:

    Also, fucking nested quotes how do they work?

    They don't.


  • Discourse touched me in a no-no place

    @riking said:

    I don't know of any valid, portable, plain C source code that compiles to the x86_64 SYSCALL instruction.

    __asm { db 0x9f } ?


  • Java Dev

    I recommend you reread the post above @riking's you quoted. 🎏 for woosh.


  • BINNED

    @tarunik said:

    Lisp isn't exactly alone at the top of the language power web...

    Agreed. Forth and Haskell enthusiasts have the same problem, except worse because Haskell has a reputation of being an academic language that isn't suitable for production applications, and people just think you're crazy if you use Forth (though if you spend any time on comp.lang.forth you'll think that assessment is entirely justified).


  • BINNED

    @antiquarian said:

    Forth

    I read that as Froth. Every. Fucking. Time.


  • Discourse touched me in a no-no place

    @PleegWat said:

    I recommend you reread the post above @riking's you quoted.

    I did. I thought it was obvious I was joking, but you are free to attempt to corral a couple of other flaggers.



  • @Gaska said:

    No, it means the exact opposite - it's a BAD DESIGN DECISION for a systems language. If your language has a hard-coded feature that heavily depends on the runtime implementation, you can't do a runtimeless build anymore, no matter what. It becomes impossible. Your language becomes useless everywhere you can't run your runtime - for example, a barebone PC without an operating system in place.

    If your language semantics specify certain behavior, the compiler can output any code it likes that conforms to that. C is bad because it's over-specified on the parts that count and under-specified on things that no longer make life any easier for compiler writers. By including garbage collection and synchronization primitives, languages enable more optimizations at a lower level than C does. For instance: hardware garbage collection or transactional memory, which provide a performance boost to all code written in the language, not just code custom-built to take advantage of those facilities.

    @Gaska said:

    When did I say anything about performance? (The Java optimizing compiler remark doesn't count because it's a different discussion.) But to answer your non sequitur, Rust has both memory safety and low-level access. *And* it doesn't depend on its runtime.

    You're comparing apples to oranges. Rust is a low-level language for systems programming; Go is a high-level language for distributed applications. Go's semantics are appropriate for its domain; the human brain is not well-suited to reasoning about memory allocation in a complex parallel program.


  • @Gaska said:

    Yes. malloc() for instance. And many other memory-related syscalls.

    And, as established, the kernel can handle those. (That's... almost a tautology.)



  • @Buddy said:

    transactional memory

    From what I've heard, transactional memory isn't that much about performance(*), but more about making it easier to write concurrent code correctly.

    The point being that people researching TM don't expect TM to replace current solutions (explicit locking and/or lock-free patterns) in performance-critical situations just now. So, while it may boost performance of all code on average, it may be a bad choice in an environment where people expect explicit control over this stuff.

    (*) IIRC it only is guaranteed to improve performance if the locking it replaces was overly pessimistic and if there's not too much contention. With a lot of contention, TM is apparently pretty expensive.



  • Fair enough. The point was more about the paradigm shift away from giving the programmer the freedom to specify exactly what happens, at the cost of limiting the types of optimizations that can be done below the source code level. It's a process that has been happening since computers were invented, and I am arguing that we are no longer at a stage where a C-like level of control is desirable.



  • Holy fuck that was poorly written. What I mean is that
    @Buddy said:

    giving the programmer the freedom to specify exactly what happens, at the cost of limiting the types of optimizations that can be done below the source code level.

    is what C does to a lesser extent than assembly and to a greater extent than bytecode languages, and that technology is at a stage where the semantics of bytecode languages make them more optimizable than C.


  • Banned

    @Buddy said:

    You're comparing apples to oranges. Rust is a low-level language for systems programming; Go is a high-level language for distributed applications

    Last I checked, both Rust and Go were marketed as both low- and high-level languages suitable for everything where Python is a bad choice. And I assure you that at least Rust meets that claim.

    @Buddy said:

    Go's semantics are appropriate for its domain; the human brain is not well-suited to reasoning about memory allocation in a complex parallel program

    If you can't get a grasp of what memory belongs to what task, then you're doing concurrency wrong.



  • Fair enough, as you say - for what's perhaps a majority of cases, I agree. I'd argue that there are still a few cases where you want that kind of control - and that's not only in OS-kernel stuff.

    Or rather, the places where you want ASM-like levels of control are perhaps more frequent out-of-kernel, and in various libraries like video-codecs, BLAS, etc. For some of these problems, the hardware has very specific support (like special video-encoding instructions, or even just the various SIMD extensions). Plus, you want to be able to have tight control over memory for cache friendliness etc - which gets harder for each layer of abstraction you add.

    It'd be nice not to have to deal with this stuff (and instances where you have to do this are getting less frequent TBH), but ATM there's still too much perf to be gained by doing it.

    @Buddy said:

    Holy fuck that was poorly written ...

    Nah, I got the point you were making. And, it's certainly true - the byte code can include much more information about what you're trying to do, rather than specifying a set of operations that happen to do that. So, when you're looking at the byte code, you can make much more informed decisions for optimizing when you actually know what you're running on.

    And for the vast majority of cases, that's good. It's essentially what LLVM tried to do with its bitcode when it was first conceived (the life-long program optimization stuff).

    In some cases (e.g., in the kind of libraries mentioned above), you're way past that, and are already dealing with all the icky details about the underlying architecture by hand. At that point, anything you don't have control over just becomes painful.


  • Discourse touched me in a no-no place

    @tarunik said:

    Lisp isn't exactly alone at the top of the language power web...

    And if they weren't at the top of the language power web, they'd never notice anyway because of the Blub effect…


  • Java Dev

    I did some looking into Rust this weekend, and it does look interesting. I like strong compile-time safeguards, and it seems Rust is delivering without sacrificing speed.


  • Banned

    It's also worth noting that, unlike C++, implementing traits (interfaces) for your structs doesn't increase the resulting size of the object - vtables aren't stored in the object itself, but are passed along with the reference to the object each time it's needed.


  • Discourse touched me in a no-no place

    @Gaska said:

    It's also worth noting that, unlike C++, implementing traits (interfaces) for your structs doesn't increase the resulting size of the object - vtables aren't stored in the object itself, but are passed along with the reference to the object each time it's needed.

    So now instead of increasing the size of each object, you're increasing the size of each reference to the object? That's a win exactly how? (FWIW, references tend to outnumber objects by a very large ratio in any normal program…)


  • Banned

    @dkf said:

    So now instead of increasing the size of each object, you're increasing the size of each reference to the object?

    No - it's more like, whenever you have a reference to a trait object (i.e. not a reference to an object of a concrete type, nor a reference of a generic parameter type, but just a reference of the form &MyTrait), what you actually have is a tuple of the object reference and a vtable pointer. Everything else is unaffected. And since functions are more often declared as fn myFunction<T: MyTrait>(t: &T) than as fn myFunction(t: &MyTrait), it's almost a non-issue.

    In short - "each time it's needed" == "very rarely".
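
    Something like this minimal sketch shows the two flavors side by side (MyTrait and Point are invented names for illustration; current Rust spells the trait-object form &dyn MyTrait):

    use std::mem::size_of;

    trait MyTrait {
        fn describe(&self) -> String;
    }

    struct Point { x: i32, y: i32 }

    impl MyTrait for Point {
        fn describe(&self) -> String {
            format!("({}, {})", self.x, self.y)
        }
    }

    // Generic version: monomorphized per concrete type, no vtable pointer passed.
    fn my_function<T: MyTrait>(t: &T) -> String {
        t.describe()
    }

    // Trait-object version: &dyn MyTrait is a (data pointer, vtable pointer) pair.
    fn my_function_dyn(t: &dyn MyTrait) -> String {
        t.describe()
    }

    fn main() {
        let p = Point { x: 1, y: 2 };
        println!("{}", my_function(&p));
        println!("{}", my_function_dyn(&p));
        // The struct itself carries no vtable; only the trait-object reference is "fat".
        println!("&Point: {} bytes", size_of::<&Point>());             // one pointer
        println!("&dyn MyTrait: {} bytes", size_of::<&dyn MyTrait>()); // two pointers
    }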


  • Discourse touched me in a no-no place

    @Gaska said:

    it's almost a non-issue.

    In short - "each time it's needed" == "very rarely".

    @dkf looks seriously dubious about that…


  • Java Dev

    How does that work with inheritance (which I think is only on traits)? Does it merge the vtables?


  • Banned

    @PleegWat said:

    How does that work with inheritance (which I think is only on traits)? Does it merge the vtables?

    I'm not an expert, but I think that each struct has a separate vtable for each trait it implements, including traits implemented by its traits, and when an object is passed to a function taking a reference to a trait, the matching vtable is chosen out of the available ones.
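
    A rough illustration of that idea (Widget, Draw and Log are invented names; this is just how one might picture it, not a claim about the compiler's internals):

    trait Draw { fn draw(&self); }
    trait Log { fn log(&self); }

    struct Widget;

    impl Draw for Widget { fn draw(&self) { println!("drawing"); } }
    impl Log for Widget { fn log(&self) { println!("logging"); } }

    // Each coercion to a trait-object reference picks up the vtable for that trait.
    fn render(d: &dyn Draw) { d.draw(); }
    fn audit(l: &dyn Log) { l.log(); }

    fn main() {
        let w = Widget;
        render(&w); // &Widget coerces to &dyn Draw, carrying Widget's Draw vtable
        audit(&w);  // &Widget coerces to &dyn Log, carrying Widget's Log vtable
    }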


  • Banned

    @dkf said:

    @dkf looks seriously dubious about that…

    If you look at some Rust code, you'll see that 99.9% of generic functions look like C++ templates - the remaining 0.1% being some weird recursion practices and code where you absolutely can't determine at compile time what type your data will be.



  • Which, by the way, is the same as what Go does. It can't do otherwise, because it does not know what interfaces the type will conform to when compiling it, because interfaces can be defined later and any object with corresponding methods will fit. Rust can't do otherwise either, because while it implements traits explicitly, you can still define a new trait and implement it for a type you imported from a crate (the Rust name for a library).
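
    For instance, something like this (Shout is a made-up trait; std's String stands in for a type imported from another crate):

    // A trait defined locally...
    trait Shout {
        fn shout(&self) -> String;
    }

    // ...implemented for a type that comes from elsewhere. This is allowed
    // because the trait itself is defined in the current crate.
    impl Shout for String {
        fn shout(&self) -> String {
            self.to_uppercase() + "!"
        }
    }

    fn main() {
        let s = String::from("hello");
        println!("{}", s.shout()); // prints "HELLO!"
    }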



  • @Bulb said:

    when compiling it

    * but it can when linking, so that's when it gets to fill in the vtable pointer literals in the code.

    Of course, you also have reflection, but that's useless when the values are link-time constants.


  • Banned

    @riking said:

    * but it can when linking, so that's when it gets to fill in the vtable pointer literals in the code.

    And the consequence is... what?

    @riking said:

    Of course, you also have reflection, but that's useless when the values are link-time constants.

    I'm not sure you mean what I think you mean when you use the word "reflection", but in case I'm right, Rust doesn't have it.



  • @riking said:

    ... Can a C runtime be written exclusively in C?

    A C-language-only implementation of <stdarg.h> would be... fairly interesting, I think.



  • @Gaska said:

    And you must write that runtime without using the runtime.

    You must write parts of that runtime without using the same parts of the runtime, but there's no reason you can't have a high-level runtime which calls into a lower-level runtime.

    (To be a little more concrete, there's no fundamental reason I can't implement calloc() in terms of malloc() and memset().)
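
    A rough Rust analogue of that layering, purely for illustration (calloc_like is a hypothetical helper; std::alloc::alloc plays the "malloc" role and ptr::write_bytes plays the "memset" role):

    use std::alloc::{alloc, Layout};
    use std::ptr;

    // Sketch: a calloc-style helper built from lower-level runtime pieces.
    fn calloc_like(count: usize, size: usize) -> *mut u8 {
        let total = match count.checked_mul(size) {
            Some(n) if n > 0 => n,
            _ => return ptr::null_mut(), // overflow or zero-sized request
        };
        let layout = Layout::from_size_align(total, 1).expect("bad layout");
        unsafe {
            let p = alloc(layout);             // the "malloc" step
            if !p.is_null() {
                ptr::write_bytes(p, 0, total); // the "memset" step
            }
            p
        }
    }

    fn main() {
        // (A real program would also pass the pointer back to dealloc with the same layout.)
        let p = calloc_like(4, 8);
        println!("allocated 32 zeroed bytes at {:p}", p);
    }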



  • @EvanED said:

    I count reference counting as (poor) GC. RAII also doesn't give you memory safety; you'd need a language that enforced it (as well as removing the other things that make C++ memory-unsafe).

    That's what Rust is attempting to do, I think.
    Filed under: piss off, Discoourse, I'll replybomb threads as I catch up with them if I choose to



  • I always giggle when people want to replace C but don't think about the embedded space with micros with 1kb ram. There are more micros in the world than computers.


  • Discourse touched me in a no-no place

    @delfinom said:

    I always giggle when people want to replace C but don't think about the embedded space with micros with 1kb ram.

    I wouldn't get too giggly about it if they say they're using Forth. It's a weird language I know, but it goes small very nicely.



  • @tarunik said:

    Lisp isn't exactly alone at the top of the language power web...

    Exactly! There is also Scheme!



  • @FrostCat said:

    @riking said:
    valid, portable, plain C source

    __asm { db 0x9f } ?

    The double underbar at the start is somewhat of a hint there, isn't it?



  • @antiquarian said:

    Agreed. Forth and Haskell enthusiasts have the same problem, except worse because Haskell has a reputation of being an academic language that isn't suitable for production applications, and people just think you're crazy if you use Forth (though if you spend any time on comp.lang.forth you'll think that assessment is entirely justified).

    Forth is fascinating in what can be done with it, and how efficient it is, but at the same time it seems to encourage borderline incomprehensible code.

    Haskell is fascinating in what can be done with it, and how efficient it is, but at the same time it seems to encourage borderline incomprehensible code.



  • @dkf said:

    I wouldn't get too giggly about it if they say they're using Forth. It's a weird language I know, but it goes small very nicely.

    Forth programmers being about the only people who routinely critique C for being bloated and slow while actually having a valid point.



  • @delfinom said:

    I always giggle when people want to replace C but don't think about the embedded space with micros with 1kb ram.

    "Giggle"?

    @delfinom said:

    There are more micros in the world than computers.

    Tee-hee! Tee-hee!



  • @blakeyrat said:

    "Giggle"?

    "Titter" is fitter.


  • Banned

    @blakeyrat said:

    "Giggle"?

    Do you mean this word doesn't exist?


  • FoxDev

    it also has nice consonance.


  • BINNED

    @tar said:

    Forth is fascinating in what can be done with it, and how efficient it is, but at the same time it seems to encourage borderline incomprehensible code.

    Haskell is fascinating in what can be done with it, and how efficient it is, but at the same time it seems to encourage borderline incomprehensible code.

    Incomprehensible Forth code is a sign that you haven't factored your definitions well enough. Incomprehensible Haskell code is a sign that you drank too much of the point-free Kool-Aid. One of these problems can be fixed.



  • @antiquarian said:

    Incomprehensible Forth code is a sign that you haven't factored your definitions well enough.

    That, and some of the Mooreisms like the use of THEN to mean ENDIF.

    Filed under: 2 2 = IF " OK" . THEN


  • BINNED

    @tar said:

    That, and some of the Mooreisms like the use of THEN to mean ENDIF.

    That shouldn't throw off anyone who's been writing Forth code for more than five minutes. You'll be complaining about fi in bash next.



  • It was about the only complaint I could raise without, y'know, actually reading up on the matter. (I vaguely remember CREATE>DOES, named variables, and some of the [] words being hard to reason about, but it's been a while. Also explicit use of the instruction stack, reverse polish notation über alles and an almost fanatical insistence on the tersest possible coding style at all times. Not that there aren't things to like about Forth.)



  • ...and whats the deal with fi in Bash?

    Filed under: bash is a truly vile thing, but at least it's not sh



  • how much CPU power does an elevator controller really need?

    If by "elevator controller" you mean the computer that schedules elevator trips in response to calls for the elevator and destinations, it's a lot of CPU. The number of possible trips explodes combinatorially with the number of floors.



  • Why does Rust require a comma to disambiguate a one-element tuple ((x,)) from a plain parenthesized value ((x))? Wouldn't it make more sense to make those two things semantically equivalent?



  • Same for Python, and while I see where you're coming from, I definitely disagree. There are two reasons:

    • Ignore order for a second and just imagine mathematical sets. The only x's for which x = {x} could hold are infinite structures (or maybe just the one infinite structure, {{...{...}...}}). So even if (x,) built a set instead of a sequence, x = (x,) wouldn't hold from a mathematical point of view. Why would the fact that it's a sequence change that?
    • From a type perspective, you can't have type((1,2,3)) be int or whatever, that's just wrong; it has to be tuple or int tuple or int * int * int or something depending on how you want your type system to work. Similarly, () has type () and int is wrong. If (1,) was the same as 1, that would mean I could make a 0-element tuple, or a 2-element tuple, or a 3-element tuple, etc., but I can't make a 1-element tuple. Why the special case? (This one especially applies to Python because the tuple's type doesn't specify the length.)
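
    A tiny Rust sketch of the distinction (purely illustrative):

    fn main() {
        let x: i32 = 1;
        let t: (i32,) = (1,); // one-element tuple: a distinct type from i32
        // Here the parentheses are just grouping (the compiler even warns they're redundant):
        let g: i32 = (1);

        println!("{}", x + g); // fine: both are plain i32
        println!("{}", t.0);   // tuple field access; `t + 1` would not compile
    }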


  • @EvanED said:

    From a type perspective, you can't have type((1,2,3)) be int or whatever, that's just wrong; it has to be tuple or int tuple or int * int * int or something depending on how you want your type system to work. Similarly, () has type () and int is wrong. If (1,) was the same as 1, that would mean I could make a 0-element tuple, or a 2-element tuple, or a 3-element tuple, etc., but I can't make a 1-element tuple. Why the special case? (This one especially applies to Python because the tuple's type doesn't specify the length.)

    There's no reason that Tuple<T> can't inherit from T. The argument in favor of this special case is that it is special. Singles are inherently different from doubles, triples etc in that they only contain a single value.

    Alternatively, completely contradicting the previous paragraph: Rust has a pretty cool type inference feature where how a literal or variable is used determines its type. So you can have an ambiguous assignment to a variable, then later use it in a situation that requires it to be a certain type, and the compiler just figures it all out for you. Why can't they do that with 1-tuples?
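
    For example, a minimal sketch of that use-determines-type inference:

    fn main() {
        // The element type of `v` is not written here...
        let mut v = Vec::new();
        // ...it is pinned down by this later use: pushing a &str makes it a Vec<&str>.
        v.push("hello");
        println!("{:?}", v);
    }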



  • @Buddy said:

    There's no reason that Tuple<T> can't inherit from T.

    Actually, I can think of reasons that it can't. In particular, suppose that T and Tuple<T> both have the same method. I don't know Rust syntax, but suppose you have something like

    struct MyClass;

    impl MyClass {
        fn n1(&self) -> i32 { /* ... */ 0 }
    }

    let mc = MyClass;
    let t = (mc,);
    

    and that (mc,) is interpreted as identical to mc. What does t.n1() call? If you say it's MyClass::n1, then t isn't really a tuple. If you say it's tuple::n1, then t really isn't a MyClass.

    @Buddy said:

    Singles are inherently different from doubles, triples etc in that they only contain a single value.

    But I can say the same thing about three. Triples are inherently different from singles, doubles, quads, etc. in that they contain exactly three values.

    What's so special about one?

    (Note: like I said, I see where you're coming from; I'm exaggerating a little bit there. That being said, I don't think that one element should be anywhere near as much of a special case as you are advocating. This is rather different, but take vector<bool> in C++; the intention behind giving <bool> its own specialization was a pretty reasonable one, and you could make a good case that bool is reasonably a special case in the language, because it's the only primitive type that could be stored in a single bit if only you could address that. But it turned out to be a big mistake.)

