Zig programming language


  • Discourse touched me in a no-no place

    @Gąska said in Zig programming language:

    your compilers are buggy as hell

    Very true but irrelevant. 😜 Or were you referring to compilers I merely use rather than ones I write?


  • Banned

    @dkf the latter. I was implying that you might have trust issues.


  • BINNED

    @Gąska said in Zig programming language:

    1. Mixing different sets of optimization flags within one build sounds interesting, but not very useful.

    Why is that even mentioned? Sounds more like a tooling thing than a language thing? I can do that in C, I just probably don't want to without mucking with the Makefiles.

    1. defer considered harmful.

    What do you mean by that?
    (I haven't read the page, but your numbering doesn't seem to explicitly align with it, so I'd have to)

    1. I like the very explicit memory allocation (for those who haven't read the article - all allocating functions take allocator as argument, and you MUST handle allocation failure). Though I'd probably quickly learn to hate it.

    Yeah, that's going to make you confident that you actually handle error conditions and have a stable program.
    Then Linux is going to come around "Memory? Sure you can have memory, I have lots... Oops, OOM killed."



  • @Gąska said in Zig programming language:

    Mixing different sets of optimization flags within one build sounds interesting, but not very useful.

    I locally enable optimizations for a bunch of well-tested code during "normal" debug builds. The specific code is quite compute-heavy (e.g. decompression) and ends up being used to load data on startup. Having initial data loading be a lot faster in debug builds is worth the problems it causes.

    I've seen people go the other way as well, where they always do release builds and just disable optimizations/enable debugging selectively (pragmas/macros). I'm not a fan, for a number of reasons: the code ends up littered with those statements, and IME it's much easier to identify the few hotspots that actually matter and special-case those.

    @topspin said in Zig programming language:

    Why is that even mentioned? Sounds more like a tooling thing than a language thing? I can do that in C, I just probably don't want to without mucking with the Makefiles.

    GCC and Clang can even change it at the per-function level, via pragmas. Last I tried, MSVC could only disable optimizations locally, not the other way around. Different compilation flags per file can be done, of course, but there are some gotchas involved - more so in the Windows world than in the Linux one.



  • @dkf said in Zig programming language:

    I did some work with/on a language (for hardware) that had explicit integer widths. In that case it made a ton of sense, but it was definitely hard work.

    I remember that from a C-flavour targeting FPGAs. I think it would default to the bit-width that was large enough to hold the result (adding two integers => increase the bit-width of the larger one by one, multiplication => add the bit-widths of the multiplicands, ...). You could override it (and pretty much would have to, if you didn't want to end up with something like a 381-bit integer by accident).

    This was perhaps one of the more sensible things in that language. (The compiler being written in Java was definitely one of the less sensible things.)


  • BINNED

    @dkf said in Zig programming language:

    In languages without RAII, you get constructs like using (C#), with (Python) or try-with-resources (Java). There are equivalents in other languages too. These all have the controlled wind-down after running a block of code yet also have the explicit-ness property. The examples I've cited are all in garbage-collected languages, but that's a wholly orthogonal feature.

    Orthogonal maybe, but not necessarily unrelated.
    The explicitness always leaves open the option of missing it and having a resource leak (maybe you could enforce it, but then why not make it automatic to begin with). These languages do it explicitly for non-memory resources, but for memory they have the very much implicit magic of the GC enforcing correct clean-up, because anything else wouldn't fit their managed/safe philosophy.
    In C++ you always have the possibility of leaks or worse, but that's because it has a very different approach, footguns included, and at least you can use RAII if you want.



  • @topspin said in Zig programming language:

    Then Linux is going to come around "Memory? Sure you can have memory, I have lots... Oops, OOM killed."

    Ah yes, the "Wirecard" approach to money memory allocation...


  • Banned

    @dkf said in Zig programming language:

    you seem to be getting some misconceptions of what that might mean as a result of your exposure to massive impact-induced brain trauma C++.

    Interesting. I could say the same thing about you.

    1. I couldn't even figure out if the language has constructors, their documentation was that poor. If it doesn't, lacking destructors is at least unsurprising (and the result is that you have to make allocation and freeing functions for all your non-trivial data structures, just as one probably does in C at the moment).

    Constructors and destructors are orthogonal concepts. Moreover, constructors are mostly useless: a constructor is basically a regular static method with some extra restrictions on how it can be called and what it can return, which get in the way and bring no benefit at all. Destructors, on the other hand, allow you to do things that are otherwise impossible (like reducing A LOT of cleanup boilerplate). Rust has destructors but no constructors, and it works wonderfully.
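
    A minimal sketch of that arrangement in Rust (`Connection` and the address are made up for illustration): `new` is an ordinary associated function with no special status, while `Drop` is a genuine destructor that runs automatically.

    struct Connection {
        addr: String,
    }

    impl Connection {
        // "Constructor": just a regular static method returning Self.
        fn new(addr: &str) -> Self {
            Connection { addr: addr.to_string() }
        }
    }

    impl Drop for Connection {
        // Destructor: runs automatically when the value goes out of scope.
        fn drop(&mut self) {
            println!("closing connection to {}", self.addr);
        }
    }

    fn main() {
        let _c = Connection::new("127.0.0.1:8080");
        // No explicit cleanup call; `drop` runs here as `_c` goes out of scope.
    }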

    1. The language designers probably see the “no transparent smart pointers” as a feature.

    Of course they do. Doesn't mean they're right. Wasting time on manually writing trivial-to-automate boilerplate doesn't make your program faster or give it fewer bugs. If the type name isn't enough of a clue about what's going on, then something is seriously wrong, and the wrongness is located between a keyboard and a chair. And worrying about other people abusing some language feature is pointless, as idiots will always find a way.

    it's not that smart pointers are themselves a problem, but rather that making them implementable using standard features opens up the less-than-wise to being able to make some true horrors. Which is the whole point of “no-implicit”.

    All Rust smart pointers are implementable (and implemented) with regular language features, but thanks to just a tiny bit of automation at language semantics level, they feel no different than regular references (everyone calls them references even though officially they're known as pointers). Somewhat related - the Option type is implementable (and implemented) using regular language features too; even the ABI compatibility between optional reference and C pointer can be replicated by anyone in a custom type, without a single unsafe instruction.
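
    For `Option<&T>` the pointer-sized layout is actually guaranteed; for a hand-rolled equivalent it's an optimization the compiler applies in practice rather than a formal promise. A sketch (`MyOption` is a made-up name):

    use std::mem::size_of;

    // A hand-rolled Option over a reference; no unsafe, no compiler blessing.
    #[allow(dead_code)]
    enum MyOption<'a, T> {
        Nothing,
        Just(&'a T),
    }

    fn main() {
        // Both collapse to a bare pointer, with null representing the
        // empty case (the "niche" optimization).
        assert_eq!(size_of::<Option<&u8>>(), size_of::<*const u8>());
        assert_eq!(size_of::<MyOption<'static, u8>>(), size_of::<*const u8>());
        println!("both are one pointer wide");
    }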

    1. In languages without RAII, you get constructs like using (C#), with (Python) or try-with-resources (Java).

    But that's RAII too. You acquire resources on object creation and release them on object destruction. It's just that you still have to remember to do it, so you lose out on the main benefit.

    Also note that all of the mentioned constructs rely on finalizers, which are destructors by another name. You can't have scoped resources without destructors.

    1. Lambdas (...) capture (...) closure semantics (...)

    Correct me if I'm wrong, but it seems you insist on keeping the captured context at the same address it was originally. Stop doing that. All your problems go away when you use move semantics to relocate the relevant variables into the closure object. And a destructor to clean up afterwards. That - among all its other uses - is why I consider destructors to be essential in modern GC-less programming. You can't have closures without automatic cleanup. You can't have ergonomic iterator adaptors without closures. You can't have modern programming without iterator adaptors.
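
    A small Rust sketch of what I mean ("log.txt" is a hypothetical file name): the File is relocated into the closure by `move`, and its destructor closes the handle whenever the closure object itself is dropped.

    use std::fs::File;
    use std::io::Read;

    fn make_reader() -> impl FnMut(&mut String) -> std::io::Result<usize> {
        let mut file = File::open("log.txt").expect("open failed");
        // `move` relocates `file` into the closure object; no address
        // of the original stack slot is ever retained.
        move |buf: &mut String| file.read_to_string(buf)
    }

    fn main() {
        let mut read = make_reader();
        let mut buf = String::new();
        let n = read(&mut buf).expect("read failed");
        println!("read {n} bytes");
        // Dropping `read` here closes the file, via File's destructor.
    }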

    Not capturing at all would be extremely explicit, but make lambdas little more than unnamed conventional functions.

    From what I've gathered, that's the current proposal. Weird bunch, these Zig guys, aren't they?

    Details matter. Which is where the crappy documentation comes to the fore…

    Fully agreed. I found myself frantically googling every few paragraphs for clarifications, and that's not a good sign.


  • Discourse touched me in a no-no place

    @Gąska said in Zig programming language:

    Correct me if I'm wrong,

    Since you are wrong, I will be correcting you. HTH!

    but it seems you insist on keeping the captured context at the same address it was originally. Stop doing that.

    I'm assuming that there will be an object at a definite location that the compiler will determine, or that the compiler will decide that everything accessing the object will need to use some indirection. The ownership context gets complicated when you have the same variable captured by two or more closures that have non-trivial lifetimes. The problem is hugely simpler in a garbage-collected language; you detect that the lifetime scope of the object is “difficult to determine”, promote it to the heap, let everything have shared ownership, and then clean up eventually once the object becomes no longer reachable.

    I do not assume that the address of the object is on the stack where it is initially declared. In fact, the physical address of the object need not have anything to do with the declaration point. Some languages make it work that way anyway, but it is not necessary.

    The multi-closure case is an absolute murderer for move semantics. An alternative approach is to promote the lifetime of the object to the LUB of lifetimes of the closures that capture it, but that's a rather more sophisticated approach than any C++ compiler designer seems to have ever really contemplated. (It's more like what Rust would like to do, but they don't quite do that and use a different approach.)

    Of course, if you make all objects immutable (at least in terms of observable semantics) then you can provably use reference counting instead of full GC, and that basically gets you the benefits of both. OTOH, that has some subtle consequences for thread handling if you want to do it efficiently. (These are hard and interlinked problems even though they don't appear to be so at first glance.)
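
    To illustrate the refcounting half of that with a Rust toy example (Rc is the single-threaded count; the thread-handling subtleties are exactly why the atomically-counted Arc exists as a separate type):

    use std::rc::Rc;

    fn main() {
        let data = Rc::new(vec![1, 2, 3]); // immutable, shared ownership
        let a = Rc::clone(&data);          // refcount: 2
        let b = Rc::clone(&data);          // refcount: 3
        drop(data);
        drop(a);
        println!("still alive via b: {:?}", b);
        // The vector is freed deterministically when `b`, the last
        // owner, goes out of scope here - no tracing GC involved.
    }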

    All your problems go away when you use move semantics to relocate the relevant variables into the closure object. And a destructor to clean up afterwards. That - among all its other uses - is why I consider destructors to be essential in modern GC-less programming. You can't have closures without automatic cleanup. You can't have ergonomic iterator adaptors without closures. You can't have modern programming without iterator adaptors.

    As someone who actually does language and compiler design (well, some of the time) I do not believe that series of assertions. You appear to be assuming C++ semantics, but those are very much not the only way to solve things.


  • Considered Harmful

    @dkf said in Zig programming language:

    In languages without RAII, you get constructs like using (C#), with (Python) or try-with-resources (Java). There are equivalents in other languages too. These all have the controlled wind-down after running a block of code yet also have the explicit-ness property. The examples I've cited are all in garbage-collected languages, but that's a wholly orthogonal feature.

    Which are half-assed hacks around not having deterministic destruction, and so provide deterministic disposal (and still using 'magic' in the form of a blessed interface or method name). Languages that do have deterministic destruction should use it.

    (Capturing mutable values without GC is where language implementations get… tricky.)

    Ever used Rust?


  • Discourse touched me in a no-no place

    @pie_flavor said in Zig programming language:

    Ever used Rust?

    Nope. Not enough free mental bandwidth right now.

    I know a bit about it though. In particular, I know that it has both ordinary and mut references and that they have different characteristics. I've also seen some stuff that a blog post said was related to memory lifetimes (I was looking up why callbacks are tricky), but I know I did not understand the significance of it. The syntax didn't impress me, but that's not a very deep observation.

    If you wish to discuss this more, you probably ought to indicate more precisely what you mean. It would make things more accessible to other readers too…


  • Discourse touched me in a no-no place

    @pie_flavor said in Zig programming language:

    Which are half-assed hacks around not having deterministic destruction, and so provide deterministic disposal (and still using 'magic' in the form of a blessed interface or method name). Languages that do have deterministic destruction should use it.

    The fun thing is that languages that do it the other way round consider RAII to be a half-assed hack that hides far too much trickery. Impasse.


  • BINNED

    @dkf said in Zig programming language:

    @pie_flavor said in Zig programming language:

    Which are half-assed hacks around not having deterministic destruction, and so provide deterministic disposal (and still using 'magic' in the form of a blessed interface or method name). Languages that do have deterministic destruction should use it.

    The fun thing is that languages that do it the other way round consider RAII to be a half-assed hack that hides far too much trickery. Impasse.

    How is RAII more trickery than a GC?


  • Considered Harmful

    @dkf Okay then. I was referencing the fact that Rust closures seamlessly capture mutable values. This is done by there being three traits for closures instead of one: Fn, FnMut, and FnOnce. The first can only capture immutable references, and calling it only takes an immutable reference; the second can capture mutable references, and calling it requires a mutable reference too; and the third's code can consume uncopiable values that it's captured, but calling it consumes the closure too, hence 'Once'. Any closure implementing Fn also implements FnMut, and any closure implementing FnMut, including ones implementing Fn, also implements FnOnce.

    It should be noted that these contracts are guaranteed through regular language features, and the only magic as far as the traits are concerned is the function syntax for using them.
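
    A minimal demonstration of the three:

    fn main() {
        let name = String::from("world");
        // Fn: captures `name` by shared reference; callable repeatedly.
        let greet = || println!("hello, {name}");
        greet();
        greet();

        let mut count = 0;
        // FnMut: captures `count` by mutable reference; the closure
        // binding itself must be `mut` to call it.
        let mut bump = || count += 1;
        bump();
        bump();

        // FnOnce: consumes its capture, so it can only be called once.
        let farewell = String::from("goodbye");
        let consume = move || drop(farewell);
        consume();
        // consume(); // error: `consume` was moved by the first call
        println!("count = {count}");
    }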


  • Considered Harmful

    @dkf said in Zig programming language:

    @pie_flavor said in Zig programming language:

    Which are half-assed hacks around not having deterministic destruction, and so provide deterministic disposal (and still using 'magic' in the form of a blessed interface or method name). Languages that do have deterministic destruction should use it.

    The fun thing is that languages that do it the other way round consider RAII to be a half-assed hack that hides far too much trickery. Impasse.

    And yet they implement finalizers.


  • Considered Harmful

    @dkf said in Zig programming language:

    I know that it has both ordinary and mut references and that they have different characteristics

    Just clarifying: namely, that you can either have any number of immutable references, or exactly one mutable reference, to a value. You can't have multiple mutable references to the same value, or a mix of mutable and immutable ones. Creating references that violate these rules through unsafe code is undefined behavior.
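
    In compilable form (the commented-out lines are the ones the borrow checker rejects):

    fn main() {
        let mut v = vec![1, 2, 3];

        let a = &v;
        let b = &v; // fine: any number of shared references
        println!("{a:?} {b:?}");

        let m = &mut v; // fine: the shared borrows above are no longer used
        m.push(4);

        // let c = &v;  // error[E0502]: cannot borrow `v` as immutable
        // m.push(5);   //   because it is also borrowed as mutable
        println!("{v:?}");
    }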



  • @dkf said in Zig programming language:

    An alternative approach is to promote the lifetime of the object to the LUB of lifetimes of the closures that capture it

    Not lubbing the unexplained acronym there...


  • Discourse touched me in a no-no place

    @Mason_Wheeler Least Upper Bound



  • @pie_flavor said in Zig programming language:

    (Capturing mutable values without GC is where language implementations get… tricky.)

    Ever used Rust?

    Naming a language that's widely infamous for its tricky semantics on this point does not counter this assertion.



  • @dkf said in Zig programming language:

    you detect that the lifetime scope of the object is “difficult to determine”, promote it to the heap, let everything have shared ownership, and then clean up eventually once the object becomes no longer reachable.

    That sounds pretty much contrary to the "no-implicit-stuff" stance, though. Moving to the heap isn't exactly free (compared to using the stack, anyway). And it seems to me that it would have to be done rather frequently: not promoting the value to the heap and giving it shared ownership would only be possible when, at the time of that decision, you have perfect information about the full call-graph through which the value is potentially passed by reference.


  • Discourse touched me in a no-no place

    @cvi said in Zig programming language:

    perfect information of the full call-graph

    A lot of modern optimisation works on that basis. Or at least on the basis of having large chunks of the call graph understandable that way. Yes, there'll be chunks that are not handled that way, but they tend to be a minority. (They tend to be key toplevel things such as your main loop or critical primary callbacks.)


  • Banned

    @dkf said in Zig programming language:

    The multi-closure case is an absolute murderer for move semantics. An alternative approach is to promote the lifetime of the object to the LUB of lifetimes of the closures that capture it, but that's a rather more sophisticated approach than any C++ compiler designer seems to have ever really contemplated. (It's more like what Rust would like to do, but they don't quite do that and use a different approach.)

    Rust has a very simple yet elegant solution: moving twice is a compile error. Not just in closures, but anywhere. It's perfectly consistent with Rust's lifetime model, it's simple to wrap your head around, and it eliminates all the tricky cases - and if you want to share one object between two closures, you do the same thing you'd do to share one object between any other two variables (e.g. use a smart pointer). Rust lambdas also capture by reference by default, which allows multiple immutable references to the same data but doesn't allow the lambda to outlive its captures; you have to opt in to move semantics by marking the lambda as move, which lets it outlive the original declaration of its capture but marks that original declaration "dead" from the point where the lambda is introduced. This is in line with the rest of the Rust language, where it's also statically checked that pointers never outlive their pointees.
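
    A compilable sketch of both halves of that:

    use std::rc::Rc;

    fn main() {
        let s = String::from("resource");
        let a = move || println!("{s}");
        // let b = move || println!("{s}"); // error[E0382]: use of moved value: `s`
        a();

        // Sharing between two closures: same as sharing between any two
        // variables - reach for a smart pointer.
        let shared = Rc::new(String::from("resource"));
        let s1 = Rc::clone(&shared);
        let b = move || println!("{s1}");
        let c = move || println!("{shared}");
        b();
        c();
    }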

    All your problems go away when you use move semantics to relocate the relevant variables into the closure object. And a destructor to clean up afterwards. That - among all its other uses - is why I consider destructors to be essential in modern GC-less programming. You can't have closures without automatic cleanup. You can't have ergonomic iterator adaptors without closures. You can't have modern programming without iterator adaptors.

    As someone who actually does language and compiler design (well, some of the time) I do not believe that series of assertions. You appear to be assuming C++ semantics, but those are very much not the only way to solve things.

    Let's consider an arbitrary closure, returned from an arbitrary function, that contains some resource that needs to be released when the closure is no more, passed to some other function that knows nothing about that resource. Forget about RAII, forget about move semantics, forget about C++. Just a closure with a resource and an oblivious higher-order function (an iterator adaptor is a great example). How do you release that resource in a timely manner? You can't rely on the higher-order function to clean it up for you, because it doesn't know that cleanup is needed, much less what needs to be cleaned up (and if it knows the latter, you lose a lot of genericity).

    The only sane way I can see to achieve this is to detect when the capture leaves scope and becomes completely inaccessible (to prevent releasing the resource early while it's still in use), and to automatically run a cleanup handler without the programmer making an explicit call. Call it what you want, run it where you want - I don't care if it happens on a closing brace or after mark&sweeping a heap. If there's a cleanup handler that's automatically run when the compiler/runtime detects the object is inaccessible, that's a destructor to me.
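
    Here's that scenario in Rust (a sketch; `Resource` is a stand-in for anything needing release): the higher-order function is plain `map`, which knows nothing about cleanup, yet the resource is released exactly when the closure becomes unreachable.

    struct Resource(&'static str);

    impl Drop for Resource {
        fn drop(&mut self) {
            println!("released {}", self.0);
        }
    }

    fn main() {
        let res = Resource("temp buffer");
        let doubled: Vec<i32> = (1..=3)
            .map(move |x| {
                let _ = &res; // the closure owns `res`
                x * 2
            })
            .collect();
        // The iterator (closure included) was consumed and dropped inside
        // `collect`, so the destructor has already run - and `map` never
        // knew any cleanup existed.
        println!("{doubled:?}");
    }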

    The technically equivalent but in practice nightmarish solution is to pass around an explicit cleanup handler together with the closure, and that sucks for at least two reasons. One, you need to be very aware of its existence, make sure you call it exactly once, and make sure you do it when the closure is very definitely dead. And because higher-order functions are involved, it's going to be much harder to keep track of than your usual call to free() - especially since you don't even know whether it's free() or something else. That effectively means that next to the closure you care about, you get a second closure that you need to drag around everywhere. One closure for the price of two! Two, the closures that DON'T need cleanup become just as much of a PITA to handle as the closures that do - either that, or you have to write two variants of everything that touches lambdas, from top to bottom, from beginning to end: one with cleanup and one without.

    If there's some third way I missed, feel free to inform me of it.



  • @dkf said in Zig programming language:

    A lot of modern optimisation works on that basis. Or at least on the basis of having large chunks of the call graph understandable that way. Yes, there'll be chunks that are not handled that way, but they tend to be a minority. (They tend to be key toplevel things such as your main loop or critical primary callbacks.)

    How do you do that practically? Require functions to annotate whenever they can extend the lifetime of a reference passed to them beyond their scope? I can't really see how you would do that without making it part of the interface.


  • Banned

    @cvi except for libraries, the compiler can do whatever the fuck it wants with the interfaces of anything and everything. Imagine that the compiler inlines the entire codebase into one huge main function and works backwards from there (it doesn't really do that, but it would if it could).



  • @dkf said in Zig programming language:

    In languages without RAII, you get constructs like using (C#), with (Python) or try-with-resources (Java).

    Which is not at all equivalent, because you cannot move the object to another scope without carrying that syntax with you every step of the way. And from my personal experience, most people fail to do that correctly.

    As a result, I've actually seen more Java code with serious exception safety deficiencies than C++ code with similar problems.



  • @Gąska said in Zig programming language:

    Also note that all of the mentioned constructs rely on finalizers, which are destructors by another name.

    :pendant: Technically, that description somewhat fits finalizers in Java as well, but I'm pretty sure that wasn't the concept you were trying to describe. Finalizers are completely useless at reliably doing anything.



  • @Gąska said in Zig programming language:

    @cvi except for libraries, the compiler can do whatever the fuck it wants with the interfaces of anything and everything. Imagine that the compiler inlines the entire codebase into one huge main function and works backwards from there (it doesn't really do that, but it would if it could).

    Yes, I was getting at libraries. I think there is some other stuff that might get tricky, such as supporting partial (re-)builds (especially if you have stuff like callbacks).


  • Banned

    @cvi remember that @dkf works in embedded. Things are "a little" different down there, and priorities are different from usual.



  • @dfdub said in Zig programming language:

    As a result, I've actually seen more Java code with serious exception safety deficiencies than C++ code with similar problems.

    In fact, even in very simple cases, Java programmers routinely get exception safety horribly wrong, since they're under the mistaken belief that they don't have to care much about it. I can't tell you how many classes I've seen that have two AutoCloseable members and implement their own close() method as follows:

    void close() throws Exception {
      a.close();  // if this throws, the next line never runs...
      b.close();  // ...and b is leaked
    }
    

    I can actually count the number of times I've seen someone correctly handle exceptions thrown from the close() methods of wrapped resources on one hand. Literally.

    </rant>



  • @dfdub

    That's what GC is for, D'uh!


  • Banned

    @dfdub said in Zig programming language:

    @dkf said in Zig programming language:

    In languages without RAII, you get constructs like using (C#), with (Python) or try-with-resources (Java).

    Which is not at all equivalent, because you cannot move the object to another scope without carrying that syntax with you every step of the way. And from my personal experience, most people fail to do that correctly.

    As a result, I've actually seen more Java code with serious exception safety deficiencies than C++ code with similar problems.

    I think this is the best illustration of my argument. The explicit vs. implicit debate is really about not trusting people not to pull stupid shit where you least expect it, vs. not trusting people to get their boilerplate right every single time. Personally, I think the latter is a much higher risk with much more severe consequences - especially when you have a working code review process and stupid shit gets cut out before it's merged to master.



  • @Gąska said in Zig programming language:

    The explicit vs. implicit debate is really about not trusting people not to pull stupid shit where you least expect it, vs. not trusting people to get their boilerplate right every single time.

    Fully agreed. I'd rather have a slightly more complicated concept that only requires one programmer to get the boilerplate right than a seemingly simpler concept that requires every user of the interface to know what they're doing.

    Since looking at real-world code shows that even correctly initializing multiple resources inside a single try-with construct is too hard for many programmers, I have a very strong preference for implicit destruction.


  • Banned

    @dfdub said in Zig programming language:

    @Gąska said in Zig programming language:

    The explicit vs. implicit debate is really about not trusting people not to pull stupid shit where you least expect it, vs. not trusting people to get their boilerplate right every single time.

    Fully agreed. I'd rather have a slightly more complicated concept that only requires one programmer to get the boilerplate right than a seemingly simpler concept that requires every user of the interface to know what they're doing.

    And in cases of resource-capturing closures specifically, the "automatic" way is actually many times simpler.



  • @dkf said in Zig programming language:

    I did some work with/on a language (for hardware) that had explicit integer widths.

    It's very, very common in the hardware world. For example, Verilog has an integer data type that is 32 bits, signed. (SystemVerilog adds shortint, int, longint, byte and bit. The difference between int and integer is that integer is a 4-value type: in addition to 0 and 1, each of its bits can have two different kinds of FILE_NOT_FOUND unknown values, X and Z. Bits of an int (and shortint, etc.) are normal booleans; anything that would cause a bit to be unknown is treated as 0.) But the hardware itself is usually implemented using wires and regs, which are single bits by default but are commonly declared as vectors. (A wire has the value, possibly after a delay, of whatever it is connected to; a wire that for whatever reason isn't connected to anything to give it a value has the special high-impedance value 'Z'. A reg is assigned a value and retains that value until assigned something else.) Both the width and the endianness of reg and wire vectors are explicit, and the width is arbitrary.

    wire [15:0]          foo;
    reg  [0:15]          bass_ackwards;
    reg  [4:0]           count;
    reg  [35:0]          ecc_mem [1023:0];   // 36 bits wide; 1k words
    wire [NTHINGS*4-1:0] thing_fubars;
    

  • BINNED

    @Gąska said in Zig programming language:

    The explicit vs. implicit debate is really about not trusting people not to pull stupid shit where you least expect it, vs. not trusting people to get their boilerplate right every single time.

    Something I read from Sutter, in the same vein, about access specifiers in C++ (they're for design, not for security, and can be circumvented if you try hard enough): it's about protecting against Murphy, not protecting against Machiavelli.
    That's also what I usually think about the "no operator overloading" argument ("well, don't do stupid shit"), except there it's a bit weaker, i.e., not doing "stupid shit" also means not breaking the principle of least surprise.


  • Discourse touched me in a no-no place

    @cvi said in Zig programming language:

    Yes, I was getting at libraries. I think there is some other stuff that might get tricky, such as supporting partial (re-)builds (especially if you have stuff like callbacks).

    Libraries aren't a big problem, because we're talking about doing interprocedural optimisation, probably at link time. LTO is known to be expensive, but its biggest problem is that it ends up generating much larger flow graphs than most compilers are optimised for handling. It most certainly stresses things more thoroughly than the usual conventional optimisations do.

    DLLs are a hard boundary, of course.



  • @dkf said in Zig programming language:

    @Gąska said in Zig programming language:

    Correct me if I'm wrong,

    Since you are wrong, I will be correcting you. HTH!

    TDWTF in a nutshell.


  • Discourse touched me in a no-no place

    @Gąska said in Zig programming language:

    remember that @dkf works in embedded.

    I work in multiple levels; the project actually needs hybrid techniques (as it is deploying very large applications to novel hardware composed out of lots of embedded processors).

    Things are "a little" different down there, and other things get priority than usual.

    On embedded, it's pretty common to make virtually all your functions (except for things like interrupt handlers) be static inline. GCC at least treats that as a hint to really go to town on stripping shit out of them. LTO does something similar, except for a whole lot more functions. Our source is simple enough to not need that; we can just put everything we want to get cut out in header files. (Also, full rebuilds only take 20–30 seconds on platforms other than Windows.)


  • Banned

    @dkf as an interesting note, Rust generics work not by type erasure, but by having half-compiled code with "type holes" in the library, so that it can be specialized for each type separately, allowing for most (all?) of the same optimizations as regular non-library code. It's like the full source code access you get with C++ templates, but without the full source code access (although I heard decompilation is fairly trivial and readable). Dunno if something like that can be done for non-generic code - it feels like all the elements are in place and it's just a matter of hitting the switch, but I'm not sure if they did.
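
    Roughly, the effect is as if a generic library function like the one below were shipped with its type slots open, and each concrete use got its own fully optimizable copy (a sketch; `largest` is my example, not anything from a real crate):

    pub fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
        let mut best = items[0];
        for &x in &items[1..] {
            if x > best {
                best = x;
            }
        }
        best
    }

    fn main() {
        // Two separate monomorphized copies get compiled and optimized:
        // largest::<i32> and largest::<f64>.
        println!("{}", largest(&[3, 7, 2]));
        println!("{}", largest(&[0.5, 1.25]));
    }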


  • Banned

    @dkf said in Zig programming language:

    Things are "a little" different down there, and other things get priority than usual.

    On embedded, it's pretty common to make virtually all your functions (except for things like interrupt handlers) be static inline. GCC at least treats that as a hint to really go to town on stripping shit out of them. LTO does something similar, except for a whole lot more functions. Our source is simple enough to not need that; we can just put everything we want to get cut out in header files. (Also, full rebuilds only take 20–30 seconds on platforms other than Windows.)

    I've heard that some 15 years ago, many video game studios employed a technique of making release builds out of a single .cpp file that included the entire codebase, so you'd end up with one huge ass compilation unit, maximizing optimization potential.


  • Discourse touched me in a no-no place

    @Gąska said in Zig programming language:

    allowing for most (all?) of the same optimizations as regular non-library code

    That technique won't work completely for non-standard-width math, because the higher-order terms require a lot of non-trivial math magic to minimise errors. Very often constants are very slightly out from what you'd expect, in order to get better rounding behaviour. In some use cases it's actually better to use stochastic rounding (which, paradoxically, lets you use the simpler constants), but that requires a very fast hardware RNG to be actually useful.

    Math is hard, OK? I know just enough to leave it to the experts.


  • Banned

    @dkf said in Zig programming language:

    Math is hard, OK? I know just enough to leave it to the experts.

    #metoo


  • Discourse touched me in a no-no place

    @Gąska said in Zig programming language:

    I've heard that some 15 years ago, many video game studios employed a technique of making release builds out of a single .cpp file that included the entire codebase, so you'd end up with one huge ass compilation unit, maximizing optimization potential.

    That's used for some libraries too (I know SQLite does it) but it is in general a bit annoying to do, because most codebases aren't designed for it. LTO sort of does something like that too, except without you having to go through the tricky bits of getting everything into the same (virtual) file. (LLVM's version of LTO works best at the IR level.)


  • BINNED

    @dkf said in Zig programming language:

    static inline

    What does that do compared to just inline? From what I can see the only difference should be that the compiler isn't required to produce code for the function that is usable in other TUs (or that is addressable), but in the same TU the compiler can optimize exactly the same without it. So it would seem it just helps the linker discard unused text.



  • @dfdub said in Zig programming language:

    Java Software ... Finalizers are completely useless at reliably doing anything.

    FTFTDWTF



  • @Gąska said in Zig programming language:

    I've heard that some 15 years ago, many video game studios employed a technique of making release builds out of a single .cpp file that included the entire codebase, so you'd end up with one huge ass compilation unit, maximizing optimization potential.

    Plenty of talk about doing "unity builds" these days still. It's not too hard to find, although the equally-named engine will mess up the searches a bit. Another big reason is apparently the reduction in compilation times. Topic pops up every now and then in r/cpp.

    It has a few problems if you're not careful from the get-go (e.g., statics/anonymous namespaces with clashing declarations).



  • @dkf said in Zig programming language:

    Math is hard, OK? I know just enough to leave it to the experts.

    FTFM



  • @dkf said in Zig programming language:

    @Gąska said in Zig programming language:

    I've heard that some 15 years ago, many video game studios employed a technique of making release builds out of a single .cpp file that included the entire codebase, so you'd end up with one huge ass compilation unit, maximizing optimization potential.

    That's used for some libraries too (I know SQLite does it)

    For optimization? I thought that was just to make it really simple to include into a project. (In this one simple step...)


  • Banned

    @topspin said in Zig programming language:

    @dkf said in Zig programming language:

    static inline

    What does that do compared to just inline? From what I can see the only difference should be that the compiler isn't required to produce code for the function that is usable in other TUs (or that is addressable), but in the same TU the compiler can optimize exactly the same without it.

    I don't know the answer, but I'd guess it includes something like "GCC is over 30 years old and it was built incrementally by Unix hackers who are very eager to add random non-standard extensions".



  • I'm confused as to how

    no hidden control flow

    matches up with

    Top level declarations such as global variables are order-independent and lazily analyzed.

