An amusing rant about C


  • Java Dev

    @cvi Slaying my own warthog and googling: Yes it does.


  • Banned

    @cvi said in An amusing rant about C:

    @Gąska said in An amusing rant about C:

    Nowadays? I don't think there's a single platform combo where you're likely to deploy code on both and actually benefit from having a smaller int on one of them.

    Android? Still plenty of 32-bit devices out there, but 64-bit devices aren't uncommon either.

    Yes, but you probably don't want all your 4-byte ints to become 8-byte ints when you compile your codebase on ARM64.

    4-byte int is still the most performant to work with on x64. Caches are a hell of a drug.

    That's an optimization

    So is matching your int size to processor word. If you ignore performance, there's no inherent reason why any int size is naturally preferable to another, except value range and storage space.

    and you should make a conscious decision to perform this optimization as a programmer

    Wait, are you arguing for or against variable-sized ints? Because variable-sized ints are the opposite of a conscious decision.

    knowing that by doing so you declare that the value is never going to exceed 2G or 4G.

    int is 4 bytes by default on 64-bit platforms in basically all languages. It's the larger size that you have to opt into.

    Besides, there are plenty of cases where

    • the integer is never going to touch memory, but just lives in a register, so who cares about caches

    Optimization!

    • the integer is already 64-bits in memory (e.g. container sizes etc), and you're just unnecessarily downsizing it to 32-bits

    And it's C's fault for making it so easy to do. If not for implicit conversions, it wouldn't happen nearly as often (see the sketch after this list of points).

    • the integer should actually be 64-bits when possible, because you're dealing with something like file IO, where a few giga-elements aren't uncommon

    And it's unlikely those same things will shrink to under a gigabyte just because you compiled for a different platform. 64-bit integers good. Variable-sized integers bad.

    • the integer is stored in memory, but followed by a 64-bit element (e.g., a pointer), so you get an extra 32 bits of padding regardless

    Depends. Very often it still makes sense to make it 32-bit in this situation anyway, because other parts of code need it as 32 bits. And padding in file formats is free forward compatibility.
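
    A minimal sketch of the implicit-conversion point from the second bullet above (hypothetical example; vec_len stands in for any API that hands back a 64-bit size on an LP64 platform):

        #include <stddef.h>
        #include <stdio.h>

        /* Hypothetical: stands in for any container/size API that returns size_t. */
        static size_t vec_len(void) { return 5000000000u; /* ~5 giga-elements */ }

        int main(void)
        {
            int n = vec_len();     /* implicit conversion: accepted without a
                                      diagnostic by default, typically truncates
                                      where int is 32 bits */
            size_t m = vec_len();  /* keeps the full value */
            printf("%d vs %zu\n", n, m);
            return 0;
        }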

    If you're designing a data structure that is in memory and that potentially has a few million elements, yeah, then keeping the size down makes a ton of sense. You're probably still better off using something that isn't int, e.g., something like the int32_t that you suggested earlier.

    With millions? Yes, you're right. But with thousands, 32 bits beat both 16 bits and 64 bits. It's weird, but that's how it is. Modern CPUs aren't optimized for 16-bit operations.

    Because there's only one way to implement unsigned integers, but many to implement signed. And saturating unsigned math is rarely useful.

    Unsigned integers can

    • wrap
    • saturate
    • signal

    on over-/underflow (maybe more). All three behaviours are useful on occasion. Same for signed integers. Not sure what makes unsigned integers different in this respect.

    Remember that the decision about it was made in times when 1's complement was still a thing.


  • Discourse touched me in a no-no place

    @cvi said in An amusing rant about C:

    You might want to count /MT, /MTd, /MD, /MDd et al. as different runtimes. (Not sure if this is what @dkf meant in his reply.)

    Not quite. There are other incompatibilities in the various runtimes from different vendors (and no, they're not all the MSVC one). Those tend to mostly be at the usage-protocol level though; the calling convention and types match (necessary in order to use the OS API/ABI, and nobody really cares about programs that can't do that).

    Doing lots of support of people doing weird programming stuff lets you see all the really strange things that are out there in the wild…


  • Discourse touched me in a no-no place

    @PleegWat said in An amusing rant about C:

    I know ARM just rejects unaligned load and stores outright.

    Not so simple. It actually depends on whether you've got the optional MMU component on the chip. If you do, you can do unaligned memory accesses, and if you don't (like us at work) then no unaligned accesses for you! Smartphone processors will definitely have the MMU — they're totally necessary for complex operating systems with varying trust levels in user space — but lots of embedded devices won't.


  • Discourse touched me in a no-no place

    @Gąska said in An amusing rant about C:

    int is 4 bytes by default on 64-bit platforms in basically all languages. It's the larger size that you have to opt into.

    ILP64 (sizeof(int) = sizeof(long) = sizeof(void*) = 64 bits) was tried on the DEC Alpha back in the 1990s. It was found to suck and cause a lot of problems. All 64 bit platforms since then have tended to be I32LP64, though Windows prefers IL32P64 and that's nasty (it has 64 bit integers available, but with different non-standard names).
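
    For reference, a quick sketch to check which of those data models a given compiler targets (the sizes in the comment are the typical ones for each model):

        #include <stdio.h>

        int main(void)
        {
            /* LP64  (most 64-bit Unix):  int=4, long=8, void*=8
               LLP64 (64-bit Windows):    int=4, long=4, void*=8
               ILP64 (as described above): int=8, long=8, void*=8 */
            printf("int=%zu long=%zu long long=%zu void*=%zu\n",
                   sizeof(int), sizeof(long), sizeof(long long), sizeof(void *));
            return 0;
        }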


  • BINNED

    @dkf it’s just called long long. Not non-standard, but stupid to type.



  • @Gąska Some fair points in there.

    Just to clarify, I'm not trying to argue against the use of 32-bit integers in general 64-bit use, but rather lament the suggestion of default `int' for everything. First point is that it's too small for generic 64-bit code (so more from the POV of library code, less from an end application, where you might know more about the exact amounts of data you're processing; though I fail to see why you'd want to limit yourself here). (Second point is the whole signed vs unsigned mess that we don't necessarily need to get into again.)

    But with thousands, 32 bits beat both 16 bits and 64 bits. It's weird, but that's how it is. Modern CPUs aren't optimized for 16-bit operations.

    You're not wrong about modern CPUs not being optimized for 16-bit ops. (And they're subsequently even worse at dealing with e.g. 8-bit data.) I briefly went over the instruction lists for SkyCoffee/Cannon Lake (had that one open), and the instruction throughput doesn't significantly differ between 16-bit and 32-bit operations.

    So, I'm quite confident in saying that for ~thousands, other factors are much more important. You already pointed out caches. 16-bit will halve the memory, so you'll still utilize caches better. If you're not just crunching through an array, 16-bit data might let you pack your data better (as per your previous arguments).

    If you go to SIMD (not sure why you'd do that for ~thousands), the number of elements you process with each instruction doubles.
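
    To illustrate the packing/cache point with a sketch (exact sizes are implementation-defined, but these are the typical results on mainstream 64-bit compilers):

        #include <stdint.h>
        #include <stdio.h>

        struct wide   { uint32_t a, b, c, d; };  /* typically 16 bytes */
        struct narrow { uint16_t a, b, c, d; };  /* typically 8 bytes  */

        int main(void)
        {
            /* Same four fields, half the memory traffic per element when
               streaming through a large array of either. */
            printf("wide=%zu narrow=%zu\n",
                   sizeof(struct wide), sizeof(struct narrow));
            return 0;
        }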

    Remember that the decision about it was made in times when 1's complement was still a thing.

    I'm aware. Doesn't mean I can't wish for a different one. (Or at least for the option to declare an integer type with different behaviour.)


  • Considered Harmful

    I don't think anyone disagrees that using variable-sized integers nowadays, particularly in functions called by other people, is almost always a stupid idea.
    Does anyone disagree though that they were absolutely essential for at least the first two decades of C's existence?



  • @LaoC said in An amusing rant about C:

    I don't think anyone disagrees that using variable-sized integers nowadays, particularly in functions called by other people, is almost always a stupid idea.

    size_t?


  • Considered Harmful

    @cvi said in An amusing rant about C:

    @LaoC said in An amusing rant about C:

    I don't think anyone disagrees that using variable-sized integers nowadays, particularly in functions called by other people, is almost always a stupid idea.

    size_t?

    Almost


  • Discourse touched me in a no-no place

    @Gąska said in An amusing rant about C:

    Modern CPUs aren't optimized for 16-bit operations.

    That again depends on exactly which architecture you're talking about. It may even vary with sub-architecture; there are ARM variants that have operations that are extremely useful for 16-bit operations (and usually it's just your compiler that knows about them). The exact variations that you see in practice will depend on what application area the particular processors are intended for.

    You wanted this story to be simple? Well tough. 😜


  • Discourse touched me in a no-no place

    @cvi said in An amusing rant about C:

    size_t?

    That's an API type, not an ABI type.



  • @LaoC said in An amusing rant about C:

    Almost

    What do you propose to use instead?



  • @dkf said in An amusing rant about C:

    That's an API type, not an ABI type.

    Not sure I agree with the distinction, although it is typically tied to the size of a pointer (segmented memory is as always more complicated).


  • Discourse touched me in a no-no place

    @cvi said in An amusing rant about C:

    Not sure I agree with the distinction

    The point is that it is an unsigned integer type that is large enough to hold the maximum size of any memory object that can be allocated. But that's the name of a type in the API; in the ABI, you'd instead version things so that you have parallel allocators, one for 32-bit sized things and another for 64-bit sized things; bind to the right one and all will work. But normally you don't need that sort of thing; you'll usually have the width of size_t always being the same width as pointers, and you'll have the size of pointers dictated by the more basic requirements (i.e., function call signatures).

    Memory allocation is tied to the platform very tightly exactly because it is minting pointers. There is something of a sense that it's the exact thing that a platform “triple” is really describing.


  • Banned

    @LaoC said in An amusing rant about C:

    I don't think anyone disagrees that using variable-sized integers nowadays, particularly in functions called by other people, is almost always a stupid idea.

    Um... @djls45 in this very thread?

    Does anyone disagree though that they were absolutely essential for at least the first two decades of C's existence?

    I do.


  • Considered Harmful

    @dkf said in An amusing rant about C:

    @Gąska said in An amusing rant about C:

    Modern CPUs aren't optimized for 16-bit operations.

    That again depends on exactly which architecture you're talking about. It may even vary with sub-architecture; there are ARM variants that have operations that are extremely useful for 16-bit operations (and usually it's just your compiler that knows about them). The exact variations that you see in practice will depend on what application area the particular processors are intended for.

    You wanted this story to be simple? Well tough. 😜

    And this isn't even talking about massively parallel rough int stuff like a physics accelerator does. ALL the optimized path is 16 bit and there's no slow path.



  • @Gąska said in An amusing rant about C:

    @djls45 said in An amusing rant about C:

    This is a stupid complaint by an author who doesn't understand what she's actually complaining about.

    This author literally wrote a tool to automatically detect ABI incompatibilities. Beyond that, they're a long time contributor to the Rust compiler and the Rust project in general; they wrote most of the official documentation of unsafe Rust features; as far as I can tell they're a paid Mozilla employee (Mozilla is in charge* of Rust); and they seem to have a lot of friends close to C and C++ standards committees.

    But sure, go on.

    Yeah, I googled her and saw all that (before you replied, even!), but that doesn't mean she isn't mistaken about what she's complaining about.

    Integer types aren't well-defined in C? That's because integer types aren't well-defined across all processor architectures.

    Didn't stop Rust :mlp_shrug: There's u8, u16, u32, u64, u128, usize, i8, i16, i32, i64, i128, isize, and that's it. No shorts, no longs. Just fixed size integers + one pointer-sized type.

    the problem is moreso with programmers who wrote their code assuming that int would be the same across all architectures and platforms, and then didn't want to fix their code's portability problem when int on a different architecture turns out to be a different size. In actuality, those programmers were writing Assembly-in-C code, not C code.

    Look - when I make a program, I want it to behave in a certain way. Ideally, I want the behavior to stay the same, regardless of where, when, or how I compile it. The more things are left up for the compiler to decide, the less trust I have in my own code and the more effort I have to spend making sure identical code actually works identically on all platforms. The more my tools get in the way of getting the actual work done, the shittier they are. And C is at the extreme end of getting in the way of cross-platform coding.

    Is this shittiness justifiable? Yeah, for the most part it is. But justifiable shittiness is still shittiness. C is the worst language for cross-platform development specifically because of all those features you mentioned that were intended with cross-platform development in mind. Unfortunately, the designers of C missed the mark completely and in nearly every case did the exact opposite of what would be actually helpful for cross-platform development. "I want my file format's version indicator to take twice as many bytes on 64-bit architectures" said no one ever.

    Sure, I can move everything to uint32_t and friends. In my code. I can't do the same with the standard library. And almost nothing in the standard library uses fixed-size types. So I have to deal with variable-sized integers whether I want it or not. (Edit: also, a recommendation to use uint32_t wherever possible is itself an admission variable-sized integers were a mistake.)

    All of this goes back to the issue of writing Assembly-in-{other language}. High-level, cross-compatible code ideally should not have anything in it that relies on the specific underlying hardware. Using specific integer sizes is one of those architecture-specific things that breaks cross-compatibility, so C doesn't tell you how big each one of its integral types is "supposed" to be. A char type represents a character, regardless of the actual number of bits used to represent it. A short indicates small numbers, long for large numbers, long long for extra-large numbers, and int for whichever of those sizes is most efficient for the processor to handle.

    But apparently no one wants to write programs that way, so languages have to drag all the possible variations in size up to the coder's level explicitly.

    The pointer type is really the only one left that almost everyone uses without caring how big it is.

    You want to write a language as portable as C? Well, then either you have to write all the different possible compilation targets that C supports or you have to write your "transpiler" to output C code and pass it to a C compiler. One of these involves a whole lot more work than the other, so guess which one nearly everyone picks?

    Out of popular commercially-viable compilers, every single one went with the former. I wonder why that is.

    And how many of those compilers are themselves written in C, or can be traced back to a version that was originally written in C?

    Just to be absolutely clear: these are not C's problems!

    No, strictly speaking it isn't. But they are problems of every interface that relies on C API. Every single one of them. And there are a lot of very important C APIs in use. So they became everybody's problems instead.

    No, these are architecture compatibility problems that everybody has to deal with anyways, and that would exist regardless of C.



  • @LaoC said in An amusing rant about C:

    I don't think anyone disagrees that using variable-sized integers nowadays, particularly in functions called by other people, is almost always a stupid idea.

    It is stupid nowadays, but only because programmers are unwilling to use them properly.

    Does anyone disagree though that they were absolutely essential for at least the first two decades of C's existence?

    Yes. They were not essential then, nor are they essential now. But modern programmers (and so, the languages they use) are all stuck in this idea that cross-compatibility means making sure that all programs function exactly identically, even when that's unnecessary or even more difficult with certain processors.



  • @djls45 said in An amusing rant about C:

    The pointer type is really the only one left that almost everyone uses without caring how big it is.

    Who remembers near and far pointers for 8- and 16-bit Intel processors? Yay, segmented architecture.


  • Discourse touched me in a no-no place

    @HardwareGeek said in An amusing rant about C:

    Who remembers near and far pointers for 8- and 16-bit Intel processors?

    Please do not remind me. There are things best forgotten.


  • Banned

    @djls45 said in An amusing rant about C:

    Integer types aren't well-defined in C? That's because integer types aren't well-defined across all processor architectures.

    Didn't stop Rust :mlp_shrug: There's u8, u16, u32, u64, u128, usize, i8, i16, i32, i64, i128, isize, and that's it. No shorts, no longs. Just fixed size integers + one pointer-sized type.

    the problem is moreso with programmers who wrote their code assuming that int would be the same across all architectures and platforms, and then didn't want to fix their code's portability problem when int on a different architecture turns out to be a different size. In actuality, those programmers were writing Assembly-in-C code, not C code.

    Look - when I make a program, I want it to behave in a certain way. Ideally, I want the behavior to stay the same, regardless of where, when, or how I compile it. The more things are left up for the compiler to decide, the less trust I have in my own code and the more effort I have to spend making sure identical code actually works identically on all platforms. The more my tools get in the way of getting the actual work done, the shittier they are. And C is at the extreme end of getting in the way of cross-platform coding.

    Is this shittiness justifiable? Yeah, for the most part it is. But justifiable shittiness is still shittiness. C is the worst language for cross-platform development specifically because of all those features you mentioned that were intended with cross-platform development in mind. Unfortunately, the designers of C missed the mark completely and in nearly every case did the exact opposite of what would be actually helpful for cross-platform development. "I want my file format's version indicator to take twice as many bytes on 64-bit architectures" said no one ever.

    Sure, I can move everything to uint32_t and friends. In my code. I can't do the same with the standard library. And almost nothing in the standard library uses fixed-size types. So I have to deal with variable-sized integers whether I want it or not. (Edit: also, a recommendation to use uint32_t wherever possible is itself an admission variable-sized integers were a mistake.)

    All of this goes back to the issue of writing Assembly-in-{other language}. High-level, cross-compatible code ideally should not have anything in it that relies on the specific underlying hardware. Using specific integer sizes is one of those architecture-specific things that breaks cross-compatibility, so C doesn't tell you how big each one of its integral types is "supposed" to be.

    You're correct about the first half, but sorely mistaken about the second. It's when you don't use specific integer sizes that you are relying on specific underlying hardware. C forces you to be aware of architectural differences.

    A char type represents a character, regardless of the actual number of bits used to represent it.

    Actually no. The C standard defines char to be exactly one byte, and a byte to be at least 8 bits and correspond to the smallest addressable unit of the machine.
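
    In other words (quick sketch): sizeof(char) is 1 by definition, and CHAR_BIT from <limits.h> tells you how many bits that byte actually has.

        #include <limits.h>
        #include <stdio.h>

        int main(void)
        {
            /* sizeof(char) == 1 by definition; CHAR_BIT >= 8 per the standard,
               and exactly 8 on anything mainstream today. */
            printf("sizeof(char)=%zu, CHAR_BIT=%d\n", sizeof(char), CHAR_BIT);
            return 0;
        }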

    A short indicates small numbers, long for large numbers, long long for extra-large numbers, and int for whichever of those sizes is most efficient for the processor to handle.

    Which is an utterly useless feature unless you can guarantee a certain integer can fit in a certain type, which you cannot because the whole point is to not know the specific value range.

    When you're messing with memory in some way, then yes, (u)intptr_t is a useful abstraction for e.g. the number of elements in an array, when the array can potentially span the whole address space. But in all other use cases, the value range you must be able to handle is independent of the architecture you're running on.

    Whenever interacting with the outside world - through files, network packets etc. - you MUST use precise sizes or you'll be unable to de/serialize objects properly. You use int32_t and friends. When you're on some retarded platform where they're not available, you use int_least32_t, which is mandatory per the standard, to at least ensure it can contain all possible values. And if you really, really care about performance so much that you're willing to inconvenience yourself, but not enough to actually learn the target architecture, you use int_fast32_t instead.

    These cover literally every use case you might possibly think of. There's no place for raw short, int, long or long long in C code at all. Hasn't been for the last 23 years.
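
    A small sketch of that serialization point, using the <stdint.h> types named above (field and function names are made up for illustration):

        #include <stddef.h>
        #include <stdint.h>

        /* Hypothetical on-disk record: every field has a fixed width no matter
           where the code is compiled. */
        struct record {
            uint32_t version;  /* always 4 bytes - never "twice as many on 64-bit" */
            int64_t  offset;   /* always 8 bytes */
        };

        /* Serialize explicitly, byte by byte, so the endianness is fixed too
           (little-endian here). */
        static void put_u32le(unsigned char *dst, uint32_t v)
        {
            dst[0] = (unsigned char)(v);
            dst[1] = (unsigned char)(v >> 8);
            dst[2] = (unsigned char)(v >> 16);
            dst[3] = (unsigned char)(v >> 24);
        }

        /* For pure computation where only the minimum range matters,
           int_fast32_t lets the compiler pick whatever the target handles
           quickest while still guaranteeing at least 32 bits. */
        static int_fast32_t sum(const int32_t *xs, size_t n)
        {
            int_fast32_t acc = 0;
            for (size_t i = 0; i < n; i++)
                acc += xs[i];
            return acc;
        }

        int main(void)
        {
            struct record r = { 7u, 1024 };
            unsigned char buf[4];
            put_u32le(buf, r.version);

            int32_t xs[3] = { 1, 2, 3 };
            return (sum(xs, 3) == 6 && buf[0] == 7) ? 0 : 1;
        }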

    You want to write a language as portable as C? Well, then either you have to write all the different possible compilation targets that C supports or you have to write your "transpiler" to output C code and pass it to a C compiler. One of these involves a whole lot more work than the other, so guess which one nearly everyone picks?

    Out of popular commercially-viable compilers, every single one went with the former. I wonder why that is.

    And how many of those compilers are themselves written in C

    Only GCC, Clang and Intel's C/C++ compiler, as far as I can tell. MSVC is made in C++ AFAIK. Every other native compiler is written in the language it's compiling. There may be some exceptions depending on how obscure you wish to go.

    or can be traced back to a version that was originally written in C?

    That's cheating. Every new language must, by necessity, start with a compiler written in another language.

    Just to be absolutely clear: these are not C's problems!

    No, strictly speaking it isn't. But they are problems of every interface that relies on C API. Every single one of them. And there are a lot of very important C APIs in use. So they became everybody's problems instead.

    No, these are architecture compatibility problems that everybody has to deal with anyways, and that would exist regardless of C.

    No. Not architecture compatibility. Just library compatibility. Even if you only target one compiler on one system on one architecture, you still have to deal with that.

    And anyway. If C was meant to abstract away architectural differences, but you must be acutely aware of architectural differences when writing portable C... Don't you think C failed at abstracting those away?



  • @Gąska said in An amusing rant about C:

    And anyway. If C was meant to abstract away architectural differences, but you must be acutely aware of architectural differences when writing portable C... Don't you think C failed at abstracting those away?

    I'm completely ignorant of the broader debate, being a lowly peon writing in high level languages. But this paragraph, to me, seems exactly right. An abstraction that doesn't abstract what it says it does... Isn't a good abstraction.



  • @Gąska said in An amusing rant about C:

    Every other native compiler is written in the language it's compiling.

    Bootstrapping notwithstanding.



  • @Benjamin-Hall said in An amusing rant about C:

    @Gąska said in An amusing rant about C:

    And anyway. If C was meant to abstract away architectural differences, but you must be acutely aware of architectural differences when writing portable C... Don't you think C failed at abstracting those away?

    I'm completely ignorant of the broader debate, being a lowly peon writing in high level languages. But this paragraph, to me, seems exactly right. An abstraction that doesn't abstract what it says it does... Isn't a good abstraction.

    Not abstracting machine-specific details like word size was a deliberate decision that dates back at least to K&R. The standards committee it eventually accreted did not (and I guess still does not) "...want to force programmers into writing portably, to preclude the use of C as a “high-level assembler”: the ability to write machine-specific code is one of the strengths of C."
    [http://www.open-std.org/jtc1/sc22/wg14/www/docs/C99RationaleV5.10.pdf]


  • Banned

    @Watson with dedication to everyone whose browser cannot handle 200 pages of PDF inside onebox. topspin



  • @Gąska said in An amusing rant about C:

    @dkf fixed point math implementation is on my list of things I hope to never have to do in my career.

    I concur. And I say this after having had to make use of fixed-point math about 25 years ago to get decent performance out of a 486 as compared to FPU operations.



  • @PleegWat said in An amusing rant about C:

    @HardwareGeek Yup, I learned recently (well, last year or so) that famous processors from the 80s like the 6502 still have close variants in production to this day.

    My favorite was probably the Z80 as it's weird in the x86 kind of way but also reasonably flexible. The lack of native multiplication and division instructions is disappointing though.



  • @Gąska said in An amusing rant about C:

    Much like with JS, they could've chosen anything they wanted, and they chose the absolutely worst possible option.

    JS is the best option to choose because it's what web browsers use.

    Sort of like how HTTP is the best option for a web application framework because it's also what web browsers use.

    And we all know the story of how each of those highly inappropriate candidates were chosen to become the foundations of our future - by accident of history.


  • BINNED

    @Gąska said in An amusing rant about C:

    @Watson with dedication to everyone whose browser cannot handle 200 pages of PDF inside onebox. topspin

    Did I complain about this before?
    I’m old (not really, but really) and my memory sucks.



  • @Benjamin-Hall said in An amusing rant about C:

    @Gąska said in An amusing rant about C:

    And the remark that Windows does so much better seems weirdly tangential

    Yeah, that's how tangents work.

    You know what's really weirdly tangential?

    Opposite over adjacent?



  • @Groaner said in An amusing rant about C:

    @Benjamin-Hall said in An amusing rant about C:

    @Gąska said in An amusing rant about C:

    And the remark that Windows does so much better seems weirdly tangential

    Yeah, that's how tangents work.

    You know what's really weirdly tangential?

    Opposite over adjacent?

    No, that's normally tangential. Although the phrase "normally tangential" is weird, because if a line segment is normal to another at a point, it can't also be tangential at that same point. Normally. But this discussion about angles is also tangential. So it's both normally tangential and weirdly tangential at the same time.



  • @cvi said in An amusing rant about C:

    That's an optimization, and you should make a conscious decision to perform this optimization as a programmer, knowing that by doing so you declare that the value is never going to exceed 2G or 4G.

    This is often a very reasonable assumption, though. Picklists of states/provinces and countries will never have billions of items, and most warehouses, for example, will have trouble picking and shipping more than a billion orders in the service lifetime of the facility.

    When it comes to things like database clustering keys (where the size of the key has propagating effects) for low-activity tables, it might not be a terrible idea.



  • @djls45 said in An amusing rant about C:

    A char type represents a character, regardless of the actual number of bits used to represent it.

    I miss the days when char actually was a character. But I'm fine making the concession of it now being an atomic UTF-8 code unit inside a null-terminated C string.
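
    A sketch of what that concession looks like in practice: strlen counts char units (bytes), not characters, and the multi-byte sequences have to be dealt with on top of that.

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char *s = "G\xC4\x85ska";  /* "Gąska" in UTF-8; 'ą' is two bytes */
            size_t bytes = strlen(s);        /* 6 char units */
            size_t codepoints = 0;
            for (const char *p = s; *p; p++)
                if (((unsigned char)*p & 0xC0) != 0x80)  /* skip continuation bytes */
                    codepoints++;
            printf("bytes=%zu codepoints=%zu\n", bytes, codepoints);  /* 6 and 5 */
            return 0;
        }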



  • @HardwareGeek said in An amusing rant about C:

    @djls45 said in An amusing rant about C:

    The pointer type is really the only one left that almost everyone uses without caring how big it is.

    Who remembers near and far pointers for 8- and 16-bit Intel processors? Yay, segmented architecture.

    I do!

    Fortunately, I started my low-level programming days at the tail end of that era and spent most of my time in the midst of DOS Extenders, which provided a flat memory model and you'd use the segment registers for selectors instead.

    The idea that a segment and offset of A000:0000 combined to yield an actual address of A0000 was kind of weird though.
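
    The arithmetic behind that weirdness, as a quick sketch: in real mode the physical address is the segment shifted left by four bits plus the offset.

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint16_t segment = 0xA000, offset = 0x0000;
            uint32_t physical = ((uint32_t)segment << 4) + offset;
            printf("%04X:%04X -> %05X\n", (unsigned)segment, (unsigned)offset,
                   (unsigned)physical);  /* A000:0000 -> A0000 */
            return 0;
        }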


  • Banned

    @topspin said in An amusing rant about C:

    @Gąska said in An amusing rant about C:

    @Watson with dedication to everyone whose browser cannot handle 200 pages of PDF inside onebox. topspin

    Did I complain about this before?
    I’m old (not really, but really) and my memory sucks.

    Remember when I linked C++ spec?



  • Now on to the article...

    This Aria person seems to be quite upset about being put in a position where one is expected to solve thorny legacy architectural problems and muddle through difficult decisions while still having to live within suffocating parameters.

    That's why we get paid.

    I'm sure more than a few people might have cursed K&R's decision to define time as the number of seconds expressed as a 32-bit integer from midnight January 1, 1970 UTC. However, I seem to remember that Dennis himself considered it a job well done if his timing system was able to encapsulate his entire life. And it did. His time was carefully spent building the very foundations of our digital world today, and now the torch has been passed on to us to maintain it. He most definitely had to work within difficult constraints in his day, just as we do now.

    The only reason we are able to look back and call out crappy flawed designs (which were good enough back in the day) is that we have the benefit of 40+ years of hindsight.


  • Banned

    @Groaner said in An amusing rant about C:

    @cvi said in An amusing rant about C:

    That's an optimization, and you should make a conscious decision to perform this optimization as a programmer, knowing that by doing so you declare that the value is never going to exceed 2G or 4G.

    This is often a very reasonable assumption, though. Picklists of states/provinces and countries will never have billions of items, and most warehouses, for example, will have trouble picking and shipping more than a billion orders in the service lifetime of the facility.

    On one hand, yes, these in particular will never be in billions. On the other, that was the original motivation behind 16-bit Unicode encoding.


  • Banned

    @Groaner said in An amusing rant about C:

    Now on to the article...

    This Aria person seems to be quite upset about being put in a position where one is expected to solve thorny legacy architectural problems and muddle through difficult decisions while still having to live within suffocating parameters.

    That's why we get paid.

    As if you never complained on this forum about having to work with legacy code as part of your job. That you get paid for.

    The only reason we are able to look back and call out crappy flawed designs (which were good enough back in the day) is that we have the benefit of 40+ years of hindsight.

    Absolutely. Doesn't make it any less crap. That's how progress works. We know better now, and therefore we're able to identify bad decisions in old software.


  • BINNED

    @Gąska said in An amusing rant about C:

    @topspin said in An amusing rant about C:

    @Gąska said in An amusing rant about C:

    @Watson with dedication to everyone whose browser cannot handle 200 pages of PDF inside onebox. topspin

    Did I complain about this before?
    I’m old (not really, but really) and my memory sucks.

    Remember when I linked C++ spec?

    To be fair, that's 1000+ pages.


  • Banned

    @topspin 1841 to be exact.

    In my defense, I didn't realize it's going to onebox EVERYTHING.



  • @Groaner said in An amusing rant about C:

    This is often a very reasonable assumption, though. Picklists of states/provinces and countries will never have billions of items, and most warehouses, for example, will have trouble picking and shipping more than a billion orders in the service lifetime of the facility.

    What about using just 16 bits then? What about 24 bits? :half-trolling:

    There is this (VkAccelerationStructureInstanceKHR) structure in the Vulkan spec that uses 24-bit integer fields, and that people occasionally trip over and whine about.
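
    For the curious, the way a 24-bit field usually ends up expressed in a C header is as bitfields packed into a 32-bit word, roughly along these lines (a sketch, not the verbatim Vulkan definition):

        #include <stdint.h>

        /* Sketch: two sub-word fields share one 32-bit storage unit, so the
           24-bit index can't be taken by address or passed around like a
           normal integer - which is what people trip over. */
        struct instance_sketch {
            uint32_t custom_index    : 24;  /* only 0 .. 16777215 fits */
            uint32_t visibility_mask : 8;
            uint64_t blas_reference;        /* hypothetical 64-bit field */
        };

        int main(void)
        {
            struct instance_sketch inst = {0};
            inst.custom_index = 0xFFFFFF;   /* largest value that fits in 24 bits */
            inst.visibility_mask = 0xFF;
            inst.blas_reference = 0;
            return (int)(inst.custom_index >> 24);  /* 0: the field really is 24 bits */
        }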



  • @cvi said in An amusing rant about C:

    (VkAccelerationStructureInstanceKHR)

    :wtf: was that named by a diehard Java framework architect?

    (Really, I know nothing about the topic, but that name...)


  • Banned

    @Benjamin-Hall said in An amusing rant about C:

    @cvi said in An amusing rant about C:

    (VkAccelerationStructureInstanceKHR)

    :wtf: was that named by a diehard Java framework architect?

    Design by committee. Literally.

    Edit: also, no namespaces. Another thing that makes C naturally painful to use anywhere for anything.



  • @Benjamin-Hall said in An amusing rant about C:

    was that named by a diehard Java framework architect?

    The name makes sense in the context. Instance/Instancing refers to using a single mesh/model multiple times in a scene. In this case, the mesh is represented by an acceleration structure (e.g., a bounding volume hierarchy) that is used to accelerate ray tracing/ray queries. (Vk is the Vulkan prefix, because as @Gąska points out, no namespaces. KHR is a suffix indicating that this isn't core functionality, but rather a Khronos extension. Khronos extensions tend to be multi-vendor, NVIDIA's original ray tracing stuff had a NV suffix to indicate that it was NVIDIA-specific.)

    In terms of Javaesque naming, it's by far not the worst offender. Let me introduce you to stuff like

    • VkPhysicalDeviceShaderDemoteToHelperInvocationFeaturesEXT
      and
    • VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_SUBGROUP_UNIFORM_CONTROL_FLOW_FEATURES_KHR


  • @cvi said in An amusing rant about C:

    @Benjamin-Hall said in An amusing rant about C:

    was that named by a diehard Java framework architect?

    The name makes sense in the context. Instance/Instancing refers to using a single mesh/model multiple times in a scene. In this case, the mesh is represented by an acceleration structure (e.g., a bounding volume hierarchy) that is used to accelerate ray tracing/ray queries. (Vk is the Vulkan prefix, because as @Gąska points out, no namespaces. KHR is a suffix indicating that this isn't core functionality, but rather a Khronos extension. Khronos extensions tend to be multi-vendor, NVIDIA's original ray tracing stuff had a NV suffix to indicate that it was NVIDIA-specific.)

    In terms of Javaesque naming, it's by far not the worst offender. Let me introduce you to stuff like

    • VkPhysicalDeviceShaderDemoteToHelperInvocationFeaturesEXT
      and
    • VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_SUBGROUP_UNIFORM_CONTROL_FLOW_FEATURES_KHR

    I'm getting MFC flashbacks.



  • @Carnage It's definitively not for people who like to stick to max 80 characters line width.


  • BINNED

    @cvi said in An amusing rant about C:

    @Carnage It's definitively not for people who like to stick to max 80 characters line width.

    Or for people who use ALL_CAPS for macros.



  • @Gąska said in An amusing rant about C:

    @Groaner said in An amusing rant about C:

    @cvi said in An amusing rant about C:

    That's an optimization, and you should make a conscious decision to perform this optimization as a programmer, knowing that by doing so you declare that the value is never going to exceed 2G or 4G.

    This is often a very reasonable assumption, though. Picklists of states/provinces and countries will never have billions of items, and most warehouses, for example, will have trouble picking and shipping more than a billion orders in the service lifetime of the facility.

    On one hand, yes, these in particular will never be in billions. On the other, that was the original motivation behind 16-bit Unicode encoding.

    Which would have been plenty fine until some asshole decided it would be cool to not only have every possible variant of human script encoded, but every possible emoji with every possible skin tone.

    Because being able to name a SQL database with the eggplant emoji is the most pressing need in computing.


  • Java Dev

    @Groaner As I recall, the point where they exhausted the BMP came before the fictional scripts & emojis started. Possibly even before they went all-in on extinct scripts.

    And 'every possible skin tone' is not nearly as bad as it sounds from a codepoint perspective since it is handled with combining characters.

