An amusing rant about C



  • @cvi said in An amusing rant about C:

    @Groaner said in An amusing rant about C:

    This is often a very reasonable assumption, though. Picklists of states/provinces and countries will never have billions of items, and most warehouses, for example, will have trouble picking and shipping more than a billion orders in the service lifetime of the facility.

    What about using just 16 bits then? What about 24 bits? :half-trolling:

    There is this (VkAccelerationStructureInstanceKHR) structure in the Vulkan spec that uses 24-bit integer fields, and that people occasionally trip over and whine about.

    24-bit integers aren't an uncommon use case. They're called RGB colors, and I've had to manually adjust texture-loading code that erroneously expected RGBA in order to account for them.

    Unfortunately I don't recall there being a native type to store them ☹
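
    For illustration: since there's no uint24_t, the usual workarounds in C are a bit-field or a three-byte struct, something along these lines (names made up):

    #include <stdint.h>

    /* Roughly how the Vulkan struct mentioned above does it: 24-bit fields carved
       out of a uint32_t. (Bit-fields on uint32_t are technically
       implementation-defined, but widely accepted.) Field names are made up. */
    struct packed_instance {
        uint32_t custom_index : 24;   /* the 24-bit integer people trip over */
        uint32_t mask         :  8;
    };

    /* A 24-bit pixel the image-loader way: three bytes, no padding in practice. */
    typedef struct { uint8_t r, g, b; } rgb24;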



  • @PleegWat said in An amusing rant about C:

    @Groaner As I recall, the point at which they exhausted the BMP came before the fictional scripts & emojis started. Possibly even before they went all-in on extinct scripts.

    In my day, there were only 256 characters, and we only really needed 128 of them since the extended characters were mostly for ASCII art. 🚎

    And 'every possible skin tone' is not nearly as bad as it sounds from a codepoint perspective since it is handled with combining characters.

    I'm not so sure. Every time someone proposes that we do away with everything except UTF-8, you'll get someone whining about how variable-sized characters trip up their use case, because they want to be able to randomly access the nth character of a string or something.
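
    To be fair to the whiners: with UTF-8, "give me the nth character" really is a linear scan over bytes rather than an index. A rough sketch (counting code points, not grapheme clusters; function name made up):

    #include <stddef.h>

    /* Return a pointer to the start of the n-th code point in a NUL-terminated
       UTF-8 string, or NULL if the string has fewer than n+1 code points.
       Continuation bytes have the form 10xxxxxx. */
    static const char *utf8_nth(const char *s, size_t n)
    {
        for (; *s; ++s) {
            if (((unsigned char)*s & 0xC0) != 0x80) {   /* start of a code point */
                if (n-- == 0)
                    return s;
            }
        }
        return NULL;
    }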



  • @Groaner said in An amusing rant about C:

    Because being able to name a SQL database with the eggplant emoji is the most pressing need in computing.

    Hey, "naming things" is widely recognized as one of the hardest problems in computer science!

    @cvi said in An amusing rant about C:

    In terms of Javaesque naming, it's far from the worst offender. Let me introduce you to stuff like

    • VkPhysicalDeviceShaderDemoteToHelperInvocationFeaturesEXT
      and
    • VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_SUBGROUP_UNIFORM_CONTROL_FLOW_FEATURES_KHR

    Now imagine how much better these would be with an aubergine 🍆


  • Discourse touched me in a no-no place

    @Groaner said in An amusing rant about C:

    My favorite was probably the Z80 as it's weird in the x86 kind of way but also reasonably flexible. The lack of native multiplication and division instructions is disappointing though.

    If you've got the right access to the IP, I think there's enough holes in the Z80 instruction set that you could staple your own in there as a vendor extension. I'm not sure about that though. I know for sure you can do that sort of thing with RISC-V and can't usually with ARM (memory mapped processor extensions are the thing there). And the x86/ia64 are irrelevant there; you don't typically get the option to use those in a SoC in the first place.


  • Discourse touched me in a no-no place

    @Groaner said in An amusing rant about C:

    In my day, there were only 256 characters, and we only really needed 128 of them since the extended characters were mostly for ASCII art.

    That works for most people, except that everyone wants a different subset of all conceivable characters and they still want to be able to communicate with each other sometimes. Early solutions for this involved changing the font, but that really sucks; expanding the encoding is far easier to handle. (The result of many rounds of standardisation — Unicode using UTF-8 as the encoding — has also ended up with something simpler than some of the weirder shit that it replaced. Shift encoding schemes were really nasty.)
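
    For a sense of how simple "simpler" is: the whole encoder is a handful of branches. A minimal sketch, ignoring surrogates and other invalid code points:

    #include <stdint.h>

    /* Encode one code point (assumed < 0x110000) into buf; returns the number of
       bytes written (1-4). Stateless, unlike the old shift-based schemes. */
    static int utf8_encode(uint32_t cp, unsigned char buf[4])
    {
        if (cp < 0x80)    { buf[0] = (unsigned char)cp; return 1; }
        if (cp < 0x800)   { buf[0] = (unsigned char)(0xC0 | (cp >> 6));
                            buf[1] = (unsigned char)(0x80 | (cp & 0x3F)); return 2; }
        if (cp < 0x10000) { buf[0] = (unsigned char)(0xE0 | (cp >> 12));
                            buf[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
                            buf[2] = (unsigned char)(0x80 | (cp & 0x3F)); return 3; }
        buf[0] = (unsigned char)(0xF0 | (cp >> 18));
        buf[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
        buf[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        buf[3] = (unsigned char)(0x80 | (cp & 0x3F));
        return 4;
    }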



  • @Groaner said in An amusing rant about C:

    24-bit integers aren't an uncommon use case. They're called RGB colors, and I've had to manually adjust texture-loading code that erroneously expected RGBA in order to account for them.

    24-bit is largely unsupported in the desktop (hardware) world. See the lists here (all platforms) / here (Windows only) / here (Android only). R8G8B8_UNORM is one of the most supported formats, and less than 20% of platforms support it overall for texture sampling. For desktop only, the figure is about 10% or less. (Other usage modes are even less supported.)

    If you ask for a 24-bit texture format in OpenGL, the drivers are most likely lying to you and converting it to 32-bit R8G8B8X8 internally. DirectX apparently ended up ditching 24-bit formats entirely with DX11.

    Consequently, pretty much anything that exposes 24-bit formats ends up converting those to 32-bit at some point. The sooner it hits a GPU, the sooner it ends up as 32 bits.
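
    The conversion itself is trivial, which is presumably why nobody fights very hard for native 24-bit support. Roughly (function name made up):

    #include <stddef.h>
    #include <stdint.h>

    /* Expand tightly packed R8G8B8 to R8G8B8X8, much like a driver (or a texture
       loader) would; the fourth byte is filler. */
    static void rgb24_to_rgbx32(const uint8_t *src, uint8_t *dst, size_t pixels)
    {
        for (size_t i = 0; i < pixels; ++i) {
            dst[4 * i + 0] = src[3 * i + 0];
            dst[4 * i + 1] = src[3 * i + 1];
            dst[4 * i + 2] = src[3 * i + 2];
            dst[4 * i + 3] = 0xFF;   /* X (or opaque alpha) */
        }
    }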



  • @topspin said in An amusing rant about C:

    Or for people who use ALL_CAPS for macros.

    ALL_CAPS is used for constants and macros. (Yes, VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_SUBGROUP_UNIFORM_CONTROL_FLOW_FEATURES_KHR is an integral constant used to identify a structure in memory as being of the C type VkPhysicalDeviceShaderSubgroupUniformControlFlowFeaturesKHR. Yes, there is a practical reason for this existing. Is it a good reason? *shrug* It's a C API, what do you expect?)
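
    (The practical reason, roughly: every extensible struct starts with a type tag plus a pointer for chaining extensions, so the constant is how the driver knows what it was handed. A sketch of the pattern only - names and the tag value are illustrative, not the actual vulkan.h declarations:)

    #include <stdint.h>

    typedef enum {
        STRUCTURE_TYPE_SHADER_SUBGROUP_UNIFORM_CONTROL_FLOW_FEATURES = 42  /* made-up value */
    } StructureType;

    typedef struct {
        StructureType sType;   /* must be set to the matching ALL_CAPS constant */
        void         *pNext;   /* chain of extension structs, identified the same way */
        uint32_t      shaderSubgroupUniformControlFlow;
    } ShaderSubgroupUniformControlFlowFeatures;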


  • Banned

    @Groaner said in An amusing rant about C:

    @cvi said in An amusing rant about C:

    @Groaner said in An amusing rant about C:

    This is often a very reasonable assumption, though. Picklists of states/provinces and countries will never have billions of items, and most warehouses, for example, will have trouble picking and shipping more than a billion orders in the service lifetime of the facility.

    What about using just 16 bits then? What about 24 bits? :half-trolling:

    There is this (VkAccelerationStructureInstanceKHR) structure in the Vulkan spec that uses 24-bit integer fields, and that people occasionally trip over and whine about.

    24-bit integers aren't an uncommon use case. They're called RGB colors

    It's been literally decades since the last time I've seen actual 24-bit RGB. In my experience it's always been either R5G6B5, R8G8B8A8 or R10G10B10A2.



  • @Gąska said in An amusing rant about C:

    @Groaner said in An amusing rant about C:

    @cvi said in An amusing rant about C:

    @Groaner said in An amusing rant about C:

    This is often a very reasonable assumption, though. Picklists of states/provinces and countries will never have billions of items, and most warehouses, for example, will have trouble picking and shipping more than a billion orders in the service lifetime of the facility.

    What about using just 16 bits then? What about 24 bits? :half-trolling:

    There is this (VkAccelerationStructureInstanceKHR) structure in the Vulkan spec that uses 24-bit integer fields, and that people occasionally trip over and whine about.

    24-bit integers aren't an uncommon use case. They're called RGB colors

    It's been literally decades since the last time I've seen actual 24-bit RGB. In my experience it's always been either R5G6B5, R8G8B8A8 or R10G10B10A2.

    Curious -- assuming the numbers mean bits per channel... why is green given one more bit than red or blue in that first spec? Something something color perception?



  • @Groaner said in An amusing rant about C:

    @cvi said in An amusing rant about C:

    @Groaner said in An amusing rant about C:

    This is often a very reasonable assumption, though. Picklists of states/provinces and countries will never have billions of items, and most warehouses, for example, will have trouble picking and shipping more than a billion orders in the service lifetime of the facility.

    What about using just 16 bits then? What about 24 bits? :half-trolling:

    There is this (VkAccelerationStructureInstanceKHR) structure in the Vulkan spec that uses 24-bit integer fields, and that people occasionally trip over and whine about.

    24-bit integers aren't an uncommon use case. They're called RGB colors, and I've had to manually adjust texture-loading code that erroneously expected RGBA in order to account for them.

    Unfortunately I don't recall there being a native type to store them ☹

    Once upon a time, I designed hardware to deal with 24-bit RGB pixels, with 3 uint32_ts holding 4 pixels. Of course, the alignment within any given uint32_t depended on whether it was the first, second, or third uint32_t in the group. :fun:
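
    (For the curious, extracting pixel n from such a group is a small shifting exercise. A sketch assuming the pixels are packed MSB-first within the group, which may or may not match what the actual hardware did:)

    #include <stdint.h>

    /* 3 x 32 bits = 96 bits = 4 x 24-bit pixels. Returns pixel n (0..3) as
       0x00RRGGBB, assuming MSB-first packing. */
    static uint32_t pixel24(const uint32_t w[3], int n)
    {
        switch (n) {
        case 0:  return w[0] >> 8;
        case 1:  return ((w[0] & 0xFFu)   << 16) | (w[1] >> 16);
        case 2:  return ((w[1] & 0xFFFFu) <<  8) | (w[2] >> 24);
        default: return w[2] & 0xFFFFFFu;
        }
    }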



  • @Benjamin-Hall said in An amusing rant about C:

    Something something color perception?

    Yes. The human eye is slightly more sensitive to green.



  • @HardwareGeek 4 pixels in a row/column, or 4 pixels in a 2x2 quad?



  • @cvi IIRC, 4 pixels in a row, with guaranteed power-of-2 dimensions. For scan line-by-line display, 2x2 quad would have been nasty.



  • @HardwareGeek said in An amusing rant about C:

    guaranteed power-of-2 dimensions.

    Heh. Until recently, I thought the days of requiring POT dimensions were long over. Then I learned from a colleague that Unity apparently still requires POT dimensions when rendering to textures. Wonder what pre-2003-level hardware they still want to support.

    For scan line-by-line display, 2x2 quad would have been nasty.

    Yeah, that sounds less than ideal.


  • Java Dev

    @HardwareGeek said in An amusing rant about C:

    Of course, the alignment within any given uint32_t depended on whether it was the first, second, or third uint32_t in the group. :fun:

    I'm reminded of our base64 code.



  • @cvi said in An amusing rant about C:

    I thought the days of requiring POT dimensions were long over.

    For the particular application, dimensions were always either 16x16, 32x32, or 64x64. I think. Maybe it was 8x8, 16x16, or 32x32. It was a long time ago.

    I think 24-bit mode only supported one of those dimensions, but I don't remember which (the smallest, I think). And there was no padding at row boundaries, so the alignment shifted from row to row, too.


  • Banned

    @cvi said in An amusing rant about C:

    @HardwareGeek said in An amusing rant about C:

    guaranteed power-of-2 dimensions.

    Heh. Until recently, I thought the days of requiring POT dimensions were long over. Then I learned from a colleague that Unity apparently still requires POT dimensions when rendering to textures. Wonder what pre-2003-level hardware they still want to support.

    My guess would be newest Radeon. They've still been super picky about what they're willing to work with way into the 2010s, and I doubt it has changed since.

    A classic tale of workaround standardization: some hardware doesn't support something, so developers avoid it like the plague, so the hardware developers feel no incentive to ever add support for it.



  • @Gąska said in An amusing rant about C:

    My guess would be newest Radeon

    On one hand: I doubt that. On the other hand: I also really don't doubt that.



  • @cvi said in An amusing rant about C:

    @Carnage It's definitively not for people who like to stick to max 80 characters line width.

    That's part of our .clang-format file spec. sigh.


  • BINNED

    @dcon said in An amusing rant about C:

    @cvi said in An amusing rant about C:

    @Carnage It's definitively not for people who like to stick to max 80 characters line width.

    That's part of our .clang-format file spec!. sigh.

    🔧


  • Banned

    @dcon said in An amusing rant about C:

    @cvi said in An amusing rant about C:

    @Carnage It's definitively not for people who like to stick to max 80 characters line width.

    That's part of our .clang-format file spec, which is just patently stupid. sigh.

    Make use of those characters!



  • @Gąska said in An amusing rant about C:

    @djls45 said in An amusing rant about C:

    Integer types aren't well-defined in C? That's because integer types aren't well-defined across all processor architectures.

    Didn't stop Rust :mlp_shrug: There's u8, u16, u32, u64, u128, usize, i8, i16, i32, i64, i128, isize, and that's it. No shorts, no longs. Just fixed-size integers + one pointer-sized type.

    the problem is more with programmers who wrote their code assuming that int would be the same across all architectures and platforms, and then didn't want to fix their code's portability problem when int on a different architecture turns out to be a different size. In actuality, those programmers were writing Assembly-in-C code, not C code.

    Look - when I make a program, I want it to behave in a certain way. Ideally, I want the behavior to stay the same, regardless of where, when, or how I compile it. The more things are left up for the compiler to decide, the less trust I have in my own code and the more effort I have to spend making sure identical code actually works identically on all platforms. The more my tools get in the way of getting the actual work done, the shittier they are. And C is at the extreme end of getting in the way of cross-platform coding.

    Is this shittiness justifiable? Yeah, for the most part it is. But justifiable shittiness is still shittiness. C is the worst language for cross-platform development specifically because of all those features you mentioned that were intended to help with cross-platform development. Unfortunately, the designers of C missed the mark completely and in nearly every case did the exact opposite of what would actually be helpful for cross-platform development. "I want my file format's version indicator to take twice as many bytes on 64-bit architectures," said no one ever.

    Sure, I can move everything to uint32_t and friends. In my code. I can't do the same with the standard library. And almost nothing in the standard library uses fixed-size types. So I have to deal with variable-sized integers whether I want it or not. (Edit: also, a recommendation to use uint32_t wherever possible is itself an admission that variable-sized integers were a mistake.)

    All of this goes back to the issue of writing Assembly-in-{other language}. High-level, cross-compatible code ideally should not have anything in it that relies on the specific underlying hardware. Using specific integer sizes is one of those architecture-specific things that breaks cross-compatibility, so C doesn't tell you how big each one of its integral types are "supposed" to be.

    You're correct about the first half, but sorely mistaken about the second. It's when you don't use specific integer sizes that you are relying on specific underlying hardware. C forces you to be aware of architectural differences.

    I would argue that it's the other way around. You can simply use int on most processors, and C will compile just fine. But for other languages, you have to know which sizes are supported by your target architecture, so for example, you have to know whether you can use i32 or only u32 or if i16/u16 is the largest size available.

    A char type represents a character, regardless of the actual number of bits used to represent it.

    Actually no. The C standard defines char to be exactly one byte, and a byte to be at least 8 bits and correspond to the smallest addressable unit of the machine.

    It does now, because an 8-bit byte has become the most common size of byte, but that was originally a POSIX standard. A char was defined to be of a size large enough to represent any of the "execution character set."
    Addressable unit of memory is likewise not a perfect descriptor. Some embedded systems can be addressable down to the nybble or even individual bit, or have different sizes between data and code memory.

    A short indicates small numbers, long for large numbers, long long for extra-large numbers, and int for whichever of those sizes is most efficient for the processor to handle.

    Which is an utterly useless feature unless you can guarantee a certain integer can fit in a certain type, which you cannot because the whole point is to not know the specific value range.
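
    (For reference, the only portable guarantees are minimum ranges - int at least 16 bits, long at least 32, long long at least 64. The actual values printed below are whatever the local implementation chose, which is exactly the point of contention:)

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        printf("int:  %d .. %d\n", INT_MIN, INT_MAX);
        printf("long: %ld .. %ld\n", LONG_MIN, LONG_MAX);
        return 0;
    }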

    How often really does a programmer work near the limits of any type's value range? If you need to know exactly how big the range limit is, you're not coding as high-level as you may think you are; you're getting close to the metal.

    When you're messing with memory in some way, then yes, (u)intptr_t is a useful abstraction for e.g. number of elements in an array, when the array can potentially span the whole address space. But in all other use cases, the value range you must be able to handle is independent of architecture you're running on.

    Whenever interacting with the outside world - through files, network packets, etc. - you MUST use precise sizes or you'll be unable to de/serialize objects properly. You use int32_t and friends. When you're on some retarded platform where they're not available, you use int_least32_t, which is mandatory per the standard, to at least ensure it can contain all possible values. And if you really really care about performance so much that you're willing to inconvenience yourself, but not enough to actually learn the target architecture, you use int_fast32_t instead.

    These cover literally every use case you might possibly think of. There's no place for raw short, int, long or long long in C code at all. Hasn't been for the last 23 years.
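
    (Concretely, the de/serialization case looks something like this - function names made up; the point is that the on-the-wire width is pinned down no matter what the host's int or long happen to be:)

    #include <stdint.h>

    /* Write and read a 32-bit value as exactly four little-endian bytes. */
    static void put_u32le(uint8_t out[4], uint32_t v)
    {
        out[0] = (uint8_t)(v);
        out[1] = (uint8_t)(v >> 8);
        out[2] = (uint8_t)(v >> 16);
        out[3] = (uint8_t)(v >> 24);
    }

    static uint32_t get_u32le(const uint8_t in[4])
    {
        return (uint32_t)in[0]
             | ((uint32_t)in[1] << 8)
             | ((uint32_t)in[2] << 16)
             | ((uint32_t)in[3] << 24);
    }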

    In literally none of those cases does the programmer need to know that the data is divided into blocks of a certain size. That could all be transparent to him.

    You want to write a language as portable as C? Well, then either you have to write all the different possible compilation targets that C supports or you have to write your "transpiler" to output C code and pass it to a C compiler. One of these involves a whole lot more work than the other, so guess which one nearly everyone picks?

    Out of popular commercially-viable compilers, every single one went with the former. I wonder why that is.

    And how many of those compilers are themselves written in C

    Only GCC, Clang and Intel's C/C++ compiler, as far as I can tell. MSVC is made in C++ AFAIK. Every other native compiler is written in the language it's compiling. There may be some exceptions depending on how obscure you wish to go.

    or can be traced back to a version that was originally written in C?

    That's cheating. Every new language must, by necessity, start with a compiler written in another language.

    Sure: Assembly. No one had to pick C for the first version.

    Just to be absolutely clear: these are not C's problems!

    No, strictly speaking they aren't. But they are problems of every interface that relies on a C API. Every single one of them. And there are a lot of very important C APIs in use. So they became everybody's problems instead.

    No, these are architecture compatibility problems that everybody has to deal with anyways, and that would exist regardless of C.

    No. Not architecture compatibility. Just library compatibility. Even if you only target one compiler on one system on one architecture, you still have to deal with that.

    Libraries were written by programmers, who, as I stated earlier, would rather work close to the underlying architecture than to abstract their code to be truly cross-compatible without changing it per compilation target. But even then, the C standard libraries and headers do a lot to try to allow coders to just focus on their algorithms instead of worrying about whether an int is 16, 24, 32, 48, or 64 bits or whether it's signed or unsigned.

    And anyway. If C was meant to abstract away architectural differences, but you must be acutely aware of architectural differences when writing portable C... Don't you think C failed at abstracting those away?

    Only insofar as no language has abstracted those away, because programmers don't want that much abstraction.

    All the complaints so far have been about programmers and how they want to write code, not about C itself.


  • BINNED

    @HardwareGeek said in An amusing rant about C:

    @Benjamin-Hall said in An amusing rant about C:

    Something something color perception?

    Yes. The human eye is slightly more sensitive to green.

    “slightly”

    [image]

    Increased luminosity doesn’t mean you need more precision – just make each interval smaller if that’s the problem. How sensitive you are to the voltages in a monitor is irrelevant to encoding.

    Producing whites requires significantly more green luminosity than the other primaries¹, and people like it when [1, 1, 1] corresponds to white. So if each primary has a similar step in luminosity, more green steps are required to reach white.

    You care a lot less about each primary in isolation increasing by the same luminosity step. But with one extra bit left over after 16/3, you might as well give it to the one that needs to be brightest.


    ¹ sRGB primaries producing D65 white: 0.21 red, 0.72 green, 0.07 blue. Note nobody routinely gives the green channel three more bits than the blue.

    Filed under: Every sentence containing “the human [perception]” is highly suspect and almost certainly wrong


  • Banned

    @djls45 said in An amusing rant about C:

    All the complaints so far have been about programmers and how they want to write code, not about C itself.

    Have you missed the part about C headers being virtually unparseable?

    (Won't reply to the rest of your post because it's like talking to a wall. If you can't understand "I must ensure I can safely store object ID 745884 in memory" is a mission-critical and extremely common requirement, there's no helping you.)


  • Banned

    Although I will say this much because I love throwing quotes from standards in people's faces.

    @djls45 said in An amusing rant about C:

    A char type represents a character, regardless of the actual number of bits used to represent it.

    Actually no. The C standard defines char to be exactly one byte, and a byte to be at least 8 bits and correspond to the smallest addressable unit of the machine.

    It does now, because an 8-bit byte has become the most common size of byte, but that was originally a POSIX standard. A char was defined to be of a size large enough to represent any of the "execution character set."

    ANSI X3.159-1989 Programming Language - C, section 2.2.4.2.1 defines CHAR_BIT, "number of bits for smallest object that is not a bit-field (byte)", to be at least 8.

    Unless you mean how things were before 1989, when there was no standard at all for how the C language should work.

    Addressable unit of memory is likewise not a perfect descriptor.

    Section 1.6 literally says "it shall be possible to express the address of each individual byte of an object uniquely." Again, this is from the original 1989 standard, not any newer revision (although they say the same thing - I checked.)

    Edit: and if you really want, I can also find the part where it says consecutive bytes must have consecutive addresses.
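
    (Both guarantees are easy to check at compile time, for anyone who wants the paranoia in writing; _Static_assert is the C11 spelling:)

    #include <limits.h>

    #if CHAR_BIT < 8
    #error "not a conforming C implementation"
    #endif

    _Static_assert(sizeof(char) == 1, "guaranteed by the standard");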



  • @Gąska said in An amusing rant about C:

    Unless you mean how things were before 1989, when there was no standard at all how C language should work.

    K&R was published in 1978.

    Again, this is from the original 1989 standard, not any newer revision (although they say the same thing - I checked.)

    And according to the Rationale file posted earlier, the standard (at least, as of 1995) was written primarily to "codify" how programmers had been writing C code instead of describing how C code should be written. I.e. programmers weren't using it the way it was originally designed, so they created and adapted the Standard to match what the majority of programmers were already doing.


  • Banned

    @djls45 if even 33 years ago people already were writing C very differently than originally designed, isn't that a good indication that the original design was shit?


  • Considered Harmful

    @Gąska said in An amusing rant about C:

    @djls45 if even 33 years ago people already were writing C very differently than originally designed, isn't that a good indication that the original design was shit?

    Nope. However, this would be the case for LISP or COBOL. But again nope for APL.

    :attack_helicopter_shrugging: I don't make the rules.



  • @Gąska Only insofar as programmers misunderstood its design. OTOH, it speaks well of its design that programmers were able to make use of its features in such a way that they could write the code as they wished, and though it was less portable than originally intended, it was (and is) still highly portable.


  • Banned

    @djls45 does it speak well of JavaScript's design that programmers managed to build an entire world of SPA, SAAS and cloud applications using it, to the point there's barely any frontend work left on the job market that isn't JavaScript?


  • Considered Harmful

    @Gąska many people would lean a rhetorical question the other way. I salute your choice here.



  • @dcon said in An amusing rant about C:

    @cvi said in An amusing rant about C:

    @Carnage It's definitively not for people who like to stick to max 80 characters line width.

    That's part of our .clang-format file spec. sigh.

    Also, it treats the limit as a number of bytes (in the case of UTF-8), so using eggplants won't help. Sigh indeed.


  • Banned

    @Kamil-Podlesak ASCII porn it is.



  • @dcon said in An amusing rant about C:

    That's part of our .clang-format file spec. sigh.

    What does that do to a constant like VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_SUBGROUP_UNIFORM_CONTROL_FLOW_FEATURES_KHR (83 characters by itself)?

    Like .. do you have to go all

    #define JOIN(a,b) a##b
    //...
    foo = JOIN(VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_,
            SHADER_SUBGROUP_UNIFORM_CONTROL_FLOW_FEATURES_KHR);
    

    on it? 🏆



  • @HardwareGeek said in An amusing rant about C:

    Yes. The human eye is slightly more sensitive to green.

    I think what's more relevant here is that the human eye can better distinguish small changes in brightness than small color variations. The green component is the most important for brightness (see @kazitor's post above) so having more resolution in the green component allows for smoother brightness changes.

    And anyway there's that left-over bit, can't let that go to waste now can we?
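
    For anyone who hasn't had the pleasure, R5G6B5 packing looks roughly like this (a sketch; real code often dithers rather than just truncating):

    #include <stdint.h>

    /* Pack 8-bit channels into R5G6B5; green keeps the extra (6th) bit. */
    static uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
    {
        return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }

    /* Unpack back to 8 bits per channel, rescaling to the full 0..255 range. */
    static void unpack_rgb565(uint16_t p, uint8_t *r, uint8_t *g, uint8_t *b)
    {
        *r = (uint8_t)(((p >> 11) & 0x1F) * 255 / 31);
        *g = (uint8_t)(((p >>  5) & 0x3F) * 255 / 63);
        *b = (uint8_t)(( p        & 0x1F) * 255 / 31);
    }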



  • @Benjamin-Hall said in An amusing rant about C:

    was that named by a diehard Java framework architect?

    Right, isn't the Vulkan spec a thing of pure beauty?

    And if you think the naming conventions are inconvenient, just wait until you actually need to do something with that API. (Which tangentially fits this thread: apart from the subject matter being complex in itself, and the consequences of the design-by-committee, it complicates things by going out of its way to allow for ABI stability, with strict identification and versioning of all structs, versioned function names, etc.)


  • Discourse touched me in a no-no place

    @djls45 said in An amusing rant about C:

    Libraries were written by programmers, who, as I stated earlier, would rather work close to the underlying architecture than to abstract their code to be truly cross-compatible without changing it per compilation target.

    FWIW, a lot of C code is not highly portable and not desired to ever be so. The code for our systems at work doesn't need to work on your x86, and can't because it depends on having a bunch of custom hardware that your machine doesn't have (that's the fucking point of having special hardware). Portability of ABI is only one of the possible use cases.



  • As far as that rant from the OT is concerned (inb4 :doing_it_wrong:), the gripe seems to be that C header files are a poor way to specify an ABI because C does not define an ABI. Well, d'uh.



  • @Gąska said in An amusing rant about C:

    @djls45 does it speak well of JavaScript's design that programmers managed to build an entire world of SPA, SAAS and cloud applications using it, to the point there's barely any frontend work left on the job market that isn't JavaScript?

    And the remaining frontend work is in Typescript 🚎


  • Considered Harmful

    @Groaner well, for authoring. But of course it goes out as WASM.


  • BINNED

    @cvi said in An amusing rant about C:

    @dcon said in An amusing rant about C:

    That's part of our .clang-format file spec. sigh.

    What does that do to a constant like VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_SUBGROUP_UNIFORM_CONTROL_FLOW_FEATURES_KHR (83 characters by itself)?

    Like .. do you have to go all

    #define JOIN(a,b) a##b
    //...
    foo = JOIN(VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_,
            SHADER_SUBGROUP_UNIFORM_CONTROL_FLOW_FEATURES_KHR);
    

    on it? 🏆

    Of course not, it just yells at you:

    Overfull \hbox (badness 10000) detected at line 3.
    

    :half-trolling:, the formatting rules actually work similarly to LaTeX with regard to where best to put line breaks.



  • @topspin said in An amusing rant about C:

    the formatting rules actually work similarly to LaTeX with regard to where best to put line breaks.

    Can it do auto-hyphenation in identifiers?


  • Banned

    @cvi said in An amusing rant about C:

    @dcon said in An amusing rant about C:

    That's part of our .clang-format file spec. sigh.

    What does that do to a constant like VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_SUBGROUP_UNIFORM_CONTROL_FLOW_FEATURES_KHR (83 characters by itself)?

    Like .. do you have to go all

    #define JOIN(a,b) a##b
    //...
    foo = JOIN(VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_,
            SHADER_SUBGROUP_UNIFORM_CONTROL_FLOW_FEATURES_KHR);
    

    on it? 🏆

    My macro-induced paranoia tells me it should actually be

    #define JOIN2(a,b) (a##b)
    #define JOIN(a,b) (JOIN2(a,b))
    

    But I can't tell you the reason why.


  • BINNED

    @Gąska said in An amusing rant about C:

    But I can't tell you the reason why.

    The answer should be somewhere in here, but I'm not going to read it to check:



  • @Gąska I can, and it doesn't matter in the example. :-)
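
    (The reason, for anyone following along at home: arguments that sit next to ## are not macro-expanded before pasting, so the extra level of indirection only matters when an argument is itself a macro - which neither one is in the example above. For illustration:)

    #define CAT_DIRECT(a, b) a##b
    #define CAT2(a, b) a##b
    #define CAT(a, b) CAT2(a, b)   /* extra level: arguments get expanded first */

    #define PREFIX VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_

    /* CAT_DIRECT(PREFIX, FEATURES) pastes the literal tokens -> PREFIXFEATURES,
       while CAT(PREFIX, FEATURES) expands PREFIX first ->
       VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES. */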


  • Banned

    @cvi SHOULD, not MUST. Someone so familiar with C macros should know the difference 😉



  • @Gąska Yeah, fair. 🙂

    FWIW, it works on MSVC, GCC & Clang (with and without nvcc), and who cares about other compilers (:tro-pop:). Used to work on icc & pgi too, but haven't tested those in a while. (I vaguely remember that if you want to use JOIN with the C preprocessor in Fortran with gfortran, you need to do some extra gymnastics, because they run a "special" legacy version of the preprocessor there.)



  • @cvi said in An amusing rant about C:

    JOIN with the C preprocessor in Fortran with gfortran



  • @Benjamin-Hall Have you tried programming Fortran? That language has absolutely zero meta-programming facilities. Legacy C preprocessor (which you get with the uppercase .F90 or whatever extension - I haven't read the Fortran standard, so I don't know if that's a standardized feature or just a de facto standard) is about as far as you can go.

    *cvi stares in the distance for a silent minute or two*

    Anyway.



  • @cvi said in An amusing rant about C:

    @Benjamin-Hall Have you tried programming Fortran? That language has absolutely zero meta-programming facilities. Legacy C preprocessor (which you get with the uppercase .F90 or whatever extension - I haven't read the Fortran standard, so I don't know if that's a standardized feature or just a de facto standard) is about as far as you can go.

    *cvi stares in the distance for a silent minute or two*

    Anyway.

    It was more about why you would do Fortran at all (and yes, I've done a little bit of it). Shoehorning a C preprocessor into it (let's throw the absurdity that is C macros into the already toxic hellstew that is Fortran... said no one sane) is like seasoning your garbage dump salad with extra ricin.

