So I decided to try to update part of my toolchain...



  • @Gribnit said in So I decided to try to update part of my toolchain...:

    @Groaner no, no, an option is an asset that may or may not exist, and a future is like that except in the future, yeesh.

    Do either involve frogs?


  • Considered Harmful

    @Groaner Well, yes, but only if they involve frogs.



  • @Gąska said in So I decided to try to update part of my toolchain...:

    @Groaner he's a college student. Don't demand him to understand economy.

    I took economics and business classes when I was in college. 🤷🏻‍♂️


  • Considered Harmful

    @Gąska No, the one in the middle determines new features. And cargo update should not change the one in the front.


  • Banned

    @levicki said in So I decided to try to update part of my toolchain...:

    @Gąska Just stop that :moving_goal_post: bullshit please.

    I don't know where I've done that, but okay, whatever.

    I will reiterate the important takeaways from the discussion:

    1. Proper way to install C++ runtime is by running installer and putting them in system folder where they belong to be used by every application.

    But it doesn't always work. And when the proper way doesn't work, you do what works.

    2. DLLs are good, save resources and should be used to avoid binary code duplication.

    But that leads to DLL hell, and the best way to avoid DLL hell is to bundle specific versions of DLLs with your programs.

    3. If anything was WTF in my story it was the requirement to have target OS which is not updated since release and then request software which needs sha256 signed files.

    Bridge-burning idealism, meet pragmatic reality, where things working is more important than doing the current Right Thing To Do™.

    As for binary patching, I even have no clue why you brought it up in the first place and frankly I don't want to know.

    Because you said that DLLs are better because (among other things) they allow more efficient software updates. I wouldn't bring this up if you didn't.


  • Banned

    @pie_flavor said in So I decided to try to update part of my toolchain...:

    @Gąska No, the one in the middle determines new features. And cargo update should not change the one in the front.

    Yes, it shouldn't. Because it's not meant for upgrading to the latest and shiniest. But in this topic, we're talking about upgrading to the latest and shiniest, which cargo update is useless for, because it's not what it's meant for.



  • Well, this is promising...

    cmakehell3.png

    And this is not.

    cmakehell4.png

    D: was a hard drive on the computer with which this solution was last built. It is a DVD drive on this computer. This is why we always use relative paths wherever we can, kiddos. But I'm not surprised by this, as cmake likes absolute paths.

    Besides, I find updating include and library search paths (especially when one can copy-paste the text and do a find/replace on it) to be a much lesser form of punishment than dealing with cmake errors.


  • Discourse touched me in a no-no place

    @Groaner said in So I decided to try to update part of my toolchain...:

    @pie_flavor said in So I decided to try to update part of my toolchain...:

    And who uses anything other than futures?

    Some of us prefer puts and calls.

    puts() is actually fine in modern C; it was its traditional counterpart, the eternally-unsafe-by-design gets(), that got deprecated and finally removed in C11.


  • Discourse touched me in a no-no place

    @pie_flavor said in So I decided to try to update part of my toolchain...:

    No, the one in the middle determines new features.

    Strictly it's the pair of the first and second numbers. :pendant:



  • @Groaner said in So I decided to try to update part of my toolchain...:

    This is why we always use relative paths wherever we can, kiddos. But I'm not surprised by this, as cmake likes absolute paths.

    QFT.

    However, subst can be a useful preemptive workaround for absolute-path-happy tools. Use it to define a custom drive letter in the typically-unused region that maps to your working directory.


  • ♿ (Parody)

    @pie_flavor said in So I decided to try to update part of my toolchain...:

    @Gąska Okay, and? cargo update should perform zero breaking changes unless you were lazy with your build configuration.

    The problem is usually less with your own stuff and more with all of the third party stuff you have. You'll learn.



  • @levicki said in So I decided to try to update part of my toolchain...:

    @Gąska Just stop that :moving_goal_post: bullshit please.

    I will reiterate the important takeaways from the discussion:

    1. Proper way to install C++ runtime is by running installer and putting them in system folder where they belong to be used by every application.
    2. DLLs are good, save resources and should be used to avoid binary code duplication.
    3. If anything was WTF in my story it was the requirement to have target OS which is not updated since release and then request software which needs sha256 signed files.

    As for binary patching, I even have no clue why you brought it up in the first place and frankly I don't want to know.

    So, is this a good time to mention this one fellow I knew, who liked to include the Java Runtime in his program's directory? He didn't want his users to have to cope with the JRE installer. 🚎


  • And then the murders began.

    @acrow said in So I decided to try to update part of my toolchain...:

    So, is this a good time to mention this one fellow I knew, who liked to include the Java Runtime in his program's directory? He didn't want his users to have to cope with the JRE installer.

    Even big companies who should know better do that; Atlassian ships a copy of the JRE in the program directory for their products. (Both JIRA and Confluence - I assume the others too, but those are the only ones we use.)


  • Discourse touched me in a no-no place

    @Unperverted-Vixen said in So I decided to try to update part of my toolchain...:

    Even big companies who should know better do that

    It's an option you pick when building the application packaging descriptor.



  • @levicki said in So I decided to try to update part of my toolchain...:

    Proper way to install C++ runtime is by running installer and putting them in system folder where they belong to be used by every application.

    If you want an installer that can make your product run without requiring an admin to install it, that ain't gonna work. (That's one reason I'm statically linking)



  • @acrow said in So I decided to try to update part of my toolchain...:

    @levicki said in So I decided to try to update part of my toolchain...:

    @Gąska Just stop that :moving_goal_post: bullshit please.

    I will reiterate the important takeaways from the discussion:

    1. Proper way to install C++ runtime is by running installer and putting them in system folder where they belong to be used by every application.
    2. DLLs are good, save resources and should be used to avoid binary code duplication.
    3. If anything was WTF in my story it was the requirement to have target OS which is not updated since release and then request software which needs sha256 signed files.

    As for binary patching, I even have no clue why you brought it up in the first place and frankly I don't want to know.

    So, is this a good time to mention this one fellow I knew, who liked to include the Java Runtime in his program's directory? He didn't want his users to have to cope with the JRE installer. 🚎

    I can think of a couple Java-based games that do that, because they had too many headaches trying to support newer or multiple JREs or whatever.


  • Considered Harmful

    @boomzilla said in So I decided to try to update part of my toolchain...:

    @pie_flavor said in So I decided to try to update part of my toolchain...:

    @Gąska Okay, and? cargo update should perform zero breaking changes unless you were lazy with your build configuration.

    The problem is usually less with your own stuff and more with all of the third party stuff you have. You'll learn.

    Which does not breakingly change when you run cargo update. We've been over this.


  • ♿ (Parody)

    @pie_flavor said in So I decided to try to update part of my toolchain...:

    @boomzilla said in So I decided to try to update part of my toolchain...:

    @pie_flavor said in So I decided to try to update part of my toolchain...:

    @Gąska Okay, and? cargo update should perform zero breaking changes unless you were lazy with your build configuration.

    The problem is usually less with your own stuff and more with all of the third party stuff you have. You'll learn.

    Which does not breakingly change when you run cargo update. We've been over this.

    Uh huh.



  • @boomzilla said in So I decided to try to update part of my toolchain...:

    @pie_flavor said in So I decided to try to update part of my toolchain...:

    @Gąska Okay, and? cargo update should perform zero breaking changes unless you were lazy with your build configuration.

    The problem is usually less with your own stuff and more with all of the third party stuff you have. You'll learn.

    AKA most of the meat of this thread. 🤦🏾


  • Discourse touched me in a no-no place

    @mott555 said in So I decided to try to update part of my toolchain...:

    I can think of a couple Java-based games that do that, because they had too many headaches trying to support newer or multiple JRE's or whatever.

    It's the age-old problem: ship a known runtime library that you've tested and where the bugs are at least stuff that you can reproduce, or rely on the system's version that might get fixes early or might break things under your feet unexpectedly.

    https://youtu.be/8Xjr2hnOHiM



  • COMPLICATIONS!

    While current Ogitor is on Qt5, the old version of Ogitor I have is on Qt4, and the newest Qt4 official binaries I can find are for VS2010. This means I'll need to somehow rebuild Qt 4.x for VS2017.

    Also, somewhere between Ogre 1.9 and 1.10, they switched from using OIS to SDL for input. Old Ogitor depends on OIS. My project depends on OIS. Moving to SDL was probably a good call on their part, as SDL seems to offer solutions for audio and the like, filling in gaps in functionality that Ogre has had (Ogre is by itself just a rendering engine, after all). But for those of us who already have solutions to these problems, it presents a hassle.

    So I'm putting upgrading Ogitor on the back burner for now and working on upgrading the rest of my dependencies and the main project. I made a spreadsheet of all the dependencies and am about halfway done recompiling them all with VS2017. In the past, Ogre's terrain format has had binary incompatibilities between minor versions. If that's no longer the case, I can leave Ogitor on 1.9 to die a slow death while the rest of my project is on the latest. If it is still the case, I'll have to bring Ogitor forward as well.


  • Banned

    @Groaner I admire your persistence. I'd have given up ages ago.



  • This sounds almost as bad as the time I tried to build something called TShook. My friends had been using TeamSpeak since Skype shit the bed a few years ago. Then, one day, for absolutely no reason, the server decided to require a new version of the client that required Windows 7 SP1. I did some digging and it turns out, as I suspected, that it's a stupid version check and nothing more. Somebody had built this program to send an arbitrary version number that looked like what I needed. Except, like way too many open source projects, they can't be bothered to provide the binary with the source and, of course, the source won't compile because of something wrong with the VC runtime (what I don't know, the error message is useless). Is this crap difficult for a reason? I've never had these types of problems porting my code between compilers/versions.



  • @Zenith said in So I decided to try to update part of my toolchain...:

    Is this crap difficult for a reason? I've never had these types of problems porting my code between compilers/versions.

    It depends. C is usually not much of a problem. C++ can quickly turn into a nasty ball of #if _MSC_VER > whatever sections that make me want to burn my computer to the ground and go become a hermit on a mountaintop with no electronics whatsoever.



  • @Gąska said in So I decided to try to update part of my toolchain...:

    @Groaner I admire your persistence. I'd have given up ages ago.

    I'm not sure how reassuring this will sound, but things are going more or less as I expected they would. It took about a week of my spare time last time to upgrade and it looks like it'll be similar this time. Which is why I wait a few years between these big upgrades. And if things don't pan out, I have backups.

    As annoying as the situation is, it is nice having the full source code to my stack. Now that Unreal isn't $10k for a full source license (which was the case when I started this project), I've considered going that direction. Thing is, 1) that'd set me back about a year as I familiarize myself with that tech and port a 50+kLoC project over to it, and 2) I'm sure that to some extent, I'd be trading one set of problems for another.



  • @Zenith said in So I decided to try to update part of my toolchain...:

    Except, like way too many open source projects, they can't be bothered to provide the binary with the source and, of course, the source won't compile because of something wrong with the VC runtime (what I don't know, the error message is useless). Is this crap difficult for a reason? I've never had these types of problems porting my code between compilers/versions.

    A lot of it has to do with C++ not having a stable ABI, which means you either need the source or compiled libs/DLLs for your specific compiler and version. If all you have is a DLL built against VS2010, you're stuck. Meanwhile, 95+% of the time, the upgrade path for a C# application is to open it in the latest version of VS and set the .NET framework version. :facepalm:

    Then there's cmake hell and the like, but much of that is good ol' CADT, which is nothing new to the OSS world.


  • Notification Spam Recipient

    @acrow said in So I decided to try to update part of my toolchain...:

    @levicki said in So I decided to try to update part of my toolchain...:

    @Gąska Just stop that :moving_goal_post: bullshit please.

    I will reiterate the important takeaways from the discussion:

    1. Proper way to install C++ runtime is by running installer and putting them in system folder where they belong to be used by every application.
    2. DLLs are good, save resources and should be used to avoid binary code duplication.
    3. If anything was WTF in my story it was the requirement to have target OS which is not updated since release and then request software which needs sha256 signed files.

    As for binary patching, I even have no clue why you brought it up in the first place and frankly I don't want to know.

    So, is this a good time to mention this one fellow I knew, who liked to include the Java Runtime in his program's directory? He didn't want his users to have to cope with the JRE installer. 🚎

    I know three programs that do this.



  • @Groaner said in So I decided to try to update part of my toolchain...:

    @Zenith said in So I decided to try to update part of my toolchain...:

    Except, like way too many open source projects, they can't be bothered to provide the binary with the source and, of course, the source won't compile because of something wrong with the VC runtime (what I don't know, the error message is useless). Is this crap difficult for a reason? I've never had these types of problems porting my code between compilers/versions.

    A lot of it has to do with C++ not having a stable ABI, which means you either need the source or compiled libs/DLLs for your specific compiler and version. If all you have is a DLL built against VS2010, you're stuck. Meanwhile, 95+% of the time, the upgrade path for a C# application is to open it in the latest version of VS and set the .NET framework version. :facepalm:

    Then there's cmake hell and the like, but much of that is good ol' CADT, which is nothing new to the OSS world.

    For all the human errors that Stroustrup committed (and later committees failed to fix), I bet he never guessed that not defining a binary interface format for C++ dynamically linked libraries would be one of them, never mind one that trips up people and organizations in such a serious way.

    AIUI even C only has a de-facto standard DLL ABI.

    Then again, you could say that GCC et al. shoot themselves in the foot by not having an option to generate shims to allow linking to VS DLLs on Windows. ...No, I'm not sure if this would actually be possible. I'm just 🚎


  • Discourse touched me in a no-no place

    @acrow said in So I decided to try to update part of my toolchain...:

    AIUI even C only has a de-facto standard DLL ABI.

    Not really. C itself doesn't have an ABI; that's up to the platform to specify.


  • ♿ (Parody)

    @dkf said in So I decided to try to update part of my toolchain...:

    @acrow said in So I decided to try to update part of my toolchain...:

    AIUI even C only has a de-facto standard DLL ABI.

    Not really. C itself doesn't have an ABI; that's up to the platform to specify.

    Woo! stdcall!


  • :belt_onion:

    @boomzilla said in So I decided to try to update part of my toolchain...:

    @dkf said in So I decided to try to update part of my toolchain...:

    @acrow said in So I decided to try to update part of my toolchain...:

    AIUI even C only has a de-facto standard DLL ABI.

    Not really. C itself doesn't have an ABI; that's up to the platform to specify.

    Woo! stdcall!

    Dude, don't call your STDs. That's bad.


  • ♿ (Parody)

    @levicki said in So I decided to try to update part of my toolchain...:

    @boomzilla stdcall is a calling convention, not ABI.

    Um, actually...

    A common aspect of an ABI is the calling convention, which determines how data is provided as input to or read as output from computational routines; examples are the x86 calling conventions.



  • While working on building CEGUI, which I believe is the last of my dependencies that needs recompiling, I had to replace Ogre::Image::Box with Ogre::Box and came across this snippet:

    ceguithrowshade.png

    I must say, I do appreciate one group of OSS developers throwing shade at another.



  • @boomzilla @dkf In any case, AIUI, C++ does not have one of those for OO use.

    Aaaand I'll be proven wrong in 3... 2... 1...


  • BINNED

    @Groaner I'm pretty sure I've written something very similar before, both the const_cast and the comment how it's needed because the API has the wrong declaration. But then, if I remember correctly, that was calling older C code that just didn't care for const.


  • Discourse touched me in a no-no place

    @acrow said in So I decided to try to update part of my toolchain...:

    In any case, AIUI, C++ does not have one of those for OO use.

    C++ technically doesn't have any; the calling style is not part of the language definition, but rather an extension. Different compilers support different extensions, of course, though there's often macros in practical projects to hide the differences.

    There are quite a few different calling conventions, FWIW. What they actually describe is how to pack the actual arguments to a function into registers and onto the stack, how to handle results, how to handle exception contexts, etc. If you're deeply into how to issue machine code to work efficiently on a particular architecture and in a particular set of use cases, this stuff is interesting. Everyone else (including me!) doesn't need to care beyond making sure it matches…



  • @dkf said in So I decided to try to update part of my toolchain...:

    @acrow said in So I decided to try to update part of my toolchain...:

    In any case, AIUI, C++ does not have one of those for OO use.

    C++ technically doesn't have any; the calling style is not part of the language definition, but rather an extension. Different compilers support different extensions, of course, though there's often macros in practical projects to hide the differences.

    There are quite a few different calling conventions, FWIW. What they actually describe is how to pack the actual arguments to a function into registers and onto the stack, how to handle results, how to handle exception contexts, etc. If you're deeply into how to issue machine code to work efficiently on a particular architecture and in a particular set of use cases, this stuff is interesting. Everyone else (including me!) doesn't need to care beyond making sure it matches…

    I'll have to take that at face value. The one time I needed to generate a DLL, I just copied the convention from the source of the DLL I was replacing.
    But when I was reading up (in a hurry) on the subject, the instructions online said, in no uncertain terms, to never link to C++ DLLs from different compilers, and to always use C for DLL APIs. In addition to calling style, there was something about memory management...?

    Anyways, I note that the same advice is repeated here.

    So, as regards having a standard (or de facto standard) ABI, "it depends on the definition of ABI", is it?


  • BINNED

    @acrow said in So I decided to try to update part of my toolchain...:

    @dkf said in So I decided to try to update part of my toolchain...:

    @acrow said in So I decided to try to update part of my toolchain...:

    In any case, AIUI, C++ does not have one of those for OO use.

    C++ technically doesn't have any; the calling style is not part of the language definition, but rather an extension. Different compilers support different extensions, of course, though there's often macros in practical projects to hide the differences.

    There are quite a few different calling conventions, FWIW. What they actually describe is how to pack the actual arguments to a function into registers and onto the stack, how to handle results, how to handle exception contexts, etc. If you're deeply into how to issue machine code to work efficiently on a particular architecture and in a particular set of use cases, this stuff is interesting. Everyone else (including me!) doesn't need to care beyond making sure it matches…

    I'll have to take that at face value. The one time when I needed to know how to generate a DLL, I just copied the convention from the source of the DLL that I was replacing.
    But when I was reading up (in a hurry) about the subject, the instructions online told, in no uncertain terms, to never link to C++ DLLs from different compilers. And always use C for DLL APIs. In addition to calling style, there was something about memory management...?

    Yes, even if different compilers will use the same calling convention (you probably can get gcc/clang/icc to follow the msvc standard on Windows), you need to be careful not to mix runtime libraries. Even mixing different versions of MSVC will result in dependencies on more than one msvcp**/vcruntime**/....

    And then you get the problem you describe: you can't mix resource management across different runtimes, i.e. you can't free a pointer with one runtime you allocated with another, same with file handles etc. That's actually pretty obvious when you think about it. Also the reason why COM interop stuff declares things like CoTaskMemAlloc.
    And making sure your resources don't cross boundaries, or at least always use the correct acquisition/release pair, is going to be a pain. So if you mix compilers, make sure that you don't depend on multiple runtime libraries.

    Anyways, I note that the same advice is repeated here.

    So, as regards having a standard (or de facto standard) ABI, "it depends on the definition of ABI", is it?

    Pretty much.


  • Discourse touched me in a no-no place

    @acrow said in So I decided to try to update part of my toolchain...:

    But when I was reading up (in a hurry) about the subject, the instructions online told, in no uncertain terms, to never link to C++ DLLs from different compilers. And always use C for DLL APIs. In addition to calling style, there was something about memory management...?

    It depends on what platform you're on. Mixing stuff works on Linux and Mac (where there is a single common libc), but gets really hairy on Windows because each compiler stack has its own implementation of malloc() on top of the system page allocator. The reason why this matters? You really have to match which library provides the malloc() for a block of memory with the free() you use to dispose of it. And C++ (usually) wraps new and delete on top of malloc() and free().


  • Java Dev

    @dkf We have our own block allocators in C. You don't want to mix those with system malloc/free either.


  • ♿ (Parody)

    @acrow said in So I decided to try to update part of my toolchain...:

    @dkf said in So I decided to try to update part of my toolchain...:

    @acrow said in So I decided to try to update part of my toolchain...:

    In any case, AIUI, C++ does not have one of those for OO use.

    C++ technically doesn't have any; the calling style is not part of the language definition, but rather an extension. Different compilers support different extensions, of course, though there's often macros in practical projects to hide the differences.

    There are quite a few different calling conventions, FWIW. What they actually describe is how to pack the actual arguments to a function into registers and onto the stack, how to handle results, how to handle exception contexts, etc. If you're deeply into how to issue machine code to work efficiently on a particular architecture and in a particular set of use cases, this stuff is interesting. Everyone else (including me!) doesn't need to care beyond making sure it matches…

    I'll have to take that at face value. The one time when I needed to know how to generate a DLL, I just copied the convention from the source of the DLL that I was replacing.
    But when I was reading up (in a hurry) about the subject, the instructions online told, in no uncertain terms, to never link to C++ DLLs from different compilers. And always use C for DLL APIs. In addition to calling style, there was something about memory management...?

    Anyways, I note that the same advice is repeated here.

    So, as regards having a standard (or de facto standard) ABI, "it depends on the definition of ABI", is it?

    I think any ABI that ignores calling convention makes no sense. Except where you push it down to another level, as C++ does by letting compilers do their own thing. But without being able to make calls into the code there is no interface, and if you're using different calling conventions then bad things are definitely going to happen.

    But most people don't worry about things on that level. You only really need to know about the details at the level of assembly or lower, though this is what all the __declspec(dllexport) junk that you see (used to see?) in Windows C++ code is talking about.



  • @dkf You mean Windows API doesn't hide page allocation behind malloc/free (or similar)?

    @PleegWat In what system? You're going to have to be more specific, since they're all different.
    In the embedded world I live in, the C library's malloc/free is all you have. Unless you bring in a library that does it better, like you're supposed to; gcc's built-in embedded malloc doesn't defragment, if I recall.


  • Java Dev

    @acrow High-volume data processing, x86_64 linux. We have frequently-used structs in the kilobyte range which we allocate in bulk and reuse.


  • Discourse touched me in a no-no place

    @topspin said in So I decided to try to update part of my toolchain...:

    So if you mix compilers, make sure that you don't depend on multiple runtime libraries.

    Or utterly make sure that you pair allocation and deallocation correctly.




  • Discourse touched me in a no-no place

    @acrow said in So I decided to try to update part of my toolchain...:

    You mean Windows API doesn't hide page allocation behind malloc/free (or similar)?

    Neither does POSIX at the syscall level, but for some reason everyone uses the same libc there (so there's a single library handling everything, except for when someone plugs in a replacement) whereas they don't on Windows.



  • @dkf said in So I decided to try to update part of my toolchain...:

    @acrow said in So I decided to try to update part of my toolchain...:

    You mean Windows API doesn't hide page allocation behind malloc/free (or similar)?

    Neither does POSIX at the syscall level, but for some reason everyone uses the same libc there (so there's a single library handling everything, except for when someone plugs in a replacement) whereas they don't on Windows.

    Hm. Say, just how well does libc handle memory fragmentation these days?
    Asking because we're going to start using Linux in some embedded products in the (near) future.



  • @boomzilla said in So I decided to try to update part of my toolchain...:

    But most people don't worry about things on that level. You only really need to know about the details at the level of assembly or lower, though this is what all the __declspec(dllexport) junk that you see (used to see?) in Windows C++ code is talking about.

    Oh, I very much see it, thank you very much! I've had to address a lot of cases like this:

    #ifdef BITCH_IF_YOU_DONT_DEFINE_THIS_THE_DLL_WONT_COMPILE_HAHAHA
      #define MyCoolLibraryExport __declspec(dllexport)
    #else
      #define MyCoolLibraryExport __declspec(dllimport)
    #endif
    
    class MyCoolLibraryExport MyReallyCoolClass
    {
    ...
    

    The idea being that one variation is for compiling the DLL itself, the other for consumers of the DLL.



  • @boomzilla said in So I decided to try to update part of my toolchain...:

    You only really need to know about the details at the level of assembly or lower, though this is what all the __declspec(dllexport) junk that you see (used to see?) in Windows C++ code is talking about.

    __declspec(dllexport) doesn't affect the calling convention, though. It just tells the compiler/linker that a symbol (or multiple symbols if you put it on a class) should be exported.



  • REALLY CLOSE!

    Got a debug build of the app to run, and while I can't do full testing until I build some more stuff, prognosis looks good. Looks like my Ogre 1.9 terrain data loads just fine. But to get to this point, I had to battle with quite a few horrors:

    • There were the usual issues of some dependencies compiled with static CRT vs. dynamic CRT (i.e. /MTd vs /MDd), but that's to be expected.
    • The DirectX11 renderer doesn't like SkyX (a plugin for atmospherics and the like) for reasons I have yet to investigate. Set to OpenGL renderer for the time being.
    • Ogre 1.9 didn't care if you had multiple files with similar content in your resource tree. Ogre 1.11 does. So, I had to delete some duplicate files and comment out content in some scripts.
    • Exceptions were being thrown left and right, due to resource loaders not being able to find ResourceFile.xml. Yet, resourcefile.xml existed happily in the search path. Suspecting the worst, I renamed it to match case AND IT WORKED. Cue blakeyrant about case-sensitive filesystems. After doing some digging, I think I found the culprit:

    strict.png

    Seems like someone decided that the default should be STRICT instead of LEGACY. Whatever happened to "Be liberal in what you accept, be conservative in what you send?" Furthermore, what greater good is served by making sure my filenames match their resource entries not just to their names, but to their code points?

    • SkyX had to be updated as it was writing uint32s to a 24-bit RGB texture. Yes, this is a very good way to overflow the buffer you have allocated. So I changed it to RGBA, as that was quicker and easier than coming up with a 24-bit pointer type. No doubt many tears will be shed over the approximately 300 KB of memory wasted by this quick fix.
