Arithmetics...



  • "Bernie, can you explain: when does the configurator return int.MaxValue?".
    "Hm. The configurator looks for the setting in the config file, takes its value as a string, and then calls int.Parse(thatString, CultureInfo.InvariantCulture). So, you get int.MaxValue when you wrote that value into the config file. What's the problem?"
    "I configured 60, but it returns 4 billion ..."
    "4 billion? That's uint.MaxValue. Do you do some unchecked cast to uint?"
    "No, look here..." and David showed the simple line of code to Bernie. There was nothing wrong with it.
    "How do you know it returns that big value?"
    "It's shown in the UI. Normally, the inactivity timer counts down from 60 seconds, but on the demo system for our Big Boss, it shows 4 billion..." David is a master of communication. But eventually, Bernie coped with getting the relevant information from him.
    "Well, OK, our WPF UI: there is some ViewModel, and then some value is bound to the UI, perhaps a converter inbetween. Look at the code there, Kevin and even more Steven are very fond of initialising value type variables to their MinValue or MaxValue."

    Then, curious, Bernie started digging into the code himself. The UserInactivity class is a gem by Steven. How many weird things can you pack into so few lines of code?

    public class UserInactivity
    {
        public static readonly UserInactivity UserInactivityStatic = new UserInactivity();
    
        public event Action<long> SecondsOfUserInactivity;
    
        [DllImport("User32.dll")]
        private static extern bool GetLastInputInfo(ref Lastinputinfo _plii);
    
        private struct Lastinputinfo
        {
            public uint m_CbSize;
            public uint m_DwTime;
        }
    
        private Lastinputinfo m_LastInputInfo;
    
        private readonly DispatcherTimer m_Timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) };
    
        private UserInactivity()
        {
            m_LastInputInfo = new Lastinputinfo();
            m_LastInputInfo.m_CbSize = (uint)Marshal.SizeOf(m_LastInputInfo);
    
            m_Timer.Tick += Timer_Tick;
            m_Timer.Start();
        }
    
        private void Timer_Tick(object _sender, EventArgs _e)
        {
            SecondsOfUserInactivity?.Invoke(GetIdleMsCount() / 1000);
        }
    
        /// <summary>
        /// Idle time in Ms
        /// </summary>
        private long GetIdleMsCount()
        {
            return Environment.TickCount - GetLastInputTime();
        }
    
        /// <summary>
        /// Last input time in ticks
        /// </summary>
        private long GetLastInputTime()
        {
            if (!GetLastInputInfo(ref m_LastInputInfo))
            {
                Logger.LogWarning(NAME, "Could not get LastInputTime.");
                return long.MinValue;
                //throw new Win32Exception(Marshal.GetLastWin32Error());
            }
            return m_LastInputInfo.m_DwTime;
        }
    }
    

    First, there is that public static accessor - the instance is actually a global instance.
    When it is created, it starts a DispatcherTimer, which eventually calls GetLastInputInfo of the Windows API.
    Originally, if that function reported an error, Steven threw an exception. Well, he corrected that later, and that's appropriate. What would happen with that exception? It bubbles up to Timer_Tick, where it is NOT caught, and then further up, where it eventually kills the timer and crashes the application.

    So a problem in that function call must be handled differently. Steven's great idea was to return long.MinValue. Note that long is a signed 64 bit integer in the .Net world. Lastinputinfo.m_DwTime, in contrast, is an unsigned 32 bit integer. Is that a problem?
    Not yet, but in the next step it is: he subtracts the return value from another value. Now just remember: -long.MinValue = long.MaxValue + 1. When you subtract it from a positive value, you actually add that positive value plus 1 to long.MaxValue - voila, here you have an OverflowException (because the application is compiled with overflow checking enabled). That is, he no longer kills the application with the originally thrown exception, but with an OverflowException instead. Great achievement.
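
    To see the blast radius in isolation - a minimal sketch, not the project's code, assuming overflow checking is on project-wide (as it is here):

    using System;

    class OverflowDemo
    {
        static void Main()
        {
            long lastInputTime = long.MinValue; // Steven's error sentinel
            long tickCount = 123456;            // any positive tick count

            try
            {
                // checked reproduces what compiling with overflow checking does everywhere
                long idleMs = checked(tickCount - lastInputTime);
                Console.WriteLine(idleMs);
            }
            catch (OverflowException)
            {
                // tickCount - long.MinValue == tickCount + long.MaxValue + 1 > long.MaxValue
                Console.WriteLine("Subtracting the sentinel overflows immediately.");
            }
        }
    }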

    Still, that does not explain the underlying cause of the behavior visible in the UI (the application does not crash). Bernie will continue to dig into the heap of shit code and perhaps report some more WTFs.


  • Discourse touched me in a no-no place

    @berniethebernie said in Arithmetics...:

    Well, he corrected that later, and that's appropriate.

    You know what? I think I disagree with that statement. The issue is not that errors are happening, but that the code logs that something went wrong (without including what that something was, of course) and then applies a problem-handling strategy in the best traditions of VB's ON ERROR RESUME NEXT. Yes, Timer_Tick isn't hardened against exceptions. That's part of the problem too.
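
    If the timer is to survive a failed probe at all, the hardening belongs in the handler - a sketch against the class as quoted above, reusing its own Logger and NAME members:

    private void Timer_Tick(object _sender, EventArgs _e)
    {
        try
        {
            SecondsOfUserInactivity?.Invoke(GetIdleMsCount() / 1000);
        }
        catch (Exception ex)
        {
            // Log the actual failure instead of smuggling a sentinel value downstream.
            Logger.LogWarning(NAME, "Inactivity probe failed: " + ex);
        }
    }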



  • @berniethebernie said in Arithmetics...:

    First, there is that public static accessor - the instance is actually a global instance.

    AKA, a Singleton....


  • ♿ (Parody)

    @berniethebernie said in Arithmetics...:

    Note that long is a signed 64 bit integer in the .Net world.

    Huh. Did not expect that.



  • @boomzilla said in Arithmetics...:

    @berniethebernie said in Arithmetics...:

    Note that long is a signed 64 bit integer in the .Net world.

    Huh. Did not expect that.

    What did you expect "long" to be?

    A System.TimeSpan instance??? ;)


  • Discourse touched me in a no-no place

    @thecpuwizard said in Arithmetics...:

    @boomzilla said in Arithmetics...:

    @berniethebernie said in Arithmetics...:

    Note that long is a signed 64 bit integer in the .Net world.

    Huh. Did not expect that.

    What did you expect "long" to be?

    A System.TimeSpan instance??? ;)

    From elsethread, unsigned, probably.



  • @boomzilla In .Net, the integral types are sbyte, byte, short, ushort, char, int, uint, long and ulong. So bytes are unsigned by default, chars are always unsigned (they're WTF-16) and the others are signed by default.
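
    A quick check anyone can run (sizeof on the built-in numeric types doesn't need an unsafe context):

    using System;

    class Sizes
    {
        static void Main()
        {
            Console.WriteLine(sizeof(long));       // 8 (bytes) - on every platform .Net runs on
            Console.WriteLine(long.MinValue < 0);  // True - long is signed
            Console.WriteLine(ulong.MinValue);     // 0 - the unsigned counterpart
            Console.WriteLine((int)char.MinValue); // 0 - char is an unsigned UTF-16 code unit
        }
    }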


  • ♿ (Parody)

    @thecpuwizard said in Arithmetics...:

    @boomzilla said in Arithmetics...:

    @berniethebernie said in Arithmetics...:

    Note that long is a signed 64 bit integer in the .Net world.

    Huh. Did not expect that.

    What did you expect "long" to be?

    A System.TimeSpan instance??? ;)

    I expected a 32-bit signed integer, of course.

    @pjh said in Arithmetics...:

    From elsethread, unsigned, probably.

    👅


  • Notification Spam Recipient

    @berniethebernie said in Arithmetics...:

    class UserInactivity
    UserInactivityStatic
    event Action<long> SecondsOfUserInactivity
    Lastinputinfo
    m_LastInputInfo;

    Let the defendant rise.
    Steven Steven, you have been found guilty of at least five counts of aggravated heinous naming. Seeing as this seems to be a deeply rooted habit, and not an isolated case of abhorrent behaviour, I must admit that this was one of the most disturbing cases in my career. Rarely do I see individuals so twisted and beyond repair in this courtroom. Therefore, after consideration, I sentence you to 15 years of help desk, without possibility of parole.
    May God have mercy upon your soul.



  • @boomzilla said in Arithmetics...:

    @thecpuwizard said in Arithmetics...:

    @boomzilla said in Arithmetics...:

    @berniethebernie said in Arithmetics...:

    Note that long is a signed 64 bit integer in the .Net world.

    Huh. Did not expect that.

    What did you expect "long" to be?

    A System.TimeSpan instance??? ;)

    I expected a 32-bit signed integer, of course.

    Surprised that you would think "long" to only be 32 bits...


  • Notification Spam Recipient

    @thecpuwizard said in Arithmetics...:

    Surprised that you would think "long" to only be 32 bits...

    There are many definitions of "long" :giggity:


  • ♿ (Parody)

    @thecpuwizard said in Arithmetics...:

    @boomzilla said in Arithmetics...:

    @thecpuwizard said in Arithmetics...:

    @boomzilla said in Arithmetics...:

    @berniethebernie said in Arithmetics...:

    Note that long is a signed 64 bit integer in the .Net world.

    Huh. Did not expect that.

    What did you expect "long" to be?

    A System.TimeSpan instance??? ;)

    I expected a 32-bit signed integer, of course.

    Surprised that you would think "long" to only be 32 bits...

    But are you surprised that I would think that Microsoft would think long to only be 32 bits?


  • Discourse touched me in a no-no place

    @boomzilla said in Arithmetics...:

    @thecpuwizard said in Arithmetics...:

    @boomzilla said in Arithmetics...:

    @thecpuwizard said in Arithmetics...:

    @boomzilla said in Arithmetics...:

    @berniethebernie said in Arithmetics...:

    Note that long is a signed 64 bit integer in the .Net world.

    Huh. Did not expect that.

    What did you expect "long" to be?

    A System.TimeSpan instance??? ;)

    I expected a 32-bit signed integer, of course.

    Surprised that you would think "long" to only be 32 bits...

    But are you surprised that I would think that Microsoft would think long to only be 32 bits?

    “That's OK darling, it's ‘long’ to me…”


  • 🚽 Regular

    @thecpuwizard said in Arithmetics...:

    @boomzilla said in Arithmetics...:

    @thecpuwizard said in Arithmetics...:

    @boomzilla said in Arithmetics...:

    @berniethebernie said in Arithmetics...:

    Note that long is a signed 64 bit integer in the .Net world.

    Huh. Did not expect that.

    What did you expect "long" to be?

    A System.TimeSpan instance??? ;)

    I expected a 32-bit signed integer, of course.

    Surprised that you would think "long" to only be 32 bits...

    char is 8 bits, int is 16 bits, long is 32 bits and long long is 64 bits. Just like god intended :)

    short is also 8 bits. A short long is 24 though.

    /Microcontrollers and DSPs, I swear I worked on something that had 12-bit data word-length too.



  • After some debugging, Bernie found the cause of the weird behavior: it's the wraparound of the tick count behind GetLastInputInfo after some 25 days (Environment.TickCount goes negative once the uptime passes int.MaxValue milliseconds) - and the demo system of the Big Boss had already been running for a few days.

    Steven had tried to prevent the issue with a cast to long - but a negative int still translates into a negative long... Actually, an unchecked subtraction Environment.TickCount - GetLastInputTime() (with the return type of the latter fixed to int first, of course) does the trick (and is described on StackOverflow, by the way).
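
    Roughly like this - a sketch of the idea, not the actual commit (struct and member names here follow the Windows API docs rather than Steven's conventions):

    using System;
    using System.Runtime.InteropServices;

    static class IdleTime
    {
        [StructLayout(LayoutKind.Sequential)]
        private struct LASTINPUTINFO
        {
            public uint cbSize;
            public uint dwTime; // tick count of the last input event
        }

        [DllImport("user32.dll")]
        private static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);

        // Idle time in milliseconds; wraparound-safe past the ~25-day mark.
        public static int GetIdleMs()
        {
            var lii = new LASTINPUTINFO();
            lii.cbSize = (uint)Marshal.SizeOf(typeof(LASTINPUTINFO));
            if (!GetLastInputInfo(ref lii))
                return 0; // or log - but do NOT return a sentinel like long.MinValue

            // Both counters wrap identically, so the unchecked int difference
            // is correct modulo 2^32 - exactly what we want here.
            return unchecked(Environment.TickCount - (int)lii.dwTime);
        }
    }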



  • @boomzilla Not everyone here works with C#. Definitions may be different in other programming languages.


  • ♿ (Parody)

    @berniethebernie said in Arithmetics...:

    Not everyone here works with C#

    👋



  • @cursorkeys said in Arithmetics...:

    A short long is 24 though.

    This needs to become a real thing. (I mean, besides the fact that I could have made good use of a 24-bit type just last week.)

    had 12-bit data word-length

    May I suggest long short for this one?



  • @cursorkeys said in Arithmetics...:

    /Microcontrollers and DSPs, I swear I worked on something that had 12-bit data word-length too.

    Multiples of 8 bits really only became "a thing" with the advent of Microprocessors. 12 Bits (and 24, and 36) were quite common in the older machines.



  • @thecpuwizard 24 is a multiple of 8. You probably mean powers of 2 (from 8 up).


  • Discourse touched me in a no-no place

    @cvi said in Arithmetics...:

    had 12-bit data word-length

    May I suggest long short for this one?

    No, that's more of a short long char, where a long long char would be whatever blecherosity is needed to hold any Unicode codepoint…


  • @mrl The UserInactivity timer gets reset if the User is InActivity.

    Obvs.



  • @berniethebernie said in Arithmetics...:

    [DllImport("User32.dll")]

    I've never used C#, but I'm curious about that. Why do you need to, I assume, tell the code which DLL to use at that point? It seems weird to me to have that type of information in the source code itself? (that doesn't look like a comment)



  • @remi said in Arithmetics...:

    I've never used C#, but I'm curious about that. Why do you need to, I assume, tell the code which DLL to use at that point? It seems weird to me to have that type of information in the source code itself? (that doesn't look like a comment)

    You only need to do that if it's not a .NET DLL. The DllImport attribute is part of the System.Runtime.InteropServices namespace, aka. "me want run C code!"

    Needless to say, if that's in your program, your perfectly portable C# code becomes instantly not-at-all-portable, sorry buddy.


  • Considered Harmful

    @blakeyrat can't you write "user32" there instead of "user32.dll"? Because I do know that in .NET Core you can use DllImport with .so files.



  • @pie_flavor said in Arithmetics...:

    can't you write "user32" there instead of "user32.dll"?

    I don't know why you'd see an example of badly-written code and assume the ".dll" is there because of some limitation of .NET and not simply because you're looking at an example of badly-written code.

    In any case, I don't know the answer to your question. Look it up. But if it is required, since Attributes in C# are just methods, it'd be easy enough to write a version that checks the ".dll" path and if there's no file there also checks the ".so" path. So it's not like a huge emergency anyway.


  • Considered Harmful

    @blakeyrat All I'm saying is that yes it's still portable.



  • @pie_flavor said in Arithmetics...:

    All I'm saying is that yes it's still portable.

    Not really. I mean maybe as much as anything in the Linux world is portable (which is to say: barely, because Linux is shit).

    Unless you're shipping those shared libraries, I have no idea how you'd tell .NET where it's supposed to be finding them for every Linux distro, nor how you'd deal with the fact that their ABI's going to break every 36 nanoseconds, invalidating all of your application's pinvokes. Which is no worse than a C++ Linux app I suppose, but still shit.


  • Java Dev

    I could imagine it being useful if you're shipping your own DLL and .so files. Linux syscalls are relatively portable, though you may need to embed shell code to do anything else.


  • Considered Harmful

    @blakeyrat Obviously user32.dll code is not portable. I'm referring to the general practice of using native code - if you omitted the file extension, you could ship both .dll and .so and have it be cross-platform.



  • @blakeyrat said in Arithmetics...:

    You only need to do that if it's not a .NET DLL. The DllImport attribute is part of the System.Runtime.InteropServices namespace, aka. "me want run C code!"

    I see, thanks. It still doesn't look right to me to put that information into the source code itself. I understand that you might have to tell C# that the code will come from an external object, but having to tell it the exact file name where that code lives feels... unnatural. To me that belongs in the config/build system, i.e. where you tell the code where everything it needs is located, not inside the code itself?


  • Considered Harmful

    @remi DLLs are funny. It helps if you remember you can just write "user32"; that lets you think of it more as a library and less as a filename. Part of DLL Hell is that it's simply an identifier, and the path could be any number of places.
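
    A sketch of what I mean (MessageBoxW picked only because it's a well-known user32 export):

    using System;
    using System.Runtime.InteropServices;

    static class NativeDemo
    {
        // No ".dll" here: the loader appends the platform's library suffix itself,
        // which is what makes the extension-less spelling the more portable one.
        [DllImport("user32", CharSet = CharSet.Unicode)]
        private static extern int MessageBoxW(IntPtr hWnd, string text, string caption, uint type);

        static void Main() => MessageBoxW(IntPtr.Zero, "Hello from P/Invoke", "Arithmetics...", 0);
    }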



  • @pie_flavor Even if it's just a library, not a file name, it's still mixing ideas about where the code actually resides vs. simply using that code.

    For me, the source code should only be concerned with "there is a function called this and that". Whether that function comes from the same file, another file in the same library, or an entirely different library, is a different idea, that we shouldn't really have to worry about when writing the code. Of course we have to care about that when compiling it (or executing it, for interpreted code), but that allows splitting concerns and focusing on one thing only at a time.

    (ideally, I'd even like to not have to put #include or similar into my code, but that's not reality... and there are practical issues about name conflicts that can only be solved by the person actually writing the code, so it's not really a strong ideal...)


  • Discourse touched me in a no-no place

    @remi said in Arithmetics...:

    Even if it's just a library, not a file name, it's still mixing ideas about where the code actually resides vs. simply using that code.

    QFFT

    Sure, I can code easily enough putting in paths /home/dkf/programming/furtlefurtle/foobar.dll but it really doesn't make my code portable at all. Much better to say I use the foobar library (with some version constraint) in my furtlefurtle project and have the build/packaging/install process do the right thing. It's a classic example of solving problems by adding a level of abstraction.



  • @remi said in Arithmetics...:

    Whether that function comes from the same file, another file in the same library, or an entirely different library, is a different idea, that we shouldn't really have to worry about when writing the code.

    But you do, even when writing pure managed code [specifically the assembly references in the .csproj file]



  • @thecpuwizard Yeah but like you said, it's in the .csproj file, not in the source code itself.

    At the instant when you are thinking "oh I need to call function foo() to do what I want", you shouldn't be worrying about where function foo() resides. A bit later, when you have written your code and want to get it to run, yes, of course you need to tell the system where to get that code, but it is at the point where you are concerned about getting the code to run, not when transcribing in code the algorithm that you have in mind.



  • @remi said in Arithmetics...:

    @thecpuwizard Yeah but like you said, it's in the .csproj file, not in the source code itself.

    At the instant when you are thinking "oh I need to call function foo() to do what I want", you shouldn't be worrying about where function foo() resides. A bit later, when you have written your code and want to get it to run, yes, of course you need to tell the system where to get that code, but it is at the point where you are concerned about getting the code to run, not when transcribing in code the algorithm that you have in mind.

    Assuming you are not doing everything in the global namespace, then you still need to supply that [typically with a using statement].

    And it is not "later, when you have written your code and want to get it to run", it is as you are writing the code so that you can use intelligence, auto-completion, LUT, and a host of other things.

    ps: even the assembly references in the .csproj file do NOT "tell the system where to get that code"...



  • @thecpuwizard said in Arithmetics...:

    @remi said in Arithmetics...:

    @thecpuwizard Yeah but like you said, it's in the .csproj file, not in the source code itself.

    At the instant when you are thinking "oh I need to call function foo() to do what I want", you shouldn't be worrying about where function foo() resides. A bit later, when you have written your code and want to get it to run, yes, of course you need to tell the system where to get that code, but it is at the point where you are concerned about getting the code to run, not when transcribing in code the algorithm that you have in mind.

    Assuming you are not doing everything in the global namespace, then you still need to supply that [typically with a using statement].

    And I also believe that this is a bad thing. It's the same story as with #include. Yes, I need to write them before actually writing my code, otherwise I won't be able to use a lot of features of the editor (although sometimes I don't need completion to write stuff that I've written tons of times before). But it still forces me to take my mind away from the ideas that are embedded in the code, and back to the technical details. Ideally (and I know this is not going to happen, but new features added to a language should not move further away from that goal!), I should not have to worry about this when writing code.

    If you want an analogy, that's like making some craft project and every time you need a different pen, or bit of scrap paper or whatever, instead of just reaching for it on your work table, you have to get up, walk to the cupboard, rummage through it until you find it and get back. If you are doing fiddly stuff, this will be very annoying very quickly. It's the same here.

    "OK, now I'm calling this function and... uh... why doesn't it auto-complete? oh, the function is an external assembly and I need to specify it... right, it must be... hum... this one? nope, I forgot, since version 42.numbers that's been split into that one, and... oh no, right, I was mistaken from start, it's there! now, what was I saying? should I pass this variable or that one? I don't know, I forgot while I was busy looking for the right file!"

    ps: even the assembly references in the .csproj file to NOT "tell the system where to get that code"...

    I don't do C# so yeah, maybe I'm wrong. I have no idea what a .csproj file is; I'm assuming it's the VS project file, but for all I know it might be your tax return records. But I still don't see why having to type the name of a DLL inside the source code itself is a good thing.



  • @remi said in Arithmetics...:

    If you want an analogy, that's like making some craft project and every time you need a different pen, or bit of scrap paper or whatever, instead of just reaching for it on your work table, you have to get up, walk to the cupboard, rummage through it until you find it and get back. If you are doing fiddly stuff, this will be very annoying very quickly. It's the same here.
    "OK, now I'm calling this function and... uh... why doesn't it auto-complete? oh, the function is an external assembly and I need to specify it... right, it must be... hum... this one? nope, I forgot, since version 42.numbers that's been split into that one, and... oh no, right, I was mistaken from start, it's there! now, what was I saying? should I pass this variable or that one? I don't know, I forgot while I was busy looking for the right file!"

    Different points of view [aka subjective].

    For your first analogy, I do not want every pen and piece of paper cluttering up my project area, I want to get the minimal known items that I will need.

    "the function is an external assembly"- isn't that something I should have thought out during design, what are the ramifications?

    "since version 42.numbers that's been split into " - again, before writing code, what was the design decisions about the range of versions that would be supported? What techniques are being used to mitigate changes which might occur in future version?

    etc. etc.



  • @thecpuwizard I still believe there is an underlying objective principle of not mixing stuff that belongs to different spheres. Of course, that conflicts with many other principles, and the final balance between all of those is subjective.

    All the examples that you mention are valid, but you yourself say that these are things to be considered during the design of the code. At the time you decide on a design (which may evolve, but that's another topic), ideally you haven't written code yet. And when you sit down and open your text editor to write the code, ideally again you should not be changing the design. So at that point, you should no longer (or not yet) be concerned about how to put together the physical pieces of code (import that DLL, make sure this symbol is defined), but only the logical ones (call this function, pass that value around).


  • Discourse touched me in a no-no place

    @thecpuwizard said in Arithmetics...:

    For your first analogy, I do not want every pen and piece of paper cluttering up my project area, I want to get the minimal known items that I will need.

    But I don't want to have spend all my time thinking about the stationery store where I get the pens and the paper from, or the name of the clerk at the store.



  • @remi I don't get the difference between telling a C# program that a certain C function is in User32.dll and telling a C++ program to #include "User32.dll".

    Or I guess put another way: which programming language does it right in your opinion?



  • @blakeyrat said in Arithmetics...:

    telling a C++ program to #include "User32.dll".

    That's not how you tell a C++ program. You include "windows.h" (or whatever header the API is defined in) and link with user32.lib.

    Edit: Or use LoadLibrary("user32.dll") and GetProcAddress("Func"), and do it the hard way...



  • @blakeyrat said in Arithmetics...:

    Or I guess put another way: which programming language does it right in your opinion?

    None, in that regard. Or at least none that I use or know of (before someone :pendant:s me with "Brainfuck does that" or whatever).

    I would tend to say that the C/C++ way of doing many (sometimes dozens of...) #include is worse than having a single library-wide [DllImport(...)] like in the C# example that started this discussion, but I prefer the C/C++ way of putting all includes at the top; at least it moves them out of the way. But it may be that the C# example above is simply bad code in not doing that, I don't know (in C/C++ you can put #include in the middle of your header file if you like, but apart from edge cases, no-one in his right mind would do that!).

    Even though I don't actually use it, I like the way Qt (C++) does it, with a single library-wide include, so you just put #include <Qt> and get all of them, without having to #include <QStuff> and #include <QMoreStuff> (and you can still use individual includes if for some reason that's what you want). That's still boilerplate code that I shouldn't have to worry about (ideally), but that makes it very short.

    Also, the C/C++ way at least includes source code files (headers), not binary DLLs, which makes it IMO more developer-friendly as you can actually go look into those files and see what's in there. Sure, for most system headers it's not easy to follow, but still, that's more readable than a DLL. Unless C# has a built-in way to allow the IDE to seamlessly map from the "User32.dll" string in the editor back to whatever header (or doc) that DLL has.



  • @dcon said in Arithmetics...:

    That's not how you tell a C++ program. You include "windows.h" (or whatever header the API is defined in) and link with user32.lib.
    Edit: Or use LoadLibrary("user32.dll") and GetProcAddress("Func"), and do it the hard way...

    Well ok, I don't use C++ because it's shit, but isn't the LoadLibrary method exactly the same as what C# is doing?

    In any case, if you have to tell the linker about a DLL, it kind of defeats the purpose of it being a dynamically linked library and it just becomes a normal library. It also precludes the ability to change that path at runtime to, for example, load a DLL as a plug-in to your application.

    @remi said in Arithmetics...:

    Also, the C/C++ way at least includes source code files (headers), not binary DLLs, which makes it IMO more developer-friendly as you can actually go look into those files and see what's in there.

    If you used an IDE, you could just right-click the DLL and choose "browse objects" or whatever and also "see what's in there" except it's way better because it's easier to navigate.

    I'm not a fan of arguments that sum up to, "doing X in text or a CLI is better because I only use tools from 1986!" Why don't we all just use modern tools that don't suck, is always going to be my response to that.



  • @blakeyrat said in Arithmetics...:

    @dcon said in Arithmetics...:

    Edit: Or use LoadLibrary("user32.dll") and GetProcAddress("Func"), and do it the hard way...

    Well ok, I don't use C++ because it's shit, but isn't the LoadLibrary method exactly the same as what C# is doing?

    Yes, but it's also @dcon kind of fucking with you. In C/C++, you would almost never directly load a library this way (or with dlopen() on Linux, which is more or less the equivalent), because as you say it defeats the purpose of the system. You would only use that when you're doing some low level stuff and you need to peek around a library for specific stuff, maybe in some plugin loading system or similar.

    The huge majority of source code will use #include <foo.h> and defer the DLL linking to, well, the linker (i.e. how you compile the code). And the linker will just check that the DLL/library exists and provides the required code, so of course at runtime you can set it up to use a different one than the one used while compiling (provided they're compatible, obviously).

    It also precludes the ability to change that path at runtime to, for example, load a DLL as a plug-in to your application.

    Note that in LoadLibrary(...) (or dlopen(...)) the arguments are just normal string variables, so they can (and would typically be, for a plugin system) vary at runtime. I'm saying that because it looks like this isn't the case with the C# [DllImport...] syntax, but keep in mind that I don't know C#, so maybe I missed your point here.

    @remi said in Arithmetics...:

    Also, the C/C++ way at least includes source code files (headers), not binary DLLs, which makes it IMO more developer-friendly as you can actually go look into those files and see what's in there.

    If you used an IDE, you could just right-click the DLL and choose "browse objects" or whatever and also "see what's in there" except it's way better because it's easier to navigate.

    OK, as I said I don't do C# so I wasn't aware of that. That does render my comment pointless.

    I'm not a fan of arguments that sum up to, "doing X in text or a CLI is better because I only use tools from 1986!"

    If that's how you read me, please put your 👽s under control. I specifically mentioned the IDE and how it should help here, and that applies for both languages.



  • @remi said in Arithmetics...:

    The huge majority of source code will use #include <foo.h> and defer the DLL linking to, well, the linker (i.e. how you compile the code).

    ... but again once you do that they're no longer dynamically linked libraries. So yes that works, just like pulling a Toyota with a couple horses works, but you're kind of missing the point of the exercise.

    @remi said in Arithmetics...:

    I'm saying that because it looks like this isn't the case with the C# [DllImport...] syntax, but keep in mind that I don't know C#, so maybe I missed your point here.

    As I said before in this thread, C# attributes are just methods by another syntax.

    @remi said in Arithmetics...:

    If that's how you read me, please put your 👽s under control. I specifically mentioned the IDE and how it should help here, and that applies for both languages.

    Yeah but since you're obviously unaware that IDEs do that, maybe consider using one and joining the rest of us here in the 21st century?



  • @blakeyrat said in Arithmetics...:

    but again once you do that they're no longer dynamically linked libraries

    What usually happens is that you statically link not against the library itself, but against an export table. The linker then goes "cool, there's this function, with this calling convention and stack layout, and its implementation is in outer space somewhere. I'll add that to my EXE ... 'outer... space... somewhere.' Good!" At runtime, the OS goes through the EXE's import table, loads all of the DLLs mentioned therein, and replaces the "outer space somewhere" stubs with proper links to the DLLs. That makes the links dynamic, in that you can replace the DLL with any other DLL that has the same names or ordinals in its export table and it still works. The loads are static, in that it always uses the same DLL name and the same function name or number.

    @blakeyrat said in Arithmetics...:

    Well ok, I don't use C++ because it's shit, but isn't the LoadLibrary method exactly the same as what C# is doing?

    Yep.

    @remi said in Arithmetics...:

    Also, the C/C++ way at least includes source code files (headers), not binary DLLs, which makes it IMO more developer-friendly as you can actually go look into those files and see what's in there. Sure, for most system headers it's not easy to follow, but still, that's more readable than a DLL. Unless C# has a built-in way to allow the IDE to seamlessly map from the "User32.dll" string in the editor back to whatever header (or doc) that DLL has.

    .NET modules include all the information that would go in a C++ header file in the module (DLL) itself, so you can actually go look at the definitions and see what's in there, even with no source code files, with zero loss in fidelity, using the exact same IDE windows and keyboard shortcuts you're familiar with from C++ development.

    Things get icky when doing Platform Invoke because the C# compiler can't read C++ header files. You're providing those header files yourself, using C# syntax, and then annotating which DLL the runtime should LoadLibrary and GetProcAddress against.
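
    The pattern looks like this - a sketch using GetTickCount64, whose C prototype in winbase.h is ULONGLONG GetTickCount64(void);:

    using System;
    using System.Runtime.InteropServices;

    static class Kernel32
    {
        // The C# declaration re-states the C signature, and the DllImport attribute
        // tells the runtime which module to LoadLibrary/GetProcAddress against.
        [DllImport("kernel32.dll")]
        public static extern ulong GetTickCount64();
    }

    (Handily, GetTickCount64 is also the API that makes the 25-day tick-count wraparound from upthread go away.)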

    @remi said in Arithmetics...:

    Even though I don't actually use it, I like the way Qt (C++) does it, with a single library-wide include, so you just put #include <Qt> and get all of them

    #define WIN32_LEAN_AND_MEAN and #include <windows.h> and chill. But you still have to know whether to link against Kernel32, User32, Shell32, Gdi32, or a bunch of other fun ones; Visual Studio just goes "screw it, I'm just going to include all of them and let the linker sort it out" by default.



  • @blakeyrat said in Arithmetics...:

    @remi said in Arithmetics...:

    The huge majority of source code will use #include <foo.h> and defer the DLL linking to, well, the linker (i.e. how you compile the code).

    ... but again once you do that they're no longer dynamically linked libraries. So yes that works, just like pulling a Toyota with a couple horses works, but you're kind of missing the point of the exercise.

    Uh, no, I think you are missing how it works. Let me copy-paste what I said and tell me what's unclear about it:

    the linker will just check that the DLL/library exists and provides the required code, so of course at runtime you can set it up to use a different one than the one used while compiling (provided they're compatible, obviously).

    As I said before in this thread, C# attributes are just methods by another syntax.

    Right, so there is nothing different between [DllImport...] and LoadLibrary(...)?

    Before you yell at me again, that is a genuine question. You said that LoadLibrary() "precludes the ability to change that path at runtime". It doesn't. So what's the difference in your mind?

    Yeah but since you're obviously unaware that IDEs do that, maybe consider using one and joining the rest of us here in the 21st century?

    Holy fuck, even knowing you, I'm amazed at how short-sighted you can be, and how quick you are to let your shoulder aliens blind you.

    I am unaware of what C# IDE can do, for the simple reason that I don't fucking do C#. There is no DllImport in the language I use, so I don't have a fucking clue what an IDE is supposed to be doing when it sees one, and what information it might or might not be able to pull out of it. The language I use has #include, and the IDE is very able to do stuff with it, thank you very much.

