Linux system calls on Windows



  • Is there somewhere that lists Windows alternatives to Linux system calls in x86 assembly? Specifically, I need 1, 3, 4, and 45 (exit, read, write, and brk on 32-bit Linux), and if the file descriptor numbers differ for stdin/stdout, I need a way of getting those, too.



  • Is using Cygwin an option?



  • A compiler targeting Cygwin is a bit weird.



  • Sorry, not much of a programmer 😛 I just knew Cygwin implemented a lot of *nix syscalls for Windows, figured it'd be something to throw out there.



  • Umm... assembly uses C system calls now?

    exit, _read, and _write; my google-fu is failing me on a replacement for brk / changing the segment size.


  • BINNED

    Actually, it is common. Most of the time MinGW should suffice, and it produces faster code. Do you have any specific reason you want raw syscalls? I think brk would be tricky; you may have to recreate the process with more memory (something like fork). If your license is compatible, you should look at the source code for Cygwin.



  • Read this.

    I would consider the canonical solution to be linking against / loading kernel32.dll (or ntdll.dll if you need to go lower-level) and calling the appropriate functions from there: ExitProcess, GetStdHandle, ReadFile, and WriteFile instead of syscalls 1, 3 & 4. AFAIK there's no brk/sbrk equivalent (FWIW, the C calls were removed from newer POSIX standards as well). You could maybe try to emulate it with memory mappings, but eh... messy. Cygwin probably emulates brk, so you could also check out what they do.

    The other way would be to call exit, _read, _write from the CRT as @powerlord suggested.
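    A minimal sketch of the kernel32 route described above, in C so the calls are easy to see (in assembly you'd push the same arguments and call through the import table). The Win32 branch uses GetStdHandle/WriteFile/ExitProcess as stand-ins for syscalls 4 and 1; the POSIX branch is only there so the same program builds and runs elsewhere.

    ```c
    #include <stddef.h>

    #ifdef _WIN32
    #include <windows.h>

    /* Win32 equivalents of Linux syscalls 4 (write) and 1 (exit):
     * GetStdHandle + WriteFile, then ExitProcess. */
    int main(void) {
        const char msg[] = "hello via kernel32\n";
        HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
        DWORD written = 0;
        WriteFile(out, msg, sizeof msg - 1, &written, NULL);
        ExitProcess(0);
    }
    #else
    #include <unistd.h>

    /* POSIX fallback: write(2) and _exit(2) directly. */
    int main(void) {
        const char msg[] = "hello via kernel32\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
        _exit(0);
    }
    #endif
    ```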



  • Ah, FWIW, there's also this and this. Compile the latter in C and look at the generated ASM (there's a brk method). My guess is that it goes through a compatibility layer, though.



  • Basically, I have this:

    https://github.com/BenLubar/coolc/blob/master/libcool/basic_linux.s

    Memory allocations are handled by the runtime I wrote, so I can't use malloc or anything that might call malloc.

    I also still need to write the garbage collector and some optimizations for the code generated by the compiler (such as not boxing integers repeatedly during complicated arithmetic expressions).



  • @ben_lubar said:

    Memory allocations are handled by the runtime I wrote, so I can't use malloc or anything that might call malloc.

    If you look at how e.g. tcmalloc does this, you'll find that they can use mmap() on *nix, and something like VirtualAlloc on Windows. (I guess you could use CreateFileMapping() with INVALID_HANDLE_VALUE to get something similar to mmap(), but VirtualAlloc is more straightforward.)

    You'd use mmap/VirtualAlloc to reserve a large arena of memory from which your allocator grabs stuff. Neither actually commits physical memory until the pages are accessed, so essentially you only allocate a part of the address space.

    brk is similar (i.e., I'm guessing you use it to get memory for your allocator to play in), but perhaps more convenient because you know it will return contiguous memory. VirtualAlloc/mmap will return an address to you, although you can ask for a specific address (you will only get it if it's available).
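    The reserve-an-arena idea above can be sketched in a few lines. This is the POSIX version (mmap with PROT_NONE reserves address space, mprotect commits it); the comments note the hypothetical Windows analogues, and the function names `arena_reserve`/`arena_commit` are mine, not from any real allocator.

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <string.h>
    #include <sys/mman.h>

    enum { ARENA_SIZE = 1 << 30, PAGE = 4096 };  /* reserve 1 GiB */

    /* Reserve address space without committing any pages
     * (Windows analogue: VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS)). */
    static void *arena_reserve(size_t size) {
        void *p = mmap(NULL, size, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }

    /* Commit a prefix of the reservation on demand
     * (Windows analogue: VirtualAlloc(addr, size, MEM_COMMIT, PAGE_READWRITE)). */
    static int arena_commit(void *addr, size_t size) {
        return mprotect(addr, size, PROT_READ | PROT_WRITE);
    }

    int main(void) {
        void *arena = arena_reserve(ARENA_SIZE);
        assert(arena != NULL);
        /* Only the address space exists so far; commit one page and use it. */
        assert(arena_commit(arena, PAGE) == 0);
        memset(arena, 0xAB, PAGE);
        return 0;
    }
    ```

    Physical pages are still only faulted in as they're touched, so reserving a huge arena up front costs essentially nothing.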



  • @cvi said:

    but perhaps more convenient because you know that it will return contiguous memory.

    So I'll have to deal with non-contiguous memory allocations on Windows? What's the point of virtual memory if it's harder to use than physical memory?


  • BINNED

    It is contiguous virtual memory, but perhaps not physical. I think even brk does not promise contiguous physical memory, almost no one but hardware cares about that.



  • I don't care as long as saying "give me 0x1000 more virtual memory bytes" adds them directly to the end of what I already have.


  • BINNED

    That is not guaranteed, and that is the selling point of sbrk: it allocates from the end of the current process's data segment. I think on Windows you have to emulate it by reserving a very big chunk (with MEM_RESERVE). Don't worry, it will not really allocate anything until you start using it, so do not zero-fill.

    I remember seeing, a long time ago, something that does not even need reserve-then-commit (which is the right way, of course). :)
    It's documented only in the comments section, but this is Windows.

    Note that even if you specify MEM_COMMIT (i.e., using the MEM_COMMIT | MEM_RESERVE flag combination) to "commit" the page, the actual physical page is not guaranteed and probably will not have been loaded yet. You have to actually touch the page (read or write) for it to be loaded.

    That means you can get the memory, then zero fill and use the portion you want.
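    Putting the last few posts together, a brk-style "give me N more contiguous bytes" can be emulated on top of a single big reservation: the break only ever moves forward through the reserved range, and pages are committed as it passes them. A sketch under those assumptions, using the POSIX reserve/commit primitives (swap in VirtualAlloc with MEM_RESERVE/MEM_COMMIT on Windows); the names `heap_init`/`my_sbrk` are hypothetical, not from Ben's runtime.

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <sys/mman.h>

    enum { HEAP_RESERVE = 1 << 28, PAGE_SZ = 4096 };  /* 256 MiB reserved */

    static unsigned char *heap_base, *heap_brk, *heap_committed;

    /* Reserve the whole range up front; nothing is committed yet. */
    static int heap_init(void) {
        heap_base = mmap(NULL, HEAP_RESERVE, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (heap_base == MAP_FAILED) return -1;
        heap_brk = heap_committed = heap_base;
        return 0;
    }

    /* Move the break forward, committing whole pages as needed.
     * Returns the old break, like sbrk(2), or (void *)-1 on failure. */
    static void *my_sbrk(size_t incr) {
        unsigned char *old = heap_brk, *want = heap_brk + incr;
        if (want > heap_base + HEAP_RESERVE) return (void *)-1;
        if (want > heap_committed) {
            size_t need  = (size_t)(want - heap_committed);
            size_t pages = (need + PAGE_SZ - 1) / PAGE_SZ * PAGE_SZ;
            if (mprotect(heap_committed, pages, PROT_READ | PROT_WRITE) != 0)
                return (void *)-1;
            heap_committed += pages;
        }
        heap_brk = want;
        return old;
    }

    int main(void) {
        assert(heap_init() == 0);
        unsigned char *a = my_sbrk(0x1000);
        unsigned char *b = my_sbrk(0x1000);
        assert(b == a + 0x1000);  /* "0x1000 more bytes" lands at the end */
        a[0] = 1; b[0xFFF] = 2;   /* both pages committed and writable */
        return 0;
    }
    ```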



  • There does exist a dynamic binary translator for Linux ELF binaries on Windows that happens to emulate the Linux syscall interface. It may not be exactly what you're looking for, but the source code might prove useful to you. https://github.com/wishstudio/flinux



  • @dse said:

    I remember I had seen something that does not even need reserve and commit (that is the right way of course), a long time ago :)
    In the comments section not documented, but this is Windows.

    It does say this in the notes for MEM_COMMIT in the table:

    Actual physical pages are not allocated unless/until the virtual addresses are actually accessed.

    The pages are also zero-filled on the initial access.


  • BINNED

    Yes, I think I am just out of date. In any case, a proper implementation should reserve first, then commit as needed.



  • Yeah. In Ben's case, he could MEM_RESERVE a large-ish address range, and MEM_COMMIT parts of it as needed (i.e., brk could then be emulated with VirtualAlloc+MEM_COMMIT in the reserved range).



  • What are you trying to do, Ben? In what case are you writing assembly yet still able to work with the Windows kernel? Is it 1990 again? Also, if you are running against a full-fledged virtual memory system, adding more virtual memory should be completely transparent to your code, and it will therefore look like it IS at the end of your block.



  • I'm writing a compiler for a garbage-collected language.



  • Will the compiled code be running in userspace in windows?

    If it is, and if I understand windows well enough, then it will think it has a perfectly linear memory model and has a beautiful landscape of infinite memory exactly how it needs it.

    Otherwise start looking for DOS function calls


  • FoxDev

    @mrguyorama said:

    infinite memory

    within address-space limitations, and taking into consideration that, unless you're running in a 32-bit address space, the address space is going to be several orders of magnitude bigger than the memory actually available to be mapped into it.



  • It's running in 32-bit address space because the language doesn't have any 64-bit integer type and it's easier to do it this way than to write separate code for 64-bit code generation.



  • @accalia said:

    within address space limitations

    Yes, but in a virtual memory environment (especially windows), as long as you don't actually use it, any "hey can I set aside some memory" will get a response akin to "Oh god yes I have so much of it here take it take it please it's burying me and I can't breathe oh god oh no call an ambulance"


  • FoxDev

    @mrguyorama said:

    @accalia said:
    within address space limitations

    Yes, but in a virtual memory environment (especially windows), as long as you don't actually use it, any "hey can I set aside some memory" will get a response akin to "Oh god yes I have so much of it here take it take it please it's burying me and I can't breathe oh god oh no call an ambulance"

    well yes. Any addressable memory can be reserved like that, certainly.

    that was rather my point. you can't address 74 yottabytes of memory with 64-bit addresses.

    with 64-bit addresses you can address at most 16 exabytes of memory.

    in fact, with current processor implementations you only have 42 (Intel) or 48 (AMD) address lines, which means you can actually only address 4 TB (Intel) or 256 TB (AMD) before hitting unaddressable memory.

    so while your request for addressable memory will be granted (so long as you never allocate it and run into those physical limitations), you are still limited to between 4 TB and 16 EB of addressable memory, depending on your situation and which limits apply to you.
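    The figures quoted above are just powers of two (and use binary prefixes: the "TB"/"EB" here are really TiB/EiB). A quick arithmetic check:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t tib = 1ULL << 40;  /* one tebibyte */
        /* 2^42 bytes -> 4 TiB, 2^48 -> 256 TiB, 2^64 -> 16 EiB */
        printf("42-bit: %llu TiB\n", (unsigned long long)((1ULL << 42) / tib));
        printf("48-bit: %llu TiB\n", (unsigned long long)((1ULL << 48) / tib));
        /* 2^64 overflows uint64_t, so compute 2^63 / 2^60 * 2 instead */
        printf("64-bit: %llu EiB\n",
               (unsigned long long)((1ULL << 63) / (1ULL << 60) * 2));
        return 0;
    }
    ```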



  • I just like the fact that a formerly useful function in the windows/linux/$OSNAME codebase has been reduced to:

    haveMemory(){
        return true;
    }


  • Well, you can't give an individual process more memory than the system has. And there are usage limits and all that.



  • That gets checked at actual allocation/use time though, doesn't it?

    I guess I need to reread my Operating Systems book, it was like a whole year ago.

    Related: Currently reading this



  • I think Linux still limits virtual memory, because otherwise you could allocate the entire memory space and then touch random parts of it, and the kernel would need to keep track of where each piece actually goes, not just the pieces that are touched.



  • @accalia said:

    you are still limited to between 4TB and 16EB of addressable memory, depending on your situation and which limits apply to you.

    16EiB of memory should be enough for anybody.


  • FoxDev

    @flabdablet said:

    16EiB of memory should be enough for anybody.

    that's what we said when we were still measuring in kibibytes.

    it might take us a while but we'll eventually need more than that for something



  • @accalia said:

    somethingDiscourse

    Post was discoempty

    Realistically though, if you're using

    @flabdablet said:

    16EiB of memory

    Then you are doing something insane. That would be on the order of simulating the particles of a massive physical system (planet scale, maybe? I don't feel like doing the calculations), and by that point I feel like the computation time required to actually use that data is too much for that much memory to be useful.



  • (Windows also won't let you overcommit beyond the physical RAM + available virtual RAM limit. Which is usually constrained by disk space, if anything.)


  • Garbage Person

    Many processors sharing a single memory pool. Think "supercomputer interconnect".

    A run-of-the-mill install (NASA, which has peanuts for a computing budget) sits around 4 GiB per core, and 64,000 cores for a total of 256 TiB. Today. Which is the AMD addressing limit. A bunch of these machines get built with Intel, of course, and are thereby unable to use the shared memory model.



  • Hey look at that! I was wrong again! How swell

    @Weng said:

    supercomputer interconnect

    I wouldn't say supercomputers or clusters are used by just one "anybody"
    🚎


  • BINNED

    @mrguyorama said:

    Then you are doing something insane. That would be on the order of simulating the particles of a massive physical system

    Or solving chess.


  • Java Dev

    @ben_lubar said:

    I think Linux still limits virtual memory, because otherwise you can allocate the entire memory space and then touch random parts of it and the kernel needs to keep track of where each piece actually goes, not just the ones that are touched.

    Well, it probably won't put itself in the position where it has to swap out parts of the page table. Probably.

    I wonder how large a file I can memory-map before things break. 100 GB? 1 TB? 10 TB? (not on Linux to test)
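    Easy enough to poke at on a 64-bit Linux box: ftruncate a temp file to a large size (it stays sparse, so no data blocks are allocated) and mmap the whole thing. A sketch, assuming a 64-bit build and a filesystem that supports sparse files; here an 8 GiB mapping, of which only the two touched pages ever hit storage.

    ```c
    #include <assert.h>
    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        char path[] = "/tmp/bigmap-XXXXXX";
        int fd = mkstemp(path);
        assert(fd >= 0);
        unlink(path);                 /* file is deleted once fd closes */

        off_t size = 1LL << 33;       /* 8 GiB, sparse: no blocks allocated */
        assert(ftruncate(fd, size) == 0);

        char *p = mmap(NULL, (size_t)size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        assert(p != MAP_FAILED);

        p[0] = 'a';                   /* only touched pages are backed */
        p[size - 1] = 'z';
        assert(p[0] == 'a' && p[size - 1] == 'z');

        munmap(p, (size_t)size);
        close(fd);
        return 0;
    }
    ```

    The hard limits are the ones from earlier in the thread: available address space and (for MAP_SHARED writes) the filesystem's maximum file size, not RAM.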


  • FoxDev

    @blakeyrat said:

    Windows also won't let you overcommit beyond the physical RAM + available virtual RAM limit.

    hmm... TIL. although that does make sense.

    yes, that would be another limitation that would apply, even if it wouldn't stop you from, say, asking for "five hundred bytes starting at (really big address, say... about a TB into the address space)"



  • Practically-speaking, it's a non-limit unless you're doing something dumb.

    You should be checking your error codes after trying to allocate memory, though. Don't just assume it'll succeed. There are also performance implications: I remember back when Crimson Skies was new on Windows 98, it'd just try to load its entire CD into RAM all at once and let the OS sort it out; turns out you had to let it sit and "settle" for ages until it was actually usable. Very annoying. (Nowadays, loading 700 MB into RAM all at once is almost a no-op.)

    I learned my defensive programming writing C on a Mac Classic, a system with no (or severely limited) virtual memory and no protected memory.
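    The check-your-error-codes point above in its simplest form: even on systems that overcommit, allocation calls can and do return failure, so test the result instead of dereferencing it blindly. (The size here is deliberately absurd so the failure is reproducible.)

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* An absurd request: this returns NULL rather than a usable
         * pointer, and unchecked use of the result would crash. */
        void *p = malloc((size_t)-1 / 2);
        if (p == NULL) {
            fprintf(stderr, "allocation failed; handle it, don't assume\n");
        } else {
            free(p);
        }
        return 0;
    }
    ```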


  • Garbage Person

    I was once instructed by a particularly oblivious manager to load all data at very large addresses to make it hard for 32-bit malware to target. We work in C#.



  • @Weng said:

    hard for 32bit malware to target

    Once you've got malware that's able to touch your process's memory, you're on the wrong side of an airtight trash can lid or however that saying goes.


  • FoxDev

    @Weng said:

    I was once instructed by a particularly oblivious manager to load all data at very large addresses to make it hard for 32-bit malware to target. We work in C#.

    smile, nod, charge the department 40 hours to perform the task, use the time to update your resume. ;-)


  • Garbage Person

    @blakeyrat said:

    a system with no (or severely limited) virtual memory and no protected memory.

    I vaguely recall you couldn't set virtual memory to more than twice your physical memory?


  • Garbage Person

    Let's put it this way. It wasn't at WTFcorp



  • You'd try to avoid it altogether since the VMM was slooooooooooooooooooooow.

    Became a huge issue when more and more apps began being ported over from systems with relatively speedy VM implementations.


  • Garbage Person

    It also didn't really help that they were shipping 8 MB machines with 4 soldered to the board and 4 in the one and only slot.

    Within mere months my parents had sprung for a 16 MB upgrade, for a total of 20 MB.

    I had the bizarre Performa 630CD with a DOS compatibility card: FPU-less 68LC040, 4 MB RAM from the factory and 8 from the store. Shit-box IDE drive instead of the nice SCSI all its contemporaries used.

    Really, that machine was the start of the downward crapple spiral that really got going when they switched to PPC.



  • I had one of those "crapple" PPCs, the 4400/200, and I liked it.

    Although most of that was probably the joy of finally having a computer that is YOURS and YOURS ALONE. NOBODY ELSE TOUCH.

    Also it ran Escape Velocity.

    I also took the steel case off and spray painted it black. Because fuck beige.


  • Garbage Person

    Oh, yeah, they didn't hit rock bottom until the G3s. And even then, the first model was good. And then they started cladding everything in brightly colored plastic.

    I used a heavily upgraded G3 Server for my Linuxing until I switched to VMs.



  • @blakeyrat said:

    Because fuck beige

    This needed emphasis.

    I buy my cases pre-blacked!



  • Back when that computer came out, there was literally no concept of a computer EVER that was not beige. All computers were beige. ALL OF THEM.

