@Sonic McTails said:
Ok, maybe I'm wrong about this, but when dynamically loading libraries on Windows, the library itself is loaded into RAM, and then it's left there even when it's unused until Windows needs additional RAM, in which case it then unloads the library if nothing is using it. Admittedly, if the library has functions a lot of programs need, or if IIS/Apache are using it as a module, then this might be a very efficient way to do it. However, such a case is rare enough I'd deem it WTF unless I saw the code ...
Yep, you're wrong about it ... :-(
When Windows "loads" a library, it does little more than establish a memory mapping between an address region and the file on disk -- the same mechanism used for swapping. Nothing is copied into RAM until the program actually touches the code or data, and even then it's faulted in one page at a time. A 10MB .DLL won't immediately chew up 10MB of physical RAM. The program will have 10MB of "virtual memory" allocated immediately, but all programs do that; pages get swapped into real RAM only as they're needed.
Like any other memory, if a page goes unused for a long time and memory is needed for another process, that memory is freed and given to the requestor (and the original program's page is flagged as "swapped out"). In the case of executable code, the page doesn't even have to be written to disk first, because an unmodified copy is already there in the DLL or EXE file.
A single giant DLL shouldn't pose any "performance" issues to your application. As a matter of fact, it helps sidestep a little bit of "DLL hell" because all the modules are in sync with each other -- you can't get an old version of CallMyFunction() from the wrong DLL when it's all monolithic.
The drawback to a monolithic DLL is that of compiling and maintenance on the development side. If you've got a huge DLL made up of hundreds of source code modules, what are the chances that most of those modules don't change very frequently? Probably pretty good. So why recompile them constantly? Another reason to break them up into functional areas is so that one developer can be working on "PaymentModule.DLL" while another is working on "ItemPresentation.DLL", and a third can be working on "TaxCalculation.DLL". If you have a good model with solid interface definitions, those modules should really have nothing to do with each other.
Another hidden drawback of monolithic code is that it can hide "bad" or unsafe practices. For example, if you allocate memory in one module and free it in another, that's frequently a source of leaks. If they're in separate compilation units, however, it can be harder for them to have visibility to each other, thus making it harder for a developer to violate the rules unintentionally. (Of course a giant "precompiled.h" file with everything in the global namespace will shoot that in the foot anyway.)
John