The entire website in one DLL?!?!



  • My friend works for a company that has 6 web developers.  This company recently decided to move to .NET - read: heaps of classic ASP baggage here... "code reuse" means copy-paste to them.  AAAAnyway, he was telling me about the setup of the website they are building, and I found out that the entire web solution is neatly combined into one DLL.  Framework, menus, reports, etc., etc.  I told him, "You just shouldn't do that.  A project of that size (and it's huge, by the way) should be neatly broken into parts and referenced where necessary."  I went on to tell him how much simpler it is to roll out changes when you don't have to roll it all out at once, thereby decreasing the chances that you break something in the big rollout.  Among other things...

    He agreed with me that this makes sense, but the powers that be will need some cites (and they still probably won't go for it because "this is the way we did it in the past").  And, for the life of me, I can't find any good cites.

    Thanks for any help.



  • Ok, maybe I'm wrong about this, but when dynamically loading libraries on Windows, the library itself is loaded into RAM and left there even when it's unused, until Windows needs additional RAM, in which case it unloads the library if nothing is using it. Admittedly, if the library has functions a lot of programs need, or if IIS/Apache is using it as a module, then this might be a very efficient way to do it. However, such a case is rare enough that I'd deem it a WTF unless I saw the code ...



  • @Sonic McTails said:

    Ok, maybe I'm wrong about this, but when dynamically loading libraries on Windows, the library itself is loaded into RAM and left there even when it's unused, until Windows needs additional RAM, in which case it unloads the library if nothing is using it. Admittedly, if the library has functions a lot of programs need, or if IIS/Apache is using it as a module, then this might be a very efficient way to do it. However, such a case is rare enough that I'd deem it a WTF unless I saw the code ...

    Yep, you're wrong about it ... :-(

    When Windows "loads" a library, it does nothing more than establish a memory map between an address region and a file, which is the same behavior used for swapping.  Nothing is copied to RAM until it's explicitly called by the program, and when it is called, it only gets swapped in a page at a time. A 10MB .DLL won't immediately chew up 10MB of physical RAM.  The program will have 10MB of "virtual memory" allocated immediately, but all programs do that.  It'll just get swapped in to real RAM as needed.
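    To make that concrete, here's a minimal native sketch (the DLL name "Big.dll" and the export "CallMyFunction" are invented for illustration): loading the library only maps it into the address space, and physical pages are faulted in when the code is actually executed.

        #include <windows.h>
        #include <cstdio>

        // Hypothetical signature of an export from the big DLL.
        typedef int (*CallMyFunctionPtr)(int);

        int main()
        {
            // LoadLibrary maps Big.dll into this process's address
            // space. The file is *mapped*, not copied into physical RAM.
            HMODULE big = LoadLibraryW(L"Big.dll");
            if (!big) {
                std::printf("LoadLibrary failed: %lu\n", GetLastError());
                return 1;
            }

            // Resolving the symbol touches only the export-table pages.
            CallMyFunctionPtr fn =
                (CallMyFunctionPtr)GetProcAddress(big, "CallMyFunction");

            // Only when the function is executed are the pages holding
            // its code faulted into real RAM, one page at a time.
            if (fn) {
                std::printf("result: %d\n", fn(42));
            }

            FreeLibrary(big);
            return 0;
        }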

    Like any other memory, if a page goes unused for a long time and memory is needed for another process, then that memory is freed and given to the requestor (and the original program's memory page is flagged as "swapped out".)  In the case of executable code, that RAM doesn't have to be "written" to disk to be saved, because it's already there in the DLL or EXE file.

    A single giant DLL shouldn't pose any "performance" issues to your application.  As a matter of fact, it helps sidestep a little bit of "DLL hell" because all the modules are in sync with each other -- you can't get an old version of CallMyFunction() from the wrong DLL when it's all monolithic.

    The drawback to a monolithic DLL is on the development side: compilation and maintenance.  If you've got a huge DLL made up of hundreds of source code modules, what are the chances that most of those modules don't change very frequently?  Probably pretty good.  So why recompile them constantly?  Another reason to break them up into functional areas is so that one developer can be working on "PaymentModule.DLL" while another works on "ItemPresentation.DLL" and a third on "TaxCalculation.DLL".  If you have a good model with solid interface definitions, those modules should really have nothing to do with each other.
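    As a sketch of what such interface definitions can look like in native terms (the names here are made up), each functional DLL exposes a small abstract interface plus a factory, so modules only know each other through a header:

        // TaxCalculation.h -- the only thing other modules ever include.
        // The implementation lives entirely inside TaxCalculation.DLL, so
        // it can be rebuilt and redeployed without touching
        // PaymentModule.DLL or ItemPresentation.DLL.

        struct ITaxCalculator {
            virtual double TaxFor(double amount, const char* region) = 0;
            virtual ~ITaxCalculator() = default;
        };

        // Factory exported with C linkage so the name isn't mangled.
        // (Consumers would declare these dllimport; dllexport is shown
        // for the DLL's own build.)
        extern "C" __declspec(dllexport) ITaxCalculator* CreateTaxCalculator();
        extern "C" __declspec(dllexport) void DestroyTaxCalculator(ITaxCalculator*);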

    Another hidden drawback of monolithic code is that it can hide "bad" or unsafe practices.  For example, if you allocate memory in one module and free it in another, that's frequently a source of leaks.  If they're in separate compilation units, however, the modules have less visibility into each other's internals, which makes it harder for a developer to violate the rules unintentionally.  (Of course, a giant "precompiled.h" file with everything in the global namespace will shoot that in the foot anyway.)
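    Here's the classic native form of that hazard, with invented module names.  It bites hardest when modules link against different (or static) C runtimes, each with its own heap; the safe pattern is for the allocating DLL to export a matching free function:

        #include <cstddef>

        // Inside DataModule.DLL: allocates from DataModule's CRT heap.
        extern "C" __declspec(dllexport) char* MakeBuffer(std::size_t n)
        {
            return new char[n];
        }

        // The matching release, also inside DataModule.DLL, so allocation
        // and deallocation always hit the same heap.
        extern "C" __declspec(dllexport) void FreeBuffer(char* p)
        {
            delete[] p;
        }

        // In the EXE (or any other module):
        //   char* buf = MakeBuffer(1024);
        //   delete[] buf;     // WRONG if the CRTs differ: frees on the
        //                     // caller's heap -- the leak/crash above
        //   FreeBuffer(buf);  // RIGHT: returns it to the heap that made it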

    John



  • If everything is compiled into one DLL, then usually it's all in one .NET project. That means every class can call every other class, and when you make one change you have to test the entire application for possible regressions. If you break it up into several projects, then when you change one project, you only have to test that project and the projects that reference it. Typically, you want your most stable projects at the bottom of the dependency hierarchy: they can be referenced by many other projects. Projects that change often should be referenced by as few other projects as possible. The most stable projects make up the framework and can be reused by different applications.
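    The same layering can be sketched in native header terms (project names invented): the stable code sits at the bottom and is only ever referenced upward, so edits to a volatile module never ripple down.

        // Framework.h -- bottom of the hierarchy; changes rarely.
        // Everything references it, so a change here forces a rebuild
        // and retest of the whole solution -- which is exactly why it
        // should hold only the most stable code.
        struct IReport {
            virtual void Render() = 0;
            virtual ~IReport() = default;
        };

        // SalesReports.h -- a volatile, frequently-changing module.
        // It depends on Framework.h, and nothing depends on it, so a
        // change here means rebuilding and retesting only this module.
        #include "Framework.h"

        class QuarterlySales : public IReport {
        public:
            void Render() override { /* render the report */ }
        };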

