Interfaces



  • I was wondering whether the methodology of using interfaces for everything makes any sense. I've seen it many times before, in C# and Java:

    @interfaces.dll said:


    public interface IPerson {
        string Name { get; set; }
        int Age { get; set; }
    }

    @entities.dll said:

    public class Person : IPerson {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    And then users only ever use an IPerson, except when instantiating, which is done with the magical CreateInstance class:

    public class CreateInstance {
      public IPerson Person {
        get {
          return new Person();
        }
      }
    }
    

    So is there any real benefit to this at all? Because I don't see it.



  • For something this trivial? Nope.

    For actual classes that do things?  Yep.

    It makes writing unit tests much easier since all current mocking frameworks can automatically mock up interfaces for you.
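    For instance, here's a minimal sketch (not from the original post; PersonGreeter is a made-up consumer of IPerson, and Moq/xUnit are assumed) of what that looks like in practice:

        using Moq;
        using Xunit;

        // Hypothetical class that depends only on the IPerson interface.
        public class PersonGreeter {
            private readonly IPerson _person;
            public PersonGreeter(IPerson person) { _person = person; }
            public string Greet() { return "Hello, " + _person.Name + "!"; }
        }

        public class PersonGreeterTests {
            [Fact]
            public void Greet_UsesTheMockedName() {
                // No concrete Person needed: the mocking framework implements IPerson on the fly.
                var person = new Mock<IPerson>();
                person.Setup(p => p.Name).Returns("Ada");

                Assert.Equal("Hello, Ada!", new PersonGreeter(person.Object).Greet());
            }
        }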



  •  I know your pain.

    I picked a fight to remove a hierarchy of interfaces from our system.

    It was like this:

     (The A <- B arrow means B extends A)

    IDataType <- IDataTypePlusValue <- IDataTypePlusValueWithORMInfo <- IDataTypePlusValueWithORMInfoEvent <- IDataWhateverSpecificEventType

     And a corresponding class hierarchy:

    DataType <- DataTypePlusValue <- DataTypePlusValueWithORMInfo.....................

     As you might guess,  DataType implements IDataType, DataTypePlusValue implements IDataTypePlusValue.... resulting in a 1-to-1 relation. And the interfaces contained the exact public interface of the corresponding class. 

     I have a couple of rules for when to create an interface: when multiple implementations exist or are likely to exist, to mark a class as providing a special or secondary behavior, or to enforce a contract. NEVER just to restate the main or sole purpose of a class.

    Now I have to explain to an awful lot of people why this will not change a thing at runtime...



  • I wouldn't bother with interfaces for domain classes that are just properties (although I've seen it in the past) but for services, repositories and basically anything that has logic it's all but required if you want to have loosely-coupled code or use unit testing.

    Your example is silly though, and that's IMO a silly use of interfaces.  However if it was an IPersonRepository or an IPersonLoanChecker or whatever, then that would be a valid case to abstract the details away and make mocking/testing the class easier.
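    To make that concrete, here's a rough sketch (IPersonRepository and both implementations are invented for this post, reusing the Person class from the original example) of the kind of interface where the abstraction actually earns its keep:

        using System;
        using System.Collections.Generic;

        public interface IPersonRepository {
            Person GetById(int id);
            void Save(Person person);
        }

        // The "real" implementation would talk to the database...
        public class SqlPersonRepository : IPersonRepository {
            public Person GetById(int id) { /* run a SELECT here */ throw new NotImplementedException(); }
            public void Save(Person person) { /* run an INSERT/UPDATE here */ throw new NotImplementedException(); }
        }

        // ...while tests (or a mocking framework) can substitute a trivial in-memory version.
        public class InMemoryPersonRepository : IPersonRepository {
            private readonly List<Person> _people = new List<Person>();
            public Person GetById(int id) { return _people[id]; }
            public void Save(Person person) { _people.Add(person); }
        }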



  • One day our entire computing architecture is going to collapse under the weight of abstractions of abstractions.



  • @ObiWayneKenobi said:

    mocking/testing the class easier

    Hehe, testing. You crack me up!



  • No, there is no sense in it. As I understand it, the idea behind this is that you can keep your old code where you write "a.b = ..." and have it automagically replaced by a function call. Until the function call does anything other than an assignment, it's just waste. And if your application is computationally heavy, an expensive bit of waste too.

    I had the fortune to work on a project that was started just after C# got these features, and consequently it was full of empty getters and setters (because not only did C# get these features, but Visual Studio did too, which made it really easy to use them). The code had a database connection with a field "open", which was used like this: if (!db.open) { ... }. Perfectly sane, right? Imagine my surprise when I finally discovered that the cause of a bug was that the setter looked like this:


    set {
        if (!value) db.close();
        open = value;
    }
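    (To spell out the idea from the first paragraph, here is a minimal, invented sketch of how "a.b = ..." quietly becomes a method call once a field is replaced by a property; the Connection class is hypothetical:)

        public class Connection {
            // Version 1: a plain public field. Callers write "db.open = false;" directly.
            // public bool open;

            // Version 2: the same name as a property. Callers still write
            // "db.open = false;" and compile unchanged, but now every assignment
            // runs the setter, which is exactly where the close() surprise above can hide.
            private bool isOpen;
            public bool open {
                get { return isOpen; }
                set { isOpen = value; }
            }
        }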



  • @configurator said:

    @ObiWayneKenobi said:
    mocking/testing the class easier

    Hehe, testing. You crack me up!

    Yeah I know :(.  The ignorance of so many developers in regards to testing is very depressing.


  • ♿ (Parody)

    @ObiWayneKenobi said:

    I wouldn't bother with interfaces for domain classes that are just properties (although I've seen it in the past) but for services, repositories and basically anything that has logic it's all but required if you want to have loosely-coupled code or use unit testing.

    It can actually be very useful. I've had classes that were in different hierarchies, but represented similar enough things that it made sense to treat them similarly in some cases.



  • @ObiWayneKenobi said:

    The ignorance of so many developers in regards to testing is very depressing.
     

    Did you mean regarding developers performing testing, or just recognising the importance and value of it?



  • @boomzilla said:

    It can actually be very useful. I've had classes that were in different hierarchies, but represented similar enough things that it made sense to treat them similarly in some cases.

    Except the interfaces here always have one and only one implementation... Exactly what interfaces are not for.


  • ♿ (Parody)

    @configurator said:

    @boomzilla said:
    It can actually be very useful. I've had classes that were in different hierarchies, but represented similar enough things that it made sense to treat them similarly in some cases.

    Except the interfaces here always have one and only one implementation... Exactly what interfaces are not for.

    Yes, this obviously doesn't have anything to do with your situation. Here, let me try a more obvious derailment...

    Clearly, the Super Bowl needs to be played in an outdoor stadium in a time zone where they can play during the day. And half time would be more interesting with the local high school marching band.



  • @boomzilla said:

    Clearly, the event which shall not be named needs to be ...

    FTFY. Also, nobody outside the US cares about that event.



  • @edgsousa said:

    I have a couple of rules for when to create an interface: when multiple implementations exist or are likely to exist, to mark a class as providing a special or secondary behavior, or to enforce a contract. NEVER just to restate the main or sole purpose of a class.

    Now I have to explain to an awful lot of people why this will not change a thing at runtime...

    So, what happens when in the future a new implementation is needed? All class references have to be replaced with a newly created interface. In terms of extensibility it makes a lot of sense to create interfaces even if there is only one implementation at the moment.

     



  • @Netdiver said:

    So, what happens when in the future a new implementation is needed? All class references have to be replaced with a newly created interface. In terms of extensibility it makes a lot of sense to create interfaces even if there is only one implementation at the moment.

    Why would you reimplement a class full of dumb properties? These classes are POD.



  • @ObiWayneKenobi said:

    @configurator said:

    @ObiWayneKenobi said:
    mocking/testing the class easier

    Hehe, testing. You crack me up!

    Yeah I know :(.  The ignorance of so many developers in regards to testing is very depressing.
     

    Agreed.  I find the whole thing depressing.

     

    It is now two decades since it was pointed out that program testing may convincingly demonstrate the presence of bugs, but can never demonstrate their absence. After quoting this well-publicized remark devoutly, the software engineer returns to the order of the day and continues to refine his testing strategies, just like the alchemist of yore, who continued to refine his chrysocosmic purifications.

    -- Edsger W. Dijkstra. (Written in 1988, so it's now closer to 4.5 decades.)

     

     



  • @Mason Wheeler said:

    program testing may convincingly demonstrate the presence of bugs, but can never demonstrate their absence

    In other words, tests may lead to more bugs but never to fewer bugs.

    (A bug we don't know about doesn't exist, right?)



  • @Cassidy said:

    @ObiWayneKenobi said:

    The ignorance of so many developers in regards to testing is very depressing.
     

    Did you mean regarding developers performing testing, or just recognising the importance and value of it?

     

    Both.  I have yet to work with any developer(s) who recognized the value of testing (usually I got some kind of "That's a waste of time, we don't do that here" response), let alone actually had anything in place to perform testing, let alone knew of or wanted to use TDD/BDD (not that I'm a TDD advocate).

     



  • @Netdiver said:

    @edgsousa said:

    I have a couple of rules for when to create an interface: when multiple implementations exist or are likely to exist, to mark a class as providing a special or secondary behavior, or to enforce a contract. NEVER just to restate the main or sole purpose of a class.

    Now I have to explain to an awful lot of people why this will not change a thing at runtime...

    So, what happens when in the future a new implementation is needed? All class references have to be replaced with a newly created interface. In terms of extensibility it makes a lot of sense to create interfaces even if there is only one implementation at the moment.

     

    This is where your skill and experience come into play.

    If you have reason to believe multiple implementations will be necessary within the lifetime of the code/design in question and that the cost of making the code more complex now (and living with that complexity) is less than the cost of changing it later, then using interfaces is a decent choice. But you'd better make darn sure that you (and everyone else who will ever write code in the system) ALWAYS use the interfaces and never take shortcuts based on the fact that there's only one implementation at the moment.

    If, on the other hand, you think the chances are low that there will be multiple implementations or that they are so far away that it's likely the code will drastically change before the requirement for them becomes real, then it's probably wise to avoid using interfaces.



  • @configurator said:

    @boomzilla said:
    It can actually be very useful. I've had classes that were in different hierarchies, but represented similar enough things that it made sense to treat them similarly in some cases.

    Except the interfaces here always have one and only one implementation... Exactly what interfaces are not for.

    In fact, when there's only one implementation and the developers know it, the temptation to take shortcuts and cast to that one implementation can be too much to resist. Then when the need for a second implementation arises, you've got a serious problem.

    This happened to me once. The system in question had been written (not by me) such that everything had an interface IFoo, implemented by a single abstract class BaseFoo, with a single concrete subclass Foo.

    Its purpose was to take some data that was gathered in realtime and display it to the user.

    I was given a task that required taking that same data and allowing it to be displayed in realtime or stored in a database to be displayed later. I thought it'd be simple - I could separate the gatherer and the renderer, add an option for the gatherer to write to the database, use the existing IFoo implementation to feed realtime data to the renderer, and write another implementation of IFoo to hit the database for it.

    Except that the rendering system wanted Line objects, which weren't specified by the IFoo interface. They were provided directly by the Foo class (or possibly the BaseFoo class - I forget which). Throughout the code, the original developer had taken shortcuts based on the fact that there was only one implementation of the interface. And to make matters worse, the Line creation and the data gathering were so intertwined that there was no way to separate them. Between those two mistakes, there was no way to shoehorn in a second implementation - I literally had to redesign and recode almost the entire system.

    The lesson the original (somewhat junior) developer should have learned (but I don't think he did) is that using interfaces everywhere because that's "what you're supposed to do" isn't always the right answer, and that if you're going to do it, you have to be consistent.
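    A condensed sketch of the kind of shortcut described above (the names and members are invented; the real system was far larger):

        using System.Collections.Generic;

        public interface IFoo {
            IEnumerable<double> GetData();
        }

        public class Foo : IFoo {
            public IEnumerable<double> GetData() { yield return 1.0; }
            // Not declared on IFoo, but the rest of the code relied on it anyway.
            public IEnumerable<string> GetLines() { yield return "line"; }
        }

        public class Renderer {
            public void Render(IFoo source) {
                // The shortcut: fine while Foo is the only implementation,
                // an InvalidCastException the moment a second one appears.
                foreach (var line in ((Foo)source).GetLines()) {
                    // ... draw the line ...
                }
            }
        }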



  • @ObiWayneKenobi said:

    @Cassidy said:
    @ObiWayneKenobi said:
    The ignorance of so many developers in regards to testing is very depressing.
     

    Did you mean regarding developers performing testing, or just recognising the importance and value of it?

     

    Both.  I have yet to work with any developer(s) who recognized the value of testing (usually I got some kind of "That's a waste of time, we don't do that here" response), let alone actually had anything in place to perform testing, let alone knew of or wanted to use TDD/BDD (not that I'm a TDD advocate).

     

    Then you've worked with a lot of smart developers.  Unit testing is a waste of time, based on a premise that's been known to be false since before unit testing was ever invented: the concept that testing can somehow prove your code to be correct.  As Uncle Bob described it:

    However, think about what would happen if you walked in a room full of people working this way. Pick any random person at any random time. A minute ago, all their code worked.

    Let me repeat that: A minute ago all their code worked! And it doesn't matter who you pick, and it doesn't matter when you pick. A minute ago all their code worked!

    That's the promise, and the lie. Write good tests, and all your code will magically work.  But as Dijkstra explained, that's been known to be a silly idea for decades. Having lots of tests that all pass proves one thing and one thing only: that you have lots of tests, which all pass. It does not prove that what the tests are testing matches the spec. It does not prove that your code is free from errors that you never considered when you wrote the tests. (And the things that you thought to test were the possible issues you were focusing on, so you're likely to have gotten them right anyway!) And last but not least, it does not prove that the tests, which are code themselves, are free from bugs. (Follow that last one to its logical conclusion and you end up caught in an infinite recursion, with tests all the way down.)


    To give an example, there's an open-source scripting library that I use whose author boasts of over 90% unit test coverage and 100% coverage in all core functionality. But the issue tracker is almost up to 300 bugs now and they keep coming. I think I found five from the first few days of using it in real-world tasks. (To his credit, the author got them fixed very quickly, and it's a good-quality library overall. But that doesn't change the fact that his "100%" unit tests didn't find these issues, which showed up almost immediately under actual usage.)

    The other major problem is that as you go on,

    every hour you are producing several tests. Every day dozens of tests. Every month hundreds of tests. Over the course of a year you will write thousands of tests.

    ...and then your requirements change. You have to implement a new feature, or change an existing one, and then 10% of your unit tests break, and you need to manually go over all of them to discern which ones are broken because you made a mistake, and which are broken because the tests themselves are no longer testing for correct behavior. And 10% of thousands of tests is a lot of unnecessary extra work. (Especially if you're doing it 1 test at a time, as the three edicts demand!)

    When you think of it, this makes unit testing a lot like global variables, or several other bad design "patterns": it may seem to be helpful and save you some time and effort, but you don't notice the costs until your project becomes big enough that their overall effect is disastrous, and by that time it's too late.

    Unit testing is just another in a long line of development fads that promise to make it easier to write working code without actually being a good programmer. None of them have ever managed to deliver on their promise, and neither does this one. (Just look at how many unit testing WTFs we've had featured here on the front page.) There is simply no shortcut for actually knowing how to write working code.

    There are some reports of automated testing being genuinely useful in cases where stability and reliability are of paramount importance. For example, the SQLite database project. But what it takes to achieve their level of reliability is highly uneconomical for most projects: a test-to-actual-SQLite-code ratio of almost 1200:1. Most projects can't afford that, and don't need it anyway.  The client I work on at work is about 4 million lines of code.  If I tried to tell my boss that we needed (1200 * 4M = 4.8 billion) lines of test code to ensure that we're bug-free, he'd laugh me out of his office, and quite possibly out of my own as well!

     



  • @Mason Wheeler said:

    If I tried to tell my boss that we needed (1200 * 4M = 4.8 billion) lines of test code to ensure that we're bug-free

    you'd obviously be insane and your boss would be right to laugh.

    Unit testing has its place; in library code (e.g. if you implement a data structure you'd want to test that it works correctly), where you do complicated math, etc. it's extremely useful. But I wouldn't use it everywhere and I definitely wouldn't start retrofitting unit tests onto old code I wouldn't otherwise be touching.
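    As a minimal illustration of the library/math case meant above (the Median helper and its test are invented for this example; xUnit is assumed):

        using System;
        using System.Linq;
        using Xunit;

        public static class Stats {
            // Small, self-contained library code: exactly the kind of thing worth unit testing.
            public static double Median(double[] values) {
                if (values == null || values.Length == 0)
                    throw new ArgumentException("values must be non-empty");
                var sorted = values.OrderBy(v => v).ToArray();
                int mid = sorted.Length / 2;
                return sorted.Length % 2 == 1
                    ? sorted[mid]
                    : (sorted[mid - 1] + sorted[mid]) / 2.0;
            }
        }

        public class StatsTests {
            [Fact]
            public void MedianOfAnEvenCountAveragesTheMiddlePair() {
                // Sorted input is 1, 2, 3, 4, so the median is (2 + 3) / 2 = 2.5.
                Assert.Equal(2.5, Stats.Median(new[] { 4.0, 1.0, 3.0, 2.0 }));
            }
        }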


  • ♿ (Parody)

    @Mason Wheeler said:

    Write good tests, and all your code will magically work.  But as Dijkstra explained, that's been known to be a silly idea for decades. Having lots of tests that all pass proves one thing and one thing only: that you have lots of tests, which all pass. It does not prove that what the tests are testing matches the spec. It does not prove that your code is free from errors that you never considered when you wrote the tests. (And the things that you thought to test were the possible issues you were focusing on, so you're likely to have gotten them right anyway!) And last but not least, it does not prove that the tests, which are code themselves, are free from bugs. (Follow that last one to its logical conclusion and you end up caught in an infinite recursion, with tests all the way down.)

    The question is really why anyone thinks this is an argument against automated tests. It's an argument against stupid, bad, mindless automated tests. I love my automated tests, because they help me catch bugs in parts of my codebase that I'm not necessarily thinking about, and might not otherwise notice until the users started complaining.


  • ♿ (Parody)

    @Mason Wheeler said:

    There are some reports of automated testing being genuinely useful in cases where stability and reliability are of paramount importance. For example, the SQLite database project.

    This was genuinely funny, however.



  • @boomzilla said:

    @Mason Wheeler said:
    There are some reports of automated testing being genuinely useful in cases where stability and reliability are of paramount importance. For example, the SQLite database project.

    This was genuinely funny, however.

     

    I've always been fond of the use of irony as a rhetorical device.  If you're not familiar with SQLite, this makes my point one way.  But if you are, it underscores the point I'm making even more strongly.


  • Discourse touched me in a no-no place

    @Mason Wheeler said:

    I've always been fond of the use of irony as a rhetorical device. If you're not familiar with SQLite, this makes my point one way. But if you are, it underscores the point I'm making even more strongly.
    So what was your point in the first place? Somewhere in the middle of all the smart-ass commenting, I lost track of it.



  • @Mason Wheeler said:

    I've always been fond of the use of irony as a rhetorical device. If you're not familiar with SQLite, this makes my point one way. But if you are, it underscores the point I'm making even more strongly.

    Any chance you could expand, for those of us unfamiliar with it?



  • @Mason Wheeler said:

    Stuff

    If you change something and have to rewrite 10% of your unit tests in a very large application, your stuff was not sufficiently modular.



  •  Interfaces are a last-century idea.  Have a look at Scala, where you can have traits and structural subtyping.



  • @Mason Wheeler said:

    @ObiWayneKenobi said:

    @Cassidy said:
    @ObiWayneKenobi said:
    The ignorance of so many developers in regards to testing is very depressing.
     

    Did you mean regarding developers performing testing, or just recognising the importance and value of it?

     

    Both.  I have yet to work with any developer(s) who recognized the value of testing (usually I got some kind of "That's a waste of time, we don't do that here" response), let alone actually had anything in place to perform testing, let alone knew of or wanted to use TDD/BDD (not that I'm a TDD advocate).

     

    Then you've worked with a lot of smart developers.  Unit testing is a waste of time,

     

    Target blindness, much?

     


  • Discourse touched me in a no-no place

    @configurator said:

    Any chance you could expand, for those of us unfamiliar with it?
    Heck, expanding for those familiar with it would be pretty good too.


  • Discourse touched me in a no-no place

    @Sutherlands said:

    If you change something and have to rewrite 10% of your unit tests in a very large application, your stuff was not sufficiently modular.
    Or it was insufficiently tested. If there were only 10 unit tests for a million-line app, breaking 10% of the tests would be… umm… surprising only insofar as a test was affected by the change at all.



  •  @Mason Wheeler said:

     


    To give an example, there's an open-source scripting library that I use whose author boasts of over 90% unit test coverage and 100% coverage in all core functionality. But the issue tracker is almost up to 300 bugs now and they keep coming. I think I found five from the first few days of using it in real-world tasks. (To his credit, the author got them fixed very quickly, and it's a good-quality library overall. But that doesn't change the fact that his "100%" unit tests didn't find these issues, which showed up almost immediately under actual usage.)

    Besides the editor here (Firefox spell check does not work, and it's generally crappy), the real WTF is the code coverage measure itself. There are probably a billion ways to create a stupid test and about ten ways to create a sane one. You know it already: the most common failing is not testing for when things go bad. Just testing that the code works when correct input is provided is pretty much useless, and if you have 90% coverage, chances are that's exactly what's happening.

     



  • @lscharen said:

    For something this trivial? Nope.

    For actual classes that do things?  Yep.

    It makes writing unit tests much easier since all current mocking frameworks can automatically mock up interfaces for you.

    I couldn't sum this up better myself. Unfortunately our geniuses-in-residence don't see this, which means I'm fixing the most tightly coupled application you could ever imagine, but the "model", which also contains direct references to the data layer and the GUI implementations, does actually implement an interface for its getters and setters, which is something to be eternally grateful for. We've also got some classes that you have to cast to either a read or a write interface, depending on what you want to do with them, so reading something, altering it and saving it back requires two casts of that class.

     



  • In my humble opinion, defining interfaces like this may help to keep sub-project dependencies straight and clean, which in turn makes it easier to maintain a proper build order and to prevent chicken-and-egg problems. Instead of all sub-projects referencing each other in a single, huge solution, one can create smaller, independent and more manageable solutions, which can refer to the .dll defining the interfaces. The interfaces are built on the build server as the first solution, obviously.


  • :belt_onion:

    @boomzilla said:

    @Mason Wheeler said:
    Write good tests, and all your code will magically work.  But as Dijkstra explained, that's been known to be a silly idea for decades. Having lots of tests that all pass proves one thing and one thing only: that you have lots of tests, which all pass. It does not prove that what the tests are testing matches the spec. It does not prove that your code is free from errors that you never considered when you wrote the tests. (And the things that you thought to test were the possible issues you were focusing on, so you're likely to have gotten them right anyway!) And last but not least, it does not prove that the tests, which are code themselves, are free from bugs. (Follow that last one to its logical conclusion and you end up caught in an infinite recursion, with tests all the way down.)

    The question is really why anyone thinks this is an argument against automated tests. It's an argument against stupid, bad, mindless automated tests. I love my automated tests, because they help me catch bugs in parts of my codebase that I'm not necessarily thinking about, and might not otherwise notice until the users started complaining.

    And I love going through other people's unit tests because they are usually the best documentation on how you are supposed to use their code.



  • @boomzilla said:

    @Mason Wheeler said:
    Write good tests, and all your code will magically work.  But as Dijkstra explained, that's been known to be a silly idea for decades. Having lots of tests that all pass proves one thing and one thing only: that you have lots of tests, which all pass. It does not prove that what the tests are testing matches the spec. It does not prove that your code is free from errors that you never considered when you wrote the tests. (And the things that you thought to test were the possible issues you were focusing on, so you're likely to have gotten them right anyway!) And last but not least, it does not prove that the tests, which are code themselves, are free from bugs. (Follow that last one to its logical conclusion and you end up caught in an infinite recursion, with tests all the way down.)

    The question is really why anyone thinks this is an argument against automated tests. It's an argument against stupid, bad, mindless automated tests. I love my automated tests, because they help me catch bugs in parts of my codebase that I'm not necessarily thinking about, and might not otherwise notice until the users started complaining.

    The argument is a strawman; it reduces down to "people write code with bugs in it!" This argument, or a variation of it, is often spouted off as a rebuttal by someone who doesn't understand Test Driven Development (TDD). N.B. I am not talking about the test-first implementation of TDD; it doesn't matter if you write your test before or after the code, what matters is that tests are used to drive your new code.

    I expect most programmers try to exercise the code they have just written in some way. I have witnessed people change the program flow to drive their code; those changes then have to be removed after they have served their purpose. The idea with TDD is to drive the code you have just written with a unit test. By doing this you store the work needed to verify that unit of code. It takes a little more time, but most of it is work you're doing anyway. When changes are made you can quickly re-do all that work; sure, some of it may no longer be relevant, and you may need to add, modify, or delete test code to make it match the new reality (a small sketch of the idea follows the list below).

    The advantages are many:
        Did a change you just made break some other functionality you wrote a year ago?
        The tests are captured as you write the code, before you can forget how it all works.
        Tests become documentation that is easier to keep up to date (comments, like documentation, often don't get updated).
        You can often run your application in release mode, since you're using tests to drive your code, not a debug build!
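    Here is the small sketch mentioned above (Pricing and its test are made up for illustration; xUnit is assumed). The throwaway way is to temporarily rewire Main() to call the new code and eyeball the output, then delete that scaffolding; the TDD way captures the same check once and keeps it:

        using Xunit;

        public static class Pricing {
            // The new unit of code being driven.
            public static decimal ApplyDiscount(decimal subtotal) {
                return subtotal >= 100m ? subtotal * 0.9m : subtotal;
            }
        }

        public class PricingTests {
            [Fact]
            public void TenPercentOffOrdersOfOneHundredOrMore() {
                Assert.Equal(90m, Pricing.ApplyDiscount(100m));
                Assert.Equal(50m, Pricing.ApplyDiscount(50m));
            }
        }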



  • @beginner_ said:

    @Mason Wheeler said:
    But that doesn't change the fact that his "100%" unit tests didn't find these issues, which showed up almost immediately under actual usage

    And it'd be interesting to see the report explaining why those unit tests failed to identify those issues.

    Just like any idiot can write bad code, any idiot can write bad tests.

    The quality of the product is directly related to the quality of testing performed upon the product, which is influenced by the quality of the actual test cases themselves. Scrimp upon those, and you're letting things slip through the net.



  • @SirHegel77 said:

    In my humble opinion, defining interfaces like this may help to keep sub-project dependencies straight and clean, which in turn makes it easier to maintain a proper build order and to prevent chicken-and-egg problems. Instead of all sub-projects referencing each other in a single, huge solution, one can create smaller, independent and more manageable solutions, which can refer to the .dll defining the interfaces. The interfaces are built on the build server as the first solution, obviously.

    I used that design pattern for many years and frankly, I never really saw the value of it. It did make it possible to build small components of the application independently without needing up-to-date DLLs, but it also made the code wicked hard to understand and made debugging tricky because you had to dig through empty interfaces all the time. The only purpose of this design pattern was so the company could hire 200 bad developers and have them work on super-small modules. The end result was nobody knew how the application worked, but almost anybody could step in almost anywhere and attempt a bug fix. That obviously led to some quality issues, and source control issues, and in general a lot of issues having to do with the fact that the application couldn't be viewed as an atomic thing, and nobody knew how to install all the components and get the whole thing up and running all at once - installations usually took a week, if you could get the right group of people. Oh, and the real beauty of it all? The application as a whole couldn't be built.



  • @jasmine2501 said:

    I used that design pattern for many years and frankly, I never really saw the value of it. It did make it possible to build small components of the application independently without needing up-to-date DLLs, but it also made the code wicked hard to understand and made debugging tricky because you had to dig through empty interfaces all the time.
     

    My application's code base is roughly 20 million lines of code; a full build takes hours (though we do full builds every night on the build servers).  Anything that cuts down on dependencies helps with development.


  • @jasmine2501 said:

    I used that design pattern for many years and frankly, I never really saw the value of it. It did make it possible to build small components of the application independently without needing up-to-date DLLs, but it also made the code wicked hard to understand and made debugging tricky because you had to dig through empty interfaces all the time.

    Interesting, indeed. I don't know how well this suits other development environments, but in .Net / Visual Studio this appears to work quite well. In our application we have 13 solutions, of which one defines the relevant interfaces. The solutions are built on a CruiseControl.Net server and the correct build order is defined in a .proj file. Some solutions serve as general libraries, which can be shared between multiple applications.

     



  • I figured out what makes the above code a WTF, unless the OP anonymized it: The interface is in a separate DLL. Even if you are using interfaces everywhere, you usually group them in the module they actually belong to, e.g. in this case the interface would still be in the Entities DLL. Having a separate DLL just for interfaces is pretty stupid.



  • @ObiWayneKenobi said:

    I figured out what makes the above code a WTF, unless the OP anonymized it: The interface is in a separate DLL. Even if you are using interfaces everywhere, you usually group them in the module they actually belong to, e.g. in this case the interface would still be in the Entities DLL. Having a separate DLL just for interfaces is pretty stupid.

    Yes, they're in the XXXInterfaces.dll, and entities are in their own DLL file. But I'm pretty sure the way it complects working with the code is much more WTFy than project structure.



  • @configurator said:

    @ObiWayneKenobi said:
    I figured out what makes the above code a WTF, unless the OP anonymized it: The interface is in a separate DLL. Even if you are using interfaces everywhere, you usually group them in the module they actually belong to, e.g. in this case the interface would still be in the Entities DLL. Having a separate DLL just for interfaces is pretty stupid.
    Yes, they're in the XXXInterfaces.dll, and entities are in their own DLL file. But I'm pretty sure the way it complects working with the code is much more WTFy than project structure.
     

     

    Having the interface separate from the class makes sense. You can keep that code plain and obfuscate the implementation code. If you're not into covering up your code, then you place it all in a single class.



  •  @ObiWayneKenobi said:

    Having a separate DLL just for interfaces is pretty stupid.

      I admit to being stupid most of the time, but in this case I disagree. In a big project you can simplify your build environment and sub-project cross-references by keeping the interfaces in a separate .dll, which is built first. You can turn your project dependencies from a mesh into, e.g., a star, which is easier to maintain.

    http://upload.wikimedia.org/wikipedia/commons/9/96/NetworkTopologies.png

