"You wanna abstract as little as possible"



  • Heard from the mouth of our so-called "architect".

    I now return you to your own WTF. 



  •  The Real WTF is there is no WTF

    First? 



  • He'd be at odds with a developer I know, for whom everything is "another layer of abstraction"

     



  • I generally agree.  Abstractions are a double-edged sword.  They can be used to hide away details that a developer may not need to worry about.  At the same time, developers that do need to worry about those details will have more trouble digging through the abstractions to figure it out.  Something like an interface, for example, can make your code mockable and thus unit testable, but at the same time make it unclear where to find the concrete implementation.

     

    I wouldn't say "as little as possible" personally, but rather abstract carefully and intelligently, where it makes sense and will provide benefit.  
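To make the mockability tradeoff concrete, here's a minimal Java sketch (the names CarLookup, TicketIssuer, and FakeLookup are hypothetical, not from any post here): the interface lets a unit test substitute a canned implementation, at the cost of one more hop between the reader and the real code.

```java
// The interface is the seam: tests can supply a fake without
// touching (or even having) the real implementation.
interface CarLookup {
    String findOwner(String plate);
}

// Production code depends only on the interface...
class TicketIssuer {
    private final CarLookup lookup;
    TicketIssuer(CarLookup lookup) { this.lookup = lookup; }
    String ticketFor(String plate) {
        return "Ticket for " + lookup.findOwner(plate);
    }
}

// ...so a unit test can pass a hard-coded mock instead of a real registry.
class FakeLookup implements CarLookup {
    public String findOwner(String plate) { return "Alice"; }
}
```

The flip side, as the post says, is that "Go to definition" on CarLookup lands you on the interface, not on whichever class actually implements it.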



  • @Ryde said:

    Something like an interface, for example, can make your code mockable
     

    Of course, most code posted here is posted here precisely because of its mockability. 



  • @Ryde said:

    Something like an interface, for example, can make your code mockable and thus unit testable, but at the same time make it unclear where to find the concrete implementation.

    Yes, because it's so hard to understand "class MuthubitsiCar implements Car"...

    I don't think it's a WTF without its context. Abstracting things is useful and speeds up further development, but you miss your goal when you abstract everything. Thus I stand by the phrase: "Abstract where abstracting is useful, but do it as little as possible."



  • @dtech said:

    Yes, because it's so hard to understand "class MuthubitsiCar implements Car"...

    That implies that you're already looking at the concrete implementation.  If you're looking at the interface, then to find the implementation you have to look in another file. This could be in another directory, or another module altogether.  It's usually not a big deal, but it does take a bit more effort.  The more interfaces, the more effort.

     I could have used other examples.  The architect of the system I'm working on literally built a whole new framework, full of irritating abstractions, on top of .Net just for a web app, and it makes development hell. I just chose a simple and common example trying to illustrate the tradeoff.

     @dtech said:

    "Abstract where abstracting is usefull, but do it as little as possible".

     That's pretty much what I said.  I don't understand the snark towards me.



  • I guess it's not too funny without context, even though, as somebody with an 'architect' title, you'd think abstraction might be his thing.  The person that quote came from has a lot of sway in the company, and pushed a WTF I was still undoing 9 months later, long after he had patted his own back about being clever and moved on.  One only hopes the poor coworker who actually works for him won't be in too much pain.

    The previous WTF was one with a good number of sub-WTFs, but among them was lack of abstraction.  Basically, he made the decision that a certain database in one app I work on should use XML as a backend, didn't need to be persistently opened (that is, that it's OK if things like delete or reparent leave the object crippled ... you just reload it), and that since it's XML, users should simply pass it XPath strings to search for objects inside.  Also, he forbade putting even an interface in front of this concrete, XML-wielding machine, saying simply "So I'm a luddite.  I can't stand that shit." 

    Well, one weekend later, and many, many modules of our software are infected with uses of this little database and access library.  Then it's "bye-bye ready for my next success!!" from the architect.

    Because the database lacked support for basic functionality like renaming objects and categories, or adding new categories any way except editing the XML by hand, user frustration built up pretty quickly.  Eventually, when the coast was clear, I was able to rewrite the little library, but many stains remained.  In the intervening months, the poor interface meant that in order to do useful work, other code would either go behind the little database's back or make do (for example, building its own indices on the little database based on fields other than name or id), compounding the problems. 

    Of course, if this code had been even a little bit abstracted (just one interface, and a simple search function that hid the underlying XML), then fixing the rest would've been much, much easier, even with the same woefully incomplete initial implementation at my feet: other code that I didn't own wouldn't have broken when I added indexing features, moved data around for easier access, or even switched to an actual database.

    Sorry, seemed more WTFy at the time, but I think it's still a WTF
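For what it's worth, the "one interface and a simple search function" the post wishes for might have looked something like this Java sketch (ObjectStore and the method names are my invention, not the actual system's API): callers search by field, never by XPath string, so the XML backend stays swappable.

```java
// The single interface: no XPath, no document structure leaks out.
interface ObjectStore {
    java.util.List<String> findByField(String field, String value);
    void rename(String id, String newName);
}

// One implementation might wrap the XML file; this toy in-memory one
// stands in for it. Another could add indices or use a real database,
// without breaking any caller.
class InMemoryStore implements ObjectStore {
    private final java.util.Map<String, String> names = new java.util.HashMap<>();

    public java.util.List<String> findByField(String field, String value) {
        java.util.List<String> ids = new java.util.ArrayList<>();
        for (java.util.Map.Entry<String, String> e : names.entrySet())
            if (field.equals("name") && e.getValue().equals(value))
                ids.add(e.getKey());
        return ids;
    }

    public void rename(String id, String newName) { names.put(id, newName); }
}
```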



  •  Right-click, Go to Inheritor :) I love ReSharper



  • "As little as possible" is IMO wrong, since it's almost always possible to completely avoid any level of abstraction, and it's almost never a good idea.

    My saying would go along the lines of "don't use (the means of) abstraction without really adding a level of abstraction".

    E.g. an interface that is modelled after a class and contains all public methods is probably pointless. (Except when needed for purely technical reasons) 
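A quick Java illustration of that difference (the names are hypothetical): the first interface merely mirrors one bean's public methods and hides nothing; the second names a capability, which is what actually adds a level of abstraction.

```java
// Probably pointless: a one-to-one mirror of its single implementation.
// It adds a file to maintain without hiding any decision.
interface CustomerBean {
    String getName();
    String getAddress();
    void setName(String name);
    void setAddress(String address);
    // ... dozens more getters and setters
}

// Useful: names a capability, so callers don't care whether email,
// SMS, or a log file sits behind it.
interface Notifier {
    void notify(String customerId, String message);
}

// A trivial implementation, just to show the seam in use.
class LogNotifier implements Notifier {
    final java.util.List<String> sent = new java.util.ArrayList<>();
    public void notify(String customerId, String message) {
        sent.add(customerId + ": " + message);
    }
}
```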



  • @dtech said:

    Abstracting things is useful and speeds up further development, but you miss your goal when you abstract everything. Thus I stand by the phrase: "Abstract where abstracting is useful, but do it as little as possible."

    @ammoQ said:
    E.g. an interface that is modelled after a class and contains all public methods is probably pointless. (Except when needed for purely technical reasons)
     

     I think this is right on the money.  I hate those interfaces that have tens (hundreds?) of methods that are basically getters.  The authors usually do pat themselves on the back and say "wow, now you can swap out implementations" but totally miss the point of interfaces (and abstraction) in the first place.

    Abstraction isn't having all of your classes implement interfaces, creating everything via factory methods, and then "coding to the interface"; it's about hiding details behind a nice, clean API.  According to the OP:

    @arty said:

    The previous WTF was one with a good number of sub-WTFs, but among them was lack of abstraction.  Basically, he made the decision that a certain database in one app I work on should use XML as a backend, didn't need to be persistently opened (that is, that it's OK if things like delete or reparent leave the object crippled ... you just reload it), and that since it's XML, users should simply pass it XPath strings to search for objects inside.  Also, he forbade putting even an interface in front of this concrete, XML-wielding machine, saying simply "So I'm a luddite.  I can't stand that shit."

    Even if this code did implement some interface, its API requires the user to pass in these XPath strings, so the interface would have been useless anyway.  It also sounds like it's up to the client code to do some preparation/assembly/cleanup, meaning there is probably a ton of redundancy everywhere and lots of subtle bugs introduced when the client doesn't init or clean up properly.  To me, this is pretty much the essence of OOP: encapsulate all of these responsibilities and other nonsense inside of that one class (or package or whatever) and it'll be smooth sailing.
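As a sketch of that encapsulation point (the class and method names here are hypothetical), each public operation can do its own prepare/work/cleanup internally, so no client can forget either step:

```java
// The store owns its whole lifecycle; callers just call lookup/store.
class ObjectDb {
    private final java.util.Map<String, String> data = new java.util.HashMap<>();

    public String lookup(String id) {
        openIfNeeded();
        try {
            return data.getOrDefault(id, "missing");
        } finally {
            close();  // runs even on exceptions: no half-open store escapes
        }
    }

    public void store(String id, String value) {
        openIfNeeded();
        try {
            data.put(id, value);
        } finally {
            close();
        }
    }

    // In the real thing these would parse and release the XML document.
    private void openIfNeeded() { /* e.g. load and parse the backing file */ }
    private void close()        { /* e.g. flush and release the parser */ }
}
```

With this shape, the "client forgot to init" and "client forgot to clean up" classes of bugs simply can't occur in caller code.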



  • @Outlaw Programmer said:

    Even if this code did implement some interface, its API requires the user to pass in these XPath strings, so the interface would have been useless anyway.  It also sounds like it's up to the client code to do some preparation/assembly/cleanup, meaning there is probably a ton of redundancy everywhere and lots of subtle bugs introduced when the client doesn't init or clean up properly.  To me, this is pretty much the essence of OOP: encapsulate all of these responsibilities and other nonsense inside of that one class (or package or whatever) and it'll be smooth sailing.
     

    All of that is true. 

    Exposing implementation details like the use of XML and the particular document structure (you need to know it to query with XPath), combined with having to coordinate with other programmers at the company to fix the object's various deficiencies, made fixing it difficult. 

    At least in my case, it makes it easier to narrow an object's responsibility when I consider that I might implement another variant and I don't want to make it painful.  I find that thinking of an object as "I want a family of things that provide a service of this kind, one of which I'll implement now, and maybe another if there's a need", rather than "I want one thing that solves this specific problem right now", helps me take a wider view and consider the consequences of what the object exposes, and how one interacts with it. 

    These are important considerations when you might later need to contact other programmers to have them fix code that breaks under a changing object that exposes too much.  This is why I often start with an abstract interface even if it isn't needed right away; I pretend it might be later and it sometimes is.  Also, it costs a very small amount of time compared to making a mistake.

