Morph, Everything



  • Several guys on our team are writing a graft-on feature for our main product. They recently did a massive checkin, and upon perusing it, I found a plethora of WTF.

    First, they realized that they frequently needed to convert from one object to another. Rather than write a whole bunch of adapters, they wrote one generic one...

        // Rather than writing an adapter for each transformation of object A to
        // object B, we do it once generically.
        class Morph {
            public static <T> T morph(Object object, Class<?> cls) throws Exception {
                Class c = cls.forName(cls.getName());
                Object o = c.newInstance();
                // Iterate through every field in the target class; for each one,
                // find a matching field (name and type) in the passed argument, and
                // if found, copy the data for the first match into the new object
                return (T) o;
            }
        }
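    The elided body is the obvious reflective loop. Here's my reconstruction from their comment (a sketch, not their exact code):

        import java.lang.reflect.Field;

        class Morph {
            // For each field declared in the target class, find a same-name,
            // same-type field in the source object and copy its value into a
            // freshly constructed target instance.
            public static <T> T morph(Object object, Class<?> cls) throws Exception {
                Object o = cls.newInstance();
                for (Field target : cls.getDeclaredFields()) {
                    try {
                        Field source = object.getClass().getDeclaredField(target.getName());
                        if (source.getType().equals(target.getType())) {
                            source.setAccessible(true);
                            target.setAccessible(true);
                            target.set(o, source.get(object)); // first match wins
                        }
                    } catch (NoSuchFieldException e) {
                        // no same-name field in the source; leave the target's default
                    }
                }
                @SuppressWarnings("unchecked")
                T result = (T) o;
                return result;
            }
        }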

    What happens if you have:

    import java.text.SimpleDateFormat;
    import java.util.Date;

    class A {
      private String date = new SimpleDateFormat("yyyyMMdd").format(new Date());
    }
    class B {
      private String date = new SimpleDateFormat("MM/dd/yyyy").format(new Date());
    }

    try {
      A a = new A();
      B b = Morph.morph(a, B.class); // the wrong-format date gets copied
    } catch (Exception e) {
    }

    Lots of our data structures have the very descriptive field: String data. Are you absolutely sure that it should be copied? What if you have this:

    class A {
      private String data;
    }
    class B extends A {
      private String data;
    }

    Which one will it copy if you try to morph a B?
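    If you want to see the ambiguity for yourself, here's a quick standalone check (class name mine, assuming the A and B above):

        import java.lang.reflect.Field;

        class ShadowCheck {
            public static void main(String[] args) {
                // B.data hides (does not override) A.data, so a B instance
                // carries BOTH fields; which one a reflective copier touches
                // depends entirely on its lookup strategy.
                for (Class<?> c = B.class; c != null; c = c.getSuperclass()) {
                    for (Field f : c.getDeclaredFields()) {
                        System.out.println(f); // prints B.data, then A.data
                    }
                }
            }
        }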

    -------

    Next up, the Everything interface.

    Interfaces define a contract between implementor and caller. They have their uses. However, not every class needs to be fronted by an interface; sometimes a POJO is just a POJO, and a simple concrete implementation will suffice.

    These guys were refactoring daily, and so decided that every single class would implement an interface comprising all its methods. This may work for simple classes, but as you build compound objects, the number of interfaces they must implement grows rapidly.

    I found 297 interfaces in this graft-on project. Annoying, but tolerable. Then I found the Everything interface:

    public interface Everything extends I1, I2, ..., I297 { ... }

    For kicks, I created a dummy class that implemented this interface, then used Eclipse to generate stubs for all of the unimplemented methods. All 2300+ of them. Given that each stub was a method signature, a "// TODO" stub line, a closing brace, and a trailing blank separator line, that's 9200+ LOC without actually writing any implementations.
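    Each one looked like this, four lines a pop (method name invented for illustration):

        public void someMethodOfTheDay() {
            // TODO Auto-generated method stub
        }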

    I had to know, so I grep'd, and found the main class of the graft-on, which implemented Everything; all 54K LOC in one file.

    And these guys are creating the foundation of the future of this product.

    ----------

    Thanks for listening.

    </rant>



  • I assume the comments are yours, and you just wanted to spare us the horrid details of the morph method?

    I know about duck typing and even use it myself sometimes, but never before have I seen someone try to turn a duck into a banana.



  • @tdb said:

    I assume the comments are yours, and you just wanted to spare us the horrid details of the morph method?

    I know about duck typing and even use it myself sometimes, but never before have I seen someone try to turn a duck into a banana.


    Ha, I'm sure MacGyver did...

    Using duck tape.




  • every single class would implement an interface

    Who was the genius who came up with this idea? Shoot him/her in the head! NOW! HEADSHOT!

    public interface Everything extends I1, I2, ..., I297 { ... }

    Please, tell us they aren't named like that.

    Poor soul.



    No, they aren't. It's just snoofle's way of anonymizing them. But I still wouldn't want to maintain source code where one interface bundles together nearly 300 others. Damn!



  • @snoofle said:

    First, they realized that they frequently needed to convert from one object to another.

    I believe, my dear sir, that this is the Real Double-yew Tee Eff.

    It reeks, sir.  It reeks.


    (I'm assuming by "convert" here this doesn't mean something like a units conversion from Newtons to pounds-force, but something more insidious, like converting a jet turbine into a watermelon.)


  • @too_many_usernames said:

    @snoofle said:
    First, they realized that they frequently needed to convert from one object to another.

    I believe, my dear sir, that this is the Real Double-yew Tee Eff.

    It reeks, sir.  It reeks.

    (I'm assuming by "convert" here this doesn't mean something like a units conversion from Newtons to pounds-force, but something more insidious, like converting a jet turbine into a watermelon.)

    I'd assume that it's converting between analogous objects in multiple frameworks / systems. Especially if you have multiple COTS products integrated.


  • So, how well have you shined up your resume?



  • @boomzilla said:

    I'd assume that it's converting between analogous objects in multiple frameworks / systems. Especially if you have multiple COTS products integrated.

    In that case, the proper paradigm is "translate," not "convert," isn't it?  In which case I stand by my "that's the WTF" comment.  Although I do realize that the typical software term is "data format converter"... I wonder why that terminology was coined instead of "translation"? Never mind...

    Given your assumption is correct, then the morph method is indeed flawed, because there's no guarantee that just because fields have the same name that they actually mean the same thing; I think you really would need to have a specific conversion method for every type of "language" you want to interpret, wouldn't you? (This is actually an honest question - but I realized it's a doozy: is it possible to create a "universal translator"?)


  • @too_many_usernames said:

    Given your assumption is correct, then the morph method is indeed flawed, because there's no guarantee that just because fields have the same name that they actually mean the same thing; I think you really would need to have a specific conversion method for every type of "language" you want to interpret, wouldn't you? (This is actually an honest question - but I realized it's a doozy: is it possible to create a "universal translator"?)

    Yes, the universal translator won't work, if for no other reason than what snoofle already showed. I may or may not be correct in my assumption, but integration of different systems was the only reason I could think of for wanting to convert stuff. I'm probably just not as imaginative as the WTF devs that snoofle works with, though.

    Of course, the reality is that you're likely to end up with all sorts of different logic for different objects, even within the same system, and especially if there are lots of people working on it. Not to mention all of the special cases that have been crafted to deal with various situations.



  • All:

    Yes, morphing is between objects that sort of mean the same thing in this and other integrated systems.

    The interface names are not I1..I297; they're real names, but of no consequence to the stupidity of it all.

    The body of the morph function is actually fairly decent; there's really only one way to do a member search in a composite object, but they simply forgot that same-name fields don't always hold the same data or mean the same thing. We have monetary amounts as BigDecimals, but depending upon another flag, they may be interpreted as dollars and cents, or dollars and 32nd's of a dollar, so 3.16 really means $3.50. If you copy a dollars-and-32nd's number into a price field that is interpreted as dollars-and-cents... whoops; subtle bug!
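    If it helps, the interpretation is just arithmetic. A hypothetical helper (class and method names are mine, not our actual code):

        import java.math.BigDecimal;
        import java.math.RoundingMode;

        class ThirtySeconds {
            // Interpret a quote like 3.16 as 3 dollars + 16/32 of a dollar = $3.50.
            static BigDecimal toDollarsAndCents(BigDecimal quoted) {
                BigDecimal dollars = new BigDecimal(quoted.toBigInteger());    // whole-dollar part
                BigDecimal ticks = quoted.subtract(dollars).movePointRight(2); // e.g. 16
                return dollars.add(ticks.divide(BigDecimal.valueOf(32), 2, RoundingMode.HALF_UP));
            }

            public static void main(String[] args) {
                System.out.println(toDollarsAndCents(new BigDecimal("3.16"))); // prints 3.50
            }
        }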




  • @snoofle said:

    We have monetary amounts as BigDecimals, but depending upon another flag, they may be interpreted as dollars and cents, or dollars and 32nd's of a dollar, so 3.16 really means $3.50.


    What the..?  I don't know much about Wall Street or financial software, but what's the purpose of having 32nds of a dollar (represented like decimals no less)?  Is that how credit default swaps work or something?



  • @Justice said:

    @snoofle said:

    We have monetary amounts as BigDecimals, but depending upon another flag, they may be interpreted as dollars and cents, or dollars and 32nd's of a dollar, so 3.16 really means $3.50.


    What the..?  I don't know much about Wall Street or financial software, but what's the purpose of having 32nds of a dollar (represented like decimals no less)?  Is that how credit default swaps work or something?

    Lots of financial transactions are priced in 32nd's of a dollar instead of dollars and cents. It has nothing to do with the financial instrument or its value; it's just the way it's always been done.

    It might make more sense if we always stored the amounts as dollars and cents and converted to 32nd's just for the front end, BUT the mainframe, clearing houses, exchanges, and counterparties (the other party in the trade) all do it this way, so you need to work with it that way too.

    Yes, it's confusing to a newbie, and when you're looking at the db, you need to look at two fields to know what a given column in a given row means, but that's the way of Wall Street.




  • @snoofle said:

    @Justice said:

    @snoofle said:

    We have monetary amounts as BigDecimals, but depending upon another flag, they may be interpreted as dollars and cents, or dollars and 32nd's of a dollar, so 3.16 really means $3.50.


    What the..?  I don't know much about Wall Street or financial software, but what's the purpose of having 32nds of a dollar (represented like decimals no less)?  Is that how credit default swaps work or something?

    Lots of financial transactions are priced in 32nd's of a dollar instead of dollars and cents. It has nothing to do with the financial instrument or its value; it's just the way it's always been done.

    It might make more sense if we always stored the amounts as dollars and cents and converted to 32nd's just for the front end, BUT the mainframe, clearing houses, exchanges, and counterparties (the other party in the trade) all do it this way, so you need to work with it that way too.

    Yes, it's confusing to a newbie, and when you're looking at the db, you need to look at two fields to know what a given column in a given row means, but that's the way of Wall Street.

    1/(2^n) breakdowns (i.e., by halves) have been around longer than electronic computers... most likely well before "binary" was ever formally defined...



  • @snoofle said:

    @Justice said:

    @snoofle said:

    We have monetary amounts as BigDecimals, but depending upon another flag, they may be interpreted as dollars and cents, or dollars and 32nd's of a dollar, so 3.16 really means $3.50.


    What the..?  I don't know much about Wall Street or financial software, but what's the purpose of having 32nds of a dollar (represented like decimals no less)?  Is that how credit default swaps work or something?

    Lots of financial transactions are priced in 32nd's of a dollar instead of dollars and cents. It has nothing to do with the financial instrument or its value; it's just the way it's always been done.

    Presumably related to the fact that one dollar = 8 bits.   Yaaarrrrrr!


