Exception handling in C#



  • Actual C# product code:


    try
    {
        // stuff skipped
    }
    catch (Exception ex)
    {
        throw; // nothing skipped
    }

    Hurrah for exception handling!

    [Moved to Sidebar. -ShadowMod]



  • Yup, seen this in the wild. Some years ago I did a two-day-long refactor removing all traces of this "pattern" (and similarly useful ones) from the project.
    The same person who wrote it also on one occasion explained to me that the garbage collector is that thing that collects the exceptions you don't handle.



  • As posted, the code (neglecting the comments) is actually a useful construct, as it is equivalent to not having a try/catch (which is appropriate, if the code cannot handle the exception, to propagate it back up the call stack) while allowing for a breakpoint to be set at that specific level in the chain. The performance hit of a catch/throw is usually minimal (especially since it will only happen in exceptional circumstances which are rarely performance sensitive).
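
    A minimal sketch of the construct being described, with hypothetical method names. Rethrowing with a bare "throw;" preserves the original stack trace, which is what makes the block behave like having no try/catch at all; "throw ex;" would reset the stack trace to this frame and lose the original throw site.

    [code language="C#"]
    try
    {
        DoNetworkCall(); // hypothetical primitive that may throw
    }
    catch (Exception)    // no variable captured, so no "unused variable" warning
    {
        // A breakpoint (or conditional breakpoint) can be set here to observe
        // the failure at exactly this level of the call stack.
        throw;           // propagates the original exception unchanged
    }
    [/code]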


  • Garbage Person

    @TheCPUWizard said:

    The performance hit of a catch/throw is usually minimal (especially since it will only happen in exceptional circumstances which are rarely performance sensitive)
    I'd expect that in a release build, the compiler would optimize it out of existence anyway.



  • @TheCPUWizard said:

    especially since it will only happen in exceptional circumstances which are rarely performance sensitive
    Except when you get people who seem to think that throwing exceptions is how you handle non-exceptional logic.



  • @Weng said:

    @TheCPUWizard said:

    The performance hit of a catch/throw is usually minimal (especially since it will only happen in exceptional circumstances which are rarely performance sensitive)
    I'd expect that in a release build, the compiler would optimize it out of existence anyway.

    Microsoft C# v1.0 thru v5.0 does not.



  • Stop defending this. It's not about performance hits or what not, it's about junk code. The benefit of being able to put a breakpoint is useful during early development but this noise code has no place in production. Also, any half decent IDE has a break-on-exception feature, so there are exactly zero reasons to do this.



  • @veggen said:

    Stop defending this. It's not about performance hits or what not, it's about junk code. The benefit of being able to put a breakpoint is useful during early development but this noise code has no place in production. Also, any half decent IDE has a break-on-exception feature, so there are exactly zero reasons to do this.

    I disagree. Let's take the last point first: "any half decent IDE has a break-on-exception feature"... that it does... Now consider a primitive that is called from many places in the code. This primitive throws exceptions on a fairly regular basis because of real errors (e.g. it is a network connection on a sporadic network). There is ONE path where it is problematic, and it is about 1/2 way up the call stack. There is no way to use the break-on-exception feature (in any IDE I know of) so that it breaks on this specific case only.

    Now as to the first part... situations like this DO occur in production, and having to make a change to the source to diagnose a problem is a real WTF. If the code is already in place, then simply attaching a (possibly remote) debugger to the running application allows for diagnosis.

    Preparing for potential debug sessions in released code is an important aspect of design (and one that is often overlooked). Unless putting in the "hooks" can be proven to have a negative impact on the running of the application, improving the ability to diagnose potential production problems can significantly reduce downtime.
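
    A minimal sketch of the kind of production diagnostic hook being argued for here, assuming a hypothetical logger and network primitive; unlike the posted code, it records context before propagating the exception unchanged.

    [code language="C#"]
    try
    {
        connection.Send(request); // hypothetical, sporadically failing network call
    }
    catch (Exception ex)
    {
        // Capture context at this specific level of the call stack
        // (and give an attached or remote debugger a natural place to break).
        log.Warn("Send failed for request " + request.Id, ex);
        throw; // rethrow; the original stack trace is preserved
    }
    [/code]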



  • It's not "any IDE" -- it's Visual Studio 2008. This is C#; the development and target platform is Windows.
    The code has not been deployed and is not known to have a problem at this point. Note that there is no error logging, just a throw.
    There are no explanatory comments regarding the redundant try/catch/throw, e.g. "put breakpoint here to research exception".
    If a problem like this occurred in deployed code, you would probably be examining a crash dump, which would include the entire stack trace.
    In short, as veggen wrote, it's junk code. It appears to have been written by someone not clear on the concept of exception handling.



  • @rstinejr said:

    In short, as veggen wrote, it's junk code. It appears to have been written by someone not clear on the concept of exception handling.

    Especially when you consider the fact that the guy who wrote it thinks the garbage collector is what's handling unhandled exceptions.



  • Of course, TRWTF is that the code generates a compiler warning, "The variable 'ex' is declared but never used." Maybe the programmer is a Java hack. For those unfamiliar with C#, the following is valid, but stupid:

    [code language="C#"]
    try
    {
        // stuff skipped
    }
    catch (Exception)
    {
        throw; // herp derp
    }
    [/code]
    


  • Bingo! Trying to clean up compiler warnings about unused variables is how I found this gem. :-)



  • @TheCPUWizard said:

    @Weng said:

    @TheCPUWizard said:

    The performance hit of a catch/throw is usually minimal (especially since it will only happen in exceptional circumstances which are rarely performance sensitive)
    I'd expect that in a release build, the compiler would optimize it out of existence anyway.

    Microsoft C# v1.0 thru v5.0 does not.

    I was about to respond with something like "you trust the compiler waaay more than I do." It's nice to see I was right about something.

    The C# compiler doesn't even optimize tail recursive calls... expecting it to optimize away try/catch blocks would be a bit much.
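
    For what it's worth, a tiny illustration of the tail-recursion point (hypothetical example; whether the JIT later turns the call into a loop is an implementation detail, not something the C# compiler promises):

    [code language="C#"]
    // The recursive call is the last thing the method does (a tail call),
    // but csc emits it as an ordinary call, so a large enough n can still
    // overflow the stack.
    static long SumTo(long n, long acc)
    {
        if (n == 0) return acc;
        return SumTo(n - 1, acc + n);
    }
    [/code]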



  • @bridget99 said:

    @TheCPUWizard said:
    @Weng said:

    @TheCPUWizard said:

    The performance hit of a catch/throw is usually minimal (especially since it will only happen in exceptional circumstances which are rarely performance sensitive)
    I'd expect that in a release build, the compiler would optimize it out of existence anyway.

    Microsoft C# v1.0 thru v5.0 does not.

    I was about to respond with something like "you trust the compiler waaay more than I do." It's nice to see I was right about something.

    The C# compiler doesn't even optimize tail recursive calls... expecting it to optimize away try/catch blocks would be a bit much.

    Just a word of caution about making judgements on what the C# compiler does (or, more importantly, does not) optimize....The output of the compiler is MSIL, and the JIT compiler does a tremendous amount of optimization.

    This makes a lot of sense, since investments in optimization at JIT time provide benefit to ALL .Net languages (and also because the JIT can make use of significant information, such as possible call stacks, which is not available at the time an assembly is compiled).



  • @TheCPUWizard said:

    @bridget99 said:
    @TheCPUWizard said:
    @Weng said:

    @TheCPUWizard said:

    The performance hit of a catch/throw is usually minimal (especially since it will only happen in exceptional circumstances which are rarely performance sensitive)
    I'd expect that in a release build, the compiler would optimize it out of existence anyway.

    Microsoft C# v1.0 thru v5.0 does not.

    I was about to respond with something like "you trust the compiler waaay more than I do." It's nice to see I was right about something.

    The C# compiler doesn't even optimize tail recursive calls... expecting it to optimize away try/catch blocks would be a bit much.

    Just a word of caution about making judgements on what the C# compiler does (or, more importantly, does not) optimize....The output of the compiler is MSIL, and the JIT compiler does a tremendous amount of optimization.

    This makes a lot of sense, since investments in optimization at JIT time provide benefit to ALL .Net languages (and also because the JIT can make use of significant information, such as possible call stacks, which is not available at the time an assembly is compiled).

    That makes sense. What matters in most cases is what the overall tool chain does. Personally, I wouldn't want to rely on assumed optimizations at all. My advice in code is to simply "write what you really mean the computer should do."

    This sounds obvious, but when I hear developers 'think aloud' in a team setting, or when I read what people write online about programming, I see other, very different thought processes at play. Most of these people are either self-taught or were educated to worship false gods (OOP, maintainability), and their inability to master simple legibility should thus not surprise.



  • @bridget99 said:

    That makes sense. What matters in most cases is what the overall tool chain does. Personally, I wouldn't want to rely on assumed optimizations at all. My advice in code is to simply "write what you really mean the computer should do."

    This sounds obvious, but when I hear developers 'think aloud' in a team setting, or when I read what people write online about programming, I see other, very different thought processes at play. Most of these people are either self-taught or were educated to worship false gods (OOP, maintainability), and their inability to master simple legibility should thus not surprise.

    I've read that 10 times (or more) and am still not sure if I agree or not.... In the "good ole days" one had to be very careful to tell the computer exactly HOW to do things in order to meet the limitations of the systems. The result was often necessarily convoluted, obtuse code. These days, we do not have to deal with that in general. The primary focus should be readability by other developers and stakeholders, and "performance" should be a consideration only when a performance specification is violated (or at risk)...

    I think what you said matches that... (I really have to get more than 3 hours of sleep a night....)



  • @bridget99 said:

    educated to worship false gods (OOP, maintainability)

    OOP and maintainability are false gods? What, are you a COBOL programmer with 40 years of experience or something? Granted OOP can be overdone, but at the very least surely maintainability is a GOOD thing...



  • @bridget99 said:

    false gods (OOP, maintainability), and their inability to master simple legibility should thus not surprise.
     

    Uh, simple legibility of code is a fundamental part of maintainability, so that's not really a "false god", now is it?



  • @ekolis said:

    @bridget99 said:
    educated to worship false gods (OOP, maintainability)

    OOP and maintainability are false gods? What, are you a COBOL programmer with 40 years of experience or something? Granted OOP can be overdone, but at the very least surely maintainability is a GOOD thing...

    Guys Bridget is the worst programmer on this forum, anything she says is anti-advice. Do the opposite of it.


  • :belt_onion:

    @dhromed said:

    Uh, simple legibility of code is a fundamental part of maintainability, so that's not really a "false god", now is it?

    Stop interrupting the entertaining bridget99 rants with your logic!

     



  • Right. Whatever your methodology, readability, particularly clarity of intent, is an attribute of good code, and enhances maintainability.


    The problem with the code snippet is not its inefficiency -- there is no reason to think that the overhead of try/catch would be excessive. The problem is that the code leaves the reader going WTF? Why did we enter the try block anyhow?


    And for my money, I don't see that adding the try/catch would be a big help in diagnostics, particularly since there is no logging, and it's not that often you can hook up a debugger to a program running on the customer's machine.



  • @ekolis said:

    @bridget99 said:
    educated to worship false gods (OOP, maintainability)

    OOP and maintainability are false gods? What, are you a COBOL programmer with 40 years of experience or something? Granted OOP can be overdone, but at the very least surely maintainability is a GOOD thing...

    Fundamentally, "easy to work on later" and "easy to read" do sound like the same thing. Almost inevitably, though, when a programmer is spending time on something that doesn't add any obvious, concrete benefit in terms of performance or correctness, "maintainability" will be the explanation that gets invoked to justify this expenditure. Better programmers probably just admit that they are addressing concerns of style, and limit these activities to the secondary role they merit. The young, the ill-educated, and the in-over-their-head paper architects are too good (and too slow) for a quick cleanup, though.



    Stylistic concerns can be damned important, but when "maintainability" is allowed to explain away the addition of whole tools, layers, libraries, and such to a project, or becomes the justification for increasing a project's timeline by whole orders of magnitude, that's a problem. "Maintainability" is indeed the "false god" being served in such situations.



    Legibility is often not served by such undertakings. My favorite example is what happens when code is written around a base class in .NET or Java (polymorphism). One of the best things about an IDE is the "Go To Definition" context menu item (or similar). Often, though, base class definitions are unsatisfactory for a programmer attempting to read through the code. The implementation is really in the derived classes. That is supposed to be the entire point of OOP, and "Go To Definition" is simply a casualty of OOP.



    Setting aside the benefits of OOP, one must admit that breaking "Go To Definition" is a cost of OOP. When this cost is weighed against the benefit of having many base classes, each with their own special little implementation, it is perhaps small. It's not as if one could or would want to even really browse into some procedural alternative; if the procedural alternative is a single function handling work that could be done by many smaller and better methods, then this function might be very unwieldy.



    When there are only a few derived classes involved, this cost/benefit calculation is not so easy. When just one derived class is involved, the cost of polymorphism (breaking "Go To Definition", introducing new files, new statements, and other points of failure, etc.) is completely wasted.



    You may ask "who would ever do that?" but let me assure you that some very smart people do this on occasion, and do similar, slightly less asinine things involving, say, two or three derived classes quite frequently. I once worked with a developer who did this (i.e. broke "Go To Definition" for a single, pointless derived class) who later walked off into the sunset with a very nice job at the SAS Institute. You can probably imagine how far I am from the SAS Institute right now. After all, I'm the "worst developer on this board." But at least I'm not the person who broke "Go To Definition" for his successor for no good reason.



    The reason these smart people make such asinine design decisions is as I described earlier: they have been brought up to worship the false gods of OOP and "maintainability." The perpetrators in such incidents inevitably invoke these two terms in defending their work, and if they are so unfortunate as to do so in my presence, I will without fail resort to mentioning "legibility."



    It's really a bit narcissistic to spend too much time making one's code maintainable. To do so assumes that it's inevitable that the code will actually make it to production, and that users will actually use it and allow for its continued existence. These things are simply not true of most code, and by definition most effort spent on maintainability (even in the realest sense) is thus wasted. Of course, the code most likely to make it to production and get picked up by end users is the code that does what they need it to first, and obsessing over maintainability does not serve that goal.



    The obvious response to what I am saying is that maintainability must be taken in moderation like anything else. But my response to that is that a goal, or a philosophy, ought to be judged by the sorts of thought processes it engenders, and that "maintainability" in my experience is often a watchword for hubris and incompetence.
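
    To make the "Go To Definition" complaint above concrete, here is a minimal sketch (hypothetical names) of the shape being criticized: an abstraction with exactly one implementation, where navigating to the definition from a call site lands on the abstract member rather than on the code that actually runs.

    [code language="C#"]
    public abstract class Sensor
    {
        // "Go To Definition" on sensor.Read() stops here...
        public abstract double Read();
    }

    // ...while the only implementation in the codebase lives elsewhere.
    public sealed class GpsSensor : Sensor
    {
        public override double Read()
        {
            // the code a reader actually wants to see
            return 0.0;
        }
    }
    [/code]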



  • @bridget99 said:

    It's really a bit narcissistic to spend too much time making one's code maintainable. To do so assumes that it's inevitable that the code will actually make it to production, and that users will actually use it and allow for its continued existence. These things are simply not true of most code...

    Wow. Well, maybe your code isn't likely to make it to production. For those of us who have jobs doing this, that's just an absurd statement if I've ever seen one. Then again, you did express a preference for FOSS, so maybe that's why you think maintainability is unimportant and that most code isn't going to be used in production..



  • @morbiuswilters said:

    @bridget99 said:
    It's really a bit narcissistic to spend too much time making one's code maintainable. To do so assumes that it's inevitable that the code will actually make it to production, and that users will actually use it and allow for its continued existence. These things are simply not true of most code...

    Wow. Well, maybe your code isn't likely to make it to production. For those of us who have jobs doing this, that's just an absurd statement if I've ever seen one. Then again, you did express a preference for FOSS, so maybe that's why you think maintainability is unimportant and that most code isn't going to be used in production..

    Hey Morbs - we are in agreement on something <grin>

    To Bridgit - Over 98% of projects (intented for production use) have made it to production. Exactly 3 out of 118 did not. The average (median) duration in production use is slightly over 10 years. One system (Naval vessel central control) was designed 1979-1982, first deployed in 1984, is stil in use, but is scheduled to be replaced by 2015. There have been continual updates to this system over the past nearly 30 years. Another system (industrial control) was developed nearly 15 years ago, has been expanded to control other types of systems, yet the core implementation has remained (virtually untouched). Maintainability is 100% the key to success.

     ps: your "admit that breaking "Go To Definition" is a cost of OOP" line of thought is clearly an indication of problems with your perceptions. The base class (or interface) IS the definition - there is nothing broken, regardless of the number of derived classes. If you actually need to know the details of a derived class to understand the functionallity when presented with a reference/pointer/etc to a base declaration, then you have already violated LISKOV



  • @morbiuswilters said:

    @bridget99 said:
    It's really a bit narcissistic to spend too much time making one's code maintainable. To do so assumes that it's inevitable that the code will actually make it to production, and that users will actually use it and allow for its continued existence. These things are simply not true of most code...

    Wow. Well, maybe your code isn't likely to make it to production. For those of us who have jobs doing this, that's just an absurd statement if I've ever seen one. Then again, you did express a preference for FOSS, so maybe that's why you think maintainability is unimportant and that most code isn't going to be used in production..

    People have widely varying definitions of the words "used" and "production." There are plenty of good programmers engaged in writing the crapware that ships with laptops, printers, etcetera. Do some people use this junk? I suppose somebody must, but they're probably stupid or using it by accident. Certainly no one is clamoring for Crapware 2.0, to be lovingly crafted atop that original, maintainable Crapware 1.0 design. Why waste time on such a thing? The company selling the laptops, printers, etc. will get rid of the crapware if they're smart, or rewrite it using new technologies if they're dumb. Better to devote one's energy to actually making Crapware 1.0 good, on the slim chance that someone might actually get something useful out of the product. That's what most developers miss... they deliver a functionally lacking product that otherwise conforms to their own petty little notions of good architecture, and when confronted with the real uselessness of their product as released, they spout whiny little platitudes about "doing things right" and about "my failure to plan" not being an emergency. And you can whine about my little "Crapware" example, but I'd be willing to bet that real crapware developers are several wooden heads up on the metaphorical totem pole from you, sir. At least they are writing for an external customer audience and are held to some notion of commercial standards. That puts them in at least the top half of developers prestige-wise, and I still don't see the case for "maintainability," at least not the way most programmers seem to define it.



  • @TheCPUWizard said:

    @morbiuswilters said:

    @bridget99 said:
    It's really a bit narcissistic to spend too much time making one's code maintainable. To do so assumes that it's inevitable that the code will actually make it to production, and that users will actually use it and allow for its continued existence. These things are simply not true of most code...

    Wow. Well, maybe your code isn't likely to make it to production. For those of us who have jobs doing this, that's just an absurd statement if I've ever seen one. Then again, you did express a preference for FOSS, so maybe that's why you think maintainability is unimportant and that most code isn't going to be used in production..

    Hey Morbs - we are in agreement on something <grin>

    To Bridgit - Over 98% of projects (intented for production use) have made it to production. Exactly 3 out of 118 did not. The average (median) duration in production use is slightly over 10 years. One system (Naval vessel central control) was designed 1979-1982, first deployed in 1984, is stil in use, but is scheduled to be replaced by 2015. There have been continual updates to this system over the past nearly 30 years. Another system (industrial control) was developed nearly 15 years ago, has been expanded to control other types of systems, yet the core implementation has remained (virtually untouched). Maintainability is 100% the key to success.

     ps: your "admit that breaking "Go To Definition" is a cost of OOP" line of thought is clearly an indication of problems with your perceptions. The base class (or interface) IS the definition - there is nothing broken, regardless of the number of derived classes. If you actually need to know the details of a derived class to understand the functionallity when presented with a reference/pointer/etc to a base declaration, then you have already violated LISKOV

    It's BRIDGET (and then, six words later, it's INTENDED... won't waste any more time on spelling).



    If you really developed things in the early 1980s that lasted 20+ years, I doubt that you did it using OOP, which was one of the two "false gods" that I mentioned. I'd further wager that the original creators of this architecture didn't spend their time adding layers of speculative generality that might be needed later. I doubt they had the time.


    @TheCPUWizard said:

    your "admit that breaking "Go To Definition" is a cost of OOP" line of thought is clearly an indication of problems with your perceptions. The base class (or interface) IS the definition - there is nothing broken, regardless of the number of derived classes. If you actually need to know the details of a derived class to understand the functionallity when presented with a reference/pointer/etc to a base declaration, then you have already violated LISKOV




    It's naive to think that I can just look at the names of the methods from the base class and know what they do. I have to look at the implementation. I can look at the implementation of a single function, or of multiple methods; but I have to look at something, and what matters is how easy it is to get to this something. Burying it behind a layer of indirection is purest stupidity. How am I supposed to know what the method implementation(s) actually do? From a document written in a natural language like English or a UML diagram? Surely these dog-and-pony show "artifacts" do not suffice for the actual programmers.



  • @bridget99 said:

    People have widely varying definitions of the words "used" and "production." There are plenty of good programmers engaged in writing the crapware that ships with laptops, printers, etcetera. Do some people use this junk? I suppose somebody must, but they're probably stupid or using it by accident. Certainly no one is clamoring for Crapware 2.0, to be lovingly crafted atop that original, maintainable Crapware 1.0 design. Why waste time on such a thing? The company selling the laptops, printers, etc. will get rid of the crapware if they're smart, or rewrite it using new technologies if they're dumb. Better to devote one's energy to actually making Crapware 1.0 good, on the slim chance that someone might actually get something useful out of the product. That's what most developers miss... they deliver a functionally lacking product that otherwise conforms to their own petty little notions of good architecture, and when confronted with the real uselessness of their product as released, they spout whiny little platitudes about "doing things right" and about "my failure to plan" not being an emergency. And you can whine about my little "Crapware" example, but I'd be willing to bet that real crapware developers are several wooden heads up on the metaphorical totem pole from you, sir. At least they are writing for an external customer audience and are held to some notion of commercial standards. That puts them in at least the top half of developers prestige-wise, and I still don't see the case for "maintainability," at least not the way most programmers seem to define it.

    I'm going to go out on a limb and say the people writing 1 gig bloatware for HP printers aren't in the top half of developers prestige-wise. I imagine most are a lot like Nagesh. Regardless, every project I've ever worked on made it into production, many at quite a large scale.



  • @bridget99 said:

    How am I supposed to know what the method implementation(s) actually do? From a document written in a natural language like English or a UML diagram? Surely these dog-and-pony show "artifacts" do not suffice for the actual programmers.

    Hmm.. I'd imagine the vast majority of developers code against an API using documentation without consulting the implementation. Hell, in many cases you don't even have direct access to the implementation.

    bridget99 is one of my favorite pseudo-trolls, along with zzo38 and Nagesh. I can't tell if they're brilliant performance artists or just really, really dumb.



  • @morbiuswilters said:

    @bridget99 said:
    How am I supposed to know what the method implementation(s) actually do? From a document written in a natural language like English or a UML diagram? Surely these dog-and-pony show "artifacts" do not suffice for the actual programmers.

    Hmm.. I'd imagine the vast majority of developers code against an API using documentation without consulting the implementation. Hell, in many cases you don't even have direct access to the implementation.

    bridget99 is one of my favorite pseudo-trolls, along with zzo38 and Nagesh. I can't tell if they're brilliant performance artists or just really, really dumb.

    Yeah, that's fine if you're writing the .NET Framework or OpenGL. That's not the code I'm working on at my job and those programmers aren't the ones I'm criticizing. If you're writing code at someplace that's not making a library for external developers (as is the case with most people) and you write your code like it's the .NET Framework, then you can expect my derision, and that does not make me a troll. Do you guys not go over these things at OOP bible camp?



    EDIT: I thought about this some more, and even if you're writing the .NET Framework it's stupid to worry about maintainability. Look how much they've deprecated from .NET 1.0: ArrayList; all of WinForms; everything having to do with concurrency. WPF came and is going, and WebForms has been rewritten. They deprecate or rewrite most of version N by version N+2. Come to think of it, my other example, OpenGL, is not that different.



  • @bridget99 said:

    . If you're writing code at someplace that's not making a library for external developers (as is the case with most people).

    I treat EVERY piece of code that I write exactly that way. One goal is re-use, so that means it is a library. Staff changes, so eventually everyone is an "external developer".

    @bridget99 said:

    .If you really developed things in the early 1980s that lasted 20+ years, I doubt that you did it using OOP, which was one of the two "false gods" that I mentioned. I'd further wager that the original creators of this architecture didn't spend their time adding layers of speculative generality that might be needed later. I doubt they had the time.

    I WAS the original creator of the architecture. While the term OOP and the principles were emerging at the same time, they were not in common usage. The particular system was written in a custom language, PL/9, which was similar to "C" but was specifically targeted towards features of the Motorola 6809 processor [the language, other than the code generator, was based largely on MTL, which I had created a few years earlier. PL/9 was licensed to Tektronix in the mid-1980's, but changes in both hardware and software caused it to have a very limited life after that, although it is still used today for systems designed during that time].

    However there were many similarities. Every "component" was defined via an API using a descriptor similar to IDL. The compiler converted this into a set of pointers, and all invocations were via these pointers. The pointers for a given instance could also be updated. The result was very much equivalent to interfaces and virtual methods in modern OOP.

    A large amount of the design time did go into "speculative generality that might be needed later" simply because we knew that various systems on the ship(s) would be changed over the years. If the system was not designed for maintainability in the face of significant changes [for example, a mechanical gyro-compass to a GPS system - this occurred circa 1989] that could not even be conceived at the time of original implementation, it would be doomed before it started.

    Also deprecation is not removal, it is merely an indication that something "better" now exists, and that there is the possibility it will be removed in a future version. WPF is still going strong (there is some debate on Silverlight), WebForms are (largely) intact although alternatives have been added to provide options better suited for specific conditions.

     



  • @TheCPUWizard said:

    I treat EVERY piece of code that I write exactly that way. One goal is re-use, so that means it is a library. Staff changes, so eventually everyone is an "external developer".

    @TheCPUWizard said:

    However there were many similarities. Every "component" was defined via an API using a descriptor similar to IDL. The compiler converted this into a set of pointers, and all invocations were via these pointers. The pointers for a given instance could also be updated. The result was very much equivalent to interfaces and virtual methods in modern OOP.

    I think I would have enjoyed working at a place like that. I've had the misfortune (?) of working at some places that were very disorganized, as well as some places that tried to be organized (and had the very best classification stickers on their boxes) but just didn't cut it in reality. The skills I've developed are largely defensive in nature: coding quickly, working around issues, vaporizing other people's excuses, and being damn sure without really having the time to be.

    There was only one time I ever saw someone write an internal library and actually get it into production (and actually get other developers to use it), and he probably had a little extra pull because of some family connections. This code was actually not that different from what you described... there was a "sensor" superclass that accommodated a range of position, orientation, and speed signals in a marine application. At some higher level I think it even tied in with a "thruster" class. This code wasn't bad at all, but it was no more or less deserving of immortality than any number of other things I've written or seen others write. If we'd lost the code, or if we had been forced to use a new language for some reason, we could have re-created the code fairly quickly. Had the code originally been written to work with, say, only magnetic compasses, plain GPS, and a single rudder, I do not think it would really have been that difficult to expand it nonetheless. There's an argument to be made that it's better to look at and refactor repetitive code after-the-fact than it is to try and anticipate, say, what things are common between each different kind of sensor at a whiteboard session taking place before each thing has really been made to work.




    @TheCPUWizard said:

    Also deprecation is not removal, it is merely an indication that something "better" now exists, and that there is the possibility it will be removed in a future version. WPF is still going strong (there is some debate on Silverlight), WebForms are (largely) intact although alternatives have been added to provide options better suited for specific conditions.


    I can't imagine Microsoft will be making many modifications to any of those technologies, though. The code probably hasn't even changed recently.



  • @bridget99 said:



    Legibility is often not served by such undertakings. My favorite example is what happens when code is written around a base class in .NET or Java (polymorphism). One of the best things about an IDE is the "Go To Definition" context menu item (or similar). Often, though, base class definitions are unsatisfactory for a programmer attempting to read through the code. The implementation is really in the derived classes. That is supposed to be the entire point of OOP, and "Go To Definition" is simply a casualty of OOP.

    Well established OOP design principles advise us to prefer containment to sub-classing if you need to share methods; to use interfaces rather than sub-classing if you have diverse objects to which you want to apply similar actions. Sub-classing should be limited to cases when the "is-a" relationship holds between the subclass and the super-class.

    @bridget99 said:

    ... "maintainability" in my experience is often a watchword for hubris and incompetence.

    That's a real head-scratcher. Certainly it's not my experience at all.

    For that matter, I don't think we should be moderate about pursuing maintainability. I understand that the first thing the code needs to do is "work", but the old saw that code is read many, many more times than it is written is not any less true because it is repeated so often.
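
    A rough sketch of that advice, with hypothetical names: behavior is shared by containing a helper rather than inheriting from it, and an interface lets unrelated types be treated uniformly without any "is-a" claim.

    [code language="C#"]
    public interface IReportable
    {
        string ToReportLine();
    }

    public class CurrencyFormatter
    {
        public string Format(decimal amount) { return amount.ToString("C"); }
    }

    // Containment: Invoice reuses the formatter by holding one, not by deriving from it.
    public class Invoice : IReportable
    {
        private readonly CurrencyFormatter _formatter = new CurrencyFormatter();
        public decimal Total { get; set; }

        public string ToReportLine() { return "Invoice total: " + _formatter.Format(Total); }
    }

    // An unrelated type that the same reporting code can handle through the interface.
    public class Shipment : IReportable
    {
        public string TrackingNumber { get; set; }

        public string ToReportLine() { return "Shipment: " + TrackingNumber; }
    }
    [/code]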


  • BINNED

    @morbiuswilters said:

    bridget99 is one of my favorite pseudo-trolls, along with zzo38 and Nagesh. I can't tell if they're brilliant performance artists or just really, really dumb.
    I'm not sure there is a meaningful difference between trolls and the willfully ignorant.



  • @rstinejr said:

    Well established OOP design principles advise us to prefer containment to sub-classing if you need to share methods; to use interfaces rather than sub-classing if you have diverse objects to which you want to apply similar actions. Sub-classing should be limited to cases when the "is-a" relationship holds between the subclass and the super-class.

    While I don't disagree with the statement, it has nothing to do with the discussion. There is no difference between someone else implementing an interface, and then YOU only having the definition/documentation for the interface available, and someone else subclassing a concrete base class, and then YOU only having the definition/documentation of the base class.

    The root of the issue is twofold.

    1) Is the base (interface or concrete class) sufficiently defined and documented such that one can understand all possible functionality of any potential derived class (or interface implementation) without ever needing to see said derived classes and/or implementations?

    2) Are derived classes and/or interface implementations validated such that they do not violate the Liskov substitution principle?
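
    For point 2, the classic textbook illustration of a violation (not from this thread, names hypothetical): a derived class that changes the base contract, so code written against a base reference gets surprised.

    [code language="C#"]
    public class Rectangle
    {
        public virtual int Width { get; set; }
        public virtual int Height { get; set; }
        public int Area { get { return Width * Height; } }
    }

    // Substitutability is broken: setting Width silently changes Height too.
    public class Square : Rectangle
    {
        public override int Width
        {
            get { return base.Width; }
            set { base.Width = value; base.Height = value; }
        }
        public override int Height
        {
            get { return base.Height; }
            set { base.Width = value; base.Height = value; }
        }
    }

    // Rectangle r = new Square();
    // r.Width = 4; r.Height = 5;  // a Rectangle caller expects Area == 20
    // Console.WriteLine(r.Area);  // prints 25
    [/code]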



  • @blakeyrat said:

    @ekolis said:
    @bridget99 said:
    educated to worship false gods (OOP, maintainability)
    OOP and maintainability are false gods? What, are you a COBOL programmer with 40 years of experience or something? Granted OOP can be overdone, but at the very least surely maintainability is a GOOD thing...
    Guys Bridget is the worst programmer on this forum, anything she says is anti-advice. Do the opposite of it.

    True that. Bridget refuses to service the message loop in one of her programs, and refuses to fix it because she says Microsoft is to blame.


  • @frits said:

    @blakeyrat said:

    @ekolis said:
    @bridget99 said:
    educated to worship false gods (OOP, maintainability)
    OOP and maintainability are false gods? What, are you a COBOL programmer with 40 years of experience or something? Granted OOP can be overdone, but at the very least surely maintainability is a GOOD thing...
    Guys Bridget is the worst programmer on this forum, anything she says is anti-advice. Do the opposite of it.

    True that. Bridget refuses to service the message loop in one of her programs, and refuses to fix it because she says Microsoft is to blame.

    I think bridget might be a boy. He/she used to go under a manly username.



  • @PedanticCurmudgeon said:

    @morbiuswilters said:
    bridget99 is one of my favorite pseudo-trolls, along with zzo38 and Nagesh. I can't tell if they're brilliant performance artists or just really, really dumb.
    I'm not sure there is a meaningful difference between trolls and the willfully ignorant.

    I disagree. Trolls can make you laugh, make you think; they can illuminate things far more concisely than a straightforward discussion could. I mean look at Nagesh: either he's someone who is using the power of mockery to illustrate the folly of outsourcing, or he's a literal demonstration of the folly of outsourcing.

    The willfully ignorant are just idiots.



  • @frits said:

    @blakeyrat said:

    @ekolis said:
    @bridget99 said:
    educated to worship false gods (OOP, maintainability)
    OOP and maintainability are false gods? What, are you a COBOL programmer with 40 years of experience or something? Granted OOP can be overdone, but at the very least surely maintainability is a GOOD thing...
    Guys Bridget is the worst programmer on this forum, anything she says is anti-advice. Do the opposite of it.

    True that. Bridget refuses to service the message loop in one of her programs, and refuses to fix it because she says Microsoft is to blame.




    I know perfectly well how to continue servicing a message loop while doing other things. What I said on that topic was that the way Microsoft publicly calls out programs that don't do this, in front of the end user, is unwarranted. I'm the one paying these clowns. Displaying a message that says "Not Responding" is an example of a vendor insulting its customer. "Not Responding" should actually read something like "Hard at Work... Please Wait." Would that really be so damn difficult?

    I consider this a breaking change released with Windows XP. Breaking changes are bad, and they were atypical of Microsoft before the release of XP. Beyond all that, rank-and-file programmers writing what amount to GUI-driven batch processes shouldn't have to worry about parallelism just to amuse Microsoft. That kind of crap betrays the fact that the nerds have taken over at Redmond to an extent that's really detrimental.

    Nowhere in my commentary did I state that I refused to ever service the message loop per Microsoft guidelines. In applications that are developed to commercial standards, I have no choice. I'm not happy about this, but "pumping messages" at all times is really just one of the many little exercises in pointless wheel-spinning that I, as a Windows developer, must endure.



    This is not some off-the-wall viewpoint. Clearly, the insular little community here at WorseThanFailure.net has no problem allowing their tool vendors to lead them around like bullocks and capons, but the level of frustration with Microsoft's post-XP philosophy is rising among more open and hard-working communities. Take this person's blog as an example:



    "Microsoft UI technologies are, and have been for the last five years, in a state of complete instability… There are lots of negative side-effects to this sort of instability. The one that hits me most is that it creates a battle between maintenance and innovation. Again, speaking of my own project, I have plenty of ideas about how to improve Caliburn.Micro. I think some of them are minor niceties, but others are more along the innovative lines. But, you will never see any of them come to fruition. Why? Because the instability of the underlying platform, the constant release of new platforms…has put my project into a state of perpetual maintenance. I can’t innovate because I’m still trying to deal with the differences in WP7.5 and I’ve got developers (who can blame them?) banging down my door wanting to know when WinRT/Metro version will be available. It’s been discussed much in recent years as to why innovation seams to happen in non-Microsoft open source and on other platforms like Mac…while very little happens in Windows software. Could it be because we are spending our time re-writing everything every two years? and don’t have time to develop anything forward thinking?



    If you don't share his feelings (and mine), i.e. if you don't mind being put on a treadmill by the Guthries, Ozzies, and Ballmers of the world, then you're probably not doing anything terribly worthwhile. Like most .NET developers, you're probably just drawing a paycheck. You probably don't have any great aspirations of developing anything truly innovative or useful.


    @rstinejr said:

    Well established OOP design principles advise us to prefer containment to sub-classing if you need to share methods; to use interfaces rather than sub-classing if you have diverse objects to which you want to apply similar actions. Sub-classing should be limited to cases when the "is-a" relationship holds between the subclass and the super-class.




    In other words, "well-established OOP design principles" tell us to be very wary of actually using OOP. (I don't give OOP one iota of credit for interface-based programming, which predates OOP, and "containment" is just a fancy way of saying "use a variable having an abstract data type." Again, this is not really OOP, it's a much older idea.) It never ceases to amaze me that people try and argue for OOP when they know quite well that OOP in its original, novel, pure form (encapsulation and implementation inheritance) is utter crap.



    @morbiuswilters said:
    I think bridget might be a boy. He/she used to go under a manly username.




    I have explained this before. Bridget99 is the ID of the smartest person on IMDB. My nickname is a tribute originating from a time when I accidentally locked out the other account.



    @TheCPUWizard said:
    Are derived classes and/or interface implementations validated such that they do not violate the Liskov substitution principle?




    I'd be very careful with that Liskov Substitution thing. I should be able to substitute instances of any class derived from a base class for any other at will while still preserving "correctness?" I think I get what she meant there, e.g. that the circumstances under which one subclass exhibits undefined or unsafe behavior ought to be the same as for any other derived class. But to hope for "correctness" seems a bit much. It seems doctrinaire to insist that, in your example, the magnetic compass subclass can truly be substituted in for the GPS subclass without destroying correctness. If a magnetic compass is what's actually hooked up, then the GPS class is not correct. Certainly, it should fail gracefully, and I think that's what Liskov meant. But even this small modicum of substitutability is very difficult to codify or ensure in any widely used OO language, and I have a big problem with that. (Java has "throws" but .NET has absolutely nothing, at least nothing that is consistent or mandatory.)



    If this is really such a big deal to good OOP, it ought to have a first-class representation in any real OO language. It shouldn't just be some abstract concept evident only in some lengthy design artifact on a shelf somewhere. Like those "well-established OOP design principles" that rstinejr pointed to in his comment, Liskov substitution is (revealingly) something that popped up only after years of OOP failures. It's not OOP. It's the pretty wallpaper that was created to cover over the cracks in OOP.



    Ultimately, you may say that we just exhibit some semantic differences over what we call OOP and agree on some fairly important things. I say that names matter, and that if we're to agree on anything, we need to get rid of (or at least radically deëmphasize) the terms "OOP" and "maintainability."



  • @TheCPUWizard said:

    1) Is the base (interface or concrete class) sufficiently defined and documented such that one can understand all possible functionality of any potential derived class (or interface implementation) without ever needing to see said derived classes and/or implementations?





    How could this ever be possible? If natural language and computer language were really so freely interchangeable, couldn't we all just program from some combination of English and UML diagrams? In reality, natural language is just not particularly helpful as a medium for describing code. Edsger Dijkstra dispensed with that notion years ago.



    Someone raised the example of commercial libraries like .NET, which most people use without seeing the actual implementation. But it's misleading to think that this is a fully-documented library, or even a well-documented library, or even that such a thing is possible.



    No one ever really codes around a library using its documentation. When you learned .NET, or "stdio.h," or whatever library you use, you probably didn't just read the documentation from top to bottom and write your program. You read some, speculated as to what it really meant, tried a few things in code to refine your understanding and repeated this process. By a certain, relatively early point, you were probably coding much more than you were reading. By the time you finished your first real application, your understanding of that supposedly well-documented library had almost certainly changed substantially compared to that point in time when you had only read about it.



    I do respect your expertise and perspective, and documentation is good. I write much more documentation than most developers do. I've written formal proofs about my code, for example, in both academic and professional settings.



    Earlier I posted that I would have enjoyed working at a workplace with the sort of structure and discipline that yours seems to exhibit. This is very true; given a Navy-sized budget, a little monopoly market to sell into, and a well-defined chain-of-command, I'd enjoy nothing better than to construct the sort of enduring, polymorphic edifice that you seem to take pride in having constructed.



    This is not a luxury that most of us have, though, and even in the realms where one would hope to find such a development style (defense, robotics, heavy industry, etc.) people are increasingly drifting away from such formalism. It's just too easy to get undercut in the marketplace. Your ISO-9001 stamp (or SOLAS approval) doesn't have to come from DNV any more. It can come from a Russian or Chinese class society. Your Engineers and PhDs don't have to come from an American university. The competition is fierce. "Agile" programming can mean the difference between actually selling a commercial product, and going back to writing VBA applications for nail parlors and chiropractors. There are still governments, navies, pure science labs, electrical utilities, etc. where it's possible to do things using BDUF (and a heavy, time-consuming emphasis on natural language), but these opportunities are growing fewer in number.



    Please don't interpret any of this to mean that there is not a right way to develop software. I believe in specifications. If they must emerge from an iterative, back-and-forth process, then this process should at least be defined beforehand. I'm a big believer in working at no higher a level of abstraction than that which allows one to actually understand what's going on at runtime.



    At the same time, I think that people here, and many of the powers-that-be in corporate IT, have no idea what they're doing. This doesn't make me a troll, or an idiot, it just means that I can recognize a profession that's still in its barber/surgeon era and have, accordingly, developed a reasonable wariness of its practitioners.



  • @morbiuswilters said:

    @frits said:

    @blakeyrat said:

    @ekolis said:
    @bridget99 said:
    educated to worship false gods (OOP, maintainability)
    OOP and maintainability are false gods? What, are you a COBOL programmer with 40 years of experience or something? Granted OOP can be overdone, but at the very least surely maintainability is a GOOD thing...
    Guys Bridget is the worst programmer on this forum, anything she says is anti-advice. Do the opposite of it.

    True that. Bridget refuses to service the message loop in one of her programs, and refuses to fix it because she says Microsoft is to blame.

    I think bridget might be a boy. He/she used to go under a manly username.

    I think Morbius might be a boy too, though I prefer to think the avatar is accurate



  • @morbiuswilters said:

    I think bridget might be a boy. He/she used to go under a manly username.
    TopC0der?



  • @Jaime said:

    @morbiuswilters said:
    I think bridget might be a boy. He/she used to go under a manly username.
    TopC0der?

    beau-something, I think.



  • @cmccormick said:

    @morbiuswilters said:
    @frits said:

    @blakeyrat said:

    @ekolis said:
    @bridget99 said:
    educated to worship false gods (OOP, maintainability)
    OOP and maintainability are false gods? What, are you a COBOL programmer with 40 years of experience or something? Granted OOP can be overdone, but at the very least surely maintainability is a GOOD thing...
    Guys Bridget is the worst programmer on this forum, anything she says is anti-advice. Do the opposite of it.

    True that. Bridget refuses to service the message loop in one of her programs, and refuses to fix it because she says Microsoft is to blame.

    I think bridget might be a boy. He/she used to go under a manly username.

    I think Morbius might be a boy too, though I prefer to think the avatar is accurate

    "Morbius" is definitely a boy's name. My avatar isn't a photo of me nor is it meant to indicate my gender. But whatever floats your boat..



  • @morbiuswilters said:

    My avatar isn't a photo of me nor is it meant to indicate my gender. But whatever floats your boat..

    Actually I suspect that Morbs is an undercover Na'vi, sent to spy on us...



  • @bridget99 said:

    @rstinejr said:
    Well established OOP design principles advise us to prefer containment to sub-classing if you need to share methods; to use interfaces rather than sub-classing if you have diverse objects to which you want to apply similar actions. Sub-classing should be limited to cases when the "is-a" relationship holds between the subclass and the super-class.




    In other words, "well-established OOP design principles" tell us to be very wary of actually using OOP....

    Hmm. Don't think I'd put it exactly that way. :-)

    In my opinion, perhaps a minority opinion, the most important aspect of OOP is information hiding, so that you can design, implement, debug and test a program in pieces instead of having to keep the entire thing in your head all the time. This is not a bad thing; there is a reason that not all of us use Basic.

    And what problem are you having with understanding "maintainability"? Does this really need to be redefined or even formally defined? It's the simple notion that some skilled programmer who is not an expert in the code at hand can read it and understand it with reasonable effort, particularly if the software needs to be modified.

    I think you're making a straw man argument. You have a private definition of "maintainability" and you are railing against it.



  • @rstinejr said:

    In my opinion, perhaps a minority opinion, the most important aspect of OOP is information hiding, so that you can design, implement, debug and test a program in pieces instead of having to keep the entire thing in your head all the time.
     

    If that is the most important aspect of OOP to you, then that, in fact, sounds to me like a fundamental misunderstanding. It has nothing to do with implementing a program in pieces, since that is a thing that should be done regardless of whether you're in love with OOP.

    I consider OOP to be a style where the language allows a programmer to express the business objects in code in a very direct and explicit way.

    For example, I built this site that listed construction machines, and allowed users to modify pictures and information about these machines. So, naturally, I had a class named "Machine()" around which I built everything. It's a very natural way of structuring code.

    It was indeed a black box, and in that it matches your sentiment of 'information hiding', I suppose. Any other programmer could come in and use my Machine class to do things with machines on some page without having to be familiar with the shitty code that was actually contained by its methods.
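
    A rough sketch of the kind of class being described (all names hypothetical): the public surface is what the rest of the site uses, and the messy details stay behind it.

    [code language="C#"]
    using System.Collections.Generic;

    public class Machine
    {
        private readonly List<string> _photoPaths = new List<string>();

        public int Id { get; private set; }
        public string Model { get; set; }

        public Machine(int id, string model)
        {
            Id = id;
            Model = model;
        }

        // Pages call this without caring how photos are validated or stored.
        public void AddPhoto(string path)
        {
            // ...validation, resizing, storage details hidden in here...
            _photoPaths.Add(path);
        }

        public IEnumerable<string> Photos { get { return _photoPaths; } }
    }
    [/code]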



  • @morbiuswilters said:

    @Jaime said:
    @morbiuswilters said:
    I think bridget might be a boy. He/she used to go under a manly username.
    TopC0der?

    beau-something, I think.

     

    I think it was beau99


  • BINNED

    @morbiuswilters said:

    I disagree. Trolls can make you laugh, make you think; they can illuminate things far more concisely than a straightforward discussion could. I mean look at Nagesh: either he's someone who is using the power of mockery to illustrate the folly of outsourcing, or he's a literal demonstration of the folly of outsourcing.

    The willfully ignorant are just idiots.

    Idiots can also do those things, just not intentionally.

