Member-Level SqlConnection



  • This may belong in the How-To section, but we'll start here.

    One of the formal "standards" my team is subjected to (because "that's how we've always done it", rather than for any specific technical reason) involves SqlConnection objects (we're a .NET shop using C#). When designing a class which needs to access the database, we're expected to create a single class-level SqlConnection object and use it for all database calls (no dedicated data-access classes either). Any method requiring database access calls a Connect() method, which resets the connection string and timeout (both retrieved from a config file). The method then checks the connection state, and if not open, opens the connection. It does its data access and closes the connection. This is often done several times within multiple-thousand-line methods (another WTF for another post). Spotty exception handling and complete ignorance of .Dispose() virtually guarantee resource leaks.
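
    To make the pattern concrete, here is a rough sketch of what such a class might look like. Connect() is from the description above; the class name, config key, and method bodies are my own illustration (note that SqlConnection.ConnectionTimeout is read-only, so the timeout typically rides along inside the connection string as "Connect Timeout=..."):

    ```csharp
    using System.Configuration;
    using System.Data;
    using System.Data.SqlClient;

    // Hypothetical reconstruction of the "standard" being described.
    public class AgentRepository
    {
        // One member-level connection shared by every data-access method.
        private SqlConnection _conn = new SqlConnection();

        private void Connect()
        {
            // Reset the connection string (and, via it, the timeout) from
            // config on every single call, then open if not already open.
            _conn.ConnectionString =
                ConfigurationManager.AppSettings["DbConnectionString"];
            if (_conn.State != ConnectionState.Open)
                _conn.Open();
        }

        public string GetStuff()
        {
            Connect();
            try
            {
                // ... run commands against _conn ...
                return "...";
            }
            finally
            {
                _conn.Close();  // per the standard; Dispose() is never called
            }
        }
    }
    ```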




    I recently had the opportunity for some rare green-field development to support a new system. I decided to ignore the written standard and implemented a dedicated data-access class. I create connection objects locally via a Using construct:



    [code]
    public string GetStuff()
    {
        using (SqlConnection conn = new SqlConnection())
        {
            ...snip...
        }
    }
    [/code]




    I was called out in a large nasty email today (CCing the boss, of course) by a senior dev for 'defying standards'. He claims that since my new code is radically inconsistent with the rest of the codebase, future maintainers will be confused and hamstrung (his words). I've read in many different places (including The Pragmatic Programmer, Code Complete, Clean Code and several places in the MSDN library) that tightly-scoped locals are often a better idea than member-level objects. The increases in readability and maintainability should be self-evident. I shared some of this wisdom, quoted my sources and offered to show him the pages in said texts which support my approach. His response? "This is the STANDARD. You will comply."




    Before I call this guy on his bullshit, I'm wondering if indeed I am TRWTF here. I've been a professional dev for only 5 years; this guy has 20 years on me. Is it in fact a better approach to have a member-level SqlConnection object which is initialized once, used by 1..n methods and (hopefully) disposed when the class goes out of scope?




    In my mind, the negligible performance cost of the create-and-destroy approach shown above is massively offset by the increased readability and maintainability (enhanced with rigorous application of the Single Responsibility Principle) when designing methods.




    I welcome thoughts and comments.



  • Doing something for reasons limited to "It's the standard" is a WTF unless they can actually back up the stance.



  • ASP.net does connection pooling by default, so it shouldn't matter one way or the other. (Meaning I'd prefer yours, since more local = more better.) Yours just (potentially) adds a few more connections to the pool-- but it won't affect the DB server at all.

    Unless you're turning connection pooling off, then that would be the Real WTF.

    Edit: you didn't say ASP.net, but I meant to type ADO.net so our errors kind of even-out. Rest of the post still applies.



  • @Smitty said:

    This may belong in the How-To section, but we'll start here. One of the formal "standards" my team is subjected to (because "that's how we've always done it", rather than for any specific technical reason) involves SqlConnection objects (we're a .NET shop using C#). When designing a class which needs to access the database, we're expected to create a single class-level SqlConnection object and use it for all database calls (no dedicated data-access classes either). Any method requiring database access calls a Connect() method, which resets the connection string and timeout (both retrieved from a config file). The method then checks the connection state, and if not open, opens the connection. It does its data access and closes the connection. This is often done several times within multiple-thousand-line methods (another WTF for another post). Spotty exception handling and complete ignorance of .Dispose() virtually guarantee resource leaks.

    I recently had the opportunity for some rare green-field development to support a new system. I decided to ignore the written standard and implemented a dedicated data-access class. I create connection objects locally via a Using construct:

    [code]
    public string GetStuff()
    {
        using (SqlConnection conn = new SqlConnection())
        {
            ...snip...
        }
    }
    [/code]

    I was called out in a large nasty email today (CCing the boss, of course) by a senior dev for 'defying standards'. He claims that since my new code is radically inconsistent with the rest of the codebase, future maintainers will be confused and hamstrung (his words). I've read in many different places (including The Pragmatic Programmer, Code Complete, Clean Code and several places in the MSDN library) that tightly-scoped locals are often a better idea than member-level objects. The increases in readability and maintainability should be self-evident. I shared some of this wisdom, quoted my sources and offered to show him the pages in said texts which support my approach. His response? "This is the STANDARD. You will comply."

    Before I call this guy on his bullshit, I'm wondering if indeed I am TRWTF here. I've been a professional dev for only 5 years; this guy has 20 years on me. Is it in fact a better approach to have a member-level SqlConnection object which is initialized once, used by 1..n methods and (hopefully) disposed when the class goes out of scope?

    In my mind, the negligible performance cost of the create-and-destroy approach shown above is massively offset by the increased readability and maintainability (enhanced with rigorous application of the Single Responsibility Principle) when designing methods.

    I welcome thoughts and comments.

    The existing standard was a good idea before about 1995.  It is now unnecessary due to connection pooling.  Even if you create a connection, use it, close it, and dispose it, it will be kept alive and open in a connection pool.  When you acquire the next one, it will simply be pulled from the pool.  All of the work the current team does comes for free with C#.  Your code will not be measurably slower than anybody else's and in some cases may be faster.  I hope your application doesn't have any background threads because the current model will not work in a multi-threaded environment.
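
    The create-use-dispose pattern described above looks roughly like this (a sketch; the query and table name are placeholders, and a real connection string is assumed):

    ```csharp
    using System.Data.SqlClient;

    // With pooling on (the ADO.NET default), Dispose() returns the physical
    // connection to the pool rather than tearing it down; the next
    // new SqlConnection with the same connection string reuses it.
    public string GetStuff(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT TOP 1 Name FROM Stuff", conn))
        {
            conn.Open();  // cheap: pulled from the pool if one is idle
            return (string)cmd.ExecuteScalar();
        }                 // Dispose() here returns the connection to the pool
    }
    ```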

    Please have these people stay far away from web applications.



  • @blakeyrat said:

    ASP.net does connection pooling by default, so it shouldn't matter one way or the other. (Meaning I'd prefer yours, since more local = more better.) Yours just (potentially) adds a few more connections to the pool-- but it won't affect the DB server at all.

    Unless you're turning connection pooling off, then that would be the Real WTF.

    Edit: you didn't say ASP.net, but I meant to type ADO.net so our errors kind of even-out. Rest of the post still applies.

    Actually, his might remove some connections from the pool.  Class level connections tend to hang around idle for a lot longer than you might expect.  People do funny stuff like stick an instance of a class in the Session state and suddenly, you have connections with a 0.001% duty cycle.  If you always immediately release connections back to the pool, you are guaranteed to use the fewest number of connections possible.


  • @Jaime said:

    Please have these people stay far away from web applications.

    Wouldn't you know it? Half of my team exists primarily to support an internal web app for agents. Most instances of the fuckhead pattern I described are in ASP.NET code-behinds.



  • My thoughts begin with this: when someone looks at your module, they will see that it is different to the rest of the system.  If I saw something like that, I would immediately start searching for why it is different.  I mean, there has to be a reason, right?

    The senior dev is part of the WTF, for two reasons:
    * Blind adherence to the standard, because it is the standard, rather than for an explained reason
    * He started by calling you out on it in public.

    However, he is only part of the WTF here.  You are the other part, because you just disregarded the standard and went off in your own direction, without consulting anyone first.  There may be a reason for the selection of a particular structure (I'm not qualified to comment on the relative virtue or lack thereof of .NET stuff, sorry), but don't forget the other reason for picking a standardised approach: parts that are, in essence, "noise" (i.e. just implementation details) should be consistent in appearance, so that they can fade into the background, which helps you and your colleagues to concentrate on the stuff that solves the functional requirements.

    Caveat: it's just my opinion.

     



  • I'm afraid that my opinion is as follows: you have misunderstood the purpose of your job - it is not to produce the most efficient, up-to-date and maintainable code, it is to do what your bosses tell you to do. Ideally those would be the same thing, but life is rarely like that! You may believe that your way of coding this is better (and for what it's worth I agree) but the people responsible for paying you money and ensuring you still have a job on Monday morning disagree. As far as they are concerned, what you did is about the same as coding your latest contribution in Perl or Java because you believed they were better languages for this application.

    </$0.02>



  • @Steve The Cynic said:

    My thoughts begin with this: when someone looks at your module, they will see that it is different to the rest of the system.  If I saw something like that, I would immediately start searching for why it is different.  I mean, there has to be a reason, right?

    The senior dev is part of the WTF, for two reasons:
    * Blind adherence to the standard, because it is the standard, rather than for an explained reason
    * He started by calling you out on it in public.

    However, he is only part of the WTF here.  *You* are the other part, because you just disregarded the standard and went off in your own direction, without consulting anyone first.  There may be a reason for the selection of a particular structure (I'm not qualified to comment on the relative virtue or lack thereof of .NET stuff, sorry), but don't forget the other reason for picking a standardised approach: parts that are, in essence, "noise" (i.e. just implementation details) should be consistent in appearance, so that they can fade into the background, which helps you and your colleagues to concentrate on the stuff that solves the functional requirements.

    Caveat: it's just my opinion.

     

    So, all progress is bad?  The current situation exists because the person who sets the standards refuses to even think about changing, even though the tools he is using have changed.  I could understand a few years of sticking with the old standard to avoid confusion, but this change is fifteen years overdue.  The OP mentioned that this is a new project, so there is no potential to have a confusing code base.

    As for the technical details, This is a big deal, especially in a web application.  Holding on to database connections in a web application can go very wrong very fast.  I'm guessing there are a bunch of other standards that only exist to fend off the potential problems that can be caused by the connection methodology.  For "noise" issue, the accepted way of obtaining connections is very simple and clean.

    I've had experiences with people who don't believe in connection pooling, fortunately usually in an educational setting where they were willing to experiment.  All it takes is to build a simple app that loops a few thousand times to demonstrate that connection pooling is as fast as manually managing connections, but relieves you of the burden of managing connections.
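
    A minimal version of that demonstration might look like this (a sketch: it assumes a reachable SQL Server, and the connection string is a placeholder):

    ```csharp
    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;

    // Open and close a pooled connection a few thousand times; compare the
    // elapsed time against a design that holds one connection open. After
    // the first iteration, each Open() is just a pool checkout.
    static void TimePooledOpens(string connectionString, int iterations)
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();   // pool checkout, not a new physical connection
            }                  // Dispose() returns it to the pool
        }
        sw.Stop();
        Console.WriteLine(iterations + " pooled open/close cycles: "
            + sw.ElapsedMilliseconds + " ms");
    }
    ```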



  •  I have to echo an agreement with the other posters.  You've run into the dysfunction that plagues so many large companies.  Authoritarian and seemingly arbitrary rules rule over reason and scientific method.   It sucks. I've only been doing this for 7 years and sometimes i surprise myself on how jaded I've gotten.  

     

    I was in a very very similar position that you are in now, only in java.  And it was a far worse wtf.  All db access HAD to happen via one gigantic class where every method was static and had goofy names and handled everything from localization to math utils and everything in between. All db access happened via one method that returned an array of strings that represented your result set.  We were trying to port the app over to a more popular and modern RDBMS.  This was a web application.

    We weren't allowed to use an ORM per the boss.   I made a nice set of DAOs, you could flip a property in a .properties file and go from the old db to the new one (a lot of the SQL syntax was different between the two).  I liked it, the other dev liked it, so we showed the boss. Guess what happened next?

    I got fired.  

    I'd suggest if this senior dev/architect dude has the confidence of The Guys With Long Titles, I'd just do it his way.

    I work for companies like this, i last a few years before the WTFery becomes too great and I move on to browner pastures and experience WTFery of a different variety.  Hang in there, try not to get too frustrated.  If their pattern messes things up and you can PROVE it you MAY be able to shift blame off yourself and your team, but also in my experience that doesn't happen either.  Usually the newest app is blamed for all problems within the IT world whether they are responsible for said problems or not.



  • @Steve The Cynic said:

    My thoughts begin with this: when someone looks at your module, they will see that it is different to the rest of the system.  If I saw something like that, I would immediately start searching for why it is different.  I mean, there has to be a reason, right?

    The senior dev is part of the WTF, for two reasons:
    * Blind adherence to the standard, because it is the standard, rather than for an explained reason
    * He started by calling you out on it in public.

    Your second bullet point is spot-on, I didn't even consider that when I replied... calling you out on it in public is by far the biggest WTF here, and I can't imagine that manager is pleasant to work for.

    @Steve The Cynic said:

    However, he is only part of the WTF here.  You are the other part, because you just disregarded the standard and went off in your own direction, without consulting anyone first.  There may be a reason for the selection of a particular structure (I'm not qualified to comment on the relative virtue or lack thereof of .NET stuff, sorry), but don't forget the other reason for picking a standardised approach: parts that are, in essence, "noise" (i.e. just implementation details) should be consistent in appearance, so that they can fade into the background, which helps you and your colleagues to concentrate on the stuff that solves the functional requirements.

    Except that the current policy is potentially very harmful: if, for example, someone puts that class (and therefore the DB connection) in the Session, you could end up exhausting your allowed DB connections. As Jaime mentioned.

    I think we can universally agree that, all else being equal, the smallest scope for a particular variable is best, right? This coding standard conflicts with that common sense rule without adding any benefit.



  • @CaptainCaveman said:

     I have to echo an agreement with the other posters.  You've run into the dysfunction that plagues so many large companies.  Authoritarian and seemingly arbitrary rules rule over reason and scientific method.   It sucks. I've only been doing this for 7 years and sometimes i surprise myself on how jaded I've gotten.  

     

    I was in a very very similar position that you are in now, only in java.  And it was a far worse wtf.  All db access HAD to happen via one gigantic class where every method was static and had goofy names and handled everything from localization to math utils and everything in between. All db access happened via one method that returned an array of strings that represented your result set.  We were trying to port the app over to a more popular and modern RDBMS.  This was a web application.

    We weren't allowed to use an ORM per the boss.   I made a nice set of DAOs, you could flip a property in a .properties file and go from the old db to the new one (a lot of the SQL syntax was different between the two).  I liked it, the other dev liked it, so we showed the boss. Guess what happened next?

    I got fired.  

    I'd suggest if this senior dev/architect dude has the confidence of The Guys With Long Titles, I'd just do it his way.

    I work for companies like this, i last a few years before the WTFery becomes too great and I move on to browner pastures and experience WTFery of a different variety.  Hang in there, try not to get too frustrated.  If their pattern messes things up and you can PROVE it you MAY be able to shift blame off yourself and your team, but also in my experience that doesn't happen either.  Usually the newest app is blamed for all problems within the IT world whether they are responsible for said problems or not.

    There's a good chance that you did it wrong.  A gigantic class full of static methods is essentially a function library and actually works very efficiently and statelessly.  It's likely that your DAOs were inferior to the existing system in at least one (possibly insignificant) way, and the "Guys With Long Titles" saw that one deficiency as a big deal.  Most likely, somewhere in the presentation of the alternate data access scheme, you either came off as not knowing something fundamental, or you accidentally offended the original author of the current scheme.

    New stuff gets into big companies all the time.  They all seem to love web applications, because web applications reduce the desktop support component of the cost of ownership.  It's not impossible to get these things changed, it just takes a reasonable amount of political finesse.  Unfortunately, developers and office politics often don't mix.



  • Yes. Call him out. Ask him to explain what makes the company standards right when Microsoft - who bloody well INVENTED .NET - says they're wrong. Ask him how many C# books he's read lately. Ask him why well-known and respected authors and figures in the C# community recommend that you do it the right way as opposed to the company's way. Speak to your fellow developers and find out if they support your viewpoint.

    Tell him that his 20-odd years of experience count for nothing when working with .NET because .NET isn't the same as languages that were available in 1995 (when he had 5 years experience). He might have 8 years experience with .NET if he's lucky (Java experience doesn't count because Java is shitty. If he tries to equate Java experience to .NET experience, call him out as a fraud.).

    If that doesn't bring him around to the correct viewpoint, start making noise about so-called "senior" developers who aren't up to scratch with the latest languages... how dead wood should be culled before it kills the whole tree... etc. Make it known to management that you disagree with the current standard and this developer's backing of it, and let them know exactly why your way is correct. (Hyperlinks, blog posts, book citations, not just "I think".)

    I've had this argument before at my current company. I had far less than 5 years' experience at the time but I got the correct standards railroaded through anyway - partly by ignoring the current "standards" and just writing Microsoft-standard code, partly by getting other developers to agree with my way of thinking, and partly through sheer bloody-mindedness and force of will. It did help, however, that I had convinced the head honchos with well-reasoned arguments.

    If all that fails, play the trump card. Ask management this: if the company is hiring developers who can only code to standards (i.e. templates), how do you expect them to produce good code? If there are problems with current applications (and I'm sure there are), is it not perhaps because standards are being applied too rigidly, or that outmoded standards are being used? Perhaps the current standards are not only holding development back, but indirectly costing the company money? (As soon as you mention money, you've got their attention, so you may want to bring this point up first.) Perhaps inflexible standards are causing good developers to not want to work for their company? (If you're feeling really daring at this point, you might hint that you're one such developer.)

    Your argument has got to be watertight, it's got to be punchy, and most of all, it's got to get your senior developer by his balls and skewer them to the wall. No ifs or buts, no pussyfooting - you're right, you know you're right, and that's the end of it.

    Finally, let it be noted that I am not against standards, but I am most vehemently against standards for the sake of standards, or standards that are no longer relevant to the language or paradigm. Standards have to adapt and change with time, just like everything else in programming, otherwise they become worse than useless: they become a hindrance to talented developers.



  • @Jaime said:

    So, all progress is bad?  The current situation exists because the person who sets the standards refuses to even think about changing, even though the tools he is using have changed.  I could understand a few years of sticking with the old standard to avoid confusion, but this change is fifteen years overdue.  The OP mentioned that this is a new project, so there is no potential to have a confusing code base.
     

    Tut.  I didn't even begin to say that progress is bad.  Progress for the sake of progress is as bad as blindly clinging to the past, but if there is a good reason for it (something better than "there's a new technology, gotta use it", not that that applies here), then go for it.

    But  don't just charge off into the (to others) unknown territory of a new way of doing things without making sure that others are on board.  The dark side of thinking that because it is a new project, there is no issue of a confusing code base is that you make the organisation's collective code base fractured, with one part over here that does it in a completely different way from the rest.  Are you suggesting that the OP's company divides its developers into "old way" and "new way" groups to avoid the issue of stylistic confusion that arises from having distinct ways of doing X?

    Note that I'm not saying that one should never introduce or adopt new ways of doing things, new tools and technologies, etc.  I'm saying that it should be done in a controlled and disciplined way, not by a single developer sneaking under the radar.

    Oh, and my notion of "noise" has been misunderstood.  The choice of methods for managing connections does not contribute to meeting the functional requirements - what reports to produce, what data entry pages exist, etc.  The way you manage connections is an implementation detail (it may have large footprint, mind you) that will impact non-functional requirements like performance and scalability.  For an in-house web application, unless the company is very large, scalability may well not be an issue, so maintenance programming issues become significant.

    All that said, however,  I suspect that the coding / design standard is an office-politics power-play issue rather than a technical issue.



  • @blakeyrat said:

    I think we can universally agree that, all else being equal, the smallest scope for a particular variable is best, right? This coding standard conflicts with that common sense rule without adding any benefit.
     

    In general, I'd agree that we should reduce the scope (and lifetime, a different thing) of variables, subject to the usual caveat of not reducing them too far.  However, knowing how far is too far can be difficult sometimes.

    And the coding standard does have one benefit - a consistent look and feel across the entire codebase.  This is important, because maintenance programming is often done by people who aren't familiar with all details of a particular codebase.  If I know that (picking an example relevant to this discussion) the DB connections are always managed this way, then I can concentrate on understanding the business logic.  If one part of the codebase does it a different way, I then need to expend neuron time understanding that difference as well as the business logic (mostly so I can understand where the divergent bit ends and the business logic begins).

    As I've said elsewhere, my objection is not to changing the technological concepts that underpin the codebase.  No, I object to (i.e. I think it is a bad idea) undisciplined inclusion of new (new-to-the-project, not only-just-invented) structures apparently on a whim.  (Yes, I know there are good reasons to prefer the new way in the OP's case, but he approached the introduction of the new way in completely the wrong fashion.)



  • @Jaime said:

    It's likely that your DAOs were inferior to the existing system in at least one (possibly insignificant) way, and the "Guys With Long Titles" saw that one deficiency as a big deal.
     

     Which is why I had the other dev peer review it first.  And as for a presentation, i didn't even get the chance to show any of it off. As soon as the boss saw that I was suggesting we move away from the godawful static class he tossed me out on my can immediately.  He will never know if it was right or wrong, and frankly I don't care (anymore, but I did at the time).  I had ruffled his feathers a month or so beforehand by showing him what sql injection is and how it could be used to gain admin access to the system without valid credentials which gave me access to pretty much all employee information.  He was livid to say the least, and I should have known better than to continue talking to him about db access.

     OP, you can take these guys advice and charge sword drawn into a nasty political battle, but just understand the risk.  Most people in IT have no idea wtf they are doing, and sounds like this guy has an ego that is both tremendous and fragile.  Definitely say something to him in private for the nastygram though, public shame should never be tolerated.

    @Jaime said:

    Unfortunately, developers and office politics often don't mix.

    QFT

     I gotta say this is one area of this field that I may never understand.



  • Rather than pulling quotes from various responses, I'll just say it was really helpful to hear viewpoints from both sides. It has given me some things to think about. Thanks to those who responded.


    Also, I found out that the 'standard' I mentioned in my original post came about because the senior dev decided to write it down several years ago, because that was how he liked to do it at the time, and nobody bothered to question it. There was no discussion or consensus, and it wasn't done for any specific technical reason. I was told this morning that unless I can prove my approach to be 'unequivocally superior', I have to change it to match the 'standard'.



    Of course, I believe that if my code performs an identical function, with similar (or better) performance, and is easier to read and maintain, then unequivocal superiority has been achieved. I think the suggestion that I get the other devs on board with my way of thinking was a good one.



  • @Smitty said:

    I think the suggestion that I get the other devs on board with my way of thinking was a good one.
     

    Yup. Next time, get them on your side first, before introducing the new standard.

