But it should not be part of the code....



  • I was developing some new functionality and had added a few trace statements along the way. I usually preface them with my initials so they're a bit easier to see in the logs, and when I'm done, a simple search finds everything I want to delete. Also, when testing new update statements, I tend to do stuff like: update ... set ... where 1=2 and rest-of-conditions. This way, I can at least execute through the untested SQL without it actually doing anything until I get around to testing that point in the code. Of course, these are only for debugging and always get removed. I also like to make a periodic check-in (when I get things stable) so that the code doesn't get lost if my laptop does.

    One of my teammates was finally promoted to team lead. What's the first thing he does? He does a massive code review in the development (in-work) branch of the entire system (in and of itself not a bad thing). Then he calls a big meeting and starts complaining about things that are not "right".

    What are these print statements with people's initials on them? This can't be released! Um, they're debug statements and will be removed.

    What are these 1=2 conditions in the sql? You may as well just delete the whole statement and not execute it! Um, it's in the middle of being debugged. If you want to critique stuff, look at the labeled branch.

    But this is in source control! The customers will see it! Um, only if I get fired or die before I'm finished working on this, and whoever picks it up doesn't finish the job... We all debug like this!

    But it should not be part of the code....

    Yeah, this is going to work out well :P

     



  •  For the sake of argument, I'll assume using a debugger is impractical. Maybe you should wrap the print calls with something that only actually writes to logs if the app is running in a dev environment. Maybe also try using a development DB so it's ok if you actually execute the sql.



  • This IS being done in a dev environment. We actually have separate dev, demo, qa, pre-prod (for user testing), prod and dr environments with separate file servers and DBs.

    In this case, for a bunch of uninteresting unrelated reasons, using a debugger is impractical. This guy just went off because he saw debugs in code that was still being debugged!

    I don't mind code reviews after I'm done, but while I'm still writing it?

     


  • ♿ (Parody)

    His apparent obliviousness as a developer makes it clear why management promoted him.



  • You could always do something like:

    printf ("%c%c: [text of log message]", 0x61, 0x62);

    Which would print:

    ab: [text of log message]

    Then you can still search for "0x61, 0x62" (or whatever characters you'd use). You could even make it a preprocessor macro, or something along those lines.


  • Discourse touched me in a no-no place

    Is this a new job, or the one you were on about on your last rant?
    @snoofle said:

    This IS being done in a dev environment. We actually have separate dev, demo, qa, pre-prod (for user testing), prod and dr environments with separate file servers and DBs.

    Time to ask your newly appointed mangler for a development server/repository that you are allowed to put debug code into.



    Or has he been re-educated yet?



    Or are you expected to host the debug code on your local machine, which could go down with a couple of weeks' work on it because you aren't allowed to check debug code into the dev repository? (Sadly the last case is closest to my own experience, but there is no 'dev' repository (being rectified soon), it's not normally longer than a week's worth of work, and the guy who bothers reviewing the check-ins isn't too bothered about debug code.)



  • Maybe your teammate is a bit too picky, and I assume you're working on this alone, so that if someone else does an update it doesn't affect them.

    However, if you have a development DB, there really is no reason to comment code out. It looks like you need to write more unit tests.



  • @snoofle said:

    This IS being done in a dev environment. We actually have separate dev, demo, qa, pre-prod (for user testing), prod and dr environments with separate file servers and DBs.

    In this case, for a bunch of uninteresting unrelated reasons, using a debugger is impractical. This guy just went off because he saw debugs in code that was still being debugged!

    I don't mind code reviews after I'm done, but while I'm still writing it?

     

     

    For the vast majority of people, "checked-in to source control" = "done". If you're not done with it, why'd you check it in?

    I also second the "use a debugger" sentiment.



  • @snoofle said:

    What are these print statements with people's initials on them? This can't be released! Um, they're debug statements and will be removed.


    Personally I'd be more partial to making a dprint function that can be enabled/disabled with a config option. Better yet, use the one provided by the language/library.
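
    Something like this, roughly (C# just for illustration; DEBUG_TRACE and the "SN" initials prefix are made-up names):

    using System;

    // Debug print gated by an environment flag; a no-op everywhere else.
    static class Dbg
    {
        private static readonly bool Enabled =
            Environment.GetEnvironmentVariable("DEBUG_TRACE") == "1";

        public static void Print(string message)
        {
            if (Enabled)
                Console.Error.WriteLine("SN: " + message);  // initials prefix, easy to grep for later
        }
    }

    // Usage: Dbg.Print("about to run the update for customer " + customerId);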


  • ♿ (Parody)

    @blakeyrat said:

    For the vast majority of people, "checked-in to source control" = "done". If you're not done with it, why'd you check it in?
    That's true, though there's done and then there's Done.  Especially for complex stuff.  I suppose that's one area where DVCSes probably shine. Though you can get a similar result by generous usage of branches.  Of course, snoofle didn't tell us what they use, so it's hard to say.

     @blakeyrat said:

    I also second the "use a debugger" sentiment.
    Sometimes logging to stdout/err/file is easier or quicker than using a debugger to figure out what's going on.  I've found this to be especially true with 3rd party components, or with stuff like Hibernate, where you end up executing code that doesn't even exist in a file anywhere.  Plus, you can have a more permanent record of what happened, rather than having to remember what the flow is while you watch the debugger... of course, putting gdb output through tee gives both, and maybe you can do similar stuff with other debuggers.



  • A development environment I've seen being used more and more over the last two years is to just use git or bzr locally for your development needs.  As for loss of work: make backups. I personally use jungledisk and it costs me ~$5 a month.

    A big advantage of having a revision system locally is that you have a more powerful undo feature ;) and you can branch whenever you want/need. Especially when the priority of feature/bug fixes changes it can be handy to have multiple branches locally at once.

    Also it allows you to work with revision control even when there is no internet, which can happen on trains, flights, or in the car.

     

    Having said that, if the way things work has always been to check debug code into a dev version, then he shouldn't bitch about it; he should either propose to have it changed or stfu.



  • @boomzilla said:

    @blakeyrat said:
    For the vast majority of people, "checked-in to source control" = "done". If you're not done with it, why'd you check it in?
    That's true, though there's done and then there's Done.  Especially for complex stuff.  I suppose that's one area where DVCSes probably shine. Though you can get a similar result by generous usage of branches.  Of course, snoofle didn't tell us what they use, so it's hard to say.
     

    True. I guess more accurately, I mean:

    "I don't fault him for assuming things checked in to source control are done, as that's the way most projects work."

    But the general point, "he should have talked to you guys before scheduling a big meeting over it" still applies.

    @boomzilla said:

    Sometimes logging to stdout/err/file is easier or quicker than using a debugger to figure out what's going on.  I've found this to be especially true with 3rd party components, or with stuff like Hibernate, where you end up executing code that doesn't even exist in a file anywhere.  Plus, you can have a more permanent record of what happened, rather than having to remember what the flow is while you watch the debugger... of course, putting gdb output through tee gives both, and maybe you can do similar stuff with other debuggers.

    Possibly... to be honest I don't work with compiled languages anymore, so my debugger has pretty free rein wherever. Flash (and many other runtimes/languages) has a handy command "Trace" for exactly this purpose. The first thing I do when starting a new Javascript project is paste in a tiny Trace function. (Firebug has one also, but it only works in Firefox.)


  • Discourse touched me in a no-no place

    @blakeyrat said:

    For the vast majority of people, "checked-in to source control" = "done". If
    you're not done with it, why'd you check it in?
    I suspect this is possibly (unintended) use of a repository as backup media, to which others are objecting.



    Do you religiously check in your work at the end of the day/week? When you've finished $FEATURE, but there's still debug there for $OTHERFEATURE? When there's 4 weeks' work on your laptop that hasn't been checked in yet?



    Situation appears to be there's a 'dev' repository (in some sort of hierarchy) and someone's checking in code with debug stuff in it. Someone else is complaining about it. Who's 'right' really depends on what the 'dev' repository's for - backup of dev code or sanitised dev code.



  • @PJH said:

    Do you religiously check in your work at the end of the day/week? When you've finished $FEATURE, but there's still debug there for $OTHERFEATURE? When there's 4 weeks' work on your laptop that hasn't been checked in yet?
     

    Well, I'm pretty "agile" (or whatever), so most features I'm done with in 2-3 days.

    The only time I check-in things based on time instead of based on "done-ness" is when I'm starting a new project. Until it actually runs as version 1.0, there's no point to keeping the repository clean at all times.


  • Discourse touched me in a no-no place

    @blakeyrat said:

    The only time I check-in things based on time instead of based on "done-ness" is
    when I'm starting a new project. Until it actually runs as version 1.0, there's
    no point to keeping the repository clean at all times.
    "Fixing other people's crap?" Or "we have new hardware - it needs to work with it." Happily, my edits tend to be the latter. Unhappily they tend to be for technologies that don't work in my country because they were outdated before we took them up. CDMA, I'm looking at you.



  • Here's seconding local VCS like git, bzr, ...

    Make all the debug commits you want in it (and push to a backup somewhere else) and then commit to the HOLY FINALITY SYSTEM only when you're "done."



  • @tafinucane said:

     For the sake of argument, I'll assume using a debugger is impractical. Maybe you should wrap the print calls with something that only actually writes to logs if the app is running in a dev environment. Maybe also try using a development DB so it's ok if you actually execute the sql.

    If your work environment is like mine, congrats - you've just guaranteed that some debugging statements will live forever.  Some of the original authors won't bother removing them, as they only show up in dev.  If someone else tries to remove them, the dev will complain about someone else futzing with *their* code.  ("As a matter of fact, that line *did* have my name on it.")  Worse, if the author leaves, other devs will be too worried about the effects of removing the debugging statement that the great $programmer left, so will block its removal.  This may be especially true if the debugging statement actually causes a problem.

    Not that I've ever worked in such an environment, of course.  Sigh.  (Of course, by "worked", I mean, "given sufficient time and/or developer process freedom to actually complete any of my tasks properly.")

    I don't know what kind of code snoofle is doing, but many debuggers really have difficulties running on parallel code.  Most developers I know who've written a lot of parallel code eschew the debugger as a matter of course, even when writing serial code.



  • I have to disagree with snoofle... a lot. Debug-print statements in your own local code are OK. But in the repo? Just keep a patch queue locally while you're working on something. Everyone says "we'll remove it before shipping", yet almost everyone has a "removing debug statements I forgot about before merging" commit in their main branch anyway. (Does everyone use the same scheme? Do you know everyone's initials? Is no one ever going to update from your branch, polluting their own code with your debugs?) Now... if some problem takes long enough to fix that you commit "work in progress", it definitely needs a proper test anyway. If you need to commit a print-debug statement while fixing code which already has a failing test, I think there will be much more wrong with your code (or the test is too general). What about 1=2? Why not mock the database?

    Unless you're working on some prehistoric code that is so tightly coupled you could never create a test for anything in less than a week. It would make sense in that situation...

    I know that changing what everyone is used to can be annoying... but if the guy doesn't want any broken code in the repos, I can see some good points there. (easier merging, clean change history, easier teamwork, ...) For local work / work in progress, there's always git/hg/bzr/...



  • @superjer said:

    Here's seconding local VCS like git, bzr, ...

    Make all the debug commits you want in it (and push to a backup somewhere else) and then commit to the HOLY FINALITY SYSTEM only when you're "done."

    Thirded.  Also, it doesn't have to be a DVCS.  Setting up a local SVN or CVS repo works fine.



  • @viraptor said:

    What about 1=2? Why not mock the database?

    I'm pretty sure 1=2 is mocking the database.


  • :belt_onion:

    @PJH said:

    @blakeyrat said:
    For the vast majority of people, "checked-in to source control" = "done". If you're not done with it, why'd you check it in?
    I suspect this is possibly (unintended) use of a repository as backup media, to which others are objecting.

    Do you religiously check in your work at the end of the day/week? When you've finished $FEATURE, but there's still debug there for $OTHERFEATURE? When there's 4 weeks' work on your laptop that hasn't been checked in yet?
    Ideally development for $FEATURE and $OTHERFEATURE should happen in separate dev branches. People remove debug code before merging their code to the main dev branch and everybody updates his branch from the latter.


  • :belt_onion:

    @morbiuswilters said:

    @viraptor said:

    What about 1=2? Why not mock the database?

    I'm pretty sure 1=2 is mocking the database.

    I'm pretty sure it's not.

    Instead of putting a hard reference to your data access layer, you configure access to your DAL via your IoC container of choice (I use Unity). So during tests I configure my IoC to use my development DAL which contains code such as: 

    public Person GetPersonById(int id)
    {
        return new Person()
        {
            id = id,
            name = "Morbius",
            lastName = "Wilters",
            email = "MorbiusWilters@theDailyWTF.com"
        };
    }
    instead of going to the database.
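
    Roughly, the wiring looks like this (IPersonDal and DevPersonDal are made-up names for illustration; RegisterType/Resolve are the actual Unity calls):

    using Microsoft.Practices.Unity;

    public class Person
    {
        public int id;
        public string name, lastName, email;
    }

    // Hypothetical abstraction over the data access layer.
    public interface IPersonDal
    {
        Person GetPersonById(int id);
    }

    // The stubbed dev DAL from above, wrapped up as a class.
    public class DevPersonDal : IPersonDal
    {
        public Person GetPersonById(int id)
        {
            return new Person
            {
                id = id,
                name = "Morbius",
                lastName = "Wilters",
                email = "MorbiusWilters@theDailyWTF.com"
            };
        }
    }

    public static class CompositionRoot
    {
        public static IPersonDal BuildDevDal()
        {
            var container = new UnityContainer();
            container.RegisterType<IPersonDal, DevPersonDal>();    // dev/test wiring
            // container.RegisterType<IPersonDal, SqlPersonDal>(); // production wiring instead
            return container.Resolve<IPersonDal>();
        }
    }

    Production code asks the container for IPersonDal and never knows (or cares) which implementation it got.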


  • @snoofle said:

    trace statements /.../ when I'm done, a simple search finds everything I want to delete.

    #ifdef DEBUG
    printf("...");
    #endif

    @snoofle said:

    update ... set ... where 1=2 and rest-of-conditions

    Ever heard of ROLLBACK? That way you can actually see if your update worked as it should have.
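
    Something along these lines with ADO.NET, say (table and column names invented; the point is just the BeginTransaction/Rollback bracket around the real UPDATE):

    using System;
    using System.Data.SqlClient;

    static class UpdateDryRun
    {
        // Run the real UPDATE (no 1=2 guard), look at what it did, then throw it away.
        public static void TryUpdate(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                {
                    var cmd = new SqlCommand(
                        "UPDATE Orders SET Status = @status WHERE CustomerId = @custId",
                        conn, tx);
                    cmd.Parameters.AddWithValue("@status", "SHIPPED");
                    cmd.Parameters.AddWithValue("@custId", 42);

                    int rows = cmd.ExecuteNonQuery();   // how many rows it would have hit
                    Console.WriteLine("Would have updated {0} row(s)", rows);

                    tx.Rollback();                      // nothing actually changes
                }
            }
        }
    }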



  • @bjolling said:

    @morbiuswilters said:

    @viraptor said:

    What about 1=2? Why not mock the database?

    I'm pretty sure 1=2 is mocking the database.

    I'm pretty sure it's not.

    I'm pretty sure he meant "mocking" as in "taunting, making fun of".

    Also, I agree snoofle should use a local repository.

    Even the "local zip file in another directory or drive" method is better than commiting your temporary code to a shared repo.



  • @morbiuswilters said:

    I'm pretty sure 1=2 is mocking the database.
     

    That was the last straw, thought Database.

    "Fuck you all," he said. "I'm returning everything."



  • @blakeyrat said:

    @snoofle said:

    This IS being done in a dev environment.

     

    For the vast majority of people, "checked-in to source control" = "done". If you're not done with it, why'd you check it in?

    I also second the "use a debugger" sentiment.

    Is this being checked into a dev. branch, or trunk?



  • @bjolling said:

    @morbiuswilters said:

    @viraptor said:

    What about 1=2? Why not mock the database?

    I'm pretty sure 1=2 is mocking the database.

    I'm pretty sure it's not.

    Instead of putting a hard reference to your data access layer, you configure access to your DAL via your IoC container of choice (I use Unity). So during tests I configure my IoC to use my development DAL which contains code such as: 

    public Person GetPersonById(int id)
    {
        return new Person()
        {
            id = id,
            name = "Morbius",
            lastName = "Wilters",
            email = "MorbiusWilters@theDailyWTF.com"
        };
    }

    instead of going to the database.

     

    This is a good method, since you should be using stored procs anyway. You can have your dummy DB filled with dummy stored procs that just return the same account each time (like the above example)... then when you move to a testing DB, you don't have to do jack to your code to get it to run.

    Plus you get the 99,999,999 other benefits of using stored procs that most programmers are completely ignorant of and piss me off due to that.



  • @blakeyrat said:

    Plus you get the 99,999,999 other benefits of using stored procs
     

    Like ease of maintenance, and transparency.

    Wait...

     

    I'm a sarcastic pragmatist, thus CASE WHEN (you can give me a good method to achieve those two OR convince me that I don't want those two) AS you're all set!


  • ♿ (Parody)

    @dhromed said:

    @blakeyrat said:

    Plus you get the 99,999,999 other benefits of using stored procs
     

    Like ease of maintenance, and transparency.

    Wait...

    Yeah, stored procedures have a place, but if you're using them for everything (or even most things), then you probably have a pretty small system and a simple database.  Or you're a masochist.



  • @dhromed said:

    @blakeyrat said:

    Plus you get the 99,999,999 other benefits of using stored procs
     

    Like ease of maintenance, and transparency.

    Wait...

     

    I'm a sarcastic pragmatist, thus CASE WHEN (you can give me a good method to achieve those two OR convince me that I don't want those two) AS you're all set!

    What part are you missing now? Running a couple of .sql files during deployment isn't hard, is it? And what's stopping you from looking at the sproc when you need to know what it's doing?

    Unless you have a Nazi DBA in charge, I guess.



  • As a general response...

    • it's a dev branch, not trunk: we have 7 different dev branches, one for each ongoing major project
    • we are allowed to check in stuff that doesn't work - to a dev branch - as long as it compiles, and doesn't prevent other functions from executing (eg: no stack dumps)
    • it's Java, so no #ifdef DEBUG
    • our release cycle is 7-8 weeks of coding before formal testing comes into play
    • for little fixes (1-2 days), we all wait until it's done before committing to the repository
    • for multi-month tasks, we do frequent check-ins
    • several folks are working on the same (large) feature, so it's sort of necessary to check in stubbed-out mockups (service interfaces, GUIs, etc.) right away to allow for parallel development
    • the new team lead's boss (our common manager) wants us to frequently check stuff into the repository (last year, someone's car was stolen, with laptop in trunk, and 5 weeks of work went with it)
    • debuggers are nice, but if you have a very long process, the further into it you get, the longer it takes to step through it; sometimes it's just faster to put log statements at key points so you can "see" the state of things and break right at the thing you're debugging
    • we're not allowed to use a local VCS - they want everything in the main repository
    • unrelated, but it was mentioned: we're not allowed to use stored procs, so we jump through all sorts of hoops to manage transactions with individual sql statements (don't get me started)
    • the new team lead has been on this team, working with these rules, for nearly 4 years, so he knows the drill; he was just trying to show the upper boss that he could "manage" - he got lots of laughs (e.g. folks saying "get lost") when he pulled this

     



  • @blakeyrat said:

    Plus you get the 99,999,999 other benefits of using stored procs that most programmers are completely ignorant of and piss me off due to that.
    Like their portability to other DBMSes?  I remember that one time we moved from Oracle to Teradata and, thanks to SPs, everything went smoothly.



  • @blakeyrat said:

    @snoofle said:

    This IS being done in a dev environment. We actually have separate dev, demo, qa, pre-prod (for user testing), prod and dr environments with separate file servers and DBs.

    In this case, for a bunch of uninteresting unrelated reasons, using a debugger is impractical. This guy just went off because he saw debugs in code that was still being debugged!

    I don't mind code reviews after I'm done, but while I'm still writing it?

     

     

    For the vast majority of people, "checked-in to source control" = "done". If you're not done with it, why'd you check it in?

    I also second the "use a debugger" sentiment.

    Personally, I tend to check my code in:

    • Right before submitting something to code review (into a dev branch).
    • When my boss says I need to take care of something else first.  If it's "this is your new top priority", I check in what I was working on right then (in a branch, of course).  If I'm told, "Drop everything and do this", I'll check in what I was working on when I'm able to get back to the change, before I do anything else.
    • After each feature or significant sub-feature - we're not as agile as I'd like to be; frequently, we're required to roll out multiple related features at once.
    • When I realize I've missed a requirement that'll require a significant re-design, I save and check in (on a branch.)
    • Right before I do a spike.
    • At the end of each spike (for posterity - what did I try?  Sometimes, these come in handy.)
    • Right after running tests and finding out it passes more tests, but not all of them.
    • Right before going on vacation.

    That having been said, snoofle's exact experience couldn't happen where I work, because we have many disparate development branches going on at any given time, and no 'main dev branch'. This is especially true on efforts to clean up legacy code, as they tend to be low priority - one works on them for a bit, is given a new top priority, and then checks in the intermediate state.  (Yes, I know, it *should* be less true in the legacy code cleanup efforts - since it happens over a longer period of time, we have more opportunity to work together.  However, what really happens: Dev4 wrote some really insane code when he first started.  He now has a dev branch to fix it, but he wasn't really getting anywhere, and I didn't like where he was going.  I started a branch to fix it, jumping off the second version of his dev branch.  Some time later, Dev3 looked at the code, saw that neither the head of Dev4's branch nor the head of my branch even compiled (we both had last checked in due to a higher priority being assigned), and decided to make his own - branching off from the main tree head.)

    Note that most of my group's code is for sysadmin, so there's no really huge projects.



  • @tgape said:

    That having been said, snoofle's exact experience couldn't happen where I work, because we have many disparate development branches going on at any given time, and no 'main dev branch'. This is especially true on efforts to clean up legacy code, as they tend to be low priority - one works on them for a bit, is given a new top priority, and then checks in the intermediate state.  (Yes, I know, it *should* be less true in the legacy code cleanup efforts - since it happens over a longer period of time, we have more opportunity to work together.  However, what really happens: Dev4 wrote some really insane code when he first started.  He now has a dev branch to fix it, but he wasn't really getting anywhere, and I didn't like where he was going.  I started a branch to fix it, jumping off the second version of his dev branch.  Some time later, Dev3 looked at the code, saw that neither the head of Dev4's branch nor the head of my branch even compiled (we both had last checked in due to a higher priority being assigned), and decided to make his own - branching off from the main tree head.)
     

    This sounds like a branch strategy from hell. I'm glad I am not the person who has to merge that into anything useful.



  • @blakeyrat said:

    Plus you get the 99,999,999 other benefits of using stored procs that most programmers are completely ignorant of and piss me off due to that.

     

    Please elaborate.  I've spent a lot of time on a lot of different projects with a lot of different data access philosophies.  Every time someone mentions a benefit that can only be obtained through the use of stored procedures, I usually find that there are twenty other ways to get the same benefit.  The only time I see a stored procedure based solution come out way ahead of a different solution, is when the other solution is either written by a novice or SpectateSwamp.

    For example, I can get every benefit that a traditional stored procedure based solution has by simply encapsulating the data access layer as a web service.  I get the same security and reusability benefits, but I also get to use a sane language, a real debugger, better configuration management, arrays as arguments, and all of the other things that real languages and runtimes have.  Stored procedures only beat straw man alternatives.



  • @blakeyrat said:

    Running a couple of .sql files during deployment isn't hard, is it?
     

    No, but that's not related to the two points I mentioned above or to SPs in general.

    @blakeyrat said:

    what's stopping you from looking at the sproc when you need to know what it's doing?

    Yes, because that's all you ever need to do: know what it's doing. You never have to develop one, debug it, or edit it a few months later. That never happens.

     

    So, can you give me some more of those 99,999,999 benefits?



  • @tgape said:

    Right before I do a shot.
    This is how I read this.  Doesn't everyone? @tgape said:
    At the end of each shot
    Methinks this might not be as good an idea.

     



  • @dhromed said:

    @blakeyrat said:

    Running a couple of .sql files during deployment isn't hard, is it?
     

    No, but that's not related to the two points I mentioned above or to SPs in general.

     

    Ok, tell ya what. You explain to me how using sprocs affects "ease of maintenance, and transparency" and I'll rattle off a few of its benefits.

    @dhromed said:

    Yes, because that's all you ever need to do: know what it's doing. You never have to develop one, debug it, or edit it a few months later. That never happens.

    And that presents a problem... how?

    Debugging a sproc is the same as debugging any other piece of code. Same with editing it a few months later. I seriously and literally have no idea what difficulties you're referring to here.

    BTW, while I'm at it, one of the benefits I've made use of in the past is patching small bugs directly in the sprocs so that we don't have to redeploy the whole she-bang, just re-deploy a single sproc. Not a great idea from a maintainability perspective, but great from a "don't bring down the servers" perspective.



  • Ah, hold on, you keep the SPs outside the DB, in loose .sql files, which you edit in your favourite editor with all the editing and version control benefits this provides, instead of (assuming MSSQL) using that poor excuse for professional software SQL Management Studio Thing?



  • When a "checkin" occurs to the DEVELOPMENT branch, it should be the developer's (the one doing the checkin) BEST POSSIBLE BELIEF that it is 100% functional production code. If it is NOT then it should NOT be exposed to other develpers.



  • @dhromed said:

    Ah, hold on, you keep the SPs outside the DB, in loose .sql files, which you edit in your favourite editor with all the editing and version control benefits this provides, instead of (assuming MSSQL) using that poor excuse for professional software SQL Management Studio Thing?

    Duh? How else would you do it?

    If I'm patching one manually outside of a release, I might use SSMS-- it's quicker. Otherwise, the build process updates the sprocs for me.



  • @TheCPUWizard said:

    When a "checkin" occurs to the DEVELOPMENT branch, it should be the developer's (the one doing the checkin) BEST POSSIBLE BELIEF that it is 100% functional production code. If it is NOT then it should NOT be exposed to other develpers.

    DEVELOPMENT is not the same as QA or PRE-PRODUCTION or PRODUCTION.

    Also:

    @snoofle said:

    • it's a dev branch, not trunk: we have 7 different dev branches, one for each ongoing major project
    • we are allowed to check in stuff that doesn't work - to a dev branch - as long as it compiles, and doesn't prevent other functions from executing (eg: no stack dumps)

    Having said that, I would hope each developer kept their own WC* branch (doesn't sound like it, according to snoofle) and merged back to the "main" dev branch once done. Otherwise, it's merge hell, like tgape described.

    *working copy / worthless crap / won't compile



  • @dhromed said:

    So, can you give me some more of those 99,999,999 benefits?

    A bitch ain't one.



  • @blakeyrat said:

    Ok, tell ya what. You explain to me how using sprocs affects "ease of maintenance, and transparency" and I'll rattle off a few of its benefits.

    I'm still waiting for the benefits, so I'll address this.  Assuming MSSQL again, ease of maintenance is harmed by forcing you to use a 20-year-old language that only recently got a few modifications to be "sort of" modern.  Exception handling has been added, but there is still no native array type.  Available looping and conditional constructs are minimal.  Pass by reference semantics are not available.  Returning values from one procedure called by another is archaic.  SQL statements debug as one line, but the effects are often complex, making the step feature of the debugger almost useless.

    I'm actually OK with the transparency part because the whole point of the data access layer is to hide implementation details.  But, remember, choosing not to use stored procedures doesn't automatically mean that data access encapsulation is ignored, it's simply done using a different technology.

    Your turn.



  • @blakeyrat said:

    BTW, while I'm at it, one of the benefits I've made use of in the past is patching small bugs directly in the sprocs so that we don't have to redeploy the whole she-bang, just re-deploy a single sproc. Not a great idea from a maintainability perspective, but great from a "don't bring down the servers" perspective.

    When I build the data access layer as web services, I can deploy to a running system.  Remove the second node from the fail-over pair, update the second node, add the second node back to the fail-over pair and mark the first for removal, wait for all sessions to move over to the second node, update first node, re-add first node.  If you aren't using some sort of web farm software, the same thing can be done with two virtual directories on one web server and a little creative scripting.



  • @Jaime said:

    I'm still waiting for the benefits, so I'll address this.  Assuming MSSQL again, ease of maintenance is harmed by forcing you to use a 20-year-old language that only recently got a few modifications to be "sort of" modern.  Exception handling has been added, but there is still no native array type.  Available looping and conditional constructs are minimal.  Pass by reference semantics are not available.  Returning values from one procedure called by another is archaic.  SQL statements debug as one line, but the effects are often complex, making the step feature of the debugger almost useless.
     

    It's a query language, not a programming language. If you need a native array type, you're doing something wrong. (If you need to hold intermediate results, use a temp table or a table variable.) If you need looping constructs, outside of a couple very rare cases, you're probably doing something wrong.

    To "debug" SQL, you just read the statement-- if you've written it correctly, you'll get the correct results. If you're talking about optimizing it, then you can look at the execution plan that MS SQL has, which will tell you which portions of the query take the longest.

    I agree that the language is old and uses old-style syntax, but you seem to have SQL confused with a programming language. It's not. Complaining that SQL doesn't have a native array type is like complaining that HTML doesn't have a for() statement, or that Javascript doesn't have a graphics library-- it's missing the point entirely.

    @Jaime said:

    I'm actually OK with the transparency part because the whole point of the data access layer is to hide implementation details.

    Sprocs aren't any less transparent than the alternatives, anyway.

    @Jaime said:

    But, remember, choosing not to use stored procedures doesn't automatically mean that data access encapsulation is ignored, it's simply done using a different technology.

    Fair enough, but since SQL Server already includes sprocs, it seems goofy to re-invent the wheel.

    @Jaime said:

    Your turn.

    They're pre-compiled, so that SQL Server doesn't have to re-make the execution plan each time they run, which (potentially) saves you a decent chunk of CPU, although it's not nearly as important now as it was a decade ago. (SQL Server caches all execution plans now, IIRC, so even ad-hoc queries get the same benefit, assuming they're similar enough to a recently-run ad-hoc.)

    They reduce the lines of code in your project by saving you from having to write your own data encapsulation, also correspondingly reducing the number of potential bugs to solve.

    More finely-grained permissions, most importantly meaning that you can allow a user to run a sproc without allowing them to otherwise access the underlying tables.

    You can return multiple result sets from a single sproc, plus a return value. Although your encapsulation may already allow this, many do not. (Rough sketch at the end of this post.)

    If you're a coder, you can usually off-load sproc work to your DBAs (depends on the asshole-ness of your DBAs and their workload) and save some time.

    Gives you some safety to prevent a coder from screwing up a DB by writing inconsistent data, or data that doesn't make sense in some way if they only have DB access through the sprocs.

    If you don't like sprocs, that's fine. Nobody's holding a gun to your head. But I still think using them should be considered a best practice.
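
    For the multiple-result-sets point, the shape of it in ADO.NET is roughly this (the sproc name and parameters are invented for illustration):

    using System;
    using System.Data;
    using System.Data.SqlClient;

    static class SprocReader
    {
        // Hypothetical sproc dbo.GetOrderWithLines: returns a header result set,
        // a line-items result set, and a RETURN value.
        public static void ReadOrder(string connectionString, int orderId)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.GetOrderWithLines", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@OrderId", orderId);
                var ret = cmd.Parameters.Add("@RETURN_VALUE", SqlDbType.Int);
                ret.Direction = ParameterDirection.ReturnValue;

                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read()) { /* first result set: order header */ }
                    reader.NextResult();                 // move to the next result set
                    while (reader.Read()) { /* second result set: order lines */ }
                }
                Console.WriteLine("sproc returned " + (int)ret.Value);   // RETURN value, populated once the reader is closed
            }
        }
    }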



  • @blakeyrat said:

    It's a query language, not a programming language.
    Oracle's isn't.  It's called PL/SQL, and contains a bunch of the things that apparently SQL Server sprocs lack, like an array class.  I'm pretending that a cursor is close enough to an array. @blakeyrat said:
    They're pre-compiled, so that SQL Server doesn't have to re-make the execution plan each time they run, which (potentially) saves you a decent chunk of CPU
    This is minimal.  

    @blakeyrat said:

    More finely-grained permissions, most importantly meaning that you can allow a user to run a sproc without allowing them to otherwise access the underlying tables.
    Views anyone? @blakeyrat said:
    If you're a coder, you can usually off-load sproc work to your DBAs
    HAHAHHAHHHAHHAHAH You make me laugh @blakeyrat said:
    Gives you some safety to prevent a coder from screwing up a DB by writing inconsistent data, or data that doesn't make sense in some way if they only have DB access through the sprocs.
    Obviously no way to get this any other way than a sproc.



  • Yay picking-apart posts!!

    @belgariontheking said:

    Oracle's isn't.  It's called PL/SQL, and contains a bunch of the things that apparently SQL Server sprocs lack, like an array class.  I'm pretending that a cursor is close enough to an array.

    Look, if you want an array that bad, make a table variable and treat it like an array. I don't get the issue here-- what's the practical difference between a 1-column table variable and an array?

    Microsoft's T-SQL also allows looping and such, but that doesn't mean it necessarily should. SQL is a query language, you feed it a description of what data you want, and it returns that data. You're not supposed to know or care how it loops/iterates/sorts/whatever through the data... if you're looking at it at that level, you're probably doing something wrong. Not every language is a programming language, you know.

    (Now, of course, we all live in the real world, and in the real world there are situations where a query involving a loop is an order of magnitude faster than the non-looping version, which is why T-SQL and presumably PL/SQL include looping constructs. But they should only be used rarely.)

    @belgariontheking said:

    @blakeyrat said:
    More finely-grained permissions, most importantly meaning that you can allow a user to run a sproc without allowing them to otherwise access the underlying tables.
    Views anyone?

    So if you have 10 users, you create 10 different views for every table? Or you could use a single sproc with 10 different user permissions attached to it.

    @belgariontheking said:

    @blakeyrat said:
    If you're a coder, you can usually off-load sproc work to your DBAs
    HAHAHHAHHHAHHAHAH You make me laugh

    Works for me.

    @belgariontheking said:

    @blakeyrat said:
    Gives you some safety to prevent a coder from screwing up a DB by writing inconsistent data, or data that doesn't make sense in some way if they only have DB access through the sprocs.
    Obviously no way to get this any other way than a sproc.

    I'm not saying it's the only way to accomplish it, I'm saying it's an additional layer of protection. Yes, you could use Triggers. Yes, you can use Constraints. Yes, you can use specialized Views. You can, additional to those, use sprocs.

    Seriously, were you molested by a sproc as a child? Why are you so opposed to the idea of them existing? Christ.



  • I feel compelled to point out that if you were using a DVCS and your dev repository was all local, then your team lead wouldn't even be able to see any of this, just whatever's been merged.  And if this guy's that anal about it, it'd probably be easy to convince him to switch.

    Disclaimer: I'm a subversion user, not an Hg fanboy, I'm just saying how neatly it would solve this particular problem.



  • @blakeyrat said:

    Look, if you want an array that bad, make a table variable and treat it like an array. I don't get the issue here-- what's the practical difference between a 1-column table variable and an array?

    Microsoft's T-SQL also allows looping and such, but that doesn't mean it necessarily should. SQL is a query language, you feed it a description of what data you want, and it returns that data. You're not supposed to know or care how it loops/iterates/sorts/whatever through the data... if you're looking at it at that level, you're probably doing something wrong. Not every language is a programming language, you know.

    So, you are suggesting that stored procedures should be a best practice, but the best you can do is essentially "Sure, T-SQL has a lot of weaknesses, but you won't need those features anyways".  Other models provide exactly the same benefits as stored procedures, but without the apologies.

    I have nothing against stored procedures.  However, in my bag of technologies I might use for a certain problem, I only go for stored procedures when there is a compelling reason to do so.  There rarely is a compelling reason.

    As for the exclusion of an array type, I have a specific example that might help show you why it matters.  Imagine you have a multi-select grid of customers in a user interface.  You would like to search for orders for all selected customers.  With an array type on my data access layer, I can simply pass an array of CustomerIDs into a data layer method and get orders back.  Since stored procedures have no array type, I have to either do a nasty workaround or call the stored procedure that takes a single CustomerID iteratively.  I know this was fixed in SQL 2008 with table-valued parameters, but I'm sure you recommended stored procedures long before SQL 2008 came out.
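
    For completeness, the table-valued parameter version looks roughly like this from the C# side (dbo.CustomerIdList and dbo.GetOrdersForCustomers are invented names; SqlDbType.Structured is the real mechanism):

    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    static class OrderQueries
    {
        // Pass the whole set of selected CustomerIDs in one call instead of looping.
        // Assumes conn is already open.
        public static SqlDataReader GetOrders(SqlConnection conn, IEnumerable<int> customerIds)
        {
            var ids = new DataTable();
            ids.Columns.Add("CustomerID", typeof(int));
            foreach (var id in customerIds)
                ids.Rows.Add(id);

            var cmd = new SqlCommand("dbo.GetOrdersForCustomers", conn)
            {
                CommandType = CommandType.StoredProcedure
            };
            var p = cmd.Parameters.AddWithValue("@CustomerIds", ids);
            p.SqlDbType = SqlDbType.Structured;      // table-valued parameter
            p.TypeName = "dbo.CustomerIdList";       // matching user-defined table type on the server
            return cmd.ExecuteReader();
        }
    }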

