It's Time To Get Over That Stored Procedure Aversion You Have



  • @Jaime said:

    This thread is about how the next layer up interacts with the database. How SQL Server internally handles triggers is irrelevant in this context.

    When you start arguing about whether triggers or sprocs are better, such details matter.

    Especially on this forum. :trollface:



  • @abarker said:

    When you start arguing about whether triggers or sprocs are better, such details matter.

    Nope. Since they are both identical internally, the only difference is invocation. If you had mentioned that they were significantly different under the hood, then we might have something to talk about.

    Triggers guarantee that they will always run, stored procedures have to be intentionally run by the caller. That's why triggers are the right answer when you need to modify data "X" every time data "Y" is changed. Sure, you can contrive a scenario where the only way to modify "Y" is to run the set of procedures which all properly modify "X", but it's much easier to make the guarantee with a trigger. Also, the procedure route limits the ways that you can interact with the database. The trigger route doesn't exclude using procedures, but the procedure route excludes the default operation mode of most ORMs.



  • @Jaime said:

    Since they are both identical internally

    See, we agree! They are the same!

    Now you can go back to arguing when to use each.



  • @abarker said:

    See, we agree! They are the same!

    But...... it's irrelevant, as I said several posts ago.

    You said:

    @abarker said:

    When you start arguing about whether triggers or sprocs are better, such details matter.

    "Irrelevant" and "such details matter" are pretty much the opposite of each other. Why do you think that the fact that both do the exact same thing would matter in a discussion about using one in place of the other. If you were informing the group of an important difference, that would be an entirely different thing.



  • Dude, let me have my pedantic joke and move on. Mkay?


  • FoxDev

    does anyone else persistently misread "Aversion" as having that leading "A" swapped out for "Per"?



  • No.


  • FoxDev

    oh. then it is just me.

    carry on.



  • @accalia said:

    does anyone else persistently misread "Aversion" as having that leading "A" swapped out for "Per"?

    No, but I am tempted to create a thread named "It's Time To Get Over That Stored Procedure Fetish You Have"


  • FoxDev

    Fetish, perversion

    tomayto, tomahto.

    😛

    what would you have for content in that thread?



  • @accalia said:

    what would you have for content in that thread?

    Reasons to drop the outdated concept of exclusively using stored procedures for data access and moving to more scalable and maintainable alternatives.



  • @accalia said:

    Fetish, perversion

    tomayto, tomahto.


    Perversion means that you aren't doing it right (at least not right according to current social standards). Fetish means you do something for its own sake - usually applied to things that most people don't see as a good thing. The second is a better fit.


  • FoxDev

    @Jaime said:

    Reasons to drop the outdated concept of exclusively using stored procedures for data access and moving to more scalable and maintainable alternatives.

    you have my attention. fire away. i'll be over after i finish making some popcorn.


  • FoxDev

    @accalia said:

    does anyone else persistently misread "Aversion" as having that leading "A" swapped out for "Per"?

    …I'm listening… ;)

    @accalia said:

    you have my attention. fire away. i'll be over after i finish making some popcorn.

    Save some for me! 😄



  • @boomzilla said:

    I have an extra layer of review and bureaucracy and deployment. Now configuration control is more fragmented. If a query is inside my code, it's easy to see what's deployed with that code. Not nearly so easy to know what code is living inside a schema.

    The typical way I've seen deployments done is to have a collection of both the binary changes and scripts for DDL. Since you need both for the deployment to be successful, that's going to be the case no matter how much SQL is in the application versus in procs. Even if 100% is in the application, you don't necessarily get to see the columns or constraints on the table to ensure you haven't forgotten a NOT NULL column or if your inserts/updates are respecting table constraints.

    @boomzilla said:

    Now I'm dealing with PL/SQL

    My sympathies. I used to work with a guy who came from Oracle-land and his overall opinion of database programming seemed to become much more positive once he got familiar with SQL Server. His take in retrospect is that you absolutely need a full-time DBA if you're running Oracle.

    @boomzilla said:

    But anything inside the DB is more of a PITA to analyze, not to mention debug. Pass.

    How so? I know that you can't exactly drop breakpoints in the middle of an sproc and inspect variables, but I'm able to identify the root cause of most problems fairly quickly based on error messages.



  • @Jaime said:

    No, all paths within the database are known. You still have the same challenge figuring out which applications call which procedure. Also, sp_depends is notoriously inaccurate since deferred resolution was introduced.

    What's so hard about Ctrl+Shift+F for the procedure name in Visual Studio, followed by

    SELECT *
    FROM sys.procedures p
    INNER JOIN sys.sql_modules m ON p.object_id = m.object_id
    WHERE m.definition LIKE '%procedure_name%'
    

    in SSMS (or Redgate's SQL Search, etc.)?

    If you have a proper data layer, the proc calls are likely going to be in only a handful of source files anyhow.

    @Jaime said:

    Yes. And this is just as true if that interface/abstraction layer is written in Java or C# as it would be if it is written in T-SQL. What is your argument for the T-SQL implementation being superior?

    What's your argument for the Java or C# implementation being superior?

    For one, the application doesn't need to concern itself with joins between tables. If you need to switch from joining on, say, an identity column to a GUID, and the consumed data model is for the most part staying the same, the join changes in the stored procedure and the app is none the wiser. Does the business layer really need to know the join predicate?

    @Jaime said:

    I sometimes implement things as stored procedures specifically so I can mark those procedures as "don't cache".

    WITH RECOMPILE?
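    For reference, a minimal sketch of what a "don't cache" proc looks like with that option (table and proc names are hypothetical):

```sql
-- WITH RECOMPILE tells SQL Server not to cache a plan for this proc;
-- a fresh plan is compiled on every call.
CREATE PROCEDURE dbo.GetOrdersByStatus
    @Status varchar(20)
WITH RECOMPILE
AS
BEGIN
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    WHERE Status = @Status;
END
```

    The statement-level OPTION (RECOMPILE) hint is the finer-grained alternative when only one query in the proc has the parameter-sensitivity problem.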

    @Jaime said:

    Business logic starts to creep into the database since some security rules can only be expressed in terms of business processes.

    Isn't it equally likely they'll creep the other direction, given there's overlap of concerns?

    @Jaime said:

    Identity has to be pushed to the db layer. This is really bad for web apps and three-tier apps. It disables connection pooling. It makes password management harder. It forces SQL Server to use the less secure SQL Authentication.

    If you can't use Windows Authentication, then why not have a hybrid approach with a users table and only GRANT stored procedures the app needs to run? That adds a layer of defense should there be an SQLi vulnerability in the app.
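    A minimal sketch of that hybrid approach (all names hypothetical):

```sql
-- App principal with no direct table rights; it can only EXECUTE
-- the procs it has been granted.
CREATE USER AppUser WITHOUT LOGIN;  -- stand-in; map to the app's real login in practice

GRANT EXECUTE ON dbo.InsertOrder  TO AppUser;
GRANT EXECUTE ON dbo.GetOrderById TO AppUser;

-- Even if the app has a SQLi hole, the injected statement runs as
-- AppUser, which can't touch the tables directly.
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Orders TO AppUser;
```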

    @Jaime said:

    It doesn't fix a damn thing. Somebody still needs to implement security properly. The entire list of what the app people could screw up is the same list that the DB people could screw up.

    Security does indeed need to be implemented. It works best in layers.



  • @Groaner said:

    I want to vault every ORM into the heart of the sun or, preferably, go

    I didn't know Go was an ORM!



  • @Jaime said:

    BTW, there are no subtle ways to fire a trigger.

    What about when an INSERT trigger inserts into a table with another INSERT trigger that merges into a rollup table that has a unique key and you're puzzling over why your first insert is causing a key violation?

    Outlandish, you say? It's more common than you think when trigger-happy junior developers put their favorite new toy to use.

    @Jaime said:

    If you use triggers for the tasks they were designed for (denormalization and auditing), it is really simple to thoroughly test them.

    It's harder to make sure that the Law of Unintended Consequences doesn't bite you.

    One of my favorites is how an INSTEAD OF trigger can break MERGE if you forget to implement them for all DML operations.

    @Jaime said:

    Triggers guarantee that they will always run

    Except if they're disabled.
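    And disabling is a one-liner (names hypothetical), which is exactly why "triggers always run" needs an asterisk:

```sql
DISABLE TRIGGER dbo.tr_Orders_Audit ON dbo.Orders;
-- bulk load, migration, etc. happens here with no auditing
ENABLE TRIGGER dbo.tr_Orders_Audit ON dbo.Orders;
```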

    @Jaime said:

    stored procedures have to be intentionally run by the caller.

    Your DML operation was also intentionally run by the caller. What if the logging process needs access to data that's not in inserted/deleted? SET CONTEXT_INFO or named temp table time?
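    A sketch of the CONTEXT_INFO workaround, since inserted/deleted only carry row data (names hypothetical; on SQL Server 2016+, SESSION_CONTEXT is the friendlier option):

```sql
-- Caller smuggles the application user over to the trigger:
DECLARE @who varbinary(128) = CAST('jsmith' AS varbinary(128));
SET CONTEXT_INFO @who;

UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 42;

-- Inside the audit trigger, read it back:
-- SELECT CAST(CONTEXT_INFO() AS varchar(128));
```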

    @Jaime said:

    Sure, you can contrive a scenario where the only way to modify "Y" is to run the set of procedures which all properly modify "X", but it's much easier to make the guarantee with a trigger.

    The end result of that philosophy, if you're not careful, is the situation above where triggers fire other triggers.

    @Jaime said:

    Also, the procedure route limits the ways that you can interact with the database.

    Having limited code paths to maintain is a BAD thing?

    @Jaime said:

    Reasons to drop the outdated concept of exclusively using stored procedures for data access and moving to more scalable and maintainable alternatives.

    Care to explain how and why they're outdated? Otherwise it's just an appeal to novelty.

    As for scalability and maintainability, we have about 1000 procs across all our applications. If we want to explore the impact of a change, SQL Search or the system catalogs will give the answer, and because we have almost everything touching the database in procs/functions/views, we've been able to deploy some massive schema changes over the past couple years.

    I did once work for someone who was adamant about not using stored procedures, and we had a few hundred customized reports that all used embedded SQL with subtle presentation, business, and data layer variations between them. If we wanted to refactor any of the base tables (and we did, since a few of the base tables were denormalized and 240+ columns wide), it would have required a couple of man-years (of non-revenue-generating work) fixing those reports. Had these reports been calling procs, we might have been able to refactor the schema in a reasonable amount of time.



  • @Groaner said:

    What's so hard about Ctrl+Shift+F for the procedure name in Visual Studio

    If the best way you can think of to track dependencies is to do a textual search through the code for the name, I can understand why you don't appreciate moving code to environments with better tools.

    @Groaner said:

    What's your argument for the Java or C# implementation being superior?

    This is your thread about how the rest of us are morons for not doing it in stored procedures. You need to show us a reason why, we don't need to show you.

    @Groaner said:

    @Jaime said:
    Business logic starts to creep into the database since some security rules can only be expressed in terms of business processes.

    Isn't it equally likely they'll creep the other direction, given there's overlap of concerns?


    It belongs in the code, so why would there be concern that it will creep back to where it's supposed to be? Even the article you linked admits that business logic doesn't belong in the database.

    @Groaner said:

    If you can't use Windows Authentication, then why not have a hybrid approach with a users table and only GRANT stored procedures the app needs to run?

    Did you miss the part where that isn't feasible when security rules apply to business processes, not data modifications, unless the stored procedures are the business layer?

    @Groaner said:

    That adds a layer of defense should there be an SQLi vulnerability in the app.

    You didn't add a layer of defense, you simply moved all security to the database. SQL injection is trivial to solve. Look at the CVE database and you'll see that injection is now one of the least common vulnerabilities. Also, it's just as possible to create a SQL injection vulnerability inside a stored procedure. You have to do the same stupid thing: build SQL with string concatenation. I've seen plenty of stored procedures resort to doing it when a proc needs variable sorting.
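    The variable-sorting case is worth a sketch, because the concatenation version is injectable even though it lives inside a proc (names hypothetical):

```sql
-- Injectable version: @SortColumn is pasted straight into the statement.
--   EXEC('SELECT * FROM dbo.Orders ORDER BY ' + @SortColumn);

-- Safer sketch: whitelist sortable columns, no dynamic SQL at all.
CREATE PROCEDURE dbo.GetOrdersSorted
    @SortColumn sysname
AS
BEGIN
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    ORDER BY
        -- each CASE yields NULL unless its column was chosen
        CASE WHEN @SortColumn = 'OrderDate'  THEN OrderDate  END,
        CASE WHEN @SortColumn = 'CustomerID' THEN CustomerID END;
END
```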


  • FoxDev

    Looks like a nice little flamewar is smouldering quite nicely here…



  • @Groaner said:

    What about when an INSERT trigger inserts into a table with another INSERT trigger that merges into a rollup table that has a unique key and you're puzzling over why your first insert is causing a key violation?

    If you use triggers sensibly this will never happen. I once knew someone who made this trigger:

    CREATE TRIGGER tr_Paul_Is_An_Idiot
    ON ConfigSettings
    FOR INSERT, UPDATE, DELETE
    AS
      ROLLBACK TRAN
    

    It was hard to troubleshoot and immensely stupid. That isn't a problem with triggers, it's a problem with Paul.

    @Groaner said:

    One of my favorites is how an INSTEAD OF trigger can break MERGE if you forget to implement them for all DML operations.

    Don't use INSTEAD OF triggers. There's no need to use them for the scenarios I mentioned. Pretty much the only time to use them is to make a view pretend to be a table. If you do that wrong, you're going to get exactly the results the link you provided suggests. Not surprising, and not among the acceptable uses of triggers I mentioned earlier.
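    For contrast, the "make a view pretend to be a table" use looks roughly like this (names hypothetical):

```sql
CREATE VIEW dbo.ActiveOrders AS
    SELECT OrderID, Status FROM dbo.Orders WHERE Status = 'Active';
GO
-- Deleting from the view soft-deletes the underlying row instead.
CREATE TRIGGER tr_ActiveOrders_Delete ON dbo.ActiveOrders
INSTEAD OF DELETE
AS
    UPDATE o SET Status = 'Cancelled'
    FROM dbo.Orders o
    INNER JOIN deleted d ON o.OrderID = d.OrderID;
```

    Miss one of the three DML operations on such a view and anything relying on it (MERGE included) misbehaves.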

    @Groaner said:

    Except if they're disabled.

    And web site logging won't log when it's disabled. Big deal.

    @Groaner said:

    What if the logging process needs access to data that's not in inserted/deleted? SET CONTEXT_INFO or named temp table time?

    Then you move it to the business layer.

    @Groaner said:

    The end result of that philosophy, if you're not careful, is the situation above where triggers fire other triggers.

    That's quite a leap in logic. I don't believe you.

    @Groaner said:

    @Jaime said:
    Also, the procedure route limits the ways that you can interact with the database.

    Having limited code paths to maintain is a BAD thing?

    It doesn't have anything to do with number of code paths, it has to do with limited types of interactions. Only allowing stored procedures doesn't reduce the number of code paths.

    @Groaner said:

    As for scalability and maintainability, we have about 1000 procs across all our applications. If we want to explore the impact of a change, SQL Search or the system catalogs will give the answer, and because we have almost everything touching the database in procs/functions/views, we've been able to deploy some massive schema changes over the past couple years.

    Yay, you have an anecdote. That only proves that it is possible to have a maintainable solution based on stored procedures, not that stored procedures generally increase maintainability.

    @Groaner said:

    I did used to work for someone who was as adamant about not using stored procedures, and we had a few hundred customized reports that all used embedded SQL with subtle presentation, business and data layer variations between them. If we wanted to refactor any of the base tables (and we did, since a few of the base tables were denormalized and 240+ columns wide), it would have required a couple of man-years (of non-revenue-generating work) fixing those reports. Had these reports been calling procs, we might have been able to refactor the schema in a reasonable amount of time.

    You also have a straw man. Just because some moron doesn't use stored procedures, you can't leap to the conclusion that his lack of use of stored procedures was the cause of the chaos. ORMs were invented to solve exactly the problem your idiot had.

    Also, reporting is a special case. Every reporting tool has its limitations and many leave you no other choice of data layer implementation than to use stored procedures. It is folly to hate stored procedures so much that you would rather have chaos than procs. I'm saying that stored procedures are fine, but better stuff exists for 95% of circumstances.



  • @RaceProUK said:

    Looks like a nice little flamewar is smouldering quite nicely here…

    :-) I can go all day. Fortunately for me, I have a ton of experience with SQL Server (as in, it was the primary thing I did from 1998 until 2005) and I am fairly competent with Oracle. I don't choose not to default to stored procedures out of ignorance, I actually know the facts on both sides very well.


  • ♿ (Parody)

    @Groaner said:

    The typical way I've seen deployments done is to have a collection of both the binary changes and scripts for DDL. Since you need both for the deployment to be successful, that's going to be the case no matter how much SQL is in the application versus in procs. Even if 100% is in the application, you don't necessarily get to see the columns or constraints on the table to ensure you haven't forgotten a NOT NULL column or if your inserts/updates are respecting table constraints.

    But I have an extra layer of procedure just to get that shit out to other developers and testing. As opposed to just committing it and waiting for the next release. And the stuff inside the sprocs isn't DDL. But fuck anyone who thinks it's reasonable to go through all that shit for any updates.

    @Groaner said:

    His take in retrospect is that you absolutely need a full-time DBA if you're running Oracle.

    We have them, but they're responsible for doing stuff with the databases. They don't do anything with queries or anything. I haven't found anything especially difficult about dealing with Oracle generally (materialized views are the exception, but that's a small subset of stuff). I think people like your friend just aren't very good at learning things.

    @Groaner said:

    @boomzilla said:
    But anything inside the DB is more of a PITA to analyze, not to mention debug. Pass.

    How so? I know that you can't exactly drop breakpoints in the middle of an sproc and inspect variables, but I'm able to identify the root cause of most problems fairly quickly based on error messages.

    The error messages suck compared to getting a stack dump. Maybe your stuff is relatively simple. I can't see the justification for the extra layer of shit that sprocs require. Again, this is for doing everything there, not using them judiciously when circumstances make sense, like I already do.


  • Fake News

    Triggers are the AOP of RDBMS's.

    /me drops the mic



  • @Ashley_Sheridan said:

    I think one of the main reasons to use an ORM is protect the DB from shitty developers who write queries full of injection vulnerabilities. Stored procedures don't protect against SQL injection, parameterised queries do, and that's something that the good ORMs can do.

    QFT -- One of my possible WTF-jewels at this job is the sproc I wrote to do inter-schema data copying in Oracle; PL/SQL doesn't support DDL, so you have to EXECUTE IMMEDIATE, and worse yet, Oracle doesn't support parameterized DDL, so guess what? Concatenation time! The only good news is that the source of what I'm plugging into my statements is an Oracle metadata view; nonetheless, there's an anti-Bobby-Tables-table filter in there, just in case.

    @xaade said:

    The problem is that people get rubberbanding from SQL to source and back. ORM isn't the first attempt to make code do relational stuff after constructing data from tons of table gets, and it won't be the last. And I don't know how many times I've looked at a solution that put 90% of its business tier logic in SQL.

    It's quite simple people.

    SQL for getting the data and putting the data, in whatever form you need. SQL for performing baked in operations that will occur every time you call an update, write, read, etc.

    But business logic doesn't belong in an sproc.

    Other than a read or write operation, IMO, it doesn't belong in SQL. If you need to pass your write sproc a variable other than the data you are writing, like an extra flag to choose optional operations, then it's not an sproc. If I can't fit my writeTable stored procedure on my vertically-oriented monitor and read it, it's too much SQL.


    QFT! SProcs have a place, but there are three things wrong with them:

    1. You can't use them when database portability is actually a requirement. People keep disputing this, but I have first-hand experience with systems that must be cross-database portable, and it's not as impossible as it sounds provided you have a decent framework in place to abstract over SQL dialect differences.
    2. Either you are given a "black box" by your DBA, which can make debugging very hard, or you wind up having to do cross-language debugging and testing with rather limited tools, which also makes debugging rather hard. So, you're hosed either way on that front.
    3. Your error handling sucks all of a sudden because you have this giant seam in your code with different ideas of what an error is and how error handling should work on each side of it.

    Also, sidenote: if you want to see what a good DB framework looks like, check out SQLAlchemy

    @Jaime said:

    That's why the divide really is that sproc people don't agree with you (and me). They think that business logic does belong in the database.

    Business logic in a DB is an open invitation to have that error seam bite you in the rear end, repeatedly.

    @Jaloopa said:

    Use each layer to its strengths - set-based business logic -> ~~sproc~~ SQL, object-based business logic -> app code

    Who said that you can't use inline queries for your set-based business logic?

    @Groaner said:

    DB portability is a problem that should be solved when it becomes a problem, because it's invariably going to be a massive undertaking for any non-trivial product.

    When it's in your requirements up-front, then you know it's going to be a problem. I agree that most custom systems don't ever trip over it, but when you're making semi-custom or COTS products, you're in a different boat.

    @boomzilla said:

    have an extra layer of review and bureaucracy and deployment. Now configuration control is more fragmented. If a query is inside my code, it's easy to see what's deployed with that code. Not nearly so easy to know what code is living inside a schema.

    Yeah. Most organizations aren't mature enough to figure out proper stored procedure config control -- never mind that databases have relatively poor support (AFAIK) for attaching custom metadata to sprocs, which'd be a real handy tool to help solve the SProc config control problem, because then you could just call an ALTER PROCEDURE INSERT OR UPDATE ATTRIBUTE on the procedure to shove the version in there.
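    There's no ALTER PROCEDURE ... ATTRIBUTE syntax today, but SQL Server's extended properties can already serve as that metadata slot (version string and names hypothetical):

```sql
-- Stamp a version on a proc:
EXEC sp_addextendedproperty
    @name = N'Version', @value = N'2.3.1',
    @level0type = N'SCHEMA',    @level0name = N'dbo',
    @level1type = N'PROCEDURE', @level1name = N'FindProducts';

-- Ask the database what's deployed:
SELECT value
FROM sys.extended_properties
WHERE major_id = OBJECT_ID(N'dbo.FindProducts')
  AND name = N'Version';
```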

    @Jaime said:

    The security issues don't change, they just move. Security should be at the business layer, not the data layer. That way, when security gets more sophisticated than "these people have read access to this entity", it doesn't turn all hairy. If you insist on implementing security at the data layer, you get the following effects:

    • Business logic starts to creep into the database since some security rules can only be expressed in terms of business processes.

    • Identity has to be pushed to the db layer. This is really bad for web apps and three-tier apps. It disables connection pooling. It makes password management harder. It forces SQL Server to use the less secure SQL Authentication.

    • It doesn't fix a damn thing. Somebody still needs to implement security properly. The entire list of what the app people could screw up is the same list that the DB people could screw up.

    The first two points lead to scalability problems.


    QFT! There is no magic wand you can wave to make these problems go away!

    @Groaner said:

    How so? I know that you can't exactly drop breakpoints in the middle of an sproc and inspect variables, but I'm able to identify the root cause of most problems fairly quickly based on error messages.

    Eh, not with Oracle errors, you can't! Whoever designed Oracle's error handling is a sadist...



  • Oh yes, triggers police themselves.

    Table1InsertTrigger
    Insert into Table2 ...

    Table2InsertTrigger
    Insert into Table1

    oops.

    Of course that's easy to figure out. It's the times where some guy came in and said, "Denormalize", and the gods nodded in approval. Then you end up with 20 some odd tables referring to each other, and then circular triggers.... becomes an easier mistake.



  • @xaade said:

    Oh yes, triggers police themselves.

    Table1InsertTrigger
    Insert into Table2 ...

    Table2InsertTrigger
    Insert into Table1

    A. They won't fire recursively.
    B. Who the hell would put a trigger on an audit table that writes back to the table being audited? Garbage-in-garbage-out.
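    Whether trigger-fires-trigger chains are possible at all is configurable, which is worth checking before debugging one (database name hypothetical):

```sql
-- Server-wide: triggers firing other triggers
-- (ON by default, capped at 32 levels of nesting).
EXEC sp_configure 'nested triggers', 0;  -- 0 turns cascading off
RECONFIGURE;

-- Per-database: a trigger re-firing itself (OFF by default).
ALTER DATABASE MyDb SET RECURSIVE_TRIGGERS OFF;
```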



  • @Jaime said:

    A. They won't fire recursively.
    B. Who the hell would put a trigger on an audit table that writes back to the table being audited? Garbage-in-garbage-out.

    A. No, but they'll loop.
    B. Exactly. It doesn't matter what your policy is.
    Does it work?
    Is it efficient?

    Then stick to it.

    It is much easier to commit your team to writing CRUD sprocs than it is to debug trigger problems.



  • If your code review policy doesn't catch a trigger on an audit table that writes back to the table it's auditing, then there is no hope for you. You are going to spend tons of time and energy debugging myriad stupid problems. Banning triggers won't make your environment noticeably better.

    @xaade said:

    A. No, but they'll loop.

    They won't cause any problem other than being a bug in the system. They won't create an explosion of data, they won't cause noticeable performance problems. It just won't be a big deal. Why ban a useful technology because you're afraid that there might be a bug in it?

    I understand banning business logic in triggers - that avoids an actual nightmare. Banning auditing triggers is throwing out the baby with the bath water.



  • @Jaime said:

    Banning auditing triggers

    I don't remember insisting this.

    It's easier than you think to police calling sprocs.
    Well, one of the reasons is that we have a few tasks that occur on
    every update of a table. So every table has a write/update/view set.

    I think I said this.

    Which means that, we already HAVE write sprocs, that have to be used. And triggers don't work. I can't go into the proprietary reasons why.

    The only challenge with CRUD sprocs is making sure people use them.
    The challenges with triggers include debugging, making sure you don't delete them. Missing them in deployment. Not causing circular logic.

    Some people have no problems using C to write programs. I do.

    Past experience adds a lot to the choices of what technology you use.
    Who is maintaining your product and what kinds of mistakes they make add a lot to the choice of what technology you use.



  • @xaade said:

    The challenges with triggers include debugging

    If they're business logic, then I agree. If you limit to auditing or denormalized data maintenance, then this is a non-issue.

    @xaade said:

    making sure you don't delete them

    You have some issues... Do you drop and recreate tables often? Aren't you more concerned about your data?

    @xaade said:

    Missing them in deployment

    You obviously have process problems. Feel free not to use triggers, but it's not because of any inherent trigger issue.

    @xaade said:

    Not causing circular logic.

    Not a big deal if you don't put business logic in triggers. As I said, no one is going to create an audit trigger that modifies the table it is auditing. This is simply not a problem in the real world.


  • ♿ (Parody)

    @xaade said:

    Oh yes, triggers police themselves.

    Firearms thread is ⬆ ⬅ 🔫



  • Resident DBA Chiming in.

    Jaime, it's not hard to end up in a circular logic situation with triggers. I've seen it many times with development teams. Now, that being said, the decision to use or not use them should be an architectural one. In other words, the team should have standards around their use. I'm strongly against business logic in them, though, as you also point out.

    As for sprocs, I prefer their use over ad-hoc SQL. It tends to be cleaner. Plus, it can be tuned much more easily without modifying application code. Plus, it's one execution plan for the procedure as opposed to one for every variation with ad-hoc. Granted, parameter sniffing could become a problem at some point, but that's an easy fix on the DB side.

    I work with developers all the time. We sit down and figure out what logic should be where. It's a give and take. "It depends" is kinda a mantra here.

    That being said ORMs are ok for rapid prototyping of an app or just small apps in general. But they don't really scale well. So if your app will be high transactional volume and/or heavy load, don't use them.



  • @galgorah said:

    Jaime, Its not hard to end up in a circular logic situation with triggers.

    I assure you it's never happened with auditing triggers.

    @galgorah said:

    It tends to be cleaner

    The rationale for almost all modern atrocities.

    @galgorah said:

    Plus it can be tuned much easier without modifying application code.

    Yes, if your pre-condition is that you don't want to modify code, then stored procedures will give you that. But why would you be hesitant to modify code? Back to your scalability point - the best way to scale is to scale out. Databases are really hard to scale out. Moving as much as humanly possible to code allows you to put a cluster of cheap app servers in front of the database server. This also solves your "code problem": just take a member out of the cluster, update the code, and put it back in.

    @galgorah said:

    So if your app will be high transactional volume and/or heavy load, don't use them

    I've built plenty of high volume apps that don't use stored procedures. Many of the biggest apps on earth don't even use traditional databases, let alone stored procedures. Groaner was on more solid footing than you when he said that data protection is the best reason for procs. Saying use procs for scalability flies in the face of Internet history.

    @galgorah said:

    Plus its one execution plan for the procedure as opposed to one for every variation with ad-hoc.

    You've heard of ad-hoc batch caching, right? That's '90s rationale.
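    The point being: a parameterized ad-hoc batch gets one cached, reused plan too. A sketch (names hypothetical):

```sql
-- What a parameterized query sent from the app layer amounts to:
EXEC sp_executesql
    N'SELECT OrderID FROM dbo.Orders WHERE CustomerID = @c',
    N'@c int',
    @c = 42;
-- Later calls with a different @c reuse the same cached plan,
-- just like a proc would.
```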


  • ♿ (Parody)

    @Jaime said:

    But, why would you be hesitant to modify code?

    +1

    I'd much rather be in the business of modifying code, where I have a ton of tools that work really well for tracking those changes than mucking with crap in the DB.



  • @boomzilla said:

    I'd much rather be in the business of modifying code, where I have a ton of tools that work really well for tracking those changes than mucking with crap in the DB.

    Database feature suggestion: autoversioning of stored procedures, with atomic update and rollback.

    @galgorah -- thoughts?



  • @galgorah said:

    Resident DBA Chiming in.

    BTW, I'm DBA-ing a hosted application environment with five thousand users and 10TB of SQL Server data. It's not some dating site, it's the type of application where people go out of business if I screw up. You aren't the only one here with experience.



  • @boomzilla said:

    I'd much rather be in the business of modifying code, where I have a ton of tools that work really well for tracking those changes than mucking with crap in the DB.

    Source control the scripts that create the procs. I've had really good luck doing it this way.



  • @Jaime said:

    Source control the scripts that create the procs. I've had really good luck doing it this way.

    It's an 80-20 problem -- 80% of the source control problem is solved by source controlling your creation scripts, but that leaves the 20% of the problem that causes 80% of the bugs, which is "what version do I have in the DB right now?"
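
    One way to attack the "what version do I have" question is to stamp a hash of every deployed script into a log table at deploy time, then diff that log against what's checked in. A small sketch of the idea; the `deploy_log` table and every name below are hypothetical, not anything from the thread:

```python
import hashlib
import sqlite3

# Stamp each deployed script's hash into a log table, so "what's in the DB?"
# becomes a query, and drift from source control becomes a diff.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deploy_log (script TEXT PRIMARY KEY, sha TEXT)")

def script_hash(text):
    return hashlib.sha256(text.encode()).hexdigest()

def deploy(script_name, script_text):
    with conn:
        conn.execute("INSERT OR REPLACE INTO deploy_log VALUES (?, ?)",
                     (script_name, script_hash(script_text)))
        # a real version would also execute script_text against the database here

def drifted(repo_scripts):
    # Scripts whose checked-in text doesn't match what was last deployed.
    live = dict(conn.execute("SELECT script, sha FROM deploy_log"))
    return sorted(name for name, text in repo_scripts.items()
                  if live.get(name) != script_hash(text))

deploy("FindProducts.sql", "CREATE PROC FindProducts AS SELECT 1")
repo = {"FindProducts.sql": "CREATE PROC FindProducts AS SELECT 1",
        "SaveOrder.sql":    "CREATE PROC SaveOrder AS SELECT 2"}
print(drifted(repo))  # → ['SaveOrder.sql']  (checked in but never deployed)
```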


  • ♿ (Parody)

    @Jaime said:

    Source control the scripts that create the procs. I've had really good luck doing it this way.

    Yes, but you still have the issue of getting those into a DB. It's not exactly murder, but it's murkier and more difficult than external code. For the stuff I deal with, at least. Also, this:

    @tarunik said:

    "what version do I have in the DB right now?"



  • @tarunik said:

    the 20% of the problem that causes 80% of the bugs, which is "what version do I have in the DB right now?"

    That is what causes problems for you guys? My problem is always "someone did ALTER on the SP directly in the DB with no records".



  • @locallunatic said:

    That is what causes problems for you guys? My problem is always "someone did ALTER on the SP directly in the DB with no records".

    It's the same problem, really :)



  • @Jaime said:

    You obviously have process problems. Feel free not to use triggers, but it's not because of any inherent trigger issue.

    Same argument you used against sprocs.

    I'm not saying to not use triggers, just that they aren't inherently better than sprocs.

    And just in case you miss it again.

    #I'm not saying to not use triggers


  • I survived the hour long Uno hand

    @Jaime said:

    Source control the scripts

    Our team versions procs explicitly, as in, a service will call FindProducts_2 or FindProducts_3.

    Then they take an SQL dump using Redgate and put THAT in source control. Never mind that the DB can't be deployed from source control. Promoting code to higher environments is done using a SQL Compare tool and migrating the changes that way.

    Of course, since you're never sure when a proc version isn't used anymore, all schema changes have to be backwards-compatible.



  • @Jaime said:

    BTW, I'm DBA-ing a hosted application environment with five thousand users and 10TB of SQL Server data. It's not some dating site, it's the type of application where people go out of business if I screw up. You aren't the only one here with experience.

    I have 10TB+ in one table alone 😛 on our production SQL Server. We actually have several like that, and it's growing rapidly. I work in Silicon Valley, so yes, I see and play with a lot of tech. For us, performance is as critical as it gets. So yes, I work for the type of company you reference, with lots and lots of data.

    By the way, I never questioned your experience. We just have different view points on certain things.

    As for my earlier comment about modifying application code: I was referring to the queries themselves. It's much easier if they're in a procedure. I make my tweak and alter the procedure, ideally also checking the procedure update into source control.

    A separation of concerns is a good thing. Let the database do what it's good at. If most of your concern is about data dependencies or moving said data around, then that likely belongs in the DB. That being said, I understand quite well the need for a business logic layer and the benefits it provides. Do your sorts outside of SQL Server; feel free to manipulate the data there. But don't discount stored procedures. They don't, as you say, create atrocities. In fact, I've fixed a lot of performance and maintenance issues over the years by moving sets of queries into procedures.


  • FoxDev

    @Yamikuronue said:

    Our team versions procs explicitly, as in, a service will call FindProducts_2 or FindProducts_3.

    Ew.
    @Yamikuronue said:
    Of course, since you're never sure when a proc version isn't used anymore, all schema changes have to be backwards-compatible.

    Makes me wonder why bother versioning the sprocs at all 😕



  • @locallunatic said:

    That is what causes problems for you guys? My problem is always "someone did ALTER on the SP directly in the DB with no records".

    I used to have that problem.

    Since I source control schema scripts, I have a program that extracts scripts from a live database to get me started when I inherit someone's mess. So, when I have the problem you reference, I do this:

    • Generate a set of scripts and check them into a source control repository.
    • Write a script that re-generates the set of scripts from the database and checks in any changes
    • Put the above script on a timer, usually every few hours

    Shortly after someone makes a live change to the database, I get a checkin notification. Then I can do one of the following:

    • Take the change that the maintainers did and back port it into source control. This works well for index changes or emergency bug fixes.
    • Talk to the moron who changed something in production without authorization.

    I do this for dev databases too in order to train new developers to remember to check their changes into source control. If they are too thick-headed, then I take their write permission away from the dev database and put in a source control hook that runs any script that they check in. That way, the only way to make a change in the database is to check a script in.
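
    The extract-and-diff part of that loop is small enough to sketch. This uses SQLite's `sqlite_master` as a stand-in for SQL Server's catalog views and only compares two snapshots; the real version writes the scripts to files and checks them in on a timer, as described above (all names are illustrative):

```python
import sqlite3

def snapshot(conn):
    # Map each object name to its creation SQL, i.e. a generated "script".
    return dict(conn.execute(
        "SELECT name, sql FROM sqlite_master WHERE sql IS NOT NULL"))

def live_changes(old, new):
    # Anything added or altered since the last snapshot needs a check-in
    # (or a talk with whoever changed production without authorization).
    return sorted(name for name, sql in new.items() if old.get(name) != sql)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, name TEXT)")
baseline = snapshot(conn)

# Someone makes a live change with no record...
conn.execute("CREATE INDEX ix_products_name ON products (name)")
print(live_changes(baseline, snapshot(conn)))  # → ['ix_products_name']
```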



  • @tarunik said:

    but that leaves the 20% of the problem that causes 80% of the bugs, which is "what version do I have in the DB right now?"

    That's easy.

    Let the dev team do their thing on the dev database. Maybe they'll check everything in, maybe they won't. When testing time comes, reset the QA server by restoring from production. Then run all of the checked-in scripts on QA. Anything the developers forgot to check in becomes a test failure. If the tests pass, then you know that the set of scripts you have in source control are the right ones.
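
    The reset-and-replay idea can be sketched in miniature: rebuild QA from a copy of production, run only the checked-in scripts, and let anything a developer forgot to check in show up as a failed check. (A toy SQLite version; a real restore and test run are obviously more involved.)

```python
import sqlite3

def reset_qa(prod, checked_in_scripts):
    qa = sqlite3.connect(":memory:")
    prod.backup(qa)                  # "restore QA from production"
    for script in checked_in_scripts:
        qa.executescript(script)     # replay only source-controlled changes
    return qa

def objects(conn):
    return {name for (name,) in conn.execute("SELECT name FROM sqlite_master")}

prod = sqlite3.connect(":memory:")
prod.execute("CREATE TABLE products (id INTEGER)")

# Dev made two changes on the dev DB but only checked one in:
checked_in = ["CREATE VIEW active_products AS SELECT id FROM products"]
qa = reset_qa(prod, checked_in)
print("orders" in objects(qa))  # → False: the forgotten table fails the tests
```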

    BTW, in the above methodology, it's very important not to let anyone "fix" the QA database. My blood pressure rises fifty points when I overhear this conversation:

    Tester: Page X won't load.
    Developer: Hold on....
    Developer: OK, try it now.

    That's when I have to remind everyone that the point of testing isn't to get the tests to pass, it's to assure us that we know how to roll correct software out to production.



  • @Jaime said:

    the dev database

    Easier said than done when you have a myriad of dev and test environments roaming around running different builds.

    Also, the next person who thinks having only one dev DB is a good idea will join Ellison on the coal recipient list. IMO, per-developer development environments are even essential to productivity.



  • @tarunik said:

    Easier said than done when you have a myriad of dev and test environments roaming around running different builds.

    Also, the next person who thinks having only one dev DB is a good idea will join Ellison on the coal recipient list. IMO, per-developer development environments are even essential to productivity.

    I agree. It's much better than having developers stepping on each other's toes.

