Poll: Integration test database cleanup



  • So let's say I've got an integration test suite, testing something that interacts with a database (whether in-memory or real isn't, I don't think, all that important).

    Should

    1. each integration test in the suite clean up exactly and only the data that it thinks was created/changed/etc for that test (i.e. by issuing direct `DELETE FROM foo WHERE id = <id I used>` statements for each ID), or
    2. an "afterEach" or "beforeEach" handler clean things up globally by dropping all the data?

    And similarly, how do y'all feel about the idea of having test-suite-created data and non-test-suite-created data (such as stuff coming from a person interacting with the UI normally, albeit in a development/pre-production environment) commingled in a single database as the result of using the pre-production/dev environment database as the host for the integration tests?

    My opinions: taking the two out of order--I'm queasy at the idea of commingling data. Tests that *must* hit an actual database should be as isolated as possible from "real" usage, ideally by having a separate test database that only the automated tests hit (possibly containerized).

    Given that, I much prefer #2, since it's way simpler and more robust. Each test knows that there will only be the expected data in the system when it takes its turn. There's no chance that a `TRUNCATE TABLE foo` will leave data behind, whereas I'm going to miss things if I have to remember every id I insert, *including the ones inserted/changed as a side effect* (i.e. mapping tables, etc.), and clean those up manually. And I won't know that I missed something until other tests start failing. Plus, it's extra friction, and writing tests is already a source of (useful, beneficial) friction in the development process; adding more friction just increases the chance that fewer tests will be written.


  • Java Dev

    @Benjamin-Hall We use different schemas for each automated test, as well as for the dev just doing stuff with his local install. Upon starting a test, the schema is dropped if it already exists then re-created.

    We do not drop the schema when the test completes. This allows analysing why the test fails, and can provide a baseline for manual testing which requires a specific setup.



  • @PleegWat said in Poll: Integration test database cleanup:

    @Benjamin-Hall We use different schemas for each automated test, as well as for the dev just doing stuff with his local install. Upon starting a test, the schema is dropped if it already exists then re-created.

    We do not drop the schema when the test completes. This allows analysing why the test fails, and can provide a baseline for manual testing which requires a specific setup.

    So each of your automated tests blows away the entire database and rebuilds it. Is that per test run, per test suite, or, worse, per individual test case? If the latter, that's more expensive (in time per test) than I'm contemplating. My current plan is that each test suite (for an individual class that touches the database, i.e. a repository-like thing--no ORMs here) knows what tables that repository handles and truncates those tables before each test (as part of resetting the repository), so each test case sees a fresh, clean slate. The only thing that happens after each test is that the connection pool is closed out so it can be reset by the next one (otherwise the last test leaves it hanging and it doesn't know it's completed, because it's stupid).

    And I agree that being able to look at a failed test's data and see what happened is important.



  • @Benjamin-Hall said in Poll: Integration test database cleanup:

    I think option 2 is cleaner in the long run; option 1 just gets messy and you are going to miss something eventually.

    Taking the two out of order--I'm queasy at the idea of commingling data. Tests that must hit an actual database should be as isolated as possible from "real" usage, ideally by having a separate test database that only the automated tests hit (possibly containerized).

    What makes you queasy? What do you see as going wrong?



  • @Dragoon said in Poll: Integration test database cleanup:

    @Benjamin-Hall said in Poll: Integration test database cleanup:

    I think option 2 is cleaner in the long run; option 1 just gets messy and you are going to miss something eventually.

    Taking the two out of order--I'm queasy at the idea of commingling data. Tests that must hit an actual database should be as isolated as possible from "real" usage, ideally by having a separate test database that only the automated tests hit (possibly containerized).

    What makes you queasy? What do you see as going wrong?

    Predictability. I can always be sure I can insert a thing with a known id and know that only data I inserted is there. It makes writing the asserts much easier and less fragile. Being able to guarantee that the tests will only see test data and will never stomp on anyone else's data feels safer. And in a world with auto-increment PKs... being able to deterministically assert against a known, test-set ID, without someone else accidentally inserting a row with that same id or pushing the auto-increment marker around, makes me feel better.

    My understanding is that the ideal is test isolation--each test can act as if it's the only test anywhere. No test depends on or will break if any other test has data and no other test can stomp on someone else's data. And separating the databases seems the cleanest, easiest way to guarantee that.


  • Considered Harmful

    @Benjamin-Hall has your framework got something like Spring Boot Test's @Rollback? If you're staying on the rez, that works. If you're going off the rez and managing your own connections, I'd grab up the IDs from the test with an interceptor for deletion, or figure out a rollback-vs-commit scheme for that one too.


  • Java Dev

    @Benjamin-Hall said in Poll: Integration test database cleanup:

    So each of your automated tests blow away the entire database and rebuild it. Is that per test run or per test suite or worse, per individual test case.

    We're calling it per test, which is usually the smallest unit you can invoke directly. Most of what we do is integration tests though - 4-5 minutes runtime is typical. If we're testing a web service, all methods on the endpoint are covered with a single test which then does have its later steps depend on the earlier ones succeeding.


  • Discourse touched me in a no-no place

    @Benjamin-Hall I'd have tests never go against a production database (except in the case where the database in question is being used in its production capacity). I'd also want to make sure that the database is in a known state at the start of each test, with no cross contamination between tests. In particular, carrying out the tests in a different order or only running a subset of them should never change the outcome of the tests! Cross contamination is an awful thing in a test suite; you don't want it.

    How you go about achieving that is a less important thing. In my case, I prefer to run the tests against the database engine in memory-only mode so that I know the results will never persist, but that doesn't work for all the test cases I have; some require the coordination of multiple DB connections and so on, so they actually have to hit the disk, and in those cases I'll be testing against a special DB instance spun up for that test file. Because I'm using SQLite and my test data isn't very large, the cost of working this way is quite acceptable (spinning up the Spring environment several times is by far the slowest part of those tests). I guess that if you're using Postgres you'd choose something else, but how to hold to the principles is, and should be, a flexible thing.

    Our deep integration tests replicate everything except the services that manage the custom hardware sharing (which are in production mode; we just grab slices of production hardware for the tests because that's what we care about testing anyway). Those are allowed to take 8 hours to run though, mostly dominated by a mixture of the cost of network I/O and manipulating matrices with billions of elements. (A colleague of mine nearly has those optimised to run in a few minutes; he's in the process of debugging that code at the moment.)



  • @dkf said in Poll: Integration test database cleanup:

    @Benjamin-Hall I'd have tests never go against a production database (except in the case where the database in question is being used in its production capacity). I'd also want to make sure that the database is in a known state at the start of each test, with no cross contamination between tests. In particular, carrying out the tests in a different order or only running a subset of them should never change the outcome of the tests! Cross contamination is an awful thing in a test suite; you don't want it.

    I agree. Which is why I'm so unnerved by the idea that we need to take special care to only remove the data the test added. Or should have added, because broken tests or broken code could screw that up bigly. Tests should be able to assume that there's nothing else in the database.

    How you go about achieving that is a less important thing. In my case, I prefer to run the tests against the database engine in memory-only mode so that I know the results will never persist, but that doesn't work for all the test cases I have; some require the coordination of multiple DB connections and so on, so they actually have to hit the disk, and in those cases I'll be testing against a special DB instance spun up for that test file. Because I'm using SQLite and my test data isn't very large, the cost of working this way is quite acceptable (spinning up the Spring environment several times is by far the slowest part of those tests). I guess that if you're using Postgres you'd choose something else, but how to hold to the principles is, and should be, a flexible thing.

    Yeah. Can't do in-memory because there are subtle but critical distinctions between in-memory mode and real mode. Like in-memory mode doesn't support some of the same functions (for WTF mysql reasons) that we use heavily.

    For us, the big cost would be re-establishing the schema. That's a minute or so for the one database I have to care about; for the other (which involves seeding things in and a bunch of legacy crap) it's about 5 minutes. Doing that per test means that running the test suite is something you can't do on the fly.

    I'd be fine with a pool of database containers, each set up to current mainline. The test suite would pull one of these containers, apply any schema changes for that particular test, run the tests (truncating the tables in between to clear data) and then reset it when the suite is over (or not, if it failed, keeping it out of the reuse pool).

    Our deep integration tests replicate everything except the services that manage the custom hardware sharing (which are in production mode; we just grab slices of production hardware for the tests because that's what we care about testing anyway). Those are allowed to take 8 hours to run though, mostly dominated by a mixture of the cost of network I/O and manipulating matrices with billions of elements. (A colleague of mine nearly has those optimised to run in a few minutes; he's in the process of debugging that code at the moment.)

    I'm not going that far; we're pretty heavy into the "Bounded Context" thing, and the "deep integration tests" we do involve someone poking at the UI (sometimes with automated UI testing tools), driving it like a customer would. So a full integration test would require all the network infrastructure and some way to mimic customer involvement in a repeatable fashion. Way outside of my current scope.

    What I'm mainly going for here is "hey idiot programmer, your SQL and supporting code doesn't do exactly what you think it does", especially making sure that behavior doesn't change unless we want it to. Catching more things at development time, not once it's in the hands of QA (because them kicking it back has major costs).


  • Discourse touched me in a no-no place

    @Benjamin-Hall said in Poll: Integration test database cleanup:

    What I'm mainly going for here is "hey idiot programmer, your SQL and supporting code doesn't do exactly what you think it does", especially making sure that behavior doesn't change unless we want it to

    👍 That's why I've got a lot of tests for that stuff as well. I'd rather know early if I screw up and make the tables and columns in my queries not match the schema. In my case, those tests run against empty in-memory databases that are loaded with the schema only. That's a really fast option that's suitable for me. I know it doesn't catch all the problems… but it really handles a lot of them and has saved my ass a lot.



  • @dkf said in Poll: Integration test database cleanup:

    @Benjamin-Hall said in Poll: Integration test database cleanup:

    What I'm mainly going for here is "hey idiot programmer, your SQL and supporting code doesn't do exactly what you think it does", especially making sure that behavior doesn't change unless we want it to

    👍 That's why I've got a lot of tests for that stuff as well. I'd rather know early if I screw up and make the tables and columns in my queries not match the schema. In my case, those tests run against empty in-memory databases that are loaded with the schema only. That's a really fast option that's suitable for me. I know it doesn't catch all the problems… but it really handles a lot of them and has saved my ass a lot.

    The things I've caught are stuff like "you idiot, when editing this thing you're setting that foreign key (which updates each time) to the old row's id, not the new one." Which is fine as far as the schema is concerned (both exist), but makes for weird behavior on the front end.


  • Considered Harmful

    @Benjamin-Hall said in Poll: Integration test database cleanup:

    Tests should be able to assume that there's nothing else in the database.

    That's going a bit too far, if you have much shared data at all in your environment. You're better off seeing that crud in tests vs prod.


  • Considered Harmful

    @dkf said in Poll: Integration test database cleanup:

    my case, those tests run against empty in-memory databases that are loaded with the schema only.

    I usually throw in some starting data but that's b/c I'm still mostly staring down vs stabbing my current shared data dragon.



  • This has got me wondering how transactions stack. Like, would `BEGIN BEGIN action COMMIT ROLLBACK` roll back everything, or would the COMMIT override it?


  • Discourse touched me in a no-no place

    @Zenith said in Poll: Integration test database cleanup:

    This has got me wondering how transactions stack.

    Normally they don't at all; it's an error to BEGIN when in a transaction.


  • Java Dev

    @dkf Oracle (and possibly others) do allow named savepoints, which you can create at any time and roll back to at will.



  • @dkf said in Poll: Integration test database cleanup:

    @Zenith said in Poll: Integration test database cleanup:

    This has got me wondering how transactions stack.

    Normally they don't at all; it's an error to BEGIN when in a transaction.

    I give you: Microsoft Sql Server: https://docs.microsoft.com/en-us/sql/t-sql/language-elements/rollback-transaction-transact-sql?view=sql-server-ver15#general-remarks
    "A transaction cannot be rolled back after a COMMIT TRANSACTION statement is executed, except when the COMMIT TRANSACTION is associated with a nested transaction that is contained within the transaction being rolled back. In this instance, the nested transaction is rolled back, even if you have issued a COMMIT TRANSACTION for it."



  • @dkf MySQL has an implicit commit when you execute begin. At least per connection.


  • Discourse touched me in a no-no place

    @PleegWat said in Poll: Integration test database cleanup:

    Oracle (and possibly others) do allow named savepoints

    Yes, I wasn't including those. (SQLite is another DB that supports them.)



  • @robo2 said in Poll: Integration test database cleanup:

    @dkf said in Poll: Integration test database cleanup:

    @Zenith said in Poll: Integration test database cleanup:

    This has got me wondering how transactions stack.

    Normally they don't at all; it's an error to BEGIN when in a transaction.

    I give you: Microsoft Sql Server: https://docs.microsoft.com/en-us/sql/t-sql/language-elements/rollback-transaction-transact-sql?view=sql-server-ver15#general-remarks
    "A transaction cannot be rolled back after a COMMIT TRANSACTION statement is executed, except when the COMMIT TRANSACTION is associated with a nested transaction that is contained within the transaction being rolled back. In this instance, the nested transaction is rolled back, even if you have issued a COMMIT TRANSACTION for it."

    This has bitten me before. I've gotten emails or DMs from the DBAs asking if there's a reason I still have a transaction open, when I thought I had either committed or rolled back all of them, but somehow I had missed one. So now I've taken to trying to execute either commit or rollback multiple times after my data management tasks until I get a "transaction not open" error message in SSMS.



  • @robo2 MSSQL for the win!



    Having the test in one transaction to roll back at the end is all fine and dandy, but you also need test cases that actually do several transactions and check that everything that should be committed is actually committed. You also want test cases where some stuff is committed while other stuff is rolled back (either explicitly or implicitly, as in the case of constraint violations). You also want to cover special handling required by the database you're using--for example, an MVCC-based engine (like PostgreSQL) pretty much requires automatic retry of each transaction. That is, of course, not relevant to MySQL...

    So in the end, there should be a test running against a real instance of the correct database engine. You can set up a new instance for each test suite run and then tear it down, but it is also sufficient just to have a special QA instance and delete the testing data when the suite finishes. The latter case, however, requires special design of the schema (i.e. there must be a way to properly identify and separate the data from each testing run).


  • Discourse touched me in a no-no place

    @Kamil-Podlesak said in Poll: Integration test database cleanup:

    You can set up a new instance for each test suite run and then tear it down, but it is also sufficient just to have a special QA instance and delete the testing data when the suite finishes.

    If that's expensive to set up, it might be something that's only done in integration testing. If it's cheap/quick, then it can be part of the mocking for a unit test. (The cost and speed will depend on the DB in use; I use one where spitting out a new DB can be done rapidly so I do so in my unit tests. That wouldn't work so well if I used Postgres.)


  • ♿ (Parody)

    @Kamil-Podlesak said in Poll: Integration test database cleanup:

    Having the test in one transaction to roll back at the end is all fine and dandy, but you also need test cases that actually do several transactions and check that everything that should be committed is actually committed.

    Are you manually managing your transactions? Ours are automatically handled for us per request by our framework. Uncaught exceptions result in an automatic rollback.

    We run our tests on sanitized copies of production data and rollback after each test. I couldn't imagine having to recreate all the data for each test. I would probably revolt and refuse to write tests.


  • Discourse touched me in a no-no place

    @boomzilla said in Poll: Integration test database cleanup:

    Are you manually managing your transactions? Ours are automatically handled for us per request by our framework.

    I prefer manual transaction management, except it's not very manual: I have a transaction() method that takes code to run (usually as a lambda) and does the retry/rollback stuff for me. It works really well, and it is useful to have the control because some code has to be outside a transaction.

    It's not a good idea to have calls into bcrypt inside transactions that the database believes should be exclusive. BTDT.


  • ♿ (Parody)

    @dkf said in Poll: Integration test database cleanup:

    I prefer manual transaction management,

    :vomit:

    I have a transaction() method that takes code to run (usually as a lambda) and does the retry/rollback stuff for me. It works really well, and it is useful to have the control because some code has to be outside a transaction.

    It's not a good idea to have calls into bcrypt inside transactions that the database believes should be exclusive. BTDT.

    Actually, I guess I should add that we use Quartz to schedule stuff and there transactions are manual because it executes outside of our framework. But those are special cases. There's generally no good reason to do any transaction management and most of those scheduled tasks use a super class that pulls in the framework's lifecycle management stuff so we don't have to worry about it.


  • Discourse touched me in a no-no place

    @boomzilla Part of the reason I do manual transaction management is that the default transaction management in the framework tends to deadlock. There's something really subtle going on with differences in semantics and I've given up on trying to figure out WTF is going on and switched everything into DIY mode since then I could get it right. (I think it's a subtlety of how JDBC has specified auto-commit mode coupled with how exactly the driver chose to actually do it. I've given up caring.)

    The main thing is not to have to write out the whole dance with manual committing and rolling back and restarting every time, because that's enough code that you want to get it right, once, and not repeat it all over. (I've used framework aspect injection to do that in the past; there's a bunch of subtleties in it and Java 8 onwards gives you the tools to not need such contortions.)


  • ♿ (Parody)

    @dkf said in Poll: Integration test database cleanup:

    Part of the reason I do manual transaction management is that the default transaction management in the framework tends to deadlock

    What DB are you using? Is this the control software for your crazy brain simulator?



  • @boomzilla said in Poll: Integration test database cleanup:

    @Kamil-Podlesak said in Poll: Integration test database cleanup:

    Having test in one transaction to rollback at the end is all fine and dandy, but you also need test cases that actually do several transactions and check that everything that should be committed is actually committed.

    Are you manually managing your transactions? Ours are automatically handled for us per request by our framework. Uncaught exceptions result in an automatic rollback.

    That actually does not matter. Most non-trivial applications have some logic that sits above the transaction level, usually doing several transactions in sequence.

    I am, of course, talking about relatively high-level integration tests. In accordance with the pyramid model, only quite a small number of them is necessary--but a non-zero number.

    The alternative is just to trust that level to always work. Which might be OK, as long as there's at least proper monitoring in place.

    We run our tests on sanitized copies of production data and rollback after each test. I couldn't imagine having to recreate all the data for each test. I would probably revolt and refuse to write tests.

    That is, of course, the best solution. It's just that it is useful to have a test level where "rollback" == "restore database from backup"

    Yes, as you might deduce, I have seen a bug where the application worked perfectly well, except that it never actually committed anything. Which is usually considered a flaw in a data-entry application. Also, the developer's comment "according to modern interpretation of quantum physics, information is never destroyed" did not sit well with the customer...


  • ♿ (Parody)

    @Kamil-Podlesak said in Poll: Integration test database cleanup:

    Alternative is just to trust that level to always work.

    I'm OK with this. It's difficult enough to test my stuff.

    It's just that it is useful to have a test level where "rollback" == "restore database from backup"

    That's way beyond the scope of anything I do. In any case, we have stuff like that happening daily.

    Yes, as you might deduce, I have seen a bug where the application worked perfectly well, except that it never actually committed anything. Which is usually considered a flaw in a data-entry application. Also, the developer's comment "according to modern interpretation of quantum physics, information is never destroyed" did not sit well with the customer...

    Again, sounds like manual transaction management. At which point I can understand testing that.


  • Discourse touched me in a no-no place

    @boomzilla said in Poll: Integration test database cleanup:

    What DB are you using? Is this the control software for your crazy brain simulator?

    Yes. This is SQLite. It works just fine (in WAL mode) with multiple threads, provided I can persuade the Xerial driver to not BEGIN transactions immediately after the last one commits. (There's no per-row locking so upgrading locks is extremely dangerous.)

    By doing all that work, I've got a working connection pool that operates correctly with a thread pool and lets me get really good throughput. Only a few queries take more than a few microseconds, and for the most part they're the ones that are supposed to do so (such as the core of the allocation algorithm, which I've expressed in SQL because that's less ghastly than the alternative, which is what I'm replacing). The standard connection pools don't quite get this right; they tend to not give you close enough control over what happens at transaction start and are also keen on sharing connections between threads (not a good plan with SQLite).

    I could hoist the whole thing up to the framework level, and I've done that sort of thing in the past. I just don't think there's a real benefit to that any more; the frameworks like to implement all that with AOP (or lots of compile-time codegen) and that's pretty horrid and more than a bit fragile.


    In practice, it looks rather like this:

    conn.transaction(() -> {
        return statement.call(argument1, argument2).map(ResultRecord::new);
    });
    

    Yes, I've left out the declaration of the connection and the prepared statement.


  • ♿ (Parody)

    @dkf said in Poll: Integration test database cleanup:

    Yes. This is SQLite. It works just fine (in WAL mode) with multiple threads, provided I can persuade the Xerial driver to not BEGIN transactions immediately after the last one commits. (There's no per-row locking so upgrading locks is extremely dangerous.)

    Oh, right. Sometimes Oracle doesn't actually look so bad.



  • @Kamil-Podlesak said in Poll: Integration test database cleanup:

    I have seen a bug where the application worked perfectly well, except that it never actually committed anything.

    In my case, that'd be caught by the asserts during the test itself (because those actually query the database to see if the thing is there).

    I'm not doing "full" integration tests of the entire API on down--those are done differently and require at least some manual involvement due to the nature of the beast. I'm doing "how can I catch a screwup in my SQL faster so it doesn't bounce off of testing and cause delays or reworks" testing. Just the database interaction code and the database itself.

    Of course, our current transaction handling is a huge :wtf: in and of itself, but that's because we're in a mixed old-and-new state and the old state was painful to manage. For my purposes, the only transactions I care about are the single-call ones when I tell it to insert something, rather than anything bridging multiple calls. Our eventual design is to have all the things that could cause errors other than "the database is dead" outside the database, so those don't need any kind of transactional isolation; once we're sure that's OK, we just do a small transaction to modify the database.



  • @boomzilla said in Poll: Integration test database cleanup:

    Oh, right. Sometimes Oracle doesn't actually look so bad.

    Lies and heresy.


  • ♿ (Parody)

    @Arantor I blame @dkf.



  • @Benjamin-Hall said in Poll: Integration test database cleanup:

    Like in-memory mode doesn't support some of the same functions (for WTF mysql reasons) that we use heavily.

    For us, the big cost would be re-establishing the schema. That's a minute or so for the one database I have to care about; for the other (which involves seeding things in and a bunch of legacy crap) it's about 5 minutes. Doing that per test means that running the test suite is something you can't do on the fly.
    I'd be fine with a pool of database containers, each set up to current mainline. The test suite would pull one of these containers, apply any schema changes for that particular test, run the tests (truncating the tables in between to clear data) and then reset it when the suite is over (or not, if it failed, keeping it out of the reuse pool).

    Did I hear MySQL? It might have changed in the decade or so, but does that still have a pretty clear database<->file mapping? (I know PostgreSQL doesn't, naming its files after OIDs (for the most part)).

    You could take a "before" image of the database at the filesystem level, and between tests (or group of tests) copy it back over whatever was left behind. You'd have a before image anyway so that the next round of testing has a known starting point, so this is just doing that more frequently.

    It's still a cost, but overwriting files ought to be faster than going through the server to rebuild everything.



  • @Watson I don't actually know how it stores files these days. And "soon" we're moving to AWS Aurora in Mysql mode, so...



  • @Watson said in Poll: Integration test database cleanup:

    but does that still have a pretty clear database<->file mapping?

    The answer is it depends.

    If using MyISAM, yes, it's 1:1 (one file for a table's definition, one for its data, one for its indexes). If using InnoDB, the answer is "it depends", because there are two modes it can operate in: one where each table is fully stored in discrete files, and one where the ibdata files contain lots of things.



  • @Arantor said in Poll: Integration test database cleanup:

    @Watson said in Poll: Integration test database cleanup:

    but does that still have a pretty clear database<->file mapping?

    The answer is it depends.

    If using MyISAM, yes, it's 1:1 (one file for a table's definition, one for its data, one for its indexes). If using InnoDB, the answer is "it depends", because there are two modes it can operate in: one where each table is fully stored in discrete files, and one where the ibdata files contain lots of things.

    We're in InnoDB all the way, I believe. <fake edit> Yeah, just checked. All the relevant tables are InnoDB. Have no clue about mode or how to check that.



  • @Benjamin-Hall fairly easy SQL query.

    SHOW VARIABLES LIKE 'innodb_file_per_table'
    

    If the answer is ON, you have files per table, if OFF you have the data in the ibdata monolith files.


  • Discourse touched me in a no-no place

    @boomzilla said in Poll: Integration test database cleanup:

    @Arantor I blame @dkf.

    https://youtu.be/bOR38552MJA

    (Argh; iframely must be 🍁! Blame them!)



  • @Arantor said in Poll: Integration test database cleanup:

    @Benjamin-Hall fairly easy SQL query.

    SHOW VARIABLES LIKE 'innodb_file_per_table'
    

    If the answer is ON, you have files per table, if OFF you have the data in the ibdata monolith files.

    mysql> SHOW VARIABLES LIKE 'innodb_file_per_table';
    +-----------------------+-------+
    | Variable_name         | Value |
    +-----------------------+-------+
    | innodb_file_per_table | OFF   |
    +-----------------------+-------+
    1 row in set (0.01 sec)
    

    Well, that's that.

