When Beethoven (the sequence) rolled over a while ago...



  • So, in the app I work on, there's a set of sequences used (in a rather :wtf:-at-first-glance way, but that's neither here nor there) to produce unique IDs for the various types of objects that must be statically loaded, at startup, into the system the app generates configuration for.

    At our company, we use Optim (from IBM) as the "enterprise" solution for getting data downloaded from production to various test and development environments (so we can have, y'know, real data to test on instead of developer-created fake data, as the latter just won't cut the mustard with a dataset of the complexity we're dealing with). Thing is, Optim doesn't know what to do with Oracle sequence objects, so our Optim caretakers have created some custom plumbing that takes the maximum value from a column in production, adds a number to it, and sets the associated sequence in test or dev to that value, so that your test system won't break because of duplicate IDs. (There's a rough sketch of this, and of our in-app workaround, at the end of this post.)

    Thing is, this approach stops working after a sequence rolls over: the value it puts in test gets "stuck" at the sequence's MAXVALUE. We do have a manual (as in coded-into-the-app) function that reads the sequence's current value from prod and then bumps the sequence in test to match, but there's no good way to plumb it into the Optim infrastructure -- the app's a custom thick client written in C++/MFC using plain ol' ODBC for database connectivity, and it doesn't (yet) have a functioning automation interface of any sort beyond the hardcoded automatic functions. So unless we can either get that functionality reimplemented on the Linux side of the fence (which is in the cards for the distant future, but likely won't happen until we get the overall system into production and thus aren't putting as much stress on the people responsible for managing this steaming heap o' configuration data), or at a bare minimum put a decent automation interface on the app, it's not going to get reused for this purpose any time soon.

    What am I, the app dev, supposed to do about this? (Preferably without asking our Optim team to re-plumb their custom support, since that's unlikely to happen. Extracting the functionality of our hardcoded sequence-setter into something that can run on a Linux box would be more likely, but that'd still require non-standard plumbing on their end.)
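
    For the curious, here's roughly what the two halves boil down to. I don't know exactly what the Optim-side plumbing runs, so take this as a sketch only; the table, column, and sequence names and the +1000 pad are all made up:

    ```sql
    -- What the Optim plumbing effectively does when refreshing test/dev:
    -- take MAX(id) from prod, pad it, and point the test sequence at that.
    -- Once the sequence has wrapped, MAX(widget_id) is pinned at (or near)
    -- the sequence's MAXVALUE, so the computed start never moves -- that's
    -- the "stuck" part.
    SELECT MAX(widget_id) + 1000 AS new_start
    FROM   widgets;   -- run against prod

    -- What our hardcoded in-app fixer effectively does instead: read the
    -- sequence's current value in prod, then walk the test sequence forward
    -- to match using the usual ALTER SEQUENCE increment trick (no DROP/CREATE
    -- needed). Assumes the prod value is at or ahead of the test value;
    -- walking backwards past MINVALUE would need more care.
    DECLARE
      v_target  NUMBER := :prod_value;  -- whatever prod's sequence last handed out
      v_current NUMBER;
      v_delta   NUMBER;
    BEGIN
      SELECT widget_seq.NEXTVAL INTO v_current FROM dual;   -- burns one value
      v_delta := v_target - v_current;
      IF v_delta <> 0 THEN
        EXECUTE IMMEDIATE 'ALTER SEQUENCE widget_seq INCREMENT BY ' || v_delta;
        SELECT widget_seq.NEXTVAL INTO v_current FROM dual; -- lands on v_target
        EXECUTE IMMEDIATE 'ALTER SEQUENCE widget_seq INCREMENT BY 1';
      END IF;
    END;
    /
    ```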



  • I'm trying to wrap my head around this problem, and I'm not understanding what you're asking.

    So you have some IDs that roll over and you need to make sure the test data you bring down from prod doesn't cause collisions when used in dev.

    In the app, when the sequence rolls over, you do some voodoo and it rolls over fine, but when [enterprise thing] gets data from prod and bumps the number, it can get stuck at SEQUENCE_MAX instead of rolling over the sequence properly when it grabs stuff.


  • Discourse touched me in a no-no place

    @tarunik said:

    What am I, the app dev, supposed to do about this?

    Use bigger fields for sequence numbers? 😃


  • Discourse touched me in a no-no place

    2 posts were split to a new topic: How to fix MAXVAL problems...



  • @rad131304 said:

    In the app, when the sequence rolls over, you do some voodoo and it rolls over fine, but when [enterprise thing] gets data from prod and bumps the number, it can get stuck at SEQUENCE_MAX instead of rolling over the sequence properly when it grabs stuff.

    Pretty darn much. We're only dealing with it now because one of the IDs was underprovisioned and rolled over already. :D



  • @tarunik said:

    @rad131304 said:
    In the app, when the sequence rolls over, you do some voodoo and it rolls over fine, but when [enterprise thing] gets data from prod and bumps the number, it can get stuck at SEQUENCE_MAX instead of rolling over the sequence properly when it grabs stuff.

    Pretty darn much. We're only dealing with it now because one of the IDs was underprovisioned and rolled over already. :D

    Could you go full :wooden_table: on it and use [enterprise thingy] to write CSVs and then just run a script against them for importing/cluebatting?
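
    Something along these lines is what I'm picturing on the script side. Table names, the DIRECTORY object, and the file name are all invented, and I'm assuming a straight comma-separated dump with a fixed column order:

    ```sql
    -- Point an external table at the CSV that [enterprise thingy] spat out...
    CREATE TABLE widgets_stage (
      widget_id   NUMBER,
      widget_name VARCHAR2(100)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY extract_dir     -- DIRECTORY object pointing at the dump
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
      )
      LOCATION ('widgets.csv')
    );

    -- ...then the "script" part is just truncate-and-load, plus whatever
    -- cluebatting the IDs need on the way through.
    TRUNCATE TABLE widgets;
    INSERT INTO widgets (widget_id, widget_name)
    SELECT widget_id, widget_name
    FROM   widgets_stage;
    COMMIT;
    ```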



  • @rad131304 said:

    Could you go full :wooden_table: on it and use [enterprise thingy] to write CSVs and then just run a script against them for importing/cluebatting?

    That's what we've been doing in the past -- albeit not with [enterprise thingy], but with our DB-dev tooling. But a confluence of unhappy circumstances breaks the import/export process for all three formats that said DB-dev tool supports when doing a cross-database-user export/import.



  • @tarunik said:

    @rad131304 said:
    Could you go full :wooden_table: on it and use [enterprise thingy] to write CSVs and then just run a script against them for importing/cluebatting?

    That's what we've been doing in the past -- albeit not with [enterprise thingy], but with our DB-dev tooling. But, a confluence of unhappy circumstances breaks the import/export process for all three formats said DB-dev tool supports when doing a cross-database-user export/import.

    So if you tell [enterprise thingy] that you want X number of records from prod starting at record Y, is there any way you can calculate that it would need to roll over, grab up to MAXVAL for your first import, and then do a second import starting from MINVAL to get a total of X records?
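
    I'm imagining the extract split roughly like this (names invented; :start_id is record Y, and whatever drives it stops once it has X rows in total):

    ```sql
    -- Pass 1: from the requested starting ID up to the top of the range.
    SELECT *
    FROM   widgets
    WHERE  widget_id >= :start_id
    ORDER  BY widget_id;

    -- Pass 2 (only if pass 1 fell short of X rows): wrap around and keep
    -- going from the bottom of the range up to, but not including, the
    -- starting ID.
    SELECT *
    FROM   widgets
    WHERE  widget_id < :start_id
    ORDER  BY widget_id;
    ```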



  • @rad131304 said:

    So if you tell [enterprise thingy] that you want X number of records from prod starting at record Y, is there any way you can calculate that it would need to roll over, grab up to MAXVAL for your first import, and then do a second import starting from MINVAL to get a total of X records?

    Well, figuring out that it rolled over isn't the hard part -- you can answer that question by asking yourself if there's a record in the table with a serial number greater than CURRVAL, provided the sequence is the only source of IDs. It's munging the IDs on that second import pass that's the problem -- keep in mind that we're doing a truncate-and-load here, as trying to upsert stuff would get mighty confusing quickly (some of the data we deal with was designed in an insane way, but that's not really material to this issue).
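
    For reference, that check is basically the following. Names are invented again, and USER_SEQUENCES.LAST_NUMBER is only a stand-in for CURRVAL here -- on a cached sequence it runs ahead of the real current value by up to the cache size, which is close enough for this purpose:

    ```sql
    -- Rollover check, assuming widget_seq is the only source of widget_id values.
    SELECT COUNT(*) AS ids_past_current
    FROM   widgets w
           CROSS JOIN user_sequences s
    WHERE  s.sequence_name = 'WIDGET_SEQ'
    AND    w.widget_id > s.last_number;
    -- Any non-zero count means the sequence has wrapped since those rows were cut.
    ```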

