Adventures in implementing a MySQL dump/restore progress bar



  • An application I'm (re)developing at work uses a MySQL database to store hundreds of kajillions of floats representing graph data. One of the requirements is to dump the contents of the database to a file every so often, both to keep things running smoothly and because data more than a fuel cycle old (nuclear power plant) is generally worthless. Just in case the data is needed in the future, it's also required that these files can be read back in. MySQL provides mysqldump.exe for the former task and the "source" command for the latter. However, users of the software have long desired a progress bar so they can gauge approximately how long the dumps and restores will take. Since I'm rebuilding the application anyway, I figured I would see what I could do.

    I first stumbled across this MySQL patch, which outputs to the console every so many lines. Not perfect, but I can read from the output stream and...wait, what?

    mysqldump --verbose --show_progress_size > C:\thingy.sql
    mysqldump: unknown option '--show_progress_size'

    So it turns out my assumptions regarding patches to open-source projects were incorrect. Namely, that if someone had a good idea for a useful feature and submitted the patch A YEAR AND A HALF AGO, the devs would surely have gotten it into the main branch by now. Those damn assumptions. According to the mysqldump docs, there's no way to get any sort of progress report out of the thing, probably thanks to the Unix mentality of "if it outputs nothing, it's still going fine...even if it takes days and you don't know if it just froze up for some reason!" So I was forced to try another route.

    I noticed that if I left off the Windows output-redirection operator ">" from the example above, mysqldump would print everything to the command window. So I thought I would grab the output byte by byte and write it to a file myself. But I still can't implement an accurate progress bar, because mysqldump doesn't tell you how large the output file will be, nor is there any documented correlation between the size of the database before dumping and the size of the output file after dumping. So I have to make a rough guess based on a limited number of data sets. Not to mention that the requirements call for truncating each table in the database immediately after dumping, which also has no documented progress notifications and often takes several times longer than the dumping process itself.
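
    In case it helps to picture it, the capture loop I mean looks roughly like this (Python standing in for the real app code; the database name, the size estimate, and the progress callback are inventions for the sketch):

        import subprocess

        def dump_with_progress(out_path, estimated_bytes, on_progress):
            # Spawn mysqldump and read its stdout ourselves instead of
            # using ">" redirection, so we can count bytes as they pass.
            proc = subprocess.Popen(["mysqldump", "graphdb"],
                                    stdout=subprocess.PIPE)
            written = 0
            with open(out_path, "wb") as out:
                for chunk in iter(lambda: proc.stdout.read(64 * 1024), b""):
                    out.write(chunk)
                    written += len(chunk)
                    # estimated_bytes is only a guess, so clamp at 100%.
                    on_progress(min(written / max(estimated_bytes, 1), 1.0))
            proc.wait()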

    As for restoring, the best thing I've figured out is counting the number of INSERTs in the file (which takes forever on 3GB+ files), then comparing that to the number executed so far while running the source command on the script. But that means running through the file twice, lengthening the whole process. By the time I got to this point I was so frustrated that I just set all the progress bars to marquee and informed the users that any real progress bar would either be incorrect or increase the time of the operation.



  • @lettucemode said:

    I first stumbled across this MySQL patch, which outputs to the console every so many lines. Not perfect, but I can read from the output stream and...wait, what?
    mysqldump --verbose --show_progress_size > C:\thingy.sql
    mysqldump: unknown option '--show_progress_size'

    According to that link, the option is --show-progress-size, not --show_progress_size.



  • @Cassidy said:

    @lettucemode said:

    I first stumbled across this MySQL patch, which outputs to the console every so many lines. Not perfect, but I can read from the output stream and...wait, what?

    mysqldump --verbose --show_progress_size > C:\thingy.sql
    mysqldump: unknown option '--show_progress_size'

    According to that link, the option is --show-progress-size, not --show_progress_size.

    Blargh typo. It still doesn't work - the option isn't in the docs anywhere either.


  • Discourse touched me in a no-no place

    @Cassidy said:

    According to that link, the option is --show-progress-size, not --show_progress_size.
    Makes little difference - the patch still isn't in the code as of 5.5.20 (released 2012/1/10; the patch was submitted 2010/3/27):

    [pjh@sofa mysql-5.5.20]$ find -name *.[ch] | xargs grep show.progress
    [pjh@sofa mysql-5.5.20]$ find . | xargs grep show.progress
    [pjh@sofa mysql-5.5.20]$
    

  • :belt_onion:

    But... if the patch has already been written, why not patch and recompile the code yourself? It's usually as simple as patch < progress.patch; ./configure; make. (Then make install if you want to update the system copy and not just keep a local copy for yourself that the package manager won't stomp on.)



  • @lettucemode said:

    One of the requirements is to be able to dump the contents of the database to a file every so often to keep things running smoothly and because data more than a fuel cycle old (nuclear power plant) is generally worthless.
     

    ** Disclaimer:  I have not used the MySQL dump functionality **

    Is it possible for you to use the dump function to dump a subset of the data at a time and wrap the whole thing in a batch file or similar?

    E.g. if there are 10 tables to be dumped and truncated, you could...

    • output "Dumping table 1 of 10"
    • dump table 1
    • truncate table 1
    • repeat

    It's obviously not very granular but it would at least show that progress is happening.
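
    A rough sketch of that loop (Python rather than a batch file, purely for illustration; the table and database names are made up):

        import subprocess

        tables = ["info_table", "data_table"]  # e.g. discovered via SHOW TABLES
        for i, table in enumerate(tables, start=1):
            print(f"Dumping table {i} of {len(tables)}: {table}")
            with open(f"{table}.sql", "wb") as out:
                subprocess.run(["mysqldump", "graphdb", table],
                               stdout=out, check=True)
            print(f"Truncating {table}")
            subprocess.run(["mysql", "graphdb", "-e",
                            f"TRUNCATE TABLE {table}"], check=True)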

     



  • Discourse touched me in a no-no place

    @heterodox said:

    But... if the patch has already been written, why not patch and recompile the code yourself? It's usually as simple as patch < progress.patch; ./configure; make. (Then make install if you want to update the system copy and not just keep a local copy for yourself that the package manager won't stomp on.)
    In case you hadn't noticed, the OP is using Windows, and I'm guessing that they won't have a suitable build environment (and won't be too happy to compile from source anyway) and are instead relying on the pre-compiled binaries.


  • :belt_onion:

    @PJH said:

    In case you hadn't noticed, the OP is using Windows, and I'm guessing that they won't have a suitable build environment (and won't be too happy to compile from source anyway,) and are instead relying on the pre-compiled binaries.

    Ah. Indeed I had not noticed; my apologies. I missed the ".exe" on the end of mysqldump.

    I suppose that makes sense then, though if users really want that feature it's worth creating a build environment for (and it's a good thing to know how to do for the future). But OP probably has better things to be doing and more feature requests than that.



  • Take a guess at how long the 'average' operation takes.

    Divide that time by 100.

    Each time interval, move the progress bar up 1% of the 'remaining' time.

    Yes, I've done this, and only feel slightly guilty about it. (It was a 'placeholder' UI while trying to figure out how to properly estimate the time an operation would take...)
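
    In code it amounted to roughly this (Python here; the callbacks and the guessed duration are placeholders):

        import time

        def placeholder_bar(is_running, draw, guessed_total_s=300.0):
            # One tick per percent of the guessed duration; park at 99%
            # if the real operation overruns the guess.
            step = guessed_total_s / 100
            percent = 0
            while is_running():
                time.sleep(step)
                percent = min(percent + 1, 99)
                draw(percent)
            draw(100)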



  • @bgodot said:

    Take a guess at how long the 'average' operation takes.

    Divide that time by 100.

    Each time interval, move the progress bar up 1% of the 'remaining' time.

    So 50% of the time, it will be incomplete at 100% completion.

    I'd throw some statistics at it to get the maximum length of time for 75% (or 90%, depending on accuracy needs) of the dumps to date, and use that value for the first 90% of the progress bar.
    Then if it's still running by the time it reaches 90%, it slowly creeps up to 99%, where it sits and waits until it is finished.
    Bonus points if you make it re-calibrate itself for future runs with the data it just spat out!
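
    A sketch of the idea (Python; is_running/draw are hypothetical UI hooks, and p90_duration_s would come from the recorded history):

        import time

        def calibrated_bar(is_running, draw, p90_duration_s, history):
            # The first 90% of the bar maps linearly onto the historical
            # 90th-percentile duration; past that, creep toward 99%.
            start = time.monotonic()
            while is_running():
                elapsed = time.monotonic() - start
                if elapsed <= p90_duration_s:
                    percent = 90.0 * elapsed / p90_duration_s
                else:
                    overshoot = (elapsed - p90_duration_s) / p90_duration_s
                    percent = 99.0 - 9.0 / (1.0 + overshoot)
                draw(percent)
                time.sleep(1)
            history.append(time.monotonic() - start)  # data for recalibration
            draw(100)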



  • @lettucemode said:

    Not to mention that the requirements call for truncating each table in the database immediately after dumping, which ... often takes several times longer than the dumping process itself.

    I spotted the hidden mega-WTF.



  • @lettucemode said:

    So I thought I would grab the output byte by byte and write it to a file myself. But I still can't implement an accurate progress bar because mysqldump doesn't tell you about how large the output file will be

    Assuming mysqldump prints one record per line, you could SELECT COUNT(*) first and then count line breaks in the output. Ought to be good enough for a progress indicator.

    @lettucemode said:

    Not to mention that the requirements call for truncating each table in the database immediately after dumping, which also has no documented progress notifications and often takes several times longer than the dumping process itself.

    Wait, what?

    @lettucemode said:

    As for restoring, the best thing I've figured out is counting the number of INSERTs in the file (which takes forever on 3GB+ files)

    I suppose you can count bytes if you read the file yourself and pipe it to mysql.



  • @heterodox said:

    But... if the patch has already been written, why not patch and recompile the code yourself? It's usually as simple as patch < progress.patch; ./configure; make. (Then make install if you want to update the system copy and not just keep a local copy for yourself that the package manager won't stomp on.)

    As already mentioned, I am working in a Windows environment. Also, the nuclear industry (as far as I can tell) is the most regulated thing in all of existence forever. Doing what you suggested wouldn't take much time, true, but the weeks of red tape to get the change approved....

    @fatbull said:

    @lettucemode said:

    So I thought I would grab the output byte by byte and write it to a file myself. But I still can't implement an accurate progress bar because mysqldump doesn't tell you about how large the output file will be

    Assuming mysqldump prints one record per line, you could SELECT COUNT(*) first and then count line breaks in the output. Ought to be good enough for a progress indicator.

    Unfortunately, the newline character shows up relatively often in the BLOB-typed columns.

    @fatbull said:

    @lettucemode said:

    Not to mention that the requirements call for truncating each table in the database immediately after dumping, which also has no documented progress notifications and often takes several times longer than the dumping process itself.

    Wait, what?

    There are two tables in the database. One contains information about a given entry - id, timestamp, so on - and the other contains the data itself. The data table accounts for 99% of the database size (the total size of one row is a dozen KB). The information table cascades whatever happens to it (i.e. deletes) to the data table. (Is that correct terminology?) Running the command TRUNCATE TABLE information_table; can take 10-15 minutes with about 4GB of data in the database. If there's a better command to run I'm all ears. And no, I didn't create the database nor do I have control over its schema.

    As for the requirement, the value of the information decreases rapidly as time goes on. It's not customer data.

    @fatbull said:

    @lettucemode said:

    As for restoring, the best thing I've figured out is counting the number of INSERTs in the file (which takes forever on 3GB+ files)

    I suppose you can count bytes if you read the file yourself and pipe it to mysql.

    Thanks for the suggestion, I'll try that if I ever get around to this again.



  • @lettucemode said:

    but the weeks of red tape to get the change approved....

    Might be easier to just buy MySQL Sun Oracle and tell them to patch it. ;)

    @lettucemode said:

    Unfortunately, the newline character shows up relatively often in the BLOB-typed columns.

    Search for "\nINSERT" instead? Yes, it's an ugly hack, but since it's only for a progress bar...
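
    Stream-counting that marker might look like this (a Python sketch; it keeps a six-byte tail so a marker split across read boundaries still gets counted):

        def count_inserts(path, chunk_size=1 << 20):
            # Count "\nINSERT" markers without loading the whole dump.
            marker = b"\nINSERT"
            count, tail = 0, b""
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    buf = tail + chunk
                    count += buf.count(marker)
                    tail = buf[-(len(marker) - 1):]
            return count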

    @lettucemode said:

    The information table cascades whatever happens to it (i.e. deletes) to the data table. (Is that correct terminology?) Running the command TRUNCATE TABLE information_table; can take 10-15 minutes with about 4GB of data in the database.

    Truncating the info table empties the data table as well, doesn't it? What if you truncate the data table before you truncate the info table?



  • @Salamander said:

    I'd throw some statistics at it to get the maximum length of time for 75% (or 90%, depending on accuracy needs) of the dumps to date, and use that value for the first 90% of the progress bar.
    Then if it's still running by the time it reaches 90%, it slowly creeps up to 99%, where it sits and waits until it is finished.
    Bonus points if you make it re-calibrate itself for future runs with the data it just spat out!
    Or you just let the progress bar grow according to a suitable function, like 1 - e^(-x) or 1 - 1/(x+1)...
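
    Mapped onto elapsed time, that's about one line (Python; the time constant is arbitrary):

        import math

        def asymptotic_progress(elapsed_s, scale_s=60.0):
            # Approaches 100% but never reaches it; scale_s controls
            # how quickly the curve flattens out.
            return 100.0 * (1.0 - math.exp(-elapsed_s / scale_s))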



  • Not to suggest you re-invent the wheel, but could you not simply write your own version of the mysql dump? Or at least wrap it in its own script? At least then you'll be able to do a count on the number of records being dumped from each table and compare that to a counter as you iterate over each record.



  • @lettucemode said:

    So I thought I would grab the output byte by byte and write it to a file myself. But I still can't implement an accurate progress bar because mysqldump doesn't tell you about how large the output file will be

    There's got to be some correlation between the size of the database binary files and the mysql dump, so I'm guessing examining the size of the dump at a specific point in time and comparing it to the estimated size (extrapolated from binary files) ought to give some idea of progress. No expert, mind.

    @lettucemode said:
    Not to mention that the requirements call for truncating each table in the database immediately after dumping, which also has no documented progress notifications and often takes several times longer than the dumping process itself.

    I don't understand this either. You're doing a dump (almost taking a backup) then scrubbing the data...?

    Anyways... FWIW, applications that offer no real-time feedback like this are annoying, especially when wget/rpm and others of that ilk can show progress quite happily. I don't do dumps/restores with datasets as big as yours, but even so having that patch included in the source would be dead funky.



  • I understand your frustration, perhaps http://linux.die.net/man/1/pv could help? Surely you can't be using win for your nuclear power plant, since that is forbidden according to the EULA? ;)



  • The real WTF is that you clearly didn't read the EULA for MySQL, specifically the part that says:

    Product is not specifically designed, manufactured or intended for use in the planning, construction, maintenance, control or direct operation of nuclear facilities



    I'm also totally surprised that your client is a nuclear power plant and you're relying on open-source software on Windows. I guess your client doesn't really care for the data. Bitching that some patch hasn't been included is not appropriate, especially if you misspell it. You've chosen open source, so why don't you fix it yourself?

    Anyway, you've got to like this forum: there are quite a few suggestions, although I think that counting INSERTs in the dump is a bit heavy on the I/O. I'd personally go for an estimate of the dump size based on the number of records in the table and a history of dump sizes, and just provide an approximation. If you explain it properly, nobody cares if 100% is truly 100%. They just want to know how long they have to wait, more or less.


  • Discourse touched me in a no-no place

    @Obfuscator said:

    I understand your frustration, perhaps http://linux.die.net/man/1/pv could help?
    I doubt it somehow...



  • @ASheridan said:

    The real WTF is that you clearly didn't read the EULA for MySQL, specifically the part that says:

    Product is not specifically designed, manufactured or intended for use in the planning, construction, maintenance, control or direct operation of nuclear facilities

    .. or that he did, and MySQL isn't actually used for that purpose but simply to hold data for analysis and reporting.

    I didn't get the impression from the first post MySQL was being used for anything mission-critical, but that doesn't mean it isn't. If it is, then.. erm.. holy heck..


  • Java Dev

    A solution I've seen in the past, on a linux platform, consisted of:

    • LOCK TABLES
    • Physically copy the data files
    • UNLOCK TABLES

    Restore worked similarly. Performance was apparently pretty good.
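
    For reference, a minimal sketch of that idea (Python with the pymysql library; the paths and credentials are made up, and I've used FLUSH TABLES WITH READ LOCK, the usual statement for a consistent file copy). The key detail is that the lock only holds while the session stays open:

        import shutil
        import pymysql  # assumed client library

        conn = pymysql.connect(host="localhost", user="app", password="secret")
        try:
            with conn.cursor() as cur:
                # The copy must happen while this connection is still open,
                # because closing it releases the lock.
                cur.execute("FLUSH TABLES WITH READ LOCK")
                shutil.copytree("/var/lib/mysql/graphdb", "/backup/graphdb")
                cur.execute("UNLOCK TABLES")
        finally:
            conn.close()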



  • Even so, I read that statement in the EULA to mean it's not suitable for this particular task, which I would assume falls under maintenance and planning. Either way, it's not something one can bitch about if the agreement clearly says it's not intended for that. It's open source, and a patch exists. Is it that difficult to recompile? It doesn't even sound like any level of coding is required here, just building the project with the new patch. IMHO, that is the real WTF. Sure, it would be nice to build things like that into the core, but maybe there was a good reason for leaving it out? Maybe it's only available via some enterprise route that Oracle offers (they've got to get the bread to the table somehow, after all).


  • ♿ (Parody)

    @ASheridan said:

    Even so, I read that statement in the EULA to mean it's not suitable for this particular task, which I would assume falls under maintenance and planning. Either way, it's not something one can bitch about if the agreement clearly says it's not intended for that.

    It's pretty common to disclaim that software is fit for any particular purpose. Here's a high-profile version:
    @Firefox EULA said:

    4. DISCLAIMER OF WARRANTY. THE PRODUCT IS PROVIDED "AS IS" WITH ALL FAULTS. TO THE EXTENT PERMITTED BY LAW, MOZILLA AND MOZILLA'S DISTRIBUTORS, LICENSORS HEREBY DISCLAIM ALL WARRANTIES, WHETHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION WARRANTIES THAT THE PRODUCT IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE AND NON-INFRINGING. YOU BEAR ENTIRE RISK AS TO SELECTING THE PRODUCT FOR YOUR PURPOSES AND AS TO THE QUALITY AND PERFORMANCE OF THE PRODUCT. THIS LIMITATION WILL APPLY NOTWITHSTANDING THE FAILURE OF ESSENTIAL PURPOSE OF ANY REMEDY. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR LIMITATION OF IMPLIED WARRANTIES, SO THIS DISCLAIMER MAY NOT APPLY TO YOU.

    But I think they generally do this. I always thought the nuclear thing was more or less a joke. Or maybe there's a court case out there where a software company got sued, and it's now a standard CYA clause. Either way, it's not really a statement about what it can or cannot, or should or should not do, but just that the owners of the code aren't going to be responsible if something bad happens when you use the software.



  • @PJH said:

    @Obfuscator said:
    I understand your frustration, perhaps http://linux.die.net/man/1/pv could help?
    I doubt it somehow...

    Huh? Care to elaborate? I'm thinking something along the lines of

    "mysqldump | pv -s [exact or estimated file size] > target"


  • :belt_onion:

    @ASheridan said:

    The real WTF is that you clearly didn't read the EULA for MySQL, specifically the part that says:

    The real WTF is that you think that's his job...

     


  • ♿ (Parody)

    @Obfuscator said:

    @PJH said:
    @Obfuscator said:
    I understand your frustration, perhaps http://linux.die.net/man/1/pv could help?
    I doubt it somehow...

    Huh? Care to elaborate? I'm thinking something along the lines of

    "mysqldump | pv -s [exact or estimated file size] > target"

    I suspect his being on a Windows platform will reduce the utility of this. Installing cygwin or something is probably beyond the scope of what he'll be allowed or want to do.


  • Discourse touched me in a no-no place

    @Obfuscator said:

    @PJH said:
    @Obfuscator said:
    I understand your frustration, perhaps http://linux.die.net/man/1/pv could help?
    I doubt it somehow...

    Huh? Care to elaborate? I'm thinking something along the lines of

    "mysqldump | pv -s [exact or estimated file size] > target"

    Hint: You forgot the '.exe' on the end of mysqldump.



  • @PJH said:

    @Obfuscator said:
    @PJH said:
    @Obfuscator said:
    I understand your frustration, perhaps http://linux.die.net/man/1/pv could help?
    I doubt it somehow...

    Huh? Care to elaborate? I'm thinking something along the lines of

    "mysqldump | pv -s [exact or estimated file size] > target"

    Hint: You forgot the '.exe' on the end of mysqldump.

    Aha, but that's why I added the 'you're probably not on win due to the EULA' joke at the end. Oh well..



  • You could print out the file size as it grows instead of trying to guess percentage.



  • @PleegWat said:

    A solution I've seen in the past, on a linux platform, consisted of:

    • LOCK TABLE
    • Physically copy the data files
    • UNLOCK TABLES

    Restore worked similarly. Performance was apparently pretty good.

    But not guaranteed - it seems caching issues may cause a discrepancy between what MySQL knows and what the datafiles say. There's also the issue of having to use the same platform/version/etc for a restore during a DR situation, whereas a mysqldump file tolerates variations in versions (to some degree) and platform.

    @dtfinch said:

    You could print out the file size as it grows instead of trying to guess percentage.

    That would only show how big the file is getting, rather than provide an estimation of completion times. But I guess part of the picture is better than none at all.



    maybe you could do a show tables, then select count(*) from each one (instant answer as long as you have a pk), put that data aside, then mysqldump one table at a time and show the user "processed tables X/Y, processed records W/Z, currently dumping table table_name ( table_name_count records )"
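
    Something like this sketch (Python with the assumed pymysql library; credentials, names, and the message format are illustrative):

        import subprocess
        import pymysql

        conn = pymysql.connect(host="localhost", user="app",
                               password="secret", database="graphdb")
        with conn.cursor() as cur:
            cur.execute("SHOW TABLES")
            tables = [r[0] for r in cur.fetchall()]
            counts = {}
            for t in tables:
                cur.execute(f"SELECT COUNT(*) FROM `{t}`")
                counts[t] = cur.fetchone()[0]
        conn.close()

        done, total = 0, sum(counts.values())
        for i, t in enumerate(tables, start=1):
            print(f"processed tables {i - 1}/{len(tables)}, "
                  f"processed records {done}/{total}, "
                  f"currently dumping {t} ({counts[t]} records)")
            with open(f"{t}.sql", "wb") as out:
                subprocess.run(["mysqldump", "graphdb", t],
                               stdout=out, check=True)
            done += counts[t]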

     


  • Discourse touched me in a no-no place

    @MustBeUsersFault said:

    maybe you could do a show tables, then select count(*) from each one (instant answer as long as you have a pk)
    .. and it's not innodb.



  • @PJH said:

    . and it's not innodb.
     

    true, and he is probably using innodb; that would explain the 'slow' truncate as well.



  • @MustBeUsersFault said:

    @PJH said:
    . and it's not innodb.
    true, and he is probably using innodb; that would explain the 'slow' truncate as well.

    In that case, I recommend a shrimp fork to the eyeball. It would be much less painful than working with InnoDB.



  • @blakeyrat said:

    @MustBeUsersFault said:
    @PJH said:
    . and it's not innodb.
    true, and he is probably using innodb; that would explain the 'slow' truncate as well.

    In that case, I recommend a shrimp fork to the eyeball. It would be much less painful than working with InnoDB.

    Yes, because the frequent corrupted tables and lack of foreign keys of MyISAM are soooo much better



  • @dtech said:

    @blakeyrat said:
    @MustBeUsersFault said:
    @PJH said:
    . and it's not innodb.
    true, and he is probably using innodb; that would explain the 'slow' truncate as well.

    In that case, I recommend a shrimp fork to the eyeball. It would be much less painful than working with InnoDB.

    Yes, because the frequent corrupted tables and lack of foreign keys of MyISAM are soooo much better

    Oh wait, I thought InnoDB was the bad one. It's the good one? MyISAM is the bad one? Because it wouldn't make sense if MySQL just picked the best storage engine they could write and used that one all the time.

    Sorry if I got confused.



  • @PJH said:

    @MustBeUsersFault said:
    maybe you could do a show tables, then select count(*) from each one (instant answer as long as you have a pk)
    .. and it's not innodb.

    Or you can use show table status instead of select count(*) to get the row count, which is instant even on InnoDB.  It's not an accurate count but for a progress bar it's likely good enough.
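
    E.g. (a Python sketch, again assuming the pymysql library; on InnoDB the Rows column is only an estimate, as noted):

        import pymysql

        conn = pymysql.connect(host="localhost", user="app",
                               password="secret", database="graphdb")
        with conn.cursor() as cur:
            cur.execute("SHOW TABLE STATUS")
            for row in cur.fetchall():
                name, engine, approx_rows = row[0], row[1], row[4]
                print(f"{name} ({engine}): ~{approx_rows} rows")
        conn.close()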



  • @lettucemode said:

    So it turns out my assumptions regarding patches to open-source projects were incorrect. Namely that if someone had a good idea for a useful feature and submitted the patch A YEAR AND A HALF AGO the devs would have surely gotten it into the main branch by now.
    Where in the world did you get that idea?  You obviously haven't been paying attention.  If you think a year-and-a-half is bad, you'll find this page entertaining.  A quick scan of that page turns up several bugs that were just recently fixed but were originally submitted as far back as December 2000.



  • @blakeyrat said:

    @dtech said:
    @blakeyrat said:
    I recommend a shrimp fork to the eyeball. It would be much less painful than working with InnoDB.

    Yes, because the frequent corrupted tables and lack of foreign keys of MyISAM are soooo much better

    Oh wait I thought InnoDB was the bad one. It's the good one? MyISAM is the bad one?

    Sorry if I got confused.

    No, you were correct the first time: a shrimp fork to the eyeball is less painful than working with InnoDB.

    Additionally, a shrimp fork to the nuts is less painful than working with MyISAM.

     



  • @boog said:

    @blakeyrat said:

    @dtech said:
    @blakeyrat said:
    I recommend a shrimp fork to the eyeball. It would be much less painful than working with InnoDB.

    Yes, because the frequent corrupted tables and lack of foreign keys of MyISAM are soooo much better

    Oh wait I thought InnoDB was the bad one. It's the good one? MyISAM is the bad one?

    Sorry if I got confused.

    No, you were correct the first time: a shrimp fork to the eyeball is less painful than working with InnoDB.

    Additionally, a shrimp fork to the nuts is less painful than working with MyISAM.

     

    No no no, to get the true experience of working with MySQL, you must cover your nuts in honey then teabag an anthill for a day.



  • @The_Assimilator said:

    No no no, to get the true experience of working with MySQL, you must cover your nuts in honey then teabag an anthill for a day.

    Is working with MySQL a requirement for such practice, or am I permitted to indulge without the database constraint?



  • @Cassidy said:

    @The_Assimilator said:

    No no no, to get the true experience of working with MySQL, you must cover your nuts in honey then teabag an anthill for a day.

    Is working with MySQL a requirement for such practice, or am I permitted to indulge without the database constraint?

    Was that a database pun?



  • An attempt, yes. A weak one, mind - but it was an attempt, nonetheless.

    A piece of SQL walks into a bar and approaches two tables, asking them "mind if I join you?"



  • @El_Heffe said:

    If you think a year-and-a-half is bad, you'll find this page entertaining.  A quick scan of that page turns up several bugs that were just recently fixed but were originally submitted as far back as December 2000.


    My personal record for a bug report submitted (with a suggested patch) but not yet fixed is 9.3 years and counting. Although that isn't an open source project.



  • @fatbull said:

    Truncating the info table empties the data table as well, doesn't it? What if you truncate the data table before you truncate the info table?

    Instant truncation! Thanks for the tip. Note to self: InnoDB hates cascading deletes, or something.

    @El_Heffe said:

    Where in the world did you get that idea?  You obviously haven't been paying attention.  If you think a year-and-a-half is bad, you'll find this page entertaining.  A quick scan of that page turns up several bugs that were just recently fixed but were originally submitted as far back as December 2000.

    Good god

    @The_Assimilator said:

    No no no, to get the true experience of working with MySQL, you must cover your nuts in honey then teabag an anthill for a day.

    You've very accurately summed up my experience getting this thing to work :D thanks for the laugh!

    I had some spare time yesterday and came back to this problem. I managed to get a reasonably accurate progress bar for dumps and an accurate progress bar for restores. For dumps, I compared the size of the database to the size of the output dump file for several data sets, getting the (approximate) equation dumpFileSize = databaseSizeInBytes / (3.51 + .014 * databaseSizeInGBRoundedUp). So the code has a magic number in it, but I explain it with comments :P better than nothing. I pipe the mysqldump output to a file, keeping track of how many bytes have passed through so far and incrementing the progress bar accordingly. For restores, I ask Windows for the file size of the dump file, pipe it to MySQL line by line, and again track the bytes and update the progress bar.
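
    For the curious, the core of it boils down to something like this (Python standing in for the real code; the constants are the empirical fit described above, and the progress callback is hypothetical):

        import math
        import os
        import subprocess

        def estimated_dump_size(db_size_bytes):
            # Empirical fit from several data sets - the magic numbers
            # are explained in comments in the real code too.
            gb_rounded_up = math.ceil(db_size_bytes / 1024**3)
            return db_size_bytes / (3.51 + 0.014 * gb_rounded_up)

        def restore_with_progress(dump_path, on_progress):
            # Feed the dump to the mysql client line by line (same effect
            # as "source"), tracking bytes consumed against the file size.
            total = os.path.getsize(dump_path)
            proc = subprocess.Popen(["mysql", "graphdb"],
                                    stdin=subprocess.PIPE)
            done = 0
            with open(dump_path, "rb") as f:
                for line in f:
                    proc.stdin.write(line)
                    done += len(line)
                    on_progress(done / total)
            proc.stdin.close()
            proc.wait()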

    Thanks to everyone who demonstrated Blakeyrat's Second Law by posting possible solutions. Though I think we can agree that none of the suggested workarounds and approximations would be necessary if MySQL had gotten their shit right in the first place. Maybe they just expect people to find workarounds to this problem, so fixing it isn't a priority.



  • @lettucemode said:

    Though I think we can agree that none of the suggested workarounds and approximations would be necessary if MySQL had gotten their shit right in the first place.

    *nods*

    @lettucemode said:

    Maybe they just expect people to find workarounds to this problem, therefore fixing it isn't an issue.

    Perhaps Sun/Horricle never considered that MySQL would be used with datasets as large as yours; mysqldump etc. runs fairly quickly for Johnny Homeowner and his lil LAMP blog, so showing progress is unnecessary.... although I can half-hear Ellisonites pushing "you should be using Oracle, not MySQL, for those kinds of data volumes!"

    If you're willing to publish your script(s) somewhere as a tried-n-tested method, I'd be interested in them. The lack of progress thing during backups has annoyed me at times (although I haven't had to wait long, it's just the "silent treatment" that fuels my impatience).



  • @pjt33 said:

    @El_Heffe said:

    If you think a year-and-a-half is bad, you'll find this page entertaining.  A quick scan of that page turns up several bugs that were just recently fixed but were originally submitted as far back as December 2000.


    My personal record for a bug report submitted (with a suggested patch) but bug not yet fixed is 9.3 years and counting. Although that isn't an open source project.

    Java (or at least, most of it) is open source nowadays. As for Sun not fixing a simple bug for a decade... I guess we know why they're no longer in existence. Here's hoping Oracle goes the same way!

    edit: OMFG you were using WINDOWS 98 at the time you submitted that! Wow.



  • @The_Assimilator said:

    Java (or at least, most of it) is open source nowadays.


    Sort of - Oracle's distribution of it isn't, but Sun spun off OpenJDK before going under. I suppose I could try filing a bug report with OpenJDK, because they probably didn't copy the old Bug Parade into their tracker when the fork happened.

