Apparently, the Linux kernel isn't important enough to bother with a backup



  • Maybe I'm weird, but I always have a couple of spare hard drives and backups of all my porn important files just in case something like this happens.



  • You do realize that Linux is in a git repository, and it's probably the most cloned git repository in existence*.

    If your computer died, would you be able to keep developing whatever software you were developing even before you got a replacement?

    *this is probably wrong.



  • @Ben L. said:

    If your computer died, would you be able to keep developing whatever software you were developing even before you got a replacement?
    (1) Remove dead hard drive

    (b) Insert spare hard drive that I keep sitting around just in case something like this happens

    (3) Restore from backup

    Total elapsed time:  ~15 minutes

    @Ben L. said:

    You do realize that Linux is in a git
    repository, and it's probably the most cloned git repository in
    existence*.

    *this is probably wrong.

    It's probably pretty close to being true.  TRWTF is all the headlines "OH NOES!! LINUX TORVALDS HAD TO STOP DEVELOPMENT OF TEH LINUX COLONEL!!"

  • @El_Heffe said:

    TEH LINUX COLONEL
    Waiting for the Windows General.



  • @El_Heffe said:

    (3) Restore from backup

    Total elapsed time:  ~15 minutes

    Do you do a constant shadow copy sort of backup? I know that I'm horrible about backups, but I always put my seat belt on in the car.

    @El_Heffe said:

    TRWTF is all the headlines "OH NOES!! LINUX TORVALDS HAD TO STOP DEVELOPMENT OF TEH LINUX COLONEL!!"

    Yeah, it's a temporary setback, but not a big one. Mostly a PITA. Although, what would happen if Linus got hit by a bus? (Not rhetorical, but please answer this in the voice of blakeyrat.)



  • @boomzilla said:

    @El_Heffe said:

    (3) Restore from backup

    Total elapsed time:  ~15 minutes

    Do you do a constant shadow copy sort of backup? I know that I'm horrible about backups, but I always put my seat belt on in the car.

    Either Apple's Time Machine or the "new" Windows Backup system works that way. I don't have experience with the former but with the latter you simply point it at some folder (local or network, doesn't matter) and you'll get incremental backups of the folders you want (like the "My Documents" folder).

    Pretty sure there are similar mechanisms under Linux.
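
    rsync with --link-dest (or a wrapper like rsnapshot) is the usual one: unchanged files are hard-linked between snapshots, so every snapshot looks like a full copy but only changed files take new space. A toy Python sketch of that hard-link idea (not what those tools actually run):

    ```python
    import filecmp
    import os
    import shutil
    from datetime import datetime

    def snapshot(source_dir, backup_root, previous=None):
        """Toy incremental snapshot in the rsync --link-dest style: files unchanged
        since the previous snapshot are hard-linked (no extra space), changed or
        new files are copied for real."""
        dest = os.path.join(backup_root, datetime.now().strftime("%Y-%m-%d_%H%M%S"))
        for root, _dirs, files in os.walk(source_dir):
            rel = os.path.relpath(root, source_dir)
            os.makedirs(os.path.join(dest, rel), exist_ok=True)
            for name in files:
                src = os.path.join(root, name)
                new = os.path.join(dest, rel, name)
                old = os.path.join(previous, rel, name) if previous else None
                if old and os.path.exists(old) and filecmp.cmp(src, old, shallow=True):
                    os.link(old, new)        # unchanged: hard link into the new snapshot
                else:
                    shutil.copy2(src, new)   # changed or new: real copy
        return dest
    ```

    (Real tools also handle deletions, permissions and pruning old snapshots; this is just the core idea.)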



  • @Rhywden said:

    Either Apple's Time Machine or the "new" Windows Backup system works that way. I don't have experience with the former but with the latter you simply point it at some folder (local or network, doesn't matter) and you'll get incremental backups of the folders you want (like the "My Documents" folder).

    Pretty sure there are similar mechanisms under Linux.

    No doubt. I just doubt that many use them. Though I admit my carelessness may be coloring my impression. OTOH, if I were doing something on my machine on the order of what Linus does on his I might take a different view of the matter.



  • I have a Samba server (Ubuntu) with RAID 1 on two 1TB drives, so that I can use my work files from any computer in the house.

    I rarely ever use it, because I don't have a server room and the front 120mm fan sounds like a vacuum cleaner or a leaf blower. I.e. way too loud. But I've been too lazy to replace it. Then again, I'm also afraid of the power bill if I leave it on day and night. I built this machine from the least power consuming parts available at the time (2009 ?), but it is still an x86 machine.

    But with my actual continuous back-up needs, I could probably get away with a RaspberryPi and 2 x 32GB memory sticks (in RAID 1  🙂 ).

    In a one-man operation, it's a chore to keep back-ups.

    Therefore, if your work actually gets backed up regularly anyway (like with the Linux git clones), just make your primary workstation boot from RAID 1. With all major OSes supporting boot from RAID 1, there is really no reason not to use it. (This advice presumes that your working files and boot sector normally live on the same physical disk.)



  • @OldCrow said:

    Then again, I'm also afraid of the power bill if I leave it on day and night.

    I live in California and get power from PG&E. I leave my gaming PC and monitor on day and night and spend 10USD more per month than if I only turned it on for two or three hours per day.
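
    For anyone wondering how that pencils out, the arithmetic is just watts × hours × rate (a sketch with made-up numbers - idle draw and PG&E's tiered rates vary a lot):

    ```python
    def extra_monthly_cost(idle_watts, extra_hours_per_day, price_per_kwh, days=30):
        """Rough cost of leaving a machine running instead of switching it off."""
        extra_kwh = idle_watts / 1000 * extra_hours_per_day * days
        return extra_kwh * price_per_kwh

    # e.g. ~100 W at idle, ~21 extra hours a day, $0.15/kWh -> about $9.45/month
    print(extra_monthly_cost(100, 21, 0.15))
    ```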



  • @Rhywden said:

    Either Apple's Time Machine or the "new" Windows Backup system works that way. I don't have experience with the former but with the latter you simply point it at some folder (local or network, doesn't matter) and you'll get incremental backups of the folders you want (like the "My Documents" folder).


    Time Machine backs up pretty much every file that's changed on your hard drive (except in folders you specifically tell it not to) once an hour, then deletes backups as they get older, so that you have hourly backups of the last 24 hours, daily ones of the last couple of weeks, etc.
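
    The thinning schedule itself is simple enough to sketch (this is just the general idea of that keep-hourly/daily/weekly policy, not Apple's actual implementation):

    ```python
    from datetime import datetime, timedelta

    def backups_to_keep(snapshots, now=None):
        """Time Machine-style retention: keep every snapshot from the last 24 hours,
        one per day for the last two weeks, and one per week before that.
        `snapshots` is a list of datetime objects."""
        now = now or datetime.now()
        keep, daily_seen, weekly_seen = set(), set(), set()
        for ts in sorted(snapshots, reverse=True):      # newest first
            age = now - ts
            if age <= timedelta(hours=24):
                keep.add(ts)                            # hourly tier: keep everything
            elif age <= timedelta(days=14):
                if ts.date() not in daily_seen:         # daily tier: newest per day
                    daily_seen.add(ts.date())
                    keep.add(ts)
            else:
                week = ts.isocalendar()[:2]             # (year, ISO week number)
                if week not in weekly_seen:             # weekly tier: newest per week
                    weekly_seen.add(week)
                    keep.add(ts)
        return keep
    ```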



  • Re: Apparently, Linus' current work on the Linux kernel isn't important enough to bother with a coherent backup that frequently

    Frequent incremental backups are important and easy, but they do not make all of the problems completely go away. Typically, they are not synchronized (between files in the same backup). For example, you have a document with a list of files to change. You change a file and update the document. The machine crashes. It is possible that only one of these items will have been backed up. Thus you end up with a document that says a file HAS been changed, but not the latest file - or the reverse.

     

    If you are working with hundreds or thousands of files (as you may be during a merge in an SCM), the time is likely to be much more than 15 minutes.
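
    One cheap way to at least notice that kind of skew after a restore (a sketch, assuming the "document with a list of files to change" is something you control - here a hypothetical change_log.json): record a content hash next to each entry, so you can tell whether the log and the files came from the same moment.

    ```python
    import hashlib
    import json
    from pathlib import Path

    LOG = Path("change_log.json")   # hypothetical work log: {"path": "sha256 when logged"}

    def sha256(path):
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def log_change(path):
        """Record that `path` was changed, together with its hash at that moment."""
        log = json.loads(LOG.read_text()) if LOG.exists() else {}
        log[str(path)] = sha256(path)
        LOG.write_text(json.dumps(log, indent=2))

    def check_after_restore():
        """Report log entries where the restored file no longer matches the hash,
        i.e. the log and the file were captured at different instants."""
        log = json.loads(LOG.read_text())
        return [p for p, digest in log.items()
                if not Path(p).exists() or sha256(p) != digest]
    ```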



  • There are also online backup services like Backblaze, Carbonite, etc.

    But none of that matters, because Linus should be using Git - you know, he invented it.

    It's a bit disconcerting that people on a development forum are saying "swap out the hard drive and restore from backup" when the faster, more developer-centric solution is to just go to another computer and clone the repo. Then you can keep working while you fix the other machine.
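
    For the Git case the "keep working" part really is that small - roughly this, assuming git is installed on the spare machine and using kernel.org's public mirror as a stand-in for whatever remote Linus actually pulls from (anything not yet pushed from the dead box is still gone, of course):

    ```python
    import subprocess

    def resume_work(clone_url, workdir, branch):
        """Clone the shared repository onto another machine and start a work branch,
        so development continues while the dead box gets fixed."""
        subprocess.run(["git", "clone", clone_url, workdir], check=True)
        subprocess.run(["git", "checkout", "-b", branch], cwd=workdir, check=True)

    # hypothetical example:
    # resume_work("https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git",
    #             "/home/me/linux", "wip-after-disk-death")
    ```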



  • @TheCPUWizard said:

    If you are working with hundreds or thousands of files (as you may be during a merge in an SCM), the time is likely to be much more than 15 minutes.

    Only if you're using VSS or SVN. On distributed version control systems, branching and merging happen frequently and without needing to get all the developers together in the same room like it's 1995. That's the whole reason Git was developed; its developers are spread across the globe in different time zones.



  • @spamcourt said:

    I leave my gaming PC and monitor on day and night and spend 10USD more per month than if I only turned it on for two or three hours per day.

    Shame on you. For those $10/month instead you could sponsor Little Mulubwa and help him go to school.

    Or you could get 200GB of Google Drive storage.


  • @Ronald said:

    For those $10/month instead you could sponsor Little Mulubwa and help him go to school.

    Or you could get 200GB of Google Drive storage.

    Or you could get the Google Drive storage and donate it to Little Mulubwa so he can be his own entrepreneur.



  • I actually had an SSD die about a month and a half ago - I turned on the computer in the morning, and instead of booting Windows, it went straight into the BIOS (where the SSD wasn't visible anymore). I do daily backups of my system drive, so I didn't lose anything, but the restore to a temporary hard drive took around 5 hours. (At the time I was using Acronis True Image, since it's much faster than Windows Backup - at least when creating backups: around 40 minutes for a full backup, 5-10 for an incremental; I have no idea why it's so slow when restoring - and it lets you store multiple backups on the network. Windows' built-in backup doesn't let you keep history on a network drive; that only works when backing up to a local disk.)



  • @ender said:

    I actually had an SSD die about a month and a half ago - I turned on the computer in the morning, and instead of booting Windows, it went straight into the BIOS (where the SSD wasn't visible anymore). I do daily backups of my system drive, so I didn't lose anything, but the restore to a temporary hard drive took around 5 hours. (At the time I was using Acronis True Image, since it's much faster than Windows Backup - at least when creating backups: around 40 minutes for a full backup, 5-10 for an incremental; I have no idea why it's so slow when restoring - and it lets you store multiple backups on the network. Windows' built-in backup doesn't let you keep history on a network drive; that only works when backing up to a local disk.)

    The slower restore is often caused by file system allocation and sparse files. If you want to speed things up you can boost the cluster size; this will waste a bit of space (especially if you keep cookies and temporary files on the system drive, which you really shouldn't) but write speed will increase significantly. Note that if you have a bigger cluster size the Windows file encryption may be disabled (but again, if you encrypt files using EFS on the system drive there is something wrong with your setup).



  • @Ronald said:

    The slower restore is often caused by file system allocation and sparse files.
    I'm not so sure - running Process Explorer while Acronis is restoring always shows an I/O graph shaped like this: /_/_/_/_/_/_ - basically, a spike every 30 seconds or so, and zero activity otherwise.



  • @Soviut said:

    @TheCPUWizard said:
    If you are working with hundreds or thousands of files (as you may be during a merge in an SCM), the time is likely to be much more than 15 minutes.

    Only if you're using VSS or SVN. On distributed version control systems, branching and merging happen frequently and without needing to get all the developers together in the same room like it's 1995. That's the whole reason Git was developed; its developers are spread across the globe in different time zones.

     

    BULL... I am using Git... I had 50 developers, each with a laptop with a local repo that had not been connected to a common repository, because they had all been traveling in desolate locations for a year with no internet access. When they all got back in town, 50 man-years of work had to be merged.

    OK, that's not true, but neither is the opinion that a DVCS inherently means merges are frequent, nor does a CSCM [Centralized Source Control Management] system mean that they are not. It all comes down to how the team uses the tool.



  • I want to insert a Torvalds-style rant here about risking (even temporarily) something like the Linux kernel on unreliable tech, as a reflection of the scorn poured on ARM SoC developers, but I will leave that to the reader's imagination. I do not trust SSDs.

    I use one as a swap drive on a too-small dev machine and it falls over repeatedly if used too much for read/write operations. 

    But I have worn out spinning disk drives in under a year by overheating them in too-small boxes.

    I once dropped my main backup drive 20cm onto a wooden floor while it was running and enquired into the cost of recovery, so I got interested in backups and RAID arrays.

    As far as backups go - why bother with RAID 1 when for 50% more cost you get RAID 5?
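
    The arithmetic behind that (a sketch that ignores the RAID 5 write penalty and rebuild risk):

    ```python
    def usable_capacity_tb(num_disks, disk_tb, level):
        """Usable space for the two RAID levels discussed here."""
        if level == 1:
            return disk_tb                      # everything is mirrored
        if level == 5:
            return (num_disks - 1) * disk_tb    # one disk's worth goes to parity
        raise ValueError("only RAID 1 and RAID 5 in this sketch")

    # Two 1.5 TB disks in RAID 1: 1.5 TB usable.
    # Three 1.5 TB disks in RAID 5 (50% more spent on disks): 3.0 TB usable.
    print(usable_capacity_tb(2, 1.5, 1), usable_capacity_tb(3, 1.5, 5))
    ```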

    At home I have a 3TB RAID 5 server, built from an old made-in-good-ol'-USA dual-CPU Dell PowerEdge 600 chassis (really thick sheet metal, clip-on PSU, lots of big slow fans) which began life about 10 years ago on a City trading floor, plus a separate external 3TB drive for incremental backups. It eats a moderate amount of power but keeps the living room warm.

    I was given the PC, the RAID controller cost $50 on eBay, and the three 1.5TB disks cost about $500 at the time.

    My kids are always amazed that nobody else at school has a home server with all the media and documents stored centrally. 


