Backup advice - Linux server



  • At work, we are currently migrating from a shared host to our own new and shiny VPS.

    Among other things, I need to put together a backup strategy.

    On the production server I'm using WHM, and the plan is to back it up to a server on our network which isn't visible from outside.

    I'm thinking of making a backup on the production server and then copying it to the backup server with a cron job + rsync.

    Is there anything I'm missing?
    Any tool to make this a little more maintainable?
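
    In sketch form, what I have in mind is something like this (every path, database name and the host "backup.internal" are placeholders, and the demo writes into a throwaway directory so it can be run as-is):

    ```shell
    #!/bin/sh
    # Nightly backup sketch for the production box. Paths, DB names and the
    # backup host below are made up; the demo uses a temp dir so it runs
    # anywhere without touching real data.
    set -eu

    WORK=$(mktemp -d)                 # on the real box: fixed paths instead
    SRC="$WORK/uploads"               # stand-in for the customer-upload dir
    BACKUP_DIR="$WORK/backups"
    mkdir -p "$SRC" "$BACKUP_DIR"
    echo "customer file" > "$SRC/example.txt"

    STAMP=$(date +%F)                 # e.g. 2015-03-02

    # Dump each database to plain SQL first, e.g.:
    #   mysqldump --single-transaction appdb > "$BACKUP_DIR/appdb-$STAMP.sql"

    # Bundle the uploaded files into one dated archive:
    tar -czf "$BACKUP_DIR/uploads-$STAMP.tar.gz" -C "$WORK" uploads

    # Push the lot to the internal backup server, scheduled from cron:
    #   rsync -a "$BACKUP_DIR/" backup.internal:/srv/backups/site/
    #   crontab entry:  30 2 * * * /usr/local/bin/site-backup.sh

    # Prune archives older than 30 days so the disk doesn't fill up:
    find "$BACKUP_DIR" -name '*.tar.gz' -mtime +30 -delete
    ```

    On the real server the dump and rsync lines would be live and the script would run from cron; the structure stays the same.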


  • FoxDev

    Most VPS providers offer an integrated volume backup solution. I recommend you make use of it to back up the entire server, and I also recommend you periodically check the backups it creates to make sure they restore cleanly.

    You'll still want an offsite backup solution, so this isn't a complete replacement, but it's a relatively cheap option that lets you get a system back online quickly after a catastrophic event (such as rm -rf --no-preserve-root /).


  • 🚽 Regular

    I'm in the process of setting up Bacula to replace the cron+rsync script I have currently.

    It seems to do everything I want, might be suitable for you too: http://www.bacula.org/



  • What's your budget?

    What about security concerns?

    If you need an offsite backup, Amazon has some excellent services for that. Glacier is cheap and reliable.



  • @blakeyrat said:

    What's your budget?

    it's "you've got a vps, and servers on our network, stop making me spend money"

    @blakeyrat said:

    Glacier is cheap and reliable.

    Maybe in the future. Right now it's not gonna happen =/

    @accalia said:

    Most VPS providers offer an integrated volume backup solution. I recommend you make use of it to back up the entire server

    My bad, we do have a whole-server backup, but it's slow.
    I was thinking more of a backup which could let me restore our systems quickly. It will only hold the databases and the files uploaded by our customers (~3 GB per month, approximately).
    The other things can be rebuilt from our VCS.

    @Cursorkeys said:

    I'm in the process of setting up Bacula to replace the cron+rsync script I have currently.

    It seems to do everything I want, might be suitable for you too: http://www.bacula.org/

    Seems to do everything I want too. Thanks!



  • @Jarry said:

    The other things can be rebuilt from our VCS.

    Please convince the pointy-haired people that this is not worth the savings. One hour of recovery time costs more than something like five years of backups.
    If customers pay money for this service, you can most definitely afford to pay someone to store a backup for a few dollars a month.


  • Discourse touched me in a no-no place

    @swayde said:

    Please convince the pointy-haired people that this is not worth the savings.

    Unless the whole rest of the system can be rebuilt from a standard image by scripts. If you can press a button and get a rebuild in a few minutes, keeping just the data parts may make sense (since they're also the parts that you'll have to synch to your “normal” backup deployment).


  • 🚽 Regular

    @dkf said:

    Unless the whole rest of the system can be rebuilt from a standard image by scripts. If you can press a button and get a rebuild in a few minutes, keeping just the data parts may make sense (since they're also the parts that you'll have to synch to your “normal” backup deployment).

    There is nearly always some obscure thing you've had to do to the box that takes ages to get right again. I had configured vsftpd's virtual users in an unusual (but handy) way. It took days to get the bloody thing working as it had before, and it wasn't much use until it did, since other established processes on other machines depended on it.

    Bare metal restore is pretty much a necessity unless the system is genuinely non-critical.


  • Discourse touched me in a no-no place

    @Cursorkeys said:

    There is nearly always some obscure thing you've had to do to the box that takes ages to get right again.

    That's why it's uber-useful to keep backups of those specific bits. We've done things like having puppet scripts to build a new virtual server instance from scratch (I think; I was busy writing the software deployed on the images) so that all special customisations were documented off the system. I believe it mostly worked well, except when someone who should have known better either fucked up the script or decided to push updates in without using the script. :rolleyes:
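
    A bare-bones version of that idea, with made-up values, is just an idempotent provisioning script kept under version control; Puppet manifests express the same thing declaratively and far more robustly:

    ```shell
    #!/bin/sh
    # "Config documented off the system" sketch. CONF_DIR defaults to a
    # throwaway directory so this can be run anywhere; on a real box it
    # would be /etc/vsftpd or similar, and the script would live in the VCS.
    set -eu

    CONF_DIR=${CONF_DIR:-$(mktemp -d)}

    # State the desired end result; re-running must be harmless.
    cat > "$CONF_DIR/vsftpd.conf" <<'EOF'
    # virtual users, the unusual-but-handy layout (placeholder values)
    guest_enable=YES
    user_config_dir=/etc/vsftpd/users
    EOF

    # Package installs and service restarts go here, guarded so re-runs
    # are no-ops, e.g.:
    #   dpkg -s vsftpd >/dev/null 2>&1 || apt-get install -y vsftpd
    ```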


  • 🚽 Regular

    @dkf said:

    puppet

    I've been meaning to actually do something with Puppet after registering. I guess it serves as documentation for the configuration as well. I currently keep text files of the config steps, but it's very easy to leave something off.


  • I survived the hour long Uno hand

    @Cursorkeys said:

    There is nearly always some obscure thing you've had to do to the box

    This is one of those situations where, if it hurts, do it more :) If you frequently test your deploy scripts, and in fact use them to handle things like upgrades to the machine (just wipe and re-deploy), you'll soon have all those things encapsulated in your deploy scripts so you never forget them again.



  • @Jarry said:

    I'm thinking of doing a backup on the production server and then with a cronjob + rsync copy it into the backup server.

    That's what I do. I dump the databases to plain SQL in a known location. The backup server calls the web host to pull all the changed data across with rsync, and then I use rdiff-backup to take a snapshot.

    Web host can't contact backup server even if hacked. Differential backups let me rewind if a problem isn't noticed for a while. Nothing special needs to be installed on the web host.
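
    In sketch form (host names and paths are made up; the real rsync/rdiff-backup calls are shown as comments, and a tiny runnable demo with hard-linked snapshots illustrates the "rewind" property):

    ```shell
    #!/bin/sh
    # Pull-style snapshot sketch, run FROM the backup server, so a
    # compromised web host can't reach the backups. "web.example.com"
    # and all paths are placeholders.
    set -eu

    # 1. Pull the changed data across (real command, as a comment here):
    #    rsync -a --delete web.example.com:/var/backups/site/ /srv/mirror/site/
    #
    # 2. Snapshot it so old states can be rewound; rdiff-backup keeps
    #    reverse diffs and can prune them:
    #    rdiff-backup /srv/mirror/site /srv/snapshots/site
    #    rdiff-backup --remove-older-than 8W /srv/snapshots/site

    # The same "rewind" property can be had with hard-linked snapshots.
    # A runnable miniature (GNU cp -al links unchanged files, no copying):
    WORK=$(mktemp -d)
    mkdir -p "$WORK/mirror"
    echo v1 > "$WORK/mirror/data.txt"
    cp -al "$WORK/mirror" "$WORK/snap-monday"   # snapshot 1: hard links

    # rsync replaces changed files with new inodes (write-to-temp, then
    # rename), so old snapshots keep old contents; emulate that here:
    rm "$WORK/mirror/data.txt"
    echo v2 > "$WORK/mirror/data.txt"
    cp -al "$WORK/mirror" "$WORK/snap-tuesday"  # snapshot 2

    cat "$WORK/snap-monday/data.txt"            # still the old contents: v1
    ```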

    I'm not a fan of backup systems that use a custom data format, which Bacula's wiki says it does.

