From: Grant on 27 Oct 2009 15:23

On Tue, 27 Oct 2009 11:04:15 -0700, Keith Keller <kkeller-usenet(a)wombat.san-francisco.ca.us> wrote:

>On 2009-10-27, David Brown <david(a)westcontrol.removethisbit.com> wrote:
>> ....
>> If you don't write much to your database, your chances of a corrupt
>> copy become smaller (and then you just use the previous backup
>> instead). If you are using a system like rsnapshot with multiple
>> backup copies that are hardlinked when the files are unchanged, you
>> get very small incremental costs per backup.

I'm using rsync and hard-linked backups run from a cron job every 30
minutes, with another daily cron job eating the backup tail -- I think
at age 90 days. The only problem I had was slocate filling the /var
partition -- so I uninstalled slocate; I don't use it anyway.

....
>> It's all a matter of balancing your needs.
>> Whatever method you choose, do a practice restore to make sure you
>> can recover your data!
>
>Double-plus-yes to the above! In addition, do a practice restore to a
>different machine if at all possible.

Yes, test the restore ability _before_ you need it. Way back I
discovered a booboo in a backup script -- it looked normal, but it was
overwriting files with the most recent altered version instead of
saving the altered file. Easy to fix, but the backup was bust at the
time I needed a file and found the buglet instead.

Grant.
-- 
http://bugsplatter.id.au
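[A cron-driven hard-link rotation like the one Grant describes can be
sketched roughly as below. The paths, snapshot naming, and the 90-day
retention are illustrative assumptions, not Grant's actual script.]

```shell
#!/bin/sh
# Sketch of a hard-linked rsync snapshot: unchanged files are
# hard-linked against the previous snapshot, so each half-hourly run
# costs very little extra disk space.

snapshot() {
    src=$1
    dest=$2
    stamp=$(date +%Y%m%d-%H%M%S)
    mkdir -p "$dest"

    # Most recent snapshot, if any, to hard-link unchanged files against.
    latest=$(ls -1d "$dest"/[0-9]* 2>/dev/null | tail -n 1)

    if [ -n "$latest" ]; then
        rsync -a --link-dest="$latest" "$src"/ "$dest/$stamp"/
    else
        rsync -a "$src"/ "$dest/$stamp"/
    fi
}

# The daily "tail eater": remove snapshots older than 90 days.
prune() {
    find "$1" -maxdepth 1 -type d -name '[0-9]*' -mtime +90 -exec rm -rf {} +
}

# From cron you would run snapshot every 30 minutes and prune once a day.
```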
From: David Brown on 27 Oct 2009 17:00

Keith Keller wrote:
> On 2009-10-27, David Brown <david(a)westcontrol.removethisbit.com> wrote:
>> It depends somewhat on your needs, your database usage, and your
>> database server. I don't know about MySQL, but postgresql seems to
>> be good at working with such straight file copy backups.
>
> I don't know if this is true with newer versions of postgresql, but
> in the versions I've used, the data store was not portable across
> different architectures. Indeed, there was no guarantee of
> portability even across different machines on the same architecture,
> even using the same version of PostgreSQL!

That's a good point - I didn't know about such limitations. It worked
for me in the tests I did, but it certainly emphasises the point that
backups are only as good as the recovery procedure, and that you have
to test recovery.

> (I think that I've done it, but that would have been a long time ago.
> I did get caught by this issue, though: my main home server's power
> supply died, and the only other machine I had available was a ppc
> box. My naive solution, simply using the data store from the old
> machine's hard drive, didn't work, and I had to resort to an old dump
> (because I'd recently broken my backup script). Fortunately this was
> not a huge problem for me, but I could imagine it being a big problem
> if you had no dumps at all.)
>
> So I think it's not wise to depend on this mode as a primary backup --
> it could serve as a desperation backup, but the primary backup should
> be a proper dump (or mysqlhotcopy if you can and desire).
>
> For MyISAM tables, MySQL works fine with filesystem backups; I
> believe these files are portable across architectures, but don't
> quote me on that. I don't know how portable filesystem snapshots are
> for InnoDB tables.
>
>> If you don't write much to your database, your chances of a corrupt
>> copy become smaller (and then you just use the previous backup
>> instead). If you are using a system like rsnapshot with multiple
>> backup copies that are hardlinked when the files are unchanged, you
>> get very small incremental costs per backup. With monolithic dump
>> files, even a single change to the database means that the whole
>> file must be saved for each backup (though rsync will still minimise
>> the traffic transferred in the copy).
>
> This is all true, but I don't see a way to reliably get a good backup
> otherwise. And you'll only have real difficulties if your database is
> quite large -- most typical dbs should not create enormous dump files
> anyway.

All I'm really saying is that file-based copies are better than
nothing, and that different backup strategies have different pros and
cons. But you are right to mention other issues with file-copy
backups.

>> If you are already doing an rsync backup of everything else on the
>> machine, including the database files is very simple. It may be good
>> enough for you. Doing database dumps is the "right" way to do the
>> backups, so that should be the method to use unless you have good
>> reason not to. But wanting a simple and easy solution without having
>> to learn about dumps may well count as a good enough reason.
>
> Well... I think that anyone intent on using a database regularly
> should not allow themselves to get lazy and rely on filesystem
> backups. They should start the "right" way, and only use a filesystem
> backup if they really don't care all that much about their databases
> in the first place (or can recreate them quickly from what they
> already have on their filesystem).

That's a reasonable summary, and matches my backup strategy. I've got
dump-based backups on a couple of important databases, and file-copy
backups on a couple of others that could be recreated if necessary.
Remember, "lazy" is not necessarily bad - it's better to have a quick
but poor backup system than to plan a perfect system and never get the
time to implement it!

>> It's all a matter of balancing your needs.
>> Whatever method you choose, do a practice restore to make sure you
>> can recover your data!
>
> Double-plus-yes to the above! In addition, do a practice restore to a
> different machine if at all possible.
>
> --keith
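[The "proper dump" approach David and Keith agree on can be sketched as
below: one dump file per database, so a change to one database does not
force every backup file to be re-saved. The database names, paths, and
the helper function are illustrative assumptions, not anything from the
thread.]

```shell
#!/bin/sh
# Dump each named database to its own file under the given directory,
# so the regular rsync/rsnapshot job picks the dumps up as ordinary
# files. Assumes credentials are supplied via ~/.my.cnf or similar.

dump_databases() {
    dumpdir=$1
    shift
    mkdir -p "$dumpdir"
    for db in "$@"; do
        # --single-transaction gives a consistent snapshot of InnoDB
        # tables without locking the server for the whole dump.
        mysqldump --single-transaction "$db" > "$dumpdir/$db.sql"
    done
}

# For PostgreSQL the analogue is pg_dump, whose text output is portable
# across architectures -- unlike the raw data store discussed above.
# Example nightly use: dump_databases /backup/dumps shopdb wikidb
```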
From: Robert Nichols on 27 Oct 2009 22:40

In article <4ae532d0$0$70417$c30e37c6(a)exi-reader.telstra.net>,
Gabriel Knight <fakeemail(a)hotmail.com> wrote:
:Hi all I need a free program to backup a ubuntu server for my school
:class, it has to be as good or better than Rdiff and Rsync the server
:will use SSH, MYSql and be a file and web server and do a couple of
:other things. I need it to be either a gui or text box program.

If you want something like rdiff and rsync, then rdiff-backup might be
a good match. It combines the features of a mirror and an incremental
backup, so you can restore to any previous backup point. The
increments are kept as a series of reverse diffs from the current
mirror. Syntax is similar to rsync, and librsync is used to generate
efficient reverse diffs. Biggest disadvantage is the difficulty of
deleting that multi-gigabyte file that accidentally got included in
your regular backup. Learning curve can be a bit steep at first.

http://www.nongnu.org/rdiff-backup/

-- 
Bob Nichols AT comcast.net I am "RNichols42"
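[The mirror-plus-reverse-diffs workflow Bob describes looks roughly
like this in practice. The paths, retention period, and restore time
are illustrative assumptions, not anything from his post.]

```shell
#!/bin/sh
# Sketch of the rdiff-backup workflow: the destination is a plain
# mirror of the source, and reverse diffs accumulate under
# DEST/rdiff-backup-data, so any earlier state stays restorable.

# Take a backup; syntax mirrors rsync's src/dest order.
take_backup() {
    rdiff-backup "$1" "$2"
}

# Restore the tree as it looked at an earlier time, e.g.
#   restore_as_of /backup/www /tmp/www-10-days-ago 10D
restore_as_of() {
    rdiff-backup -r "$3" "$1" "$2"
}

# Trim the reverse-diff tail, e.g. keep three months of increments:
#   prune_increments /backup/www 3M
prune_increments() {
    rdiff-backup --remove-older-than "$2" "$1"
}
```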
From: terryc on 27 Oct 2009 23:39

On Mon, 26 Oct 2009 20:52:26 +0100, David Brown wrote:

> Gabriel Knight wrote:
>> Hi all I need a free program to backup a ubuntu server for my school
>> class, it has to be as good or better than Rdiff and Rsync the
>> server will use SSH, MYSql and be a file and web server and do a
>> couple of other things. I need it to be either a gui or text box
>> program.
>
> As others have said, the obvious choice here would be ... rsync.

*sync isn't a backup system. It is just a copy system.
From: Grant on 27 Oct 2009 23:57
On Wed, 28 Oct 2009 03:39:59 +0000 (UTC), terryc <newsninespam-spam(a)woa.com.au> wrote:

>On Mon, 26 Oct 2009 20:52:26 +0100, David Brown wrote:
>
>> Gabriel Knight wrote:
>>> Hi all I need a free program to backup a ubuntu server for my
>>> school class, it has to be as good or better than Rdiff and Rsync
>>> the server will use SSH, MYSql and be a file and web server and do
>>> a couple of other things. I need it to be either a gui or text box
>>> program.
>>
>> As others have said, the obvious choice here would be ... rsync.
>
>*sync isn't a backup system. It is just a copy system.

And a backup is not a copy?

Grant.
-- 
http://bugsplatter.id.au