On Mon, Feb 25, 2008 at 01:30:14 +0000 (+0000), Andy Smith wrote:
[snip]
> I've had a brief look at backuppc which offers a similar way of
> working, but stores file data and metadata separately so that
> duplicate files from anywhere on any host are only ever stored once.
> It may well scale further, at the cost of introducing its own
> storage format that is not as simple as a normal filesystem
> directory.
I've been using backuppc for about 2 years now and I'm very happy
with it.  It does have significant problems, just fewer than other
systems :-)
All file data is stored once in a "pool" area and hardlinked into
each backup tree at:
pc/<hostname>/<backup>/(copy of filesystem)
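For flavour, here's a minimal sketch of the content-addressed
pooling idea (mine, not BackupPC's code - the real thing is Perl and
also deals with hash collisions, compression and pool chains; the
pool path is Debian's default, and the two-level fan-out is
illustrative):

    import hashlib
    import os

    POOL = "/var/lib/backuppc/pool"   # assumption: Debian's default layout

    def pool_file(path):
        # Hash the content; BackupPC really uses a partial-file MD5,
        # this sketch just hashes the whole file.
        with open(path, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        pooled = os.path.join(POOL, digest[0], digest[1], digest)
        os.makedirs(os.path.dirname(pooled), exist_ok=True)
        if not os.path.exists(pooled):
            os.link(path, pooled)     # first copy seeds the pool
        else:
            os.unlink(path)           # duplicate: drop this copy...
            os.link(pooled, path)     # ...and hardlink to the pooled one

Every identical file ends up as one inode in the pool, however many
hosts and backups reference it.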
Pros:
- de-duplication - identical files on multiple boxes are only stored
  once - great if you have a lot of machines
- very friendly web interface
- configurable
- using scripts you can selectively remove old files from the pc/
  subdirectories (e.g. logs); space will be reclaimed during the
  nightly cleanup (see the sketch after this list)
- automatic compression
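On the selective-removal point: a rough Python sketch of the idea
(paths are Debian's defaults, the pattern and function name are
invented for the example, and a proper script should also fix up the
per-directory attrib metadata files):

    import fnmatch
    import os

    PC_DIR = "/var/lib/backuppc/pc"   # assumption: Debian's default layout

    def prune(host, pattern="f*.log*"):
        # Walk each numbered backup for this host and unlink matching
        # files.  Names carry the leading-'f' mangling, hence the
        # pattern.  Only this tree's hardlink goes away; the pool copy
        # is reclaimed by BackupPC_nightly once its link count drops.
        for backup in os.listdir(os.path.join(PC_DIR, host)):
            if not backup.isdigit():  # skip LOG, XferLOG, backups etc.
                continue
            top = os.path.join(PC_DIR, host, backup)
            for dirpath, _, files in os.walk(top):
                for name in fnmatch.filter(files, pattern):
                    os.unlink(os.path.join(dirpath, name))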
Cons:
- the massively hardlinked tree means that backing up the backuppc
  store itself (e.g. to a remote site for DR) is a royal PITA -
  rsync and friends cope badly with that many hardlinks
- the pc/<hostname>/<backup> tree is a mangled unix tree - names get
  an 'f' prefix (fusr/fshare/fdoc/ etc.), so you can't just copy
  files straight out
- files are compressed using a backuppc-specific tool (as are logs),
  so standard zcat can't read them - see the example below
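To read a compressed file (or log) back out by hand there is
BackupPC_zcat, along these lines - the host, backup number and file
are invented for the example, and the install paths are Debian's:

    /usr/share/backuppc/bin/BackupPC_zcat \
        /var/lib/backuppc/pc/somehost/123/f%2f/fvar/flog/fsyslog > syslog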
Adrian
-- 
Email: adrian@???  -*-  GPG key available on public key servers
Debian GNU/Linux - the maintainable distribution   -*-  
www.debian.org