Let's distinguish between a backup for your files and
a backup for your hardware.
If you want a backup for the hardware, it's very important
to keep the files in sync at all times.
Having things only loosely in sync some of
the time leads to all kinds of trouble.
That does mean, though, that if your files
are lost or corrupted by a hacker, an error, or whatever else,
you'll have two copies of the same bad files.
You'd then want a separate backup copy
of the files, one that captures the state of the system at
a particular point in time.
The file backup should always keep two copies of the files.
If you run it nightly, you'd have last night's backup
and the previous night's at all times.
This is in case a hacker wipes out your system
5 minutes before the backup runs.
Again, you don't want your only backup to be a copy
of garbage.
NFS, AFS, or some other network file system,
as suggested by cd34, is one of the best ways to do a cluster
where you have two identical servers.
It may take a little tuning if the files change often
(many times per second), but it can be a solid,
high-performance solution.
Another way is to use rsync
with fam
to keep them synced in real time.
Do NOT rsync "occasionally" on a cluster.
That will create lots of problems.
On a cluster you keep the files in sync at all
times, which is why you'd drive rsync with fam.
You could of course use rsync to create an occasional
offline backup, which should be treated just
like a backup tape.
Here's how to do a cluster with fam and rsync:
http://www.tldp.org/linuxfocus/Engli...ticle199.shtml
Then what you can do is have a script on
the standby server check periodically to
see if the main server is responding.
If it isn't, the script sends a couple of gratuitous ARPs
to take over the IP address used for the web site.
Of course you'll want a different IP available on that
machine so that you can still try to SSH in and fix it.
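The failover side could look something like this sketch. The interface name and both IP addresses are made up, as are the function names; taking over the address needs root, and arping here is the iputils one.

```shell
#!/bin/sh
# Sketch of the standby-side failover check. All addresses and the
# interface name are hypothetical - substitute your own.
MAIN_IP="${MAIN_IP:-192.0.2.10}"         # the main server's own address
SERVICE_IP="${SERVICE_IP:-192.0.2.100}"  # the floating web-site address
IFACE="${IFACE:-eth0}"

main_is_up() {
    # One ICMP probe with a 2-second timeout; swap in an HTTP check
    # against the site itself if ICMP is filtered on your network.
    ping -c 1 -W 2 "$1" > /dev/null 2>&1
}

take_over() {
    # Bring the service IP up locally, then send gratuitous ARPs so
    # switches and routers update their caches to point at this box.
    ip addr add "$SERVICE_IP/24" dev "$IFACE"
    arping -U -c 3 -I "$IFACE" "$SERVICE_IP"
}

# The poll loop you'd run from an init script on the standby:
#   while sleep 10; do main_is_up "$MAIN_IP" || { take_over; break; }; done
```

Because the standby keeps its own address alongside the floating one, SSH still works on the original IP after a takeover.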