#1
a.k.a. Sparky
Join Date: Sep 2004
Location: West Palm Beach, FL, USA
Posts: 2,396
You could rsync on a regular basis, and run mysql with replication. If someone is going to be there to swap the IP addresses, that's a pretty inexpensive way to do that.
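Something along these lines would handle the periodic rsync part. It's only a sketch: the standby hostname and the paths are placeholders, and the MySQL data itself would go through replication rather than rsync.

Code:
#!/usr/bin/env python
# Rough sketch of a mirror job you'd run from cron on the primary box.
# The hostname and paths below are placeholders -- adjust for your setup.
import subprocess
import sys

STANDBY = "standby.example.com"          # assumed name of the spare server
PATHS = ["/var/www/", "/etc/apache2/"]   # directories worth mirroring

def mirror(path):
    # -a preserves perms/owners/times, --delete keeps the copies identical
    return subprocess.call(["rsync", "-a", "--delete", path,
                            "%s:%s" % (STANDBY, path)])

if __name__ == "__main__":
    failed = [p for p in PATHS if mirror(p) != 0]
    if failed:
        sys.stderr.write("rsync failed for: %s\n" % ", ".join(failed))
        sys.exit(1)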
You could run Coda or AFS or one of the other distributed filesystems (GFS from Sistina was purchased by Red Hat and later released open source in the Red Hat packages). This would replicate the filesystems:
http://www.it.uc3m.es/ptb/fr1/
http://opengfs.sourceforge.net/
http://www.cubit.at/index.epl?cms_ol...ernkompetenzen

You could buy some iSCSI cards and have a shared storage machine that both servers share, turn off locking in mysql so that you could run mysql on both at the same time, and then flip the switch. Adaptec sells iSCSI cards that are fairly well supported in Linux.

You could set up an NFS shared storage machine (watch out for apache and its mmapped lock files if you use NFS).

Once you have the machines synced so that the data is identical on both, you could put a hardware load balancer in front. If you wanted something a little more automatic:
http://www.linux-ha.org/
http://www.linuxvirtualserver.org/
https://mcg.motorola.com/cfm/templat...&ProductID=202

Many different ways to do it.
__________________
SnapReplay.com a different way to share photos - iPhone & Android
#2
The only guys who wear Hawaiian shirts are gay guys and big fat party animals
Let's distinguish between a backup for your files and a backup for your hardware.

If you want to back up the hardware, it's very important to keep the files in sync at all times. Having things sort of a little bit in sync some of the time leads to all kinds of trouble. That does mean, though, that if your files are lost or corrupted due to a hacker, an error, or whatever, you'll have two copies of the same bad files.

You'd then want a separate backup copy of the files that represents the state of the system at a particular point in time. The file backup should always have two copies of the files. If you run it daily, you'd have last night's backup and the previous one at all times. This is in case a hacker wipes out your system 5 minutes before the backup runs. Again, you don't want to just have a backup copy of garbage.

NFS, AFS or some other network file system, as suggested by cd34, is one of the best ways to do a cluster where you have two identical servers. It may take a little tuning if the files change often (many times per second), but it can be a solid, high-performance solution.

Another way is to use rsync with fam to keep them synced in real time. Do NOT rsync "occasionally" on a cluster. That will create lots of problems. On a cluster you keep them in sync at all times, which is why you'd drive rsync with fam. You could of course use rsync to create an occasional offline backup, which should be treated just like a backup tape. Here's how to do a cluster with fam and rsync:
http://www.tldp.org/linuxfocus/Engli...ticle199.shtml

Then what you can do is have a script on the standby server check periodically to see if the main server is responding. If not, it sends a couple of gratuitous ARPs to take over the IP address used for the web site. Of course you'll want a different IP available on that machine so that you can still try to SSH in and fix it.
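A very rough sketch of that standby-side script is below. The IP, interface, subnet, and thresholds are placeholders, and it assumes the iputils arping tool is installed and that the script runs as root so it can claim the address.

Code:
#!/usr/bin/env python
# Sketch of a takeover script for the standby box.  Everything below
# (IPs, interface, counts) is a placeholder, not a real configuration.
import subprocess
import time

SERVICE_IP = "192.0.2.10"   # the IP the web site lives on (on the primary)
INTERFACE = "eth0"          # interface that should claim the IP on failover
CHECK_EVERY = 10            # seconds between health checks
FAILS_NEEDED = 3            # consecutive failures before taking over

def primary_alive():
    # One ICMP ping with a short timeout; swap in an HTTP check if you prefer.
    return subprocess.call(["ping", "-c", "1", "-W", "2", SERVICE_IP]) == 0

def take_over():
    # Bring the service IP up locally, then send a few gratuitous ARPs so
    # the switch/router learns the new MAC for that address.
    subprocess.call(["ip", "addr", "add", SERVICE_IP + "/24", "dev", INTERFACE])
    subprocess.call(["arping", "-U", "-c", "3", "-I", INTERFACE, SERVICE_IP])

if __name__ == "__main__":
    failures = 0
    while True:
        failures = 0 if primary_alive() else failures + 1
        if failures >= FAILS_NEEDED:
            take_over()
            break
        time.sleep(CHECK_EVERY)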