Since I’m already being paranoid with my backups, I figured a little additional offsite backup for the really critical stuff couldn’t hurt.
The simplest solution is often the best:
- Amazon S3
I’m storing a copy of my important MySQL data (my Tasks Pro™ data, the data for this site, the King Design site, etc.) and my SVN repository – looks like it will likely be a few bucks a month ($.07 bill for transfer so far). Totally worth it for the peace of mind.
UPDATE: A full transfer later, my bill is up to $.29. Awesome. 🙂
I am currently using the following setup:
– 250GB internal for primary storage
– 250GB external for backup synced via SyncBack
– Amazon S3 + Jungle Disk + NetDrive for offsite backup synced via SyncBack
So far it works pretty well, except that the upload to S3 is pretty slow, and a side effect is that Jungle Disk keeps a local cache, so I end up with yet another copy of the data.
While the S3 prices are extremely reasonable, because of the slowness, I may implement offsite backup by adding another external hard drive and rotating it offsite weekly (most likely taking it to work).
Note: NetDrive and SyncBack are Windoze only apps.
Do you think you could maybe detail out just a little bit more what you’re doing here with this setup?
I’d like to get something going offsite myself, but I don’t know (and am a little worried about getting right) the options to rsync to make a proper copy. =)
I found a good example in the JungleDisk forums (Perhaps you’ve heard of Google? Very helpful for this sort of thing. 😉 ) – basically this:
rsync -r -t -v --progress --inplace --size-only /Path/to/source/ /Volumes/localhost/destination
And always test first with a source and destination that aren’t the real thing so you can see how it will actually react.
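A safe way to rehearse that command is rsync’s -n (--dry-run) flag, which lists what would be transferred without copying anything. A throwaway example (all paths here are placeholders, not the real backup):

```shell
# Set up a scratch source and destination to rehearse on
mkdir -p /tmp/rsync-test/src /tmp/rsync-test/dst
echo "hello" > /tmp/rsync-test/src/file.txt

# -n (--dry-run) prints the planned transfers without writing anything
rsync -r -t -v -n /tmp/rsync-test/src/ /tmp/rsync-test/dst/

# The destination is still empty, so you can try flag combinations risk-free
ls /tmp/rsync-test/dst/
```

Once the dry-run output looks right, drop the -n and run it for real.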
I currently use Mozy for my windows box and I’m very satisfied. They have free storage up to 1 GB which is all I need for critical files. They don’t have a Mac solution yet, but they seem to be working at it: https://mozy.com/support/macmozy
Some of the things I like about S3 over other solutions:
1. Cheap per-usage fees – no “cliffs”, no freebies
2. Solid company behind it – I have faith it will be around
3. Distributed/replicated data
4. Lots of tools growing up around it, and lots of ways to access the data from any platform
S3 is great, as is JungleDisk. Supposedly the next version of JungleDisk will have backup tools built in. I was personally looking for a way to use S3 to share files online, and JungleDisk isn’t good for that. After looking at a bunch of tools, I settled on filicio.us, a web front-end to S3 with tagging and uploading. Ideally I’d like to be able to run something like this on my own server and use JungleDisk for managing the shared files, but this works for now. The S3 Firefox extension also offers folder syncing and shared links, but no easy way to share a whole folder as Filicio.us does.
There are better choices than rsync over JungleDisk – for example, my app: S3 Backup http://s3bk.com/
The point is that it’s a dedicated backup solution: only changed files are uploaded, and so on. There will also be an option for keeping multiple previous versions of files. It’s really much neater than JungleDisk because it doesn’t hide the reality of what S3 is under a WebDAV abstraction (which doesn’t really fit).
I looked at your app – it looks like it could be nice. If your Mac version was available maybe I’d have used it. Since the Mac version isn’t available, I found something else and I’m quite happy with JungleDisk.
And may I add, disparaging your competition only serves to make you look low-brow and unprofessional.
Besides I don’t need a versioned backup (that’s what SVN is for), I need a current copy and a system (rsync) to only mirror the changed files.
Alex, I agree my comments on JungleDisk were kinda arrogant, sorry about that, but the point holds true.
Sergey: When you find yourself in a hole, stop digging. I would have chalked up your previous comments to being overly enthusiastic about being better than your competition, but now your non-apology just makes you look like an ass.
Like Alex, I’m a Mac user and can’t use your product, but you’ve guaranteed that I won’t use anything you do now. I hardly think that I’m the only person thinking the same thing right now.
I wish you the best in the future, and that involves learning when to keep your mouth shut—probably the most important lesson to learn in business. [One I’m still learning every day, in some small way.]
Geof, tell us: when does Sergey learn to stop masquerading as a successful person by ceasing to be a condescending, petty asshole with an obvious inferiority complex?
Cool tips, pity about the comments. I’m throwing anything of value over FTP at the moment which is less than convenient.
– mirror local clone nightly using SuperDuper
– mirror clone over local network weekly with SuperDuper
– mirror local drive monthly using SuperDuper and transport off-site
Questionable and need to implement, but not certain how:
– MySQL dumps to S3 from a remote Unix-hosted server (websites, etc.)
– rsync to S3 – how?
– rotating backups on S3, possibly using rsync
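For the MySQL-dumps-to-S3 item, here’s one possible sketch, assuming a command-line S3 uploader like s3cmd is installed and configured on the remote host (the database name, credentials, and bucket below are all made up):

```shell
#!/bin/sh
# Nightly: dump the database, compress and date-stamp it, push it to S3.
# Assumes `s3cmd --configure` has already written ~/.s3cfg with your AWS keys.
DATE=$(date +%Y%m%d)
DUMP="/tmp/mydb-$DATE.sql.gz"

mysqldump --single-transaction -u backupuser -p'secret' mydb | gzip > "$DUMP"
s3cmd put "$DUMP" "s3://my-backup-bucket/mysql/mydb-$DATE.sql.gz"
rm -f "$DUMP"
```

Run that from cron and the date-stamped keys give you a crude rotation for free – old dumps can be pruned by date.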
My problem is that I have almost 112 GB of files on a Linux server in a datacenter where I don’t have root access, and I want to get them into my S3 bucket. Downloading everything to my home machine and then re-upping it using JungleDisk would take ages, and I haven’t found any solution Googling yet. Anyone have a script to rsync to S3 via the CLI?
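One possible route (a sketch, not a recommendation): s3cmd is a plain Python script, so it doesn’t need root and can often be dropped into a home directory on a shared server and run from there. Bucket and paths below are placeholders:

```shell
# One-time setup (interactive): run `s3cmd --configure`
# to write your AWS access key and secret to ~/.s3cfg

# Then sync straight from the server into the bucket; --skip-existing
# lets an interrupted 112 GB upload resume without re-sending finished files
s3cmd sync --skip-existing /home/me/data/ s3://my-bucket/data/
```

It’s not true rsync (no delta transfer), but for a one-way push of files that don’t change in place it avoids the round trip through a home connection entirely.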
[…] you’re using JungleDisk on Mac OS X, double check that you’ve un-mounted (ejected) your JungleDisk volume […]
[…] excited about my recent experience backing up some critical files to Amazon S3, I asked Adam Tow if he’d considered using S3 for his photo backups. Adam and I have talked […]
Using “--inplace” is a bad idea. It means much more bandwidth and a slower backup.
From the rsync man page:
--inplace ... means that the rsync algorithm can’t extract the full
amount of network reduction it might otherwise.
This option is useful on systems that are disk bound, not network bound.
Has anyone tried s3rsync.com?
They claim to provide full rsync on top of S3.
Does it really work?
@addady: --inplace *is* required for S3, because the underlying storage does not actually support moves. The WebDAV abstraction (as pointed out – arrogantly 🙂 – before) sort of hides this problem. But omitting --inplace would get you uploading the file *twice* – see the proper docs from JungleDisk.
@Michaelb: I suppose you *do* have access to the data you want to up-and-store at e.g. S3? In that case, no root access is required to run the Linux command-line version of JungleDisk (just single binaries; obvious dependencies: libz, libm, libgcc, libpthread – nothing out of the ordinary). I say: download and have at it!
s3rsync.com works quite reliably and it is true rsync. See my post for details: http://www.niquille.[...]sync-backup/
I’ve had a bad experience trying to use s3rsync.
They don’t publish details of how to use the service up front – you only get that when you pay. And because of the way they’ve set up their command line, you’re basically limited to using the service if you’re running a full-blown Linux box where you are the admin and have root access to set things up to work with their service. That rules out appliances, NAS boxes, and other systems connecting to their service. To top it off, customer support is not very responsive. They also claim not to get an email if you write to them via the contact form on their own website! Wonder why it’s there.