A couple of months ago, I was poking around and noticed that online backup options were rather cheaper than I’d thought. Amazon S3 is 15 cents / month / GB; that, unfortunately, has a bit of an impedance mismatch with my previous backup-via-ssh strategy. But my home directory on my home server is only a few GB, small enough to fit into the smallest Rackspace Cloud Server configuration; so, for $11/month, I could have a backup that would be ready to go live without too much work if my home server crashed, which seemed like an eminently reasonable price to me.
So I put “set up a cloud server backup” on my next action list, where it languished for several weeks. But, this weekend, Miranda was out of town with her grandparents, leaving me with a bit more free time than normal, so I figured I’d take the time to set it up. And, within 15 minutes, I had a new server to play with.
At which point I went to write down some notes on the process in the reference folder on my home server; strangely, though, my ssh connection had hung. I went upstairs and noted that it was unresponsive; I rebooted it, but two minutes later, it had crashed again. Oops.
Fortunately, I did have a previous backup strategy in place; it wasn’t quite complete, but I hoped that it was pretty good. And my experience this weekend has shown that, yes, it really was pretty good. (Protip: rsync’s --exclude-from flag skips all matching files, not just matching files at the top level of the directory hierarchy you’re backing up.) It was remarkably easy to get most of my files transferred over there (nice to be transferring gigabytes of files between two computers with good internet connections instead of over a connection including home wifi and Comcast); I had my share of moments wondering what was going on when dealing with Apache configurations, but all in all I had the new server serving all of my sites approximately 24 hours after the old one died, despite sleeping in most of the morning and having friends over for bridge and dinner.
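In case the shape of that backup is useful to anyone: it’s basically a periodic rsync over ssh. Here’s a sketch of that kind of invocation, with illustrative paths and exclude file rather than my actual ones:

    # Sketch of an rsync-over-ssh backup; paths are placeholders.
    rsync -az --delete \
      --exclude-from=$HOME/backup-excludes.txt \
      $HOME/ backup-host:backups/home/
    # Note: patterns in the exclude file match anywhere under the source
    # tree, not just at its top level.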
I’m still fiddling with details here and there, but I think the new server is almost completely working. I was always planning to bump up the server’s memory if I had to use it for real, but I was surprised at how well it was holding up with only 256MB; even so, I thought the better part of valor was to bump it up to 512MB, and the oom-killer reared its head several times today despite that. I’ve played around with Apache and Passenger configurations this evening, hoping that I’ll still be able to fit under that memory limit: Miranda and I are the only people who use the Rails application running there, so I certainly don’t need many Passenger processes, and this blog and my other web sites certainly aren’t very popular, either. So, if I can stick with 512MB, that would be great; if I have to bump it up to 1GB, I’ll grumble a bit but will live with it, given that it’s cheaper than buying a physical server and that I’m sure prices will fall. (My home server had 2GB of memory, and configurations that size are expensive enough to make me think twice, but I’m pretty sure I’ll be able to avoid that.) Also, one nice aspect of running in the cloud is that, if I inadvertently post something popular here and notice it quickly enough, I can beef up my server for a week with only a few minutes of downtime, paying the extra money only for that time period.
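For the curious, the knobs I’ve been turning are roughly the ones below; the numbers are just the ballpark I’m experimenting with on a 512MB server, not a settled configuration:

    # Apache prefork: keep the number of httpd processes small.
    <IfModule mpm_prefork_module>
        StartServers      2
        MinSpareServers   1
        MaxSpareServers   3
        MaxClients        10
    </IfModule>

    # Passenger: only a couple of Rails processes, and let idle ones exit.
    PassengerMaxPoolSize   2
    PassengerPoolIdleTime  300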
So, while I wish that my home server hadn’t died quite so soon (at the very least, lasting another 24 hours would have been nice), all in all I’m happy with how things have worked out. It’s nice to be reassured that my sysadmin skills haven’t atrophied too terribly, I’m sure future guests will appreciate not having to listen to that machine’s fans, and I’m quite pleased with my Rackspace experience so far.
I use VPS Link and I have a 20 GB / 512 MB VPS (Ubuntu) for $18 a month. Initially, I was running Apache, MySQL, CGI (PHP/Ruby), Exim, and Courier IMAP. I was constantly running out of memory! I had to tweak some MySQL settings, move from Apache to lighttpd, use FastCGI for PHP and Ruby, and move from Exim to Postfix and from Courier to Dovecot to make everything fit in that memory. From your requirements, I would recommend trying lighttpd/FastCGI. It makes a big difference compared to Apache! I noticed that my Rails apps were much faster than under the Apache deployment. I’ve also noticed severe memory leaks with php-cgi; I haven’t addressed this problem yet, so I restart lighttpd roughly twice a month to work around it.
7/23/2010 @ 6:54 pm
Forgot to mention that after moving to Google Apps for my new domain, I plan to discontinue running my own e-mail stack (hosting my old domain): Postfix (SMTP), Dovecot (IMAP), SpamAssassin, ClamAV, and Amavis. That will free up a lot of memory for me.
7/23/2010 @ 6:57 pm
Thanks for the comments. Yeah, I think running your own mail is a real headache – we did that on a shared site I was using, and it caused a lot of problems.
As for running Rails – when you were doing that on Apache, were you using Passenger or something else? I’ve been okay with the performance so far, once I bumped the number of threads down, but it’s still using more memory than I’d like. But I don’t have a feel for how much of that is due to general Ruby/Rails issues and how much has to do with the specifics of Passenger / Apache, so certainly lighttpd sounds like it would be worth looking into.
7/24/2010 @ 12:59 pm
Just as I finished my server, you decided not to have one. Ironic. At least mine is internal only.
7/24/2010 @ 5:31 pm
[…] the reboot; repeating the attempt showed that this was not a one-time coincidence. Whoops; I have bad luck with computers these days, it […]
1/11/2011 @ 8:03 am