About 7 months ago, we decided that we needed to hire someone to focus on making our infrastructure better.
And we needed not only a better version of what we had, but also a better architecture for the future, one in which Chargify and our merchants’ businesses could rely on multiple data centers in geographically diverse locations.
We hired Drew Blas (@drewblas) in July 2012, and he’s been working on these goals ever since.
With his work and help from much of our tech team, the system has gotten a lot better (faster, more stable) than it was a year ago.
And then a few days ago, we took a big step forward: we moved Chargify to new data centers, and we added the first layer of geographic diversity in a plan that includes more in the coming months.
It’s all part of our long-term plan to support our merchants and their customers with systems that run more efficiently and with less risk from data center problems and natural disasters.
I’m very happy to say that this big Saturday change was very uneventful (no easy feat – thanks Drew, Michael, Nathan, Kori, and everyone who was part of it in the wee hours Saturday morning).
Here’s a summary of the past, present, and future:
Phase 0: 2009 to Now
For the past few years, the Chargify app and database have been hosted at a PCI-specialized hosting provider in Kansas City. This system includes multiple web servers, “job” servers (like emails and webhooks), and database servers (master & replica).
This system is still running, but we moved all active processing to new systems on Saturday, December 1st.
Phase 1: December 1, 2012
The Chargify app and database are now hosted at 2 locations within Amazon’s AWS hosting service.
Each installation of our system includes the same number of web servers and job servers as our Phase 0 system.
Basically, we doubled everything and spread it across 2 locations. This reduces the risk inherent in depending on 1 installation and 1 data center (almost every data center has failures now and then).
We’re in 1 AWS “Region” (US-West-2, Oregon).
Within that region, we’re installed in 2 “Availability Zones”, each of which has its own complete Chargify installation, along with completely separate data center facilities (power, cooling, networking, and connection to the world).
These installations may be in the same building or in separate buildings, but even if they’re separate, we believe that Amazon uses locations that are close to each other. We see this as lower risk than where we used to be (in 1 building), but still prone to risk from a natural disaster in the general area.
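To make the risk reduction concrete, here’s a quick back-of-the-envelope sketch. The 99.9% uptime figure and the independence of failures are illustrative assumptions, not measured numbers for any data center (and independence is exactly what breaks in a regional disaster, which is why the later phases matter):

```python
# Back-of-the-envelope availability math for redundant installations.
# ASSUMPTIONS (illustrative only): each data center is up 99.9% of the
# time, and failures are independent of each other.
def combined_availability(single_availability: float, copies: int) -> float:
    """Probability that at least one of `copies` independent installations is up."""
    failure = 1.0 - single_availability
    return 1.0 - failure ** copies

one_dc = combined_availability(0.999, 1)
two_az = combined_availability(0.999, 2)
print(f"1 data center:          {one_dc:.6f}")   # 0.999000
print(f"2 availability zones:   {two_az:.6f}")   # 0.999999
```

Under those assumptions, going from 1 installation to 2 cuts the expected downtime by roughly a factor of a thousand. The caveat is that a shared regional event takes out both zones at once, which is the gap the later phases close.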
Regarding our database systems, each new Chargify installation has 1 database server (no replica), but each installation acts as a backup or replica for the other installations. My old friend and Engine Yard co-founder, Tom Mornini (@tmornini) told us about something called Continuent “Tungsten” earlier this year, and it makes a lot of database magic possible. That’s about all I know!
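I won’t pretend to explain Tungsten’s internals, but the basic shape of “each installation is a replica for the other” can be sketched like this. Everything here (class names, the dict-based storage) is a hypothetical illustration of the idea, not Tungsten’s actual design:

```python
# Minimal sketch of mutual (cross) replication between two installations.
# Each node keeps a primary store for its own writes plus a replica copy
# of its peer's data. Hypothetical structure for illustration only.
class Installation:
    def __init__(self, name: str):
        self.name = name
        self.primary: dict = {}   # this installation's own database
        self.replica: dict = {}   # copy of the peer's database
        self.peer = None

    def write(self, key, value):
        self.primary[key] = value
        if self.peer is not None:
            # Ship the change to the peer, which stores it as a replica.
            self.peer.replica[key] = value

az_a = Installation("us-west-2a")
az_b = Installation("us-west-2b")
az_a.peer, az_b.peer = az_b, az_a

az_a.write("customer:42", {"plan": "pro"})
# az_b now holds a replica copy and could take over if az_a fails.
print(az_b.replica["customer:42"])
```

The point of the sketch: no installation needs a dedicated standby replica, because its peer already holds a current copy of its data.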
Phase 2: January, 2013
We’ll add a 3rd Chargify installation in another Availability Zone (still in the same Region as Phase 1). This will be an incremental improvement – better, but still exposed to risk from a large disaster in Amazon’s US-West-2 Region (i.e., a widespread and lasting power outage). Certainly unlikely, but not a risk we want to accept for very long.
Phase 3: February, 2013
We’ll repeat Phases 1 & 2 in another AWS Region (probably US-East-1, Virginia). Then we’ll really feel safe! We’ll have 6 copies of Chargify running: 3 on the US West Coast and 3 on the US East Coast.
Not only will this offer really high redundancy in case of failure or disaster somewhere, but it will also allow us to do more interesting things, like sending web traffic to the closest data center, which should make the experience a little bit better for you and your customers.
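The “closest data center” idea is typically done with latency-based or geographic DNS routing rather than application code, but as a toy illustration of the decision itself (the latency numbers below are made up):

```python
# Toy illustration of latency-based routing: send each request to the
# endpoint with the lowest measured latency. The numbers are made up,
# and real systems usually make this choice at the DNS layer.
def closest_endpoint(latencies_ms: dict) -> str:
    """Return the endpoint name with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

measured = {
    "us-west-2": 35.0,   # ms, hypothetical measurement from a West Coast user
    "us-east-1": 92.0,
}
print(closest_endpoint(measured))  # a West Coast user lands on us-west-2
```

An East Coast user would see the reverse measurements and land on us-east-1, so each customer talks to the installation nearest them.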
We’re pretty excited about this, because even though it’s not visible and perhaps it’s even “boring”, it’s boring like a good water system is boring – boring is good!
And like a water system, this kind of thing is an investment. It’s one of the reasons we raised prices earlier this year.
We’ve released sexier, more visible things recently, and more are in the works, but solid infrastructure is one of the things we think our merchants will really appreciate over the long term.
If you have any questions or you notice something not working right that might be related to our move, please contact our Support team. We definitely want to work out any kinks or bugs that may arise from the move.
— Lance Walley, co-founder/CEO