


"The Ubertor Story - Evolution of a Scale Web Application" by Piotr Banasik @ Metro Manila Cloud Web Developer Group June Meetup in July, July 2, 2010

Check out how Ubertor grew from a single server to serving about 20 million users today. Piotr Banasik discusses the challenges of making their application scalable, and their move to Amazon EC2.

Welcome to the Meetup!
So, today's talk is about the evolution of a scalable web application, or basically the Ubertor story. It's the application that we've been developing now for over ten years. It's in active use, and we've probably got over 20 million users.

Can you tell us about Ubertor's beginnings?

In the beginning, there was just one server. Basically, we were a shared hosting reseller. Really, that's how we started off. Then we needed to switch providers. So we went from one shared hosting provider to renting a whole server at another shared hosting provider. This time, we had the whole server, and we were dividing it up however we wanted ourselves. We had more capacity, more stability. And again, things were good. We were still at one server, though. So, nothing much had changed.

Did you encounter any problems?

Then, things broke. We needed help from the provider, and, well, they weren't as helpful as we hoped. So we were out shopping for another provider again. (Laughs) That's our story! For a while, we were actually switching providers like every other year. A big migration, and all those clients moving over to a new provider.

What were your next measures?

So, our next step was to get our own gear. And we colocated it at a new facility that was over two hours away from where we lived. Now we had a total of five servers just running the clients' sites, and there was a bunch of other servers dealing with everything else. We didn't have anything to spread the load with, because it was all still a bunch of individual sites sitting on individual servers. So people got different performance at different times of the day.

Then there were hardware problems, because it was our own hardware. It was fun driving out there in the middle of the night to fix problems. (Laughs) So we hired a consultant and rented a bunch of servers thousands of kilometers away. We have no physical access to them whatsoever. And we basically home-brewed a cloud of sorts before "cloud" became a buzzword.

What is it like with your servers now?

But now we have a bunch of big servers, and they've been virtualized into a bunch of smaller servers: MySQL, load balancers, application servers, and all of that stuff. So there was stuff that we never had to deal with before.

Before, it was easy. One server: we put all the client files into one folder. I mean, into a bunch of folders, one for each site. But that was on a shared storage server, and all the application servers just connected into that. That was kind of our first step.

Then we figured it was actually a real pain in the behind to deal with the individual sites, because if I wanted to add or remove a site, we'd have to reconfigure and patch in a new site, right?

So we found Apache's dynamic virtual host stuff, which basically lets you set up a pattern: if you make a folder with the domain name, it will automatically serve that. So that worked fairly well. We just have a big folder with a whole bunch of domain names in it, and they're symlinked over to the individual folders that go with each site. That basically makes the application servers really hands-off. Any new site that gets added only needs to be set up on the file server and the database servers.
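The setup described above maps to Apache's mod_vhost_alias module. A minimal sketch, assuming a `/srv/sites/` layout (the paths are illustrative, not Ubertor's actual config):

```apache
# Dynamic virtual hosts: one folder (or symlink) per domain name,
# no per-site <VirtualHost> blocks needed.
LoadModule vhost_alias_module modules/mod_vhost_alias.so

<VirtualHost *:80>
    # %0 expands to the full requested hostname, so a request for
    # www.example.com is served from /srv/sites/www.example.com/
    VirtualDocumentRoot /srv/sites/%0
    UseCanonicalName Off
</VirtualHost>
```

With this in place, adding a site really is just creating (or symlinking) a new folder named after the domain; Apache never needs to be reconfigured or reloaded.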

What bottlenecks did you run into?

But the application servers don't really care; they just serve what needs to be served. And we started to run into multiple bottlenecks. 'Cause, of course, you're bound to run into bottlenecks if you've got everything going into the same file server. So that was our first bottleneck: the file server. All the application code was there, and all the sites' actual media files were there too, so everything went through it. People's pictures and whatnot.

So we put them all on Amazon S3. That took a whole bunch of load off of the file server, 'cause that stuff is no longer served out of there; it's now served directly from (Amazon) S3. So that helped, for a while at least. Long enough to give us time to come up with a plan to move again.
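The offload step amounts to rewriting media URLs so browsers fetch files from S3 instead of the file server. A minimal sketch, assuming a virtual-hosted-style S3 URL and a per-site key layout (the bucket name and layout are illustrative assumptions):

```python
# Serve site media directly from S3 instead of the shared file server.
# Bucket name and key scheme are hypothetical, for illustration only.
BUCKET = "ubertor-media-example"

def media_url(site_id: str, filename: str) -> str:
    """Build the public S3 URL for one site's media file."""
    return f"https://{BUCKET}.s3.amazonaws.com/{site_id}/{filename}"

# Usage: the app emits this URL in its HTML, so the image request
# never touches the file server at all.
print(media_url("site_001", "listing-photo.jpg"))
```

The point of the design is that the file server drops out of the media-serving path entirely; it only has to hold application code.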

And we moved to (Amazon) EC2. Well, it wasn't that easy. (Laughs) We couldn't use the shared file trick anymore, 'cause you can't really do that with EC2. We couldn't have persistent storage; that was before EBS (Elastic Block Store) was added.

How does the latest reincarnation of the app work?

On a good note, we already had the files that made the sites unique up on S3. So at least that was nice. But we needed to tweak the code so that neither Apache nor the application even had to think about separate sites. So the latest reincarnation of the app was born. Basically, every request now goes to the same folder, the same entry point in the code. And through a central database shared between all the sites, it maps the requested domain name, and it goes, 'Oh, you're that site. You need to be using that database, and that's your S3 prefix. That's where we put all of your files...' So basically everything goes to one entry point, shares the same code, and it just kinda works.
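The single-entry-point routing described above can be sketched as a lookup from the request's Host header to the site's database and S3 prefix. The table contents, column names, and prefix scheme below are illustrative assumptions, not Ubertor's actual schema:

```python
# Stand-in for the central database shared between all the sites:
# one row per domain, pointing at that site's database and S3 prefix.
SITE_TABLE = {
    "www.example-realtor.com": {"database": "site_001", "s3_prefix": "sites/001/"},
    "www.another-agent.com":   {"database": "site_002", "s3_prefix": "sites/002/"},
}

def resolve_site(host: str) -> dict:
    """Map the request's Host header to its site configuration.

    Every request hits the same code; only this lookup differs per site.
    """
    site = SITE_TABLE.get(host.strip().lower())
    if site is None:
        raise LookupError(f"unknown site: {host}")
    return site
```

Because the mapping lives in one shared database rather than in Apache config, adding a site is just inserting a row; the application servers stay completely hands-off.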