"Why more people go to the Cloud" by Piotr Banasik @ Metro Manila Cloud Web Developer Group Meetup, July 29, 2010

Why do people go to the Cloud? Piotr Banasik enlightens us on how attractive the Cloud is to its users.



You go to the cloud 'cause, well, why do you go to the cloud?

You go to the cloud 'cause somebody told you to? Well, no. 'Cause it's cool? Maybe.

But mainly you go to the cloud 'cause it's scalable. You can easily spin up and shut down new instances, and you don't actually need to pre-purchase hardware or pre-lease servers.

So that's what makes the cloud attractive to most people. So, what does it mean to be scalable?

Well, for a web application, that means being able to handle a whole bunch of requests. You might have some caching involved, which again helps to handle lots of requests. You probably have a bunch of application servers involved, which lets you spread out the load and, again, handle more requests. And if you don't need those servers, you wanna be able to shut them down. Or if you need more servers, you wanna be able to spin up new ones so that you can accommodate whatever load is coming.
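The "spin up and shut down instances" part is what a cloud provider's API gives you. The talk doesn't name a provider or library, so this is only a hedged sketch using the fog gem of that era talking to EC2; the credentials, AMI id, and instance size are placeholders.

    require 'fog'

    # Connect to a cloud provider's compute API (EC2 here, purely as an example).
    compute = Fog::Compute.new(
      :provider              => 'AWS',
      :aws_access_key_id     => 'YOUR_KEY',
      :aws_secret_access_key => 'YOUR_SECRET'
    )

    # Spin up an extra application server when load grows...
    server = compute.servers.create(:image_id => 'ami-12345678', :flavor_id => 'm1.small')
    server.wait_for { ready? }

    # ...and shut it down again when it's no longer needed, so you stop paying for it.
    server.destroy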

One of the ways to deal with handling lots of traffic is caching. If your servers don't need to be processing something, don't process it. Serve it out of the cache. The other aspect of being able to scale is being able to deploy on many different servers. I mean, when you start out an application, a project, you often just deploy it on a single server. It's your development setup. And then it grows, you get some clients, and then you go, 'Okay. My little server is dying now. I need more power.' So, you need to be able to spread it out across different servers.
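Here's a small sketch of the "serve it out of the cache" idea in Rails; the Product model, query, and cache key are made up for illustration.

    # Read-through caching: only hit the database on a cache miss, then keep
    # the result around so repeat requests barely touch the app server.
    def popular_products
      Rails.cache.fetch('products/popular', :expires_in => 10.minutes) do
        Product.find(:all, :order => 'sales_count DESC', :limit => 20)
      end
    end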

If you're running something across different servers, things are gonna get a little bit complicated, because you can no longer rely on simple things like a file being there. Just because you put it there on the last request doesn't mean it's gonna be there on the next request, because the next request can be handled by a different server now. So on the first server you put a file in the temp folder; the next request hits a different server, so the file's not there anymore, because it's on the other server. So that's something you need to be aware of when you're growing an application from, like, development mode, where you have one server, to actual deployment, where you have multiple servers dealing with all this stuff.
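A quick sketch of the problem he's describing; the file path and the Upload model are hypothetical.

    # Fine on one server, broken behind a load balancer: the next request may
    # land on a different box where this file was never written.
    File.open('/tmp/report.csv', 'w') { |f| f.write(csv_data) }

    # Safer with multiple servers: put the data somewhere every app server can
    # reach, for example a database table (hypothetical model and columns).
    Upload.create!(:filename => 'report.csv', :payload => csv_data)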

So, I was talking about sessions. Rails has this construct called the ActiveRecord session store. Essentially it uses Rails' own way of handling database records to store sessions. For internal caching, there's an application called Memcached. Essentially, what it does is take a chunk of RAM and reserve it as cache space. Nginx is a much leaner server. It's really fast at serving static content, and it integrates with Rails pretty well. There's something called Passenger, which is essentially a module for Apache and Nginx that basically does all the work of managing Rails instances for you. So that's the easy option. And if you want a little bit more control over what goes on, you can just manually set up Nginx to proxy to your own Mongrel instances.
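Pulling those pieces together, here's a minimal sketch of the Rails 2.x-era configuration being described: sessions in the database via ActiveRecord, and Rails.cache backed by Memcached. The Memcached address is a placeholder.

    # config/environment.rb (Rails 2.x syntax)
    Rails::Initializer.run do |config|
      # Store sessions in the database via ActiveRecord, so any app server
      # behind the load balancer can read them. The sessions table itself is
      # created with "rake db:sessions:create".
      config.action_controller.session_store = :active_record_store

      # Back Rails.cache with Memcached instead of per-process memory.
      config.cache_store = :mem_cache_store, 'localhost:11211'
    end

On the web server side, the easy option is Passenger doing the process management for you; the manual option is an upstream block in nginx.conf listing your Mongrel ports, with proxy_pass pointing at it.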

Now, load balancers. Well, in my experience, I don't really remember running into many load balancer issues, because they're fairly low-maintenance boxes. Really, they don't get stressed. They just have requests going through them, and they forward them right over to somewhere else. They don't really do processing; they just do routing. Personally, I haven't really run into any problems with them dying out of the blue. If they did, the only reason would probably be hardware problems. But still, if you want that extra level of safety, you can put up a backup load balancer. If you have a single load balancer and it goes down, your whole app goes down, 'cause that's how all your traffic is routed. So without it, your app doesn't exist.