One of the best things about cloud computing — as both a business model and an architectural principle — is that hardware really doesn’t matter. By and large, as long as applications and systems management software are intelligent enough to run the show, servers, switches and hard drives just need to show up with minimal competency and stay out of the way. If you don’t believe me, just ask Backblaze … or VMware … or Facebook.
Smart companies trying to deliver services over the web realize that they’re not in the business of pleasing a CIO, but of pleasing consumers. And consumers don’t care what’s under the hood as long as the service works and their lives aren’t interrupted by a downed server.
That’s kind of the reason that cloud computing exists, and has become such a successful delivery model for IT resources. Users get generic server specs on virtual machines, but all the magic happens at the layers above. Smart developers can take advantage of features such as auto-scaling and failover, as well as myriad open source components and open(ish) APIs, to piece together applications that might not look pretty, but stay online and don’t cost a fortune to run.
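To make that concrete, here is a minimal sketch of the kind of auto-scaling logic that runs at the layers above the hardware. It is illustrative only: the metric source, thresholds and instance names are hypothetical placeholders, not any particular cloud provider's API.

```python
# A toy auto-scaling decision loop: grow the fleet under load, shrink it when
# idle, and never care which specific box does the work.
import random
import time


def average_cpu(instance_ids):
    """Stand-in for a real metrics query; returns a fake utilization figure."""
    return sum(random.uniform(20, 95) for _ in instance_ids) / len(instance_ids)


def autoscale(fleet, min_size=2, max_size=20, scale_up_at=75.0, scale_down_at=25.0):
    """Add a generic VM under heavy load, retire one when idle, within bounds."""
    cpu = average_cpu(fleet)
    if cpu > scale_up_at and len(fleet) < max_size:
        fleet.append(f"i-{len(fleet):04d}")   # hypothetical: launch another commodity VM
    elif cpu < scale_down_at and len(fleet) > min_size:
        fleet.pop()                           # hypothetical: terminate one VM
    return cpu, len(fleet)


if __name__ == "__main__":
    fleet = ["i-0000", "i-0001"]
    for _ in range(5):
        cpu, size = autoscale(fleet)
        print(f"avg cpu {cpu:5.1f}% -> fleet size {size}")
        time.sleep(0.1)
```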
The cloud underpins a lot of applications that many web users couldn’t live without, from Instagram to Netflix. They all required some architectural creativity along the way to grow into what they’ve become, and the cloud enabled that.
What’s good for the goose …
Life shouldn’t be any different just because a company decides to run its own servers rather than rent ephemeral boxes from Amazon Web Services. Especially for large-scale web applications or services, it might make good business sense to eschew the traditionally expensive world of off-the-shelf computing hardware and just build the cheapest-possible gear that gets the job done — gear that ends up looking a lot like those generic cloud computing resources.
Google wrote the book on this by designing its own servers, data centers and, it appears, networking switches, and the company has done alright for itself. Facebook, too, is lauded for its custom-built hardware and data centers. Both companies have gotten so good at designing gear and sourcing components that they don’t necessarily need to buy much of anything from mainstream vendors in order to fill their data centers.
But neither company would consider itself a hardware company (which is why Facebook was willing to open source its designs under the Open Compute Project banner); they just realize that a little hacking can go a long way toward delivering a better service. Economically, custom gear that does away with extraneous bells and whistles while adding performance where needed means a lower sticker price, a lower power bill and a better user experience. Whatever reliability is lost by removing fans, server cases and vendor software is made up for by smart software engineers who design systems that expect gear to fail and keep running when it does.
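As a rough illustration of what "software that expects gear to fail" looks like at the application layer, here is a small, hypothetical sketch: the client tries each replica of a service in turn and treats a single dead box as routine rather than an outage. The replica URLs are placeholder assumptions, not a real deployment.

```python
# Route around failed nodes instead of depending on any one of them.
import socket
from urllib.error import URLError
from urllib.request import urlopen

REPLICAS = [
    "http://replica-1.internal/status",   # hypothetical endpoints
    "http://replica-2.internal/status",
    "http://replica-3.internal/status",
]


def fetch_from_any(urls, timeout=2.0):
    """Return the first successful response body; raise only if every replica fails."""
    errors = []
    for url in urls:
        try:
            with urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (URLError, socket.timeout) as exc:
            errors.append((url, exc))   # note the failure and move on: one bad node is expected
    raise RuntimeError(f"all {len(urls)} replicas failed: {errors}")
```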
Google is so confident in its software that it promises zero planned downtime for Gmail and achieves higher than 99 percent uptime for the service overall. And ask yourself when you last remember Facebook, with its 950 million users, crashing. In the enterprise IT world, it’s this type of infrastructural intelligence that’s driving the software-defined networking movement, and VMware’s vision of software-defined data centers.
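For a sense of scale, a quick back-of-the-envelope calculation shows how much downtime each extra nine buys. The figures below are plain arithmetic, not Google's published numbers.

```python
# Convert an uptime percentage into an annual downtime budget.
HOURS_PER_YEAR = 365 * 24

for uptime in (99.0, 99.9, 99.99):
    downtime_hours = HOURS_PER_YEAR * (1 - uptime / 100)
    print(f"{uptime}% uptime -> about {downtime_hours:.1f} hours of downtime a year")
```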
Backblaze blowback
This is why I was a little shocked to see some of the negative comments when I wrote recently about cloud-storage startup Backblaze’s efforts to deal with last year’s hard drive shortage by sourcing consumer-grade external hard drives from Costcos around the country. Given the choices — back off its unlimited-storage-for-$5 promise, or find a way to procure capacity on the cheap — Backblaze almost certainly made the right choice. The software that runs the service expects hard drives to fail, and backing up data doesn’t require blazing-fast data access. As long as consumer drives aren’t crashing by the hundreds, users don’t notice a thing.
What’s more, that epic feat of hackery wouldn’t have been possible if not for Backblaze’s even bigger contribution to cloud-service design — its $7,400, 135TB storage pods. Because it designs and builds its own infrastructure, the company had the luxury of swapping out the most critical component without worrying about voiding a warranty or messing up something in a storage array whose blueprints it hadn’t seen. It’s possible that the then-bootstrapped Backblaze wouldn’t have made it out of the hard drive shortage alive, or at least without some upset customers, had it not been so in tune with its hardware needs.
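This is not Backblaze's actual software, just a toy sketch of the general idea: the storage layer, rather than any individual drive, is responsible for durability, so each blob lives on several drives and gets re-replicated whenever one of them disappears. The mount points and replica count are assumptions for illustration.

```python
# Keep multiple copies of each blob and repair the replica count when a drive dies.
import os
import shutil

DRIVES = ["/mnt/drive0", "/mnt/drive1", "/mnt/drive2", "/mnt/drive3"]  # assumed mount points
COPIES = 3


def healthy_drives(drives=DRIVES):
    """A drive 'fails' in this toy model simply by not being mounted and writable."""
    return [d for d in drives if os.path.isdir(d) and os.access(d, os.W_OK)]


def store(blob_name, data, drives=DRIVES, copies=COPIES):
    """Write the blob to up to `copies` healthy drives and report where it landed."""
    targets = healthy_drives(drives)[:copies]
    for d in targets:
        with open(os.path.join(d, blob_name), "wb") as f:
            f.write(data)
    return targets


def repair(blob_name, drives=DRIVES, copies=COPIES):
    """If a copy went missing with a dead drive, re-replicate from a surviving one."""
    alive = healthy_drives(drives)
    holders = [d for d in alive if os.path.exists(os.path.join(d, blob_name))]
    for d in alive:
        if len(holders) >= copies or not holders:
            break
        if d not in holders:
            shutil.copyfile(os.path.join(holders[0], blob_name),
                            os.path.join(d, blob_name))
            holders.append(d)
```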
You can do it, too!
Given all this, it’s not insignificant that Facebook on Wednesday shared some tips for deploying its Open Compute servers in co-location facilities that might not be designed to handle custom rack designs. After all, unless you’re building your data centers like Facebook, Google or eBay, you have to play by your colo provider’s rules. Guidance from Facebook on actually deploying the servers under real-world circumstances makes Open Compute less of a good idea in theory and more of a good one in practice.
Traditional big companies might not line up to deploy Open Compute gear as is — they’ll understandably wait until server makers such as Dell and HP productize the designs. But companies that want to be big like Facebook or Google (or even in sheer capacity a la Backblaze) ought to pay attention. Facebook’s server designs could be a great starting point from which to build your own specialized gear, so that your application and your users’ experience are never at the mercy of fate or a vendor’s bottom line.
Feature image courtesy of Shutterstock user Jason Winter.
Source: http://gigaom.com/cloud/hacking-hardware-isnt-just-cool-its-also-good-business/