Monday, April 12, 2010

Microsoft: Datacenter Growth Defies Moore's Law - PCWorld

Fast, cheap and all over the place. That's how the technology experts behind Microsoft's fast-growing Live offerings envision the future of the enterprise datacenter in a Web 2.0-driven world.

Faced with an industry-wide shortage of datacenter space and demand that is more than doubling every 24 months, Microsoft is eyeing alternatives to traditional datacenter design, including mobile shipping containers dispersed around the globe to accommodate growth and meet user demand, James Hamilton, architect of Microsoft's Windows Live, said Tuesday during a presentation on datacenter design at the Web 2.0 Expo in San Francisco.

Just as Web 2.0 app developers pursue innovative means for achieving explosive growth, those in charge of the Web 2.0 server room are looking for cutting-edge methods for processing and facilitating that growth, Hamilton said.

"The glut of 2000 datacenter space is over," he said, referring to the fallout of the dotcom market collapse, which left a glut of excess datacenter and bandwidth capacity. "We are out of space industry-wide. We have taken all the most desirable space and everyone is in a mass buildup."

Google alone has 450,000 systems running across 20 datacenters, and Microsoft's Windows Live team is doubling the number of servers it uses every 14 months, faster than Moore's Law, Hamilton said, referring to Gordon Moore's famous observation that the number of transistors on an integrated circuit doubles every 24 months.
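
To make that comparison concrete, compound-growth arithmetic shows how quickly a 14-month doubling period outpaces a 24-month one. A minimal sketch (the doubling periods are the article's figures; the four-year horizon is an arbitrary illustration):

```python
# Compare exponential growth at the two doubling periods cited above:
# Windows Live servers (every 14 months) vs. Moore's Law transistor
# counts (every 24 months). The 48-month horizon is just an example.

def growth_factor(months: float, doubling_period: float) -> float:
    """Multiplicative growth after `months`, given a fixed doubling period."""
    return 2 ** (months / doubling_period)

horizon = 48  # four years
servers = growth_factor(horizon, 14)      # ~10.8x
transistors = growth_factor(horizon, 24)  # exactly 4.0x

print(f"After {horizon} months: servers grow ~{servers:.1f}x, "
      f"transistors ~{transistors:.1f}x")
```

Over that horizon the server fleet grows to roughly 10.8 times its starting size while transistor density only quadruples, which is the gap Hamilton is pointing at.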

Companies like Microsoft and Google have been expanding their datacenter facilities at a breathtaking pace in recent years. Most recently, Google said it would build a $600 million datacenter near Charleston, South Carolina, where tax breaks and ample electricity were available. Microsoft has announced similar plans for rural Washington State.

Microsoft, IBM, HP, Sun and others have also formed a consortium called the Green Grid to tackle an impending energy crisis that threatens datacenter growth.

Hamilton outlined a datacenter design strategy that uses the storage container as a core building block, stressing the importance of a paradigm shift in server-farm architecture in light of the data deluge brought on by Web 2.0's expansive growth.

To meet the growing need for datacenter space, Hamilton advocates what he called a "services" rather than an "enterprise" model for the Web 2.0 datacenter: housing operations in dispersed, tightly packed storage-container datacenters, not unlike Sun Microsystems' Project Blackbox.

The airtight containers would be constructed to place server boxes in close proximity to CRACs (computer room air-conditioning units), with variable-speed fans controlling airflow. The power-efficiency benefits of this construction would be further enhanced by the near-total elimination of ingress and egress for human access, thereby eliminating the space-waste issues faced by traditional datacenters, Hamilton said.

The containers -- the datacenter equivalents of flash drives -- would be dispersed strategically around the globe in an effort to move capacity to the edge, closer to customers, thereby reducing dependency on, and the costs associated with, CDNs (content distribution networks), he said.

"We're not close enough to customers, so we have to work with content distribution networks as well," Hamilton said.

Web 2.0 enterprises also need to adopt a radically different approach to hardware maintenance in a world of small, cheap, mobile datacenters, he said.

Noting that 40 percent of the cost of datacenter operations goes toward staff, Hamilton said that companies need to eliminate headcount in datacenters as much as possible, by switching to a "detect/reboot/replace" approach to server failure, rather than costly maintenance and servicing of malfunctioning hardware. Such an approach could improve servers-per-administrator ratios from 100-to-1 to upwards of 1,000-to-1, Hamilton said.
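
As a sketch of what a "detect/reboot/replace" policy might look like in practice: a failed server gets one automated reboot, and if it fails again it is routed around and queued for bulk replacement rather than serviced. The health check and decommissioning queue below are hypothetical stand-ins, not anything Hamilton or Microsoft described:

```python
# Illustrative detect/reboot/replace loop. A failed server gets one
# automated reboot; after that it is marked dead and queued for bulk
# replacement instead of hands-on repair. All names are hypothetical.

from dataclasses import dataclass

MAX_REBOOTS = 1  # past this, stop servicing and schedule replacement

@dataclass
class Server:
    name: str
    healthy: bool = True
    reboots: int = 0

def check_health(server: Server) -> bool:
    # Stand-in for a real probe (heartbeat, ping, service check).
    return server.healthy

def handle(server: Server, replace_queue: list) -> None:
    if check_health(server):
        return
    if server.reboots < MAX_REBOOTS:
        server.reboots += 1
        server.healthy = True  # optimistically assume the reboot worked
    else:
        # No technician dispatched: drain traffic and queue the box
        # for wholesale replacement during the next hardware refresh.
        replace_queue.append(server.name)

replace_queue: list = []
fleet = [Server("web-01"),
         Server("web-02", healthy=False),             # gets one reboot
         Server("web-03", healthy=False, reboots=1)]  # gets replaced
for s in fleet:
    handle(s, replace_queue)
print("queued for replacement:", replace_queue)  # ['web-03']
```

The point of the policy is headcount: automated remediation with a hard cap on per-machine effort is what pushes servers-per-administrator ratios toward 1,000-to-1.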

"Spending 20 percent of your datacenter budget on service? Well, then, don't service it," Hamilton added. "It seems ironic to build systems that run well on unreliable components and then house those unreliable components in a super-reliable datacenter."

The payoffs of such an approach go beyond mere cost savings. Forgoing a solitary central facility in favor of distributed microcenters would also increase reliability and offer more flexibility in the face of large, often seasonal fluctuations in customer load, or regional disruptions such as war or natural disaster, he said.

Not that the distributed model doesn't come with risks and challenges. A shift to smaller, trailer-based datacenters has implications for the software programming model. Also, savings on datacenter administrators would likely be diverted to pay for more application developers.

Judging from the looks of the app-dev crowd that makes up the audience of the Web 2.0 Expo, however, that would be just fine.

Wow, this is from 3 years ago - I wonder where they're at today...
