Facebook has taken delivery of the first set of innovative server racks it helped design, technology that the company hopes other organizations with large data centers will adopt.
The prototype racks are some of the first significant tangible gear to emerge from the Open Compute Project, a year-old multicompany effort to drive down costs and improve data center hardware by open sourcing the designs.
"I'm embarrassed to say but it was kind of emotional to see the Open Rack" units, said Frank Frankovsky, Facebook director of hardware design and founding board member of the Open Compute Project. "We had been working on Open Rack for a long while."
In April 2011, Facebook launched the Open Compute Project, an initiative to apply the open-source software collaboration model to the world of data center hardware. Through the project, buyers of data center equipment can collaborate to design the products, or at least the specifications for the products, that they would like to see. Vendors can then use these blueprints to build the equipment.
Motherboards, power supplies and electrical subsystems are among the equipment Open Compute is collaboratively designing. A number of manufacturers have signed on and volunteered engineering support, including Asus, Hewlett-Packard, Advanced Micro Devices and Supermicro.
Open Rack can be seen as a test case for this approach. Facebook plans to test the prototypes over the next few months, and, if they work as planned, the company will start using them in its data centers by early next year, Frankovsky said. "From 2013 forward, every rack we deploy in our data centers will be Open Rack," he said. Because the rack design is freely available under an open-source license, any manufacturer can produce these racks for Facebook or for other customers.
At first glance, an Open Rack chassis may look much like any other rack, but the prototypes incorporate a number of deliberate design changes.
Today, the most widely used racks in data centers are based on the EIA 310-D specification, which wasn't developed for holding computer equipment at all, Frankovsky pointed out. It actually was created during the 1950s to hold railroad signal relays.
"They were never designed to go into data centers," Frankovsky said. Using EIA 310-D, "You have devices that are poking out the front, or poking out the back. You have cable routes that make IT nonserviceable."
Among other issues, EIA 310-D specifies only the width between a rack's inner mounting rails. Other dimensions, such as height, depth and mounting points, are left to manufacturers. As a result, each manufacturer creates a slightly different rack, and each vendor's racks are incompatible with those from competing vendors.
Frankovsky called this practice "gratuitous differentiation." If a data center wants to use only one kind of rack -- a common practice that helps standardize operations -- then it must buy all its racks from a single vendor. While good for the vendor, this lock-in is potentially problematic for the customer should the vendor exploit that reliance by raising prices.
"Can I stick a HP server into a Dell chassis? No. Why not? Does it really have value to me to have different racks?" Frankovsky told an audience at the O'Reilly Open Source Conference (OSCON), held earlier this month in Portland. "Why can't we all get along right there, and work on something more innovative than the underlying physical infrastructure?"
To help design the racks, Facebook collaborated with Asian Internet giants Baidu and Tencent. Baidu and Tencent engineers visited Facebook at the company's Prineville, Oregon, data center to discuss rack technologies. Those companies experienced many of the same frustrations with racks that Facebook did, and even started their own design specification, Project Scorpio. "We converged our voices," Frankovsky said.
"We compete with those guys, but on the infrastructure side, if we can make our infrastructure more efficient, it makes everyone that much better," Frankovsky said. "Where we differentiate our business is in the service we provide to our end users."
The Open Rack specification calls for a slightly taller rack unit, one 48 millimeters (1.89 inches) tall rather than the 44.45 millimeters (1.75 inches) of the standard rack unit. The taller slot allows more air to circulate through the equipment and makes it easier for technicians to access the gear, Frankovsky said.
Like an EIA 310-D rack, an Open Rack is 24 inches (61 centimeters) wide, matching the standard floor tile pitch. But the equipment bay itself is 21 inches (53 centimeters) wide -- 2 inches wider than EIA 310-D's 19-inch bay -- allowing three motherboards or five 3.5-inch disk drives to sit side by side in one chassis.
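For readers who want to sanity-check the geometry, the following minimal Python sketch compares the two specifications. The rack unit and bay figures come from the article; the roughly 4-inch width of a 3.5-inch drive is an assumed approximation for illustration (the "3.5 inch" name refers to the platter, not the enclosure).

    # Back-of-the-envelope comparison of EIA 310-D and Open Rack dimensions,
    # using the figures cited in this article.

    STANDARD_U_MM = 44.45   # EIA 310-D rack unit height
    OPEN_U_MM = 48.0        # Open Rack "OpenU" height

    EIA_BAY_IN = 19.0       # EIA 310-D equipment bay width
    OPEN_BAY_IN = 21.0      # Open Rack equipment bay width
    DRIVE_WIDTH_IN = 4.0    # assumed width of a 3.5-inch drive enclosure

    # The taller OpenU leaves roughly 8 percent more vertical space per slot.
    print(f"OpenU is {OPEN_U_MM / STANDARD_U_MM - 1:.1%} taller than a standard U")

    # Five drives side by side need about 20 inches: they fit the 21-inch
    # Open Rack bay but not the 19-inch EIA 310-D bay.
    needed = 5 * DRIVE_WIDTH_IN
    print(f"5 drives need {needed:.0f} in; "
          f"Open Rack: {needed <= OPEN_BAY_IN}, EIA 310-D: {needed <= EIA_BAY_IN}")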
The rack supports an entirely new modular server design Facebook wants to pursue, also under Open Compute development. In this design, all of the major components of a server -- such as the CPU, hard drive, network cards and memory -- would be easily accessible on the rack tray.
"Servers in the future will look a lot more like sleds," Frankovsky said. "The value of disaggregating servers is that you can [replace] the technology faster," he said. Instead of replacing the entire server when a component fails, or needs to be upgraded, the technician can just replace that particular component.
The racks are also innovative in how they supply power to the individual servers: a cableless system consisting of a distributed power bus that server components hook into directly. "The rack becomes the power enclosure," Frankovsky said.
At OSCON, Frankovsky reiterated some of the core drivers behind Open Compute. "We will be building larger and larger data centers, and they will have unique challenges associated with them," he said. "If we don't vote with our wallets, and demand something better, we may not ever know how far the cost of equipment would have gone down if we didn't work together on something like Open Compute."