May 13, 2018

Interview: Discussing data centre trends with Open Compute Project

Dan Brightmore

“Back in 2009 Facebook seized the opportunity to create what it termed the ‘vanity free’ server, with fewer features to save cost but, more importantly, to improve reliability and logistics – omitting the parts you don’t need to combine weight reduction with improved energy efficiency.”

The Open Compute Project’s (OCP) CTO Bill Carter is reflecting on a series of events, triggered by Facebook’s exponential growth, which led to the social media giant designing the world’s most efficient data centre after a small team of engineers spent two years building one from the ground up: software, servers, racks, power supplies, and cooling.

“These servers were designed for large-scale deployments. 10 years ago, Google, Microsoft and Facebook were all scaling up and making a huge impact on the data centre footprint, so it made sense for them to do this. Facebook realised early on that its differentiation was in its application and social media platform, not in being a hardware supplier. What’s not talked about is that innovation within these servers had become stagnant. Reference designs came from Intel and those designs were duplicated by all the ODMs. If you create an open source society, as happened with software, the belief is you can drive innovation. More people looking at a problem creates derivative work from these hardware products, and vice versa. It can spread like wildfire in the right circumstances. That’s what Facebook wanted to create, and it was part of the genesis for the Open Compute Project.”

The result now stands in Prineville, Oregon. It was 38% more energy efficient and 24% less expensive to build and run than the company’s previous facilities – and has proved the catalyst for even greater innovation. In 2011, Facebook shared its designs with the public and – along with Intel, Rackspace, Goldman Sachs and Andy Bechtolsheim – launched the Open Compute Project and incorporated the Open Compute Project Foundation. The five members hoped to create a movement in the hardware space that would bring about the same kind of creativity and collaboration seen in open source software.


After spending 33 years working for Intel as a systems architect, and as the company’s OCP liaison in the early days, Carter joined the foundation to get involved with its work on operational efficiency on the facilities side of the business. “We’ve tried to promote collaboration in the data centre construction industry with the focused intent of driving innovation and embracing areas of technology we think we can accelerate and improve upon,” he maintains. “These are technologies that have traditionally been controlled and managed by a standards organisation, but those bodies tend to move pretty slowly with consensus building. We’ve been able to move faster by enhancing direct communication between the hyperscale end users – who drive the business and provide the financial means – and the technology providers doing the innovating.”

Carter recalls that with the traditional model you had an OEM or ODM (such as Dell, IBM or HP) that gathered all of that technology together and turned it into a product. They had control over that conversion and over what technology actually made it to the marketplace.

“Now we provide a means for these technology providers to collaborate more directly with end users,” he says, adding that the one area of innovation constant since OCP’s inception is energy efficiency, where it’s keen to push the envelope with new initiatives.

“The traditional data centre brings in power routed out to the physical racks and connected to the servers,” explains Carter. “Each of these servers has a redundant power supply, so they usually have two power sources with two cords – sometimes fed from two different utilities in the area to provide that resiliency. You can have up to 80 servers in a rack of equipment, so you have a lot of duplication, and the problem with that 1+1 resiliency model (where you have a hot spare for backup) is that at any given time each supply is only going to run at 50% of its output, which is not very efficient. In the very first Facebook design, which they called the Open Rack, they took all of the power supplies out of the servers and into a power shelf in the rack, and powered that via a higher voltage input – they did one conversion to 12V and then bussed that across. Architecturally, that conversion from 480V achieved in excess of 90% efficiency, which alone saved energy. It also eliminated all of the power supply elements in the server, reducing cost while allowing easier serviceability of the compute and storage nodes, so when you refreshed your IT equipment you were disposing of less hardware and only recycling the server board inside.”
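The efficiency argument Carter makes can be sketched with a quick back-of-the-envelope calculation. The efficiency curve and figures below are illustrative assumptions, not OCP or Facebook data; the point is simply that power supplies idling at half load waste more energy than a shared conversion stage running near full load.

```python
# Illustrative comparison of the 1+1 redundant PSU model vs a shared power
# shelf. The efficiency curve is a made-up but plausible model, not OCP data.

def psu_efficiency(load_fraction):
    """Toy efficiency curve: peaks near full load, drops at light load."""
    return 0.94 - 0.10 * (1.0 - load_fraction) ** 2

# 1+1 redundancy: two supplies share the load, so each runs at ~50% output.
redundant_eff = psu_efficiency(0.5)

# Power shelf: a single conversion stage runs close to full load.
shelf_eff = psu_efficiency(0.95)

# Utility power drawn to deliver 10 kW of IT load in each case:
it_load_kw = 10.0
redundant_input = it_load_kw / redundant_eff
shelf_input = it_load_kw / shelf_eff

print(f"1+1 redundant supplies: {redundant_eff:.1%} efficient, "
      f"{redundant_input:.2f} kW drawn")
print(f"Power shelf:            {shelf_eff:.1%} efficient, "
      f"{shelf_input:.2f} kW drawn")
```

Under these assumed numbers the shelf draws a few percent less utility power for the same IT load – multiplied across tens of thousands of servers, that gap is what motivated the Open Rack design.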

To meet the goals of open source, collaborations and partnerships are paramount. A key one for OCP is with the Telecom Infra Project (TIP) – an open source hardware consortium focused on optical broadband networks and open cellular networks. “Much like OCP it works collaboratively with telecoms operators around the world,” reveals Carter. “We have an active networking focus at OCP. Telecom operators are providing universal Customer Premise Equipment (CPE): network access devices that would sit in a small business providing it with a set of access services, a bit like a set top box in your home. Rather than having every carrier design a custom CPE device, the industry saw an opportunity to create a generic device with standardised software that sits on top of that device. We’ve partnered to co-develop this architecture for CPE and develop the hardware Universal Customer Premise (UCP) equipment. We’re working on specs for a small portfolio of products.”

Assessing trends in the data centre industry, Carter notes that, on the networking side, most solutions are built with commodity silicon, which is now used across devices, branded and white box alike. As a result, the tech is available to the consumer regardless of purchase option, so there are no disadvantages by choice. “Companies such as Cumulus Networks and Barefoot have emerged to provide solutions,” he adds. “They are taking open source hardware, qualifying an open source Linux-based network OS on top of it, and building the data plane, control and orchestration software on top of that. You really can get a turnkey solution as the barriers to adoption are dissolving. Case in point: Edgecore is a white box provider, and significant contributor to tech at OCP, that recently announced a 400G switch – one of the first available in the market.”

One of the OCP’s biggest successes to date has been in the field of rack architecture. “We have an Open Rack standard, with products built off 19” EIA-310 compatible tech, and contributions from Microsoft with the Olympus rack design,” highlights Carter. “Both of these have multi-sourced supply chains behind them. We’re giving the industry efficient choices and are proud to have changed the landscape for networks. For the past eight years, working with Facebook and the ODM community, we have disaggregated the network switch. Nowadays you can buy any flavour you need – there are multiple options for network OS, all open sourced. That’s a game changer we’ve been part of, developing pathways for these companies to collaborate – we’re not a provider but a true enabler.”

Its success was recently measured when OCP engaged IHS Markit, a world leader in critical information, analytics and solutions, to help validate the adoption of OCP gear across the data centre industry. Since inception, OCP has worked to drive innovation in the data centre industry, bringing together nearly 200 member organisations and more than 4,000 engineers. The demands on the modern data centre continue to expand with the growth of IoT, security and edge computing, as well as increasing energy consumption requirements. Against that backdrop, revenue generated from OCP-approved equipment reached $1.2bn in 2017 from non-board member companies alone. OCP board member companies (such as Facebook and Microsoft) have widely adopted OCP principles with great success; however, until now it wasn’t clear how far beyond these market leaders OCP had reached. Carter is impressed that the IHS Markit study forecasts OCP adoption among non-board member companies to surpass $6bn by 2021. The primary drivers are power efficiency, cost reductions, standardisation and quick deployment capability across more than 100 OCP-developed products.

Until now most of the revenue derived from OCP gear has come from North America, but going forward Carter sees most of its growth opportunity in Asia and Europe as awareness of its work is raised through events like this year’s OCP European summit in Amsterdam, due to take place October 1st and 2nd. Events like these offer a chance for OCP to grow and share ideas around transformation. “Our data centre facilities group aims to drive new standards for colo providers to enhance energy efficiency and draws on the expertise of our European members,” confirms Carter.

What are the OCP’s goals for 2018 and beyond? “We would like to encourage data centres to adopt preparation recommendations to allow them to use this more efficient equipment,” Carter asserts. “We’re also working on hardware management – we have a profile based on the Redfish specification and want to roll that out across our products so that, regardless of where it comes from, if a product meets the OCP spec it can be managed correctly, which will allow the software community to have a universal hardware target they can better design orchestration software for.”
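Redfish, the DMTF specification Carter refers to, exposes hardware management as a REST API returning JSON resources, so the “universal hardware target” amounts to orchestration software reading the same fields from any compliant server. A minimal sketch of that idea, using a hand-written sample payload rather than a live BMC response:

```python
import json

# Illustrative, hand-written payload shaped like a Redfish ComputerSystem
# resource; a real one would come from GET /redfish/v1/Systems/<id> on the
# server's management controller (BMC).
sample_response = """
{
  "@odata.id": "/redfish/v1/Systems/1",
  "Id": "1",
  "Name": "OCP Node",
  "PowerState": "On",
  "Status": {"State": "Enabled", "Health": "OK"}
}
"""

system = json.loads(sample_response)

# Orchestration software can key off the same schema fields regardless of
# which vendor built the box - the point of a common management profile.
print(system["Name"], system["PowerState"], system["Status"]["Health"])
```

Because the resource paths and property names are fixed by the schema, tooling written against one OCP-profile server should, in principle, manage any other.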

Concluding with an eye on sustainability, Carter believes embedded software is key to producing a complete open source solution. “When equipment is recycled you have to have the ability to maintain it down the road. Often, the BIOS becomes obsolete so if there’s a way for that to be open sourced the community can maintain it and extend the useful life of hardware. What’s useful for a hyperscaler today may be useful to a local cloud provider in the future if we can provide the necessary tools.”


Jun 17, 2021

Non-IT experts ‘to build majority of tech products by 2024’

Gartner has found that the majority of technology products will be built by professionals outside of IT by 2024

80% of technology products and services will be built by non-technology professionals by 2024, says research firm Gartner.

This is according to a new report from Gartner, which claims a new category of buyers outside the traditional IT organisation is now responsible for a growing share of the overall IT market.

“Digital business is treated as a team sport by CEOs and no longer the sole domain of the IT department,” said Rajesh Kandaswamy, distinguished research vice president at Gartner. “Growth in digital data, low-code development tools and artificial intelligence (AI)-assisted development are among the many factors that enable the democratisation of technology development beyond IT professionals.”

COVID-19 Accelerating Technology


Technology has started expanding into all areas of business, creating demand for products and services outside IT departments. In 2023, Gartner anticipates that US$30 billion in revenue will be generated by products and services that did not exist pre-pandemic. Gartner analysts said the rapid expansion of cloud services, digital business initiatives, and remote services opened the door for new possibilities in integrations and optimisation.

The research found that COVID-19 also reduced barriers for those outside of IT to create technology-based solutions by providing an entry point for anyone who was able to serve pandemic-induced needs. Gartner said technology providers are now finding themselves increasingly entering markets related to, or in competition with, nontechnology providers, including innovative firms in financial services and retail. 

Gartner expects high-profile announcements of technology launches from nontech companies to proliferate over the next 12 months.

“The availability of business technologists provides new sources of innovation and the ability to get work done. Thus, technology and service providers will need to extend their sourcing of ideas and technology development into new communities, whether they are based on citizen development, their own customer communities or other sources,” said Kandaswamy.
