IT Pros - Don't Be Left in the Dust on IT Server Room Design

One of the most painful mistakes an IT group can make is to finally put together a full computer room design and plan, with power and cooling numbers, only to find that the work took too long and the design team has proceeded without you. Facilities has finalized the design with the architect and engineers, and the project is within budget. You could be left with spaces that are not satisfactory and an environmental design that will not provide the power, cooling, redundancy and future growth that you need. Think this can't happen? Unfortunately, it frequently does. If long-lead power and cooling systems have already been ordered, it will take costly change orders to rectify the situation. Often, the budget and IT needs collide and the result is an unsatisfactory compromise. Occasionally, it's serious: the room will not support the equipment design you have produced, and additional spaces must be planned.

There are two circumstances that lay the foundation for this problem. First, the Facilities group has a time frame and a budget to get this work done. Facilities has already prepared a budget for the project and may be using a per-square-foot cost that won't work for today's IT design. For example, today's highly compacted server rooms require 80-100 watts per square foot, and some run even higher. That load calls for upsizing the HVAC, UPS and generator systems as well. Just a few years ago, 45-60 watts per square foot would have done the job. Second, at the beginning of the project, Facilities asks the IT group to provide a complete list of the equipment that will be placed in the new IT spaces, along with the environmental requirements for each piece of equipment. The operative words here are "environmental requirements". Further, the IT group needs to provide a CAD-type layout showing how the equipment in the computer room, server room, network area, NOC, etc. will be placed in the spaces. This layout must also be coordinated with the mechanical and electrical engineers, who will be placing their own equipment in the same spaces.
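The jump from 45-60 to 80-100 watts per square foot can be made concrete with a rough sizing calculation. The sketch below is illustrative only; the 2,000 sq ft room size and the conversion factors are our assumptions, not figures from any specific project:

```python
# Rough sizing sketch: how a higher watts-per-square-foot design load
# changes total room power and cooling. All figures are illustrative.

BTU_PER_WATT_HR = 3.412  # 1 watt of IT load produces ~3.412 BTU/hr of heat
TON_BTU_HR = 12_000      # 1 ton of cooling removes 12,000 BTU/hr

def room_load(area_sqft, watts_per_sqft):
    """Return (total watts, cooling BTU/hr) for a room at a given density."""
    watts = area_sqft * watts_per_sqft
    return watts, watts * BTU_PER_WATT_HR

area = 2_000  # hypothetical 2,000 sq ft server room

old_w, old_btu = room_load(area, 50)  # older assumption: 45-60 W/sq ft
new_w, new_btu = room_load(area, 90)  # current designs: 80-100 W/sq ft

print(f"At 50 W/sq ft: {old_w/1000:.0f} kW, {old_btu/TON_BTU_HR:.1f} tons of cooling")
print(f"At 90 W/sq ft: {new_w/1000:.0f} kW, {new_btu/TON_BTU_HR:.1f} tons of cooling")
```

For this hypothetical room, the newer density nearly doubles both the electrical service and the cooling tonnage, which is exactly why a budget built on the old per-square-foot figure falls short.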

As the design clock ticks away, here's a summary of many things that could be, and often are, occurring on the IT side:

  • Most IT managers and IT operations groups are unaware that their new computer room may require 80-100 watts per square foot. Their existing computer room is doing fine at much less. What's being overlooked is the gradual transition to smaller and more powerful servers: where a typical rack once held 6-8 servers, it can now hold 40. A few racks of this density in your computer room will not surface the power and cooling problem. Designing an IT space full of them definitely will. Then you will be challenged, because nobody will believe you will actually populate your IT spaces in this manner.
  • The IT group is being asked to visualize the new IT spaces over the next 3-5 years and to predict the power and cooling required for fully occupied rooms.  The architects and engineers need this information to design the main power panel, power to the IT spaces, all HVAC systems and the UPS and generator systems.  In many cases, the new premises are in an existing building with known sources of power and cooling.  Facilities must know if your needs will exceed existing capacity and require upgrades to the power and cooling systems.  
  • IT groups are constantly testing new computer offerings and are constantly upgrading and replacing older machines. It's not uncommon to see the latest offering from Sun, Dell, HP, Cisco, EMC, Network Appliance and others somewhere in the computer room. Predicting what will be in these IT spaces in 3-5 years is not easy; a lot of what could be in those rooms hasn't been invented yet. When we designed spaces for the large IBM 3090 mainframes on government projects, we had to tell the client carefully that by the time the data center was built, IBM would no longer be selling the systems specified in the Construction Documents. (Note: upgrading to newer IBM systems as part of the move was highly beneficial, since IBM included installation and programming costs in the purchase, and very expensive relocation costs were avoided.) Just before move-in, we would have to change-order the electrical for the newer systems.

Now for the rest of the secret. Newer rack-mountable servers run hotter per rack unit than older and larger servers. This reverses a long trend of smaller, faster and less power. Place 40 1U servers in a rack and you have a rack that requires up to 3.6 kW of power. It will take 12,000 BTUs per hour (one ton of cooling) to cool this one rack. Place 10-20 racks of this type in a room and you start to see the problem. Think your room won't get that dense? Think you will never have that many servers? You may not put that many servers in every rack, but it's a growing trend. ABR Consulting Group, Inc. and our associates have literally dozens of clients where this is occurring. We are working on co-location data center and numerous high-technology projects, and this type of density occurs in all of them. We have one co-location site ready to come online with 6,500 racks. Figure 8 servers per 8' rack and you have 52,000 servers, network and storage systems. The building has a 10-megawatt power supply. We have another client who has replaced 60 older servers with 350 smaller servers, all within a year's time. Yet another client is ordering over 200 new Sun Serengeti systems; Sun Microsystems announced the product as the computer room construction was in its final stages. At 22 amps each, with each rack holding 4, that's 88 amps per rack, and it will take 36,000 BTUs per hour to cool one rack. These numbers are stunning, but this is what is happening.
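The rack-level arithmetic above can be verified with a quick sketch. The ~90-watt-per-1U-server figure and the 120-volt circuit assumption for the 22-amp systems are ours, inferred to make the quoted totals line up; the article itself gives only the end results:

```python
# Quick check of the rack density figures quoted above.
# Assumes ~90 W per 1U server and 120 V circuits for the amp figures;
# both are illustrative assumptions, not vendor specifications.

BTU_PER_WATT_HR = 3.412  # 1 W of IT load produces ~3.412 BTU/hr of heat
TON_BTU_HR = 12_000      # 1 ton of cooling removes 12,000 BTU/hr

# 40 1U servers at ~90 W each
rack_watts = 40 * 90
print(f"{rack_watts / 1000:.1f} kW per rack")
print(f"{rack_watts * BTU_PER_WATT_HR / TON_BTU_HR:.1f} ton(s) to cool it")

# Four 22-amp systems per rack, assuming 120 V service
amps = 4 * 22
watts = amps * 120
print(f"{amps} A per rack, about {watts * BTU_PER_WATT_HR:,.0f} BTU/hr of heat")
```

The first case works out to about 3.6 kW and one ton of cooling per rack; the second, to 88 amps and roughly 36,000 BTU/hr, matching the figures in the text.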

Returning to the design process, the IT group must present the design team with complete information on what will go into their IT spaces and the power and cooling requirements for those systems. Ideally, this information is ready by the schematic design phase, but it must arrive no later than the Design Development phase if design problems are to be resolved with a minimum of cost and difficulty. As described above, this is not an easy process. However, note one very important thing: the Facilities group must meet its deadlines and can complete the Construction Documents and put them out for bid without a communications cabling design or a final computer/server room plan. In fact, it's a frequent occurrence. If you have a traditional IBM or legacy-type data center, or a computer/server room that will not grow rapidly over the next several years, you're probably immune from the problems identified above. But if you have a computer/server room that is constantly growing and changing, you must be proactive and submit your completed power and cooling requirements early in the design process, to ensure you receive the IT spaces you need for your ever-expanding computer/server room. Have a successful design and relocation, and don't get left in the dust.

Contact us at www.abrconsulting.com  Phone:  925.872.5523  Fax:  916.478.2814