
How to choose a colocation facility

Choosing a colocation facility is one of the most important decisions an IT professional can make. It will have repercussions for years down the road, as there is generally a contract term involved, and it becomes difficult and costly to move. At the same time, unless you are a facilities professional, it is hard to tell the difference between the quality of one facility and another without knowing the right questions to ask. I have developed this list in the hopes that it will be a reference for folks evaluating datacenter options. It is written with the assumption that you need a local datacenter rather than a DR facility (which can have very different needs); however, many of the same concepts will apply.

Location

  • When it comes right down to it, there are still certain things you have to do physically in person. You can’t run a network cable through SSH or RDP. Having a datacenter close by makes a huge difference, especially when you lose remote connectivity and must go push a button in an emergency (we have all done this once or twice). In general, the newer, more high-end, and more redundant your equipment is, the less you should have to touch it in person. Things are getting much better with out-of-band remote access controllers, but sometimes being there is worth a lot. You can’t hear that fan making funny noises from your office.
  • Does the facility have good access to transportation such as freeways and airports? Are there hotels nearby if you will have out-of-town contractors visiting? How close are you to the parts depots for your vendor of choice (i.e. Cisco, Dell, HP, etc…)?
  • Does the facility have adequate parking close to the building, and does it cost money? Is it somewhere you want to leave your car in the middle of the night while you are inside working?
  • Do you have line-of-sight to the datacenter? If you can manage to get a wireless link to your datacenter this can be an extremely cost-effective option for high speed connectivity. There is something to be said for controlling your own destiny when it comes to your connectivity rather than being at the mercy of a telecom provider. Will the facility allow you to put a wireless antenna on the roof and how much will they charge?

Staffing

  • Do they have on-site staff 24×7 to respond to emergency situations, to secure the facility, and to provide access when you forget/lose your badge (or have to stop by on your way home from the gym)?
  • If they do not have staff on site 24×7, what is their on-call policy? How long would it take them to respond to a power failure, a UPS exploding, a transformer catching fire in the parking lot, an Internet outage, an FM-200 fire suppression system going off, an HVAC system failing, or any other major malady (yes I have had all of these things happen to me in facilities I have worked in, and I am still waiting for the day a fire sprinkler goes off or there is a real fire in a datacenter).
  • What level of professional services can they provide? Basic remote hands (please press the power button)? More advanced troubleshooting (help diagnose a failed network switch)? Or even managed services (i.e. they take care of backups).
  • How competent are their NOC engineers, facilities folks, etc… What quality of vendors do they use to do electrical work, HVAC maintenance, network cabling? This can be hard to tell, but there are lots of small clues you can pick up on.
  • Does their staff speak English fluently and without a heavy accent? It is extremely difficult to communicate on the phone with someone in a loud datacenter environment about complex technical issues when both of you are having a hard time understanding each other. This dramatically slows down the troubleshooting process and increases the chance of error.

Connectivity options

  • Do they provide Internet access themselves, or do you need to contract with other providers (à la the Pittock Block)? Having the datacenter provide Internet connectivity (if they give you a reasonable rate) can be more cost effective than running your own routers with multiple ISPs (assuming you don’t have special routing needs that require it). You do need to make sure your datacenter has good upstream providers, good quality routers, and competent staff to run them. Be careful to ensure your provider can absorb moderate-sized DDoS attacks without equipment failure or running out of bandwidth. You don’t want your neighbor’s online dating site to come under attack and impact your Internet connectivity.
  • Are they “carrier neutral”? Will they allow you to bring in your own connectivity (Internet/WAN)? Or do they want a piece of the pie on everything (i.e. resell you everything)? Are they charging your chosen provider ridiculous fees for “right of entry” into the building (which drives up your end-user costs)?
  • What fiber providers do they have available? The more connectivity options you have, the harder bargain you can drive with providers to get the best deal possible. If you need connectivity to many different sites, it is likely that some sites will be cheaper/better/faster to connect with one provider, and others will be cheaper/better/faster with another. A good example would be TWTelecom and Integra Telecom here in Portland, Oregon. They each have extensive fiber optic networks around the metro area, but if you are trying to get from Infinity Internet to various locations around town, whichever has fiber closer to your destination will have a price/technical advantage in providing you service.
  • Who is the local exchange carrier? You might need a POTS (Plain Old Telephone Service) line or two for paging access, etc…
  • What do they charge for cross connect fees? If you order a $300/mo T-1 are they going to charge you $100/mo cross connect fee for the two pairs of phone wire to get it to your cage/cabinet?

Power Infrastructure

  • What type of power grid design are they on? Radial or interconnected? On a Radial system (such as you would find out in the suburbs), if a car crashes into a pole, or a backhoe takes out a single conduit, power will be lost. In an interconnected system there are multiple “primary” feeds connected to multiple transformers which energize a “secondary” bus that actually feeds power to the facility. This type of design significantly reduces single points of failure and allows entire transformers to be taken offline for maintenance without service interruption.
  • Is the power grid in the area above ground or below ground? Above ground systems are susceptible to windstorms, lightning, trees, etc… Below ground systems fall prey to backhoes, horizontal boring machines, water penetration, etc… In general, below ground is going to be more reliable.
  • If on a Radial system, do they at least have multiple transformers (preferably off of separate primary feeds) even if they are not tied together on the secondary bus? Often you will see two transformers with each feeding a separate power distribution system within the datacenter.
  • Are the transformers well protected from vehicles in the parking lot?
  • What type of electrical transfer switches does the facility have to switch between main power and generator power? Are they capable of “make before break” operation when switching to the generator during test cycles or planned outages? Can they operate as “make before break” when switching back to grid power after an outage? This is important as the most likely time for a UPS to fail is during switching. If you can minimize the number of voltage-loss events it will reduce the likelihood of UPS failure.
  • How many generators does the facility have? If multiple, is their distribution system set up in such a way that you can get separate power feeds in your cage/rack that come from completely independent PDUs, UPSs, generators, and transformers? Just because a facility has multiple generators/UPSs/transformers does not mean they are redundant for each other; they could just be there to increase capacity.
  • Does the facility regularly test their generators *with* load applied (either the actual datacenter load, or a test load)?
  • Has the facility designed, and more importantly *operated*, their system such that a failure of one UPS/transformer/generator does not cause an overload on other parts of the distribution system?
  • Does the facility participate in programs that allow the power utility to remotely start the generators and switch the facility over to Generator power to reduce grid loading? While this is good for the overall health of the power grid (and possibly the environment), it can be a liability to your equipment at the datacenter since more power transfer events will be occurring.
  • How much fuel is stored on site, and how many hours of runtime does that represent (a rough way to estimate this is sketched after this list)? Does the facility have contracts for emergency refueling services?
  • Can the generator be re-fueled easily from the road, or is it located on the roof?
  • What type of UPS systems do they have? How old are they? How often are the batteries tested and replaced? Can they take their UPS offline for maintenance without impacting customer power?
  • Can they provide you custom power feeds for equipment such as large Storage Area Networks or high power blade enclosures? (i.e. you need a 3 phase 208 volt 30 amp circuit)
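
To put the fuel question in concrete terms, here is a minimal back-of-the-envelope sketch in Python. The tank size, burn rate, and reserve fraction are illustrative assumptions; ask the facility for their actual tank capacity and the generator's published fuel-consumption figures at typical load.

    # Rough generator runtime estimate from on-site fuel storage.
    # Tank size and burn rate are illustrative assumptions; use the facility's
    # actual tank capacity and the generator's published fuel-consumption curve.

    def runtime_hours(tank_gallons, burn_rate_gph, reserve_fraction=0.1):
        """Hours of runtime, holding back a reserve so the tank is never run dry."""
        usable = tank_gallons * (1.0 - reserve_fraction)
        return usable / burn_rate_gph

    # Example: a 2,000-gallon tank and a diesel generator burning roughly
    # 35 gallons/hour at the facility's typical load (assumed numbers).
    print(round(runtime_hours(2000, 35)))  # ~51 hours before refueling is needed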

Cooling

  • Do they use many direct expansion cooling units, or do they have a water/glycol loop with a cooling tower? Or do they even use chilled water? Each of these has its pros and cons; however, the multiple direct expansion model is very simple and redundant in that you likely have many individual units (it is not as energy efficient, though). The trick is controlling the HVAC units so they do not “fight” each other and cause short-cycling on the compressors.
  • Are the cooling units designed for datacenter usage (running 24x7x365), with the ability to control humidity within reasonable levels, or are they made for office cooling applications with expected usage of 10 hours a day?
  • If the facility uses cooling towers for evaporative cooling, do they have on-site water storage to provide water during utility outages (such as after an earthquake)? Are all parts of the cooling loop redundant (including the control system)?
  • Does the facility maintain and enforce hot/cold aisle design? This is becoming critical as power densities increase and power efficiency becomes critical.
  • Does the facility have an outside air exchange system to provide “free” cooling during the months of the year when outside air is at appropriate temperatures? While good for the environment, you must be careful about the outside air’s humidity as well as the dust/pollen that could come in with it. There is a dramatic difference between servers that have spent a few years in a quality datacenter and ones that have spent a few years in a facility with a poor HVAC system. I have removed servers from facilities that did not have a speck of dust on them, and others that were caked in black dust (depending on the facility they were in).
  • Is the entire cooling system on a single generator, or is it spread across multiple units for redundancy?

Cages/Racks

  • Does the facility provide cages? Cabinets? Or both? These days almost everything will fit in standard square-hole cabinets; however, in some cases if you buy large enough equipment it might come with its own racks or as a freestanding unit that cannot go in cabinets provided by the facility. If you go with a cage you must carefully plan how much space you are going to need ahead of time. Adding additional cabinets as needed can be an effective growth strategy, though you must plan for network and SAN cabling between them.
  • If you get a cage (or just custom cabinets), make sure to agree on who will bolt down your cabinets and how much it will cost. Properly securing cabinets against an earthquake can be particularly tricky on raised floors. Any work must be done carefully so as not to throw dust into the air, and to mitigate any potentially harmful vibrations that could impact running equipment.
  • One gotcha I have run into before is that some facilities’ cabinets are not deep enough for modern servers (specifically some Dell servers). I have also been shocked to find many facilities still leasing ancient telco-style cabinets with solid doors on them. Modern equipment requires front-to-back airflow, not bottom-to-top as in the old telco style. Also note that most network equipment still uses side-to-side airflow and is best suited to two-post telecom racks (where possible) rather than four-post server cabinets.
  • When selecting a colo facility make sure to specify exactly what type of cabinet you are expecting in the contract if they have multiple types available.
  • Modern cabinets have built-in mounting holes/brackets for vertical-mount PDUs, which are becoming the standard. This allows you to use very short (think 2 foot) power cables to attach servers without excess slack. They also do not take up usable rack space.
  • Modern cabinets should also have a way to cleanly route cables vertically (think about power cables, network cables, fiber SAN cables, etc…)
  • Does the facility provide PDUs in the cabinets for you, or are you responsible for providing them yourself? It is critical that your PDUs have power meter displays on them, as power in a datacenter is typically very expensive, and you want to load circuits up as much as possible for peak cost efficiency while not risking tripping a circuit breaker (never load a circuit to more than 80% of its rated capacity, which means 16 amps on a 20 amp circuit, or 24 amps on a 30 amp circuit). When plugging dual power supply servers into different circuits, ensure that if one circuit blows, the other can handle the entire load without blowing (see the sketch after this list).
  • What type of power plugs will they be delivering in your rack/cage? I recommend locking plugs like an L5-20 or L5-30 to plug your PDUs into (even though a standard NEMA receptacle can handle the current capacity on 20 amp circuits). Also common these days are 208 volt 30 amp circuits with an L6-30 receptacle. Most everything manufactured in the last 5 years is capable of accepting 208 volt power. Using the higher voltage allows you to have more equipment in a cabinet with fewer circuits, which also means fewer PDUs.
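
As a minimal sketch of the circuit-loading rules above, the Python snippet below applies the 80% derating rule and the A/B failover check. The voltages, breaker sizes, and cabinet load are illustrative assumptions, not figures from any particular facility.

    # Sanity checks for PDU/circuit loading: plan for at most 80% of a breaker's
    # rating, and make sure one feed of an A/B pair can carry the whole cabinet
    # if the other feed fails. All numbers below are illustrative assumptions.

    def usable_watts(volts, breaker_amps, derate=0.80):
        """Continuous watts you should plan to draw from a single circuit."""
        return volts * breaker_amps * derate

    def ab_pair_ok(total_watts, volts, breaker_amps):
        """Dual-corded gear on an A/B feed pair: can ONE circuit carry it all?"""
        return total_watts <= usable_watts(volts, breaker_amps)

    print(usable_watts(120, 20))      # 1920.0 W on a 20 A / 120 V circuit
    print(usable_watts(208, 30))      # 4992.0 W on a 30 A / 208 V circuit (L6-30)
    print(ab_pair_ok(3500, 208, 30))  # True: a 3500 W cabinet survives losing one feed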

Fire suppression

  • Is the structure made of metal and concrete, or of wood?
  • Does it have traditional “wet-pipe” sprinklers, “dry-pipe” sprinklers, or “pre-action” sprinklers? Or even none at all? If an electrician hits a sprinkler head with a ladder in either a “wet-pipe” or “dry-pipe” system, it will immediately release large amounts of water until the fire department shows up to turn it off. Pre-action systems require both a smoke detection system to alarm and heat setting off a sprinkler head before water will flow.
  • What type of fire detection system does the facility have? Standard smoke sensors, and/or VESDA sensors?
  • Does the facility have a gaseous fire suppression system such as FM-200, Inergen, or Halon? Such a system will discharge when two smoke detectors alarm, and hopefully extinguish the fire before it can set off a water-based system (typically still required to meet fire code). In reality, though, I have never seen modern computer equipment really catch fire. Most of it does not burn very well (as long as you don’t store cardboard in the datacenter).
  • Who are your neighbors within the building? Are any of them high risk?
  • How old is the building’s fire suppression system? You might be in a suite within the building that has the latest and greatest fire control, but if the rest of the building has a simple fire panel from 1970 and no sprinklers, it could still burn to the ground. Upgrades to fire control systems are generally not required unless the building owner does a major renovation.

Physical facility

  • What is the risk of water damage to your equipment? Are you right below a poorly maintained roof? Are there non-pre-action sprinklers above you? Is there a domestic water pipe above your cage? Bathroom drains from the tenant above? Storm drain pipes from the roof? Condensate drains from the HVAC system? Cooling loop pipes? Note that if a fire sprinkler goes off several floors up it can seep down through cracks between floors you never knew existed into your equipment.
  • Is the facility located in a flood plain? Is it below ground level? There are places in Portland that have water mains large enough to cause localized flooding if they break.
  • Does the building have a convenient loading dock for receiving equipment? What is the largest equipment that will fit into the building and up the elevator? This is a problem in many older buildings.
  • How large is the space you are in (by volume) compared to the equipment load? If cooling were lost (say because the fire alarm inadvertently went off, which shuts down all HVAC), how much thermal buffer is there to keep the temperature from rising too much until the system is reset? (A rough way to estimate this is sketched after this list.)
  • Is there a grid of ceiling tiles above you? If so, it will probably fall down and create dust in an earthquake. I would rather see all of the piping and mechanical systems exposed at the ceiling anyway than have them hidden above a ceiling grid.
  • Is the facility on a slab floor or a raised floor? It is easier to effectively bolt things down to a slab floor for seismic purposes, but a raised floor can also conveniently provide space for electrical power and cables. It is becoming less adequate for cooling purposes, however, since densities are increasing so much.
  • What is the seismic rating of the facility? How much will it shake your equipment in an earthquake and will the building be damaged to the point that it is unsafe to continue operation?
  • Do they have requirements about what types of equipment you can put in the datacenter? For example, a traditional telco facility may require certain equipment ratings.
  • Is the facility well kept and “clean”? This can tell you a lot about the quality of the facility. It is hard to tell whether proper maintenance is being done at scheduled intervals on their power equipment, but if a facility cannot even keep cables managed properly, it is a likely sign that they are skipping other, less visible things as well.
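
For the thermal buffer question above, here is a deliberately pessimistic back-of-the-envelope sketch. It accounts only for the room air and ignores the thermal mass of the equipment, floor, and walls (which slows the real-world rise considerably); the IT load and room volume are illustrative assumptions.

    # Worst-case temperature rise of the room air if all cooling stops.
    # Air-only estimate; equipment and building mass slow the real rise.
    # The IT load and room volume are illustrative assumptions.

    AIR_DENSITY = 1.2          # kg/m^3, roughly, at room temperature
    AIR_SPECIFIC_HEAT = 1005   # J/(kg*K)

    def degrees_per_minute(it_load_watts, room_volume_m3):
        air_mass_kg = AIR_DENSITY * room_volume_m3
        return it_load_watts / (air_mass_kg * AIR_SPECIFIC_HEAT) * 60

    # 50 kW of IT load in a 500 m^3 suite:
    print(round(degrees_per_minute(50_000, 500), 1))  # ~5.0 degC per minute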

Creature comforts

  • Does the facility have comfortable areas for you to work while on-site (i.e. a conference room) or do you have to spend all your time on the cold/loud datacenter floor?
  • Do they provide “crash carts” (i.e. a portable keyboard, monitor, mouse) to utilize if you don’t have your own KVMs?
  • Do they have vending machines or refreshments when you need that late night pick-me-up?
  • Will they accept deliveries for you? Do they have someone at the facility during business hours? I find this to be *very* important.
  • How good is the cell phone coverage for the specific provider(s) you care about?
  • Do they have a guest wireless network you can jump on while you are working there to easily get Internet access without having to provide it yourself?

Security

  • How do they control access to the facility? Is it manned, or unmanned? If they have an access control system does it have biometric features?
  • Do they have security cameras? How long is the footage kept for?

Pricing

  • How much do they charge you per cabinet, or per square foot of space?
  • How much does power cost? Is it per provisioned circuit, or based on actual usage? What is their pricing model? Note that it is more and more common to need 208 volt circuits, or three phase circuits with modern blade enclosures and SANs. It is no longer just increments of 20 amp 110v circuits.
  • Will they provide you second power feeds at a reduced price if you are only going to be using them for failover? Note that these second feeds may consume UPS, generator, etc… capacity that they must plan for; however, you won’t be drawing electricity through them (which they must pay the utility company for) or loading their total feed capacity from the utility, since they are just for redundancy.
  • Can you get price guarantees for future expansion (power costs, cabinet costs, etc…)?
  • Does the facility really want to sell you completely managed services, and as a result price plain colocation at untenable rates?
  • Do they provide some amount of basic remote hands service hours each month? How much do they charge for professional services?
  • Does the facility provide service-level agreements (SLAs) that have teeth? Frankly, I don’t put much faith in SLAs, since usually they only involve a credit for the period of time service is unavailable. This is generally nothing in comparison to the amount of money you lose when your datacenter goes down, or your costs in man-hours to bring it back up (see the sketch below).
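
To make the SLA point concrete, here is a minimal sketch comparing a typical pro-rated SLA credit to what an outage actually costs you. The monthly fee, outage length, and cost-of-downtime figure are illustrative assumptions.

    # Why SLA credits rarely have "teeth": compare the pro-rated credit for an
    # outage to what the outage actually costs you. All figures are
    # illustrative assumptions.

    HOURS_PER_MONTH = 730  # average hours in a month

    def sla_credit(monthly_fee, outage_hours):
        """Typical pro-rated credit: the fee for the time the service was down."""
        return monthly_fee * outage_hours / HOURS_PER_MONTH

    def outage_cost(outage_hours, cost_per_hour):
        """Your side of the ledger: lost revenue plus staff time to recover."""
        return outage_hours * cost_per_hour

    # A $2,000/month cabinet, a 4-hour outage, and downtime costing $5,000/hour:
    print(round(sla_credit(2000, 4), 2))   # 10.96  -- the credit
    print(outage_cost(4, 5000))            # 20000  -- what it cost you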

Switching Costs

  • Once you move into a facility there can be significant (if not astronomical) switching costs. They may offer you a smoking deal to get you in the door, and then make it up by charging higher-than-market rates for add-on services down the road. Realize that you are inevitably likely to need more power and more bandwidth down the road. Also realize that bandwidth costs fall steadily, so you don’t want to get locked into long-term rates on telecommunications circuits. It is also possible for your needs to shrink in the future as virtualization gets more popular, “cloud computing” becomes a reality, and computers become more efficient.
  • Contracts are normally in place to protect the provider, but they can also protect you. If you get a smoking deal on something, locking it in for a term commitment can be a good idea. It is reasonable for a provider to require a contract term as they do have significant capital and sales costs that they need to cover. Also, realize that the average lifespan of a datacenter is not all that long these days. A datacenter built 7 years ago has nowhere near the cooling capacity required in a modern datacenter.
  • Think about your growth pattern. You don’t want to be paying ahead of time for service you don’t need/use, but you also don’t want to get hit for huge incremental costs to add cabinets/power down the road. Contracts with “first right of refusal” clauses built into them (on additional space/capacity) are common.
  • Think about how difficult it will be for you to pick up and move at a later date. Some of the most “sticky” items are storage area networks. It might be easy to move a few servers at a time, but if you are all dependent on a single Storage Array, everything connected to it must move at once.
  • Telecommunication circuits also increase your “stickiness”. They are generally under term commitments and can be difficult to coordinate a move at a specific time. If you have a circuit from XO and move to a facility that does not have XO fiber, you might have to switch providers, or pay someone else for the local loop.
  • If you are purchasing Internet connectivity from your datacenter you are most likely being assigned IP addresses from their address space. When you move or change providers you will need to re-number. Depending on your network design and use cases, this might be easy, or an extremely difficult task.

Final Words

While there are numerous factors to consider, the reality is that there are likely a number of providers in town that can meet your needs successfully. The reliability of Portland’s power grid and of datacenter equipment is getting so high that we are really “chasing nines” to get ever so slightly more uptime (for dramatically higher cost). For most organizations, being in a datacenter with only a single generator provides plenty of uptime. Is that extra 0.009% uptime really worth it to go from “four nines” to “five nines”? That is an increase of roughly 47 minutes of uptime per year (the arithmetic is sketched below). Is that worth doubling your costs?
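
A quick worked version of that arithmetic, using the 525,600 minutes in a year:

    # Downtime per year at a given uptime percentage, and the gap between
    # "four nines" and "five nines".

    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    def downtime_minutes(uptime_fraction):
        return (1 - uptime_fraction) * MINUTES_PER_YEAR

    four_nines = downtime_minutes(0.9999)    # ~52.6 minutes of downtime per year
    five_nines = downtime_minutes(0.99999)   # ~5.3 minutes of downtime per year
    print(round(four_nines - five_nines))    # ~47 minutes/year bought by the fifth nine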

Perhaps one of the most important aspects to your decision is the relationship you build with the owners, management, and staff of your colocation facility.  You want to have as much of a “partnership” as possible, and not merely a buyer/seller relationship.  Finding a facility with a long history of treating their customers well will increase your chances of success.

If you have any comments/questions feel free to post below, or shoot me an email.

-Eric
