
Reasons to move into a colocation facility

Every organization I have worked at has done battle with our long-standing enemies: “Space”, “Power”, and “Cooling” (not to mention “Bandwidth”).  These three (or four) items seem to be the bane of IT’s existence.  There is a seemingly endless demand for more capabilities for the business, which means more applications, which means more servers, which require “Space”, “Power”, “Cooling”, and “Bandwidth”.

We are called upon to look into our crystal ball and determine how our needs will grow (or shrink?) over the next several years so that we can purchase the right generator, UPS, and cooling equipment.  And once we procure said space, power, and cooling, we spend our days, nights, and weekends making sure maintenance is performed or responding to emergencies.

I am here to tell you there may be a better way!  (I say *may* because this is not a one-size-fits-all solution.)  In many cases you can make an excellent business case for outsourcing these pain points that may be cash neutral, or even save money!  In the rest of this post I will outline a number of factors to consider in your decision-making process:

Space

  • Is your office space very expensive per square foot?  Could you be using that high-cost “Class A” space for something more appropriate?  (You would be surprised at the number of server rooms that have a view.)  Are you otherwise space-constrained in some way that makes moving the servers elsewhere attractive?
  • Are your servers good neighbors to those working around them (i.e. are they too loud or do they create too much heat)?
  • Could someone break down the door on the weekend and steal your servers with all your client data?  In other words, is your facility adequately secure?  Are you required to meet PCI compliance or similar standards?
  • Does your building have an inadequate fire system or other high risk tenants?  Is your sprinkler system a “wet pipe” system that could get bumped with a ladder and flood your servers?
  • Is the physical environment appropriate for servers?  Is there a lot of dust in the air, vibration from machinery, etc.?  I have seen network equipment in mechanical rooms alongside steam heat exchangers, and in janitorial closets with mop basins.
  • What is the seismic rating of your facility?  In a natural disaster your employees may be able to work remotely if your servers are still available across the Internet.
  • Do you want to make the investment in your current facility to provide an adequate environment for your servers?  (i.e. cooling, UPS, generator, fire suppression, etc.)  If your lease is short term or nearly up, it may not make sense to spend any of your own capital on the space.

Power

  • Do you have access to enough power to run your datacenter in house?  Might you need to bring in extra transformer, switchgear, riser, or breaker capacity for expansion?
  • Does your facility have three-phase power available?  (This is required for many large UPS units and some newer SANs, blade enclosures, etc.)
  • Are you in an industrial business park with other businesses that have large motors starting and stopping all the time?  These can cause surges, spikes, and brownouts, and they increase the likelihood of a power outage (especially if you share a transformer).
  • Do you have a good quality double-conversion online UPS, or just an assortment of small line-interactive UPSs?
  • Does your facility have a generator that can support your servers AND your cooling needs? (not to say that all environments *need* a generator)
  • How reliable is the power at your office?  Do you have a history of power problems there?
  • Do you pay for power usage at your office (i.e. are you sub-metered) or do you just get billed a portion of the overall bill split amongst the tenants?
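The power questions above lend themselves to a quick back-of-the-envelope check.  Here is a minimal sketch in Python; the circuit sizes and rack draw are illustrative assumptions only, and the 80% derating is the common rule of thumb for continuous loads (check your local electrical code and an electrician before relying on any of this):

```python
import math

def circuit_capacity_watts(volts: float, amps: float, three_phase: bool = False) -> float:
    """Usable watts on a circuit, derated to 80% as a continuous-load rule of thumb."""
    watts = volts * amps * (math.sqrt(3) if three_phase else 1.0)
    return watts * 0.8

# Assumed total measured draw for one rack of servers (placeholder number).
rack_draw_watts = 4200

single_phase = circuit_capacity_watts(120, 20)        # 120V/20A circuit
three_phase = circuit_capacity_watts(208, 20, True)   # 208V/20A three-phase circuit

print(f"120V/20A usable:        {single_phase:.0f} W")
print(f"208V/20A 3-phase usable: {three_phase:.0f} W")
print("Fits on single-phase?", rack_draw_watts <= single_phase)
print("Fits on three-phase? ", rack_draw_watts <= three_phase)
```

Note how the hypothetical 4.2 kW rack overwhelms a standard single-phase circuit but fits comfortably on three-phase, which is one reason larger UPSs and blade enclosures expect it.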

Cooling

  • If you are in a standard commercial office building, there is a decent chance that your server room is cooled by the main building cooling units.  Most building leases provide for cooling only during normal business hours (8am-5pm, Monday through Friday).  Have you been in your server room on the weekend?  Is it 90 degrees in there?  If you want cooling available 24×7, you may need to pay the owner substantially more to run the main building HVAC units at all times.  In addition to being potentially costly, this is not very good for the environment.
  • If your server room is cooled by the main building cooling units, even if they run 24×7 you must remember that they are designed for comfort heating and cooling.  Depending on the type of system in place, it may actually blow *heated* air into your server room during “morning warm up” cycles rather than cooling; it cannot cool the server room while it is trying to heat the entire building.  I have seen server rooms cause heating and cooling issues in the surrounding office space because the server room is *always* calling for cooling, even in the winter.
  • Normal air conditioners are designed to operate eight hours a day, five days a week.  They are not designed for continuous 24×7 operation, so they will break more often when used that way, and they may “freeze up” because they must keep running without a break to let ice melt from the coils, even in the winter when there are higher humidity levels.
  • Is providing appropriate cooling for a server room prohibitively expensive in your facility (e.g. you are in a high-rise building)?  Do you have somewhere to easily reject the waste heat from the computer room to the outside?
  • It is worth noting that while cooling for computing facilities is generally a big power hog, some modern colocation facilities are taking steps to be more environmentally friendly with their cooling by taking advantage of low outside air temperatures to cool servers without requiring the operation of refrigerant compressors. 
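Estimating your cooling requirement is straightforward, since essentially every watt your servers draw becomes heat that must be rejected.  A minimal sketch, using the standard conversions (1 W ≈ 3.412 BTU/hr, 1 ton of cooling = 12,000 BTU/hr) and an assumed 5 kW server room:

```python
def cooling_load(it_watts: float):
    """Return (BTU/hr, tons of cooling) implied by an IT load in watts."""
    btu_per_hr = it_watts * 3.412   # 1 W is roughly 3.412 BTU/hr of heat
    tons = btu_per_hr / 12_000      # 1 "ton" of cooling = 12,000 BTU/hr
    return btu_per_hr, tons

btu, tons = cooling_load(5000)  # assumed 5 kW server room
print(f"{btu:.0f} BTU/hr, about {tons:.2f} tons of cooling")
```

This also makes the economics of the generator question above concrete: a generator sized only for the servers, but not for the roughly 1.4 tons of cooling in this example, will leave you running hot during an extended outage.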

Connectivity

  • Where are the majority of your users based?  If 100% of them are at your office and you are not concerned about the other factors listed above, the best place for your servers may still be at your office, as you don’t run the risk of being disconnected from them.  However, if the majority of your users are at remote locations (including working from home), the argument for basing your servers in a colocation facility becomes much stronger.
  • One of the most compelling arguments for colocation is saving on telecommunications costs.  The move might pay for itself based solely on your savings in telecom expenses.  There is more competition in a good colocation facility, so you can shop around.  Whereas at your office you might only have the LEC (Local Exchange Carrier) available, in a good colocation facility in Portland you would have at least four on-net providers (if not more).  In many cases it makes sense to use one provider for access to offices in Washington and another to get you to Boston.  Don’t let yourself get locked into using a single vendor!
  • Costs may be further reduced by not having to pay for “local loop” access if you are in the same building as your network service provider’s routers.  Bringing a DS-3 to a lumber mill in a remote location is much more expensive than in downtown Portland.
  • In many cases colocation facilities are located near telecommunication hubs, so your chance of being cut off from your WAN or Internet provider is much lower.  Not to mention that colocation facilities are generally on fiber-optic rings with redundant paths.  If you keep your servers at your office and make them available to the Internet (and other WAN locations) via copper T-1s, it is very easy for your T-1s to be taken down by someone installing your neighbor’s POTS (Plain Old Telephone Service) line.  Note that even if you use T-1s for connectivity within a colo facility, they are likely brought into the building across fiber rings, which makes them much “cleaner” and more reliable.
  • If you are planning on having a DR facility in another city and you need a high-speed link between them, it will likely be much cheaper if you can bid this out to the multiple ISPs available at both datacenters.  High-speed circuits between datacenters often cost less than circuits to customer premises.
  • One of the most important technological and financial factors in your decision-making process needs to be: “How will I get a high-speed connection from my office to my servers?”  This new cost must be factored in, and the potential for the connection going down must be considered in your evaluation.  This is perhaps the most significant technical argument against moving your servers into a colo facility.

Benefits

  • You don’t have to staff in house for, or spend time, effort, and money on, managing your physical infrastructure.  This is taken care of for you by the colocation provider.
  • You don’t have to spend capital on UPSs, generators, floor space, fire control, and cabinets.  Granted, you do pay for this on an ongoing basis through your fees to the colo provider.
  • Uptime can be improved: you will have fewer power, cooling, and communication failures, and your servers will have fewer hardware problems because they will be operating in a more stable environment.  (Seriously, MTBF in a good colo facility will be higher.)
  • You don’t have to spend your nights and weekends worrying about the air conditioning failing!  Even if there is an issue, it is someone else’s problem.  Also, you can go out of town, and if a server fails you may be able to have someone else push buttons for you.  You could even have a vendor’s tech dispatched to the site to work on your server without needing to be there yourself.
  • You can buy Internet access from the datacenter (make sure you negotiate well on this and make sure the price per megabit keeps falling over time).  If you get upstream Internet from a solid in-building provider (that has good quality upstreams), you may not even need to purchase Internet routers.  All you need is a firewall tier and switch tier.
  • You can grow incrementally in a colocation facility as your needs grow, or cut back as they shrink (assuming you are not locked into a contract).  When you run your own fixed equipment, you are unlikely to be running your UPSs near full load, where they are most efficient.  In your own facility, once you run out of capacity you are artificially constrained, as that next server will require you to make another major investment.

Drawbacks

  • If you already have a server room with a dedicated cooling unit, enough power, etc., it may simply not make financial sense to move.  You may want to pursue a split model, colocating only the things that have higher uptime requirements.
  • You need to pay for connectivity from your office to the datacenter.  This is a new cost that did not exist before.
  • Connectivity from the colo facility to your office could be interrupted, bringing work to a halt (whereas previously, even if cut off from the Internet, employees could still access the servers that were in-house).
  • Touching your servers requires traveling to the datacenter.  This travel time takes away from productivity (though getting out of the office once in a while can be nice!).
  • The monthly cost of a datacenter can cause sticker shock when viewed simply as a “new cash cost”; however, this can be offset by savings on network circuits (if negotiated at the same time).  More importantly, you must consider the “total cost of ownership” of running your servers in house, in terms of both hard and soft costs.
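That “total cost of ownership” comparison can be sketched as a simple monthly tally.  Every line item and dollar figure below is a made-up placeholder; substitute real quotes from your landlord, carriers, and colo providers before drawing any conclusions:

```python
# Hypothetical monthly costs of keeping servers in house (placeholder figures).
in_house = {
    "sub-metered power": 400,
    "after-hours HVAC surcharge": 300,
    "UPS/generator amortization and maintenance": 500,
    "telecom via the LEC only": 1200,
    "staff time spent on facilities": 800,
}

# Hypothetical monthly costs at a colo facility (placeholder figures).
colo = {
    "cabinet with power and cooling": 1200,
    "bandwidth (competitively bid)": 500,
    "office-to-colo circuit (the new cost)": 600,
}

in_house_total = sum(in_house.values())
colo_total = sum(colo.values())
print(f"In house: ${in_house_total}/mo   Colo: ${colo_total}/mo")
print(f"Monthly difference: ${in_house_total - colo_total}")
```

The point of the exercise is not these particular numbers but the structure: the sticker-shock colo invoice only looks expensive until the hard and soft in-house costs are itemized next to it.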

Example designs

Depending on your specific business model, there may be a few different reference designs you could choose from:

  • I have worked with a medical practice where, over a single weekend, we literally took their entire closet of servers and moved them into a colo facility.  All that was left on-site was three network switches.  I would call this the “full colo” model.
  • If you have significant amounts of remote users but still have some in house users that require high speed access to file servers or application servers, you may want to consider a “split model” where most of your equipment (and WAN core) is located in the colo facility, but certain high-bandwidth servers stay on-site (like your file server).
  • An organization with all of its employees in house may choose to keep all of its corporate IT servers at the office, but put any Internet-facing servers (like hosted applications or the corporate web site) at a datacenter.  I have implemented this model several times at various software companies.  You must consider the uptime requirements of your various services.  Generally, Internet-facing services have a larger audience and so need a higher level of reliability than internal IT services.
  • Another variation is to keep just your WAN routing equipment at a datacenter and then have a single backhaul connection to the office where the servers are located.  If you have many remote sites that get connectivity from different providers, it may make more sense to terminate them in a cabinet at a colo facility and then use a single metro Ethernet connection back to the office.

Final thoughts

So you’re convinced?

Great!  Now check out my post on how to choose a colocation facility, and if you are based in Portland, check out my list of all the available facilities, plus the Google map I put together of where they are all located!  These resources also outline your telecommunications provider options.

As always, please email me with feedback or post a comment!

