
Reasons to move into a colocation facility

April 17th, 2009

Every organization I have worked at has done battle with our long-lasting enemies, “Space”, “Power”, and “Cooling” (not to mention “Bandwidth”).  These three (or four) items seem to be the bane of IT’s existence.  There is a seemingly endless demand for more capabilities for the business, which means more applications, which means more servers, which require “Space”, “Power”, “Cooling”, and “Bandwidth”.

We are called upon to look into our crystal ball to determine how our needs will grow (or shrink?) over the next several years so that we can purchase the right Generator, UPS, and Cooling equipment.  And once we procure said space, power, and cooling, we spend our days, nights, and weekends making sure maintenance is performed or responding to emergencies.

I am here to tell you there may be a better way!  (I say *may* because this is not a one-size-fits-all solution.)  In many cases you can make an excellent business case for outsourcing these pain points that might be cash neutral, or even save money!  In the rest of this post I will outline a number of factors to consider in your decision-making process:

Space

  • Is your office space very expensive per square foot?  Could you be using that high-cost “Class A” space for something more appropriate?  (You would be surprised at the number of server rooms that have a view.)  Are you otherwise space-constrained in some way that makes moving the servers elsewhere attractive?
  • Are your servers good neighbors to those working around them (i.e. are they too loud or do they create too much heat)?
  • Could someone break down the door on the weekend and steal your servers with all your client data?  i.e. is your facility adequately secure?  Are you required to meet PCI compliance, etc…?
  • Does your building have an inadequate fire system or other high risk tenants?  Is your sprinkler system a “wet pipe” system that could get bumped with a ladder and flood your servers?
  • Is the physical environment appropriate for servers?  Is there a lot of dust in the air, vibration from machinery, etc…  I have seen network equipment in mechanical rooms along with steam heat exchangers and janitorial closets with mop cleaning basins.
  • What is the seismic rating of your facility?  In a natural disaster your employees may be able to work remotely if your servers are still available across the Internet.
  • Do you want to make the investment in your current facility to provide an adequate environment for your servers?  (i.e. cooling, UPS, generator, fire suppression, etc…)  If your lease is short term or nearly up and you are not planning on moving, it may not make sense to spend any of your own capital.

Power

  • Do you have access to enough power to run your datacenter in house?  Might you need to bring in extra transformer/switchgear/riser/breaker capacity for expansion?
  • Does your facility have three-phase power available? (which is required for many large UPS units and some new SANs, blade enclosures, etc…)
  • Are you in an industrial business park with other businesses that have large motors starting and stopping all the time?  These can cause surges/spikes/brownouts and increase the likelihood of a power outage at your site (especially if you share a transformer).
  • Do you have a good quality double-conversion online UPS, or just random small line-interactive UPSs?
  • Does your facility have a generator that can support your servers AND your cooling needs? (not to say that all environments *need* a generator)
  • How reliable is the power at your office?  Do you have a history of power problems there?
  • Do you pay for power usage at your office (i.e. are you sub-metered) or do you just get billed a portion of the overall bill split amongst the tenants?

Cooling

  • If you are in a standard commercial office building, there is a decent chance that your server room is cooled by the main building cooling units.  Most building leases provide for cooling during normal business hours (8am-5pm, Monday through Friday).  Have you been in your server room on the weekends?  Is it 90 degrees in there?  If you want cooling available 24×7 you may need to pay the owner a lot more money to run the main building HVAC units at all times.  In addition to being potentially costly, this is not very good for the environment.
  • If your server room is cooled by the main building cooling units, and let’s say they are even running 24×7, remember that they are designed for comfort heating and cooling.  Depending on the type of system in place, it may actually blow *heated* air into your server room during “morning warm up” cycles rather than cooling; it can’t provide cooling to the server room while it is trying to heat the entire building.  I have seen server rooms that cause heating and cooling issues in the surrounding office space because the server room is *always* calling for cooling, even in the winter.
  • Normal air conditioners are designed to operate eight hours a day, five days a week.  They are not designed for continuous 24×7 operation, so they will break more often when used in that fashion, and they may “freeze up” because they must keep operating all the time (and don’t get a break to let ice melt from the coils), even in the winter when there are higher humidity levels.
  • Is providing appropriate cooling for a server room prohibitively expensive in your facility (i.e. you’re in a high-rise building)?  Do you have somewhere outside to easily reject the waste heat from the computer room?
  • It is worth noting that while cooling for computing facilities is generally a big power hog, some modern colocation facilities are taking steps to be more environmentally friendly with their cooling by taking advantage of low outside air temperatures to cool servers without requiring the operation of refrigerant compressors. 

Communications

  • Where are the majority of your users based?  If 100% of them are at your office and you are not concerned about the other factors listed above, the best place for your servers may still be at your office, as you don’t run the risk of being disconnected from them.  However, if the majority of your users are at remote locations (including working from home), the argument for basing your servers in a colocation facility becomes much stronger.
  • One of the most compelling arguments for colocation is being able to save on telecommunications costs.  You might be able to make it pay for itself based solely on your savings in telecom expenses.  There is more competition in a good colocation facility, so you can shop around.  Whereas at your office you might only have the LEC (Local Exchange Carrier) available, in a good colocation facility in Portland you would have at least 4 on-net providers (if not more).  In many cases it makes sense to use one provider for access to offices in Washington, and another to get you to Boston.  Don’t let yourself get locked into using a single vendor!
  • Costs may be further reduced by not having to pay for “local loop” access if you are in the same building as your network service provider’s routers.  Bringing a DS-3 in to a lumber mill in a remote location is much more expensive than doing so in downtown Portland.
  • In many cases, colocation facilities are located near telecommunication hubs, and as such your chance of being cut off from your WAN or Internet provider is much lower.  Not to mention that colocation facilities are generally on fiber optic rings with redundant paths.  If you keep your servers at your office and make them available to the Internet (and other WAN locations) via copper T-1s, it is very easy for your T-1s to be taken down by someone installing your neighbor’s POTS (Plain Old Telephone Service) line.  Note that even if you use T-1s for connectivity within a colo facility, they are likely brought into the building across fiber rings, which makes them much “cleaner” and more reliable.
  • If you are planning on having a DR facility in another city and you need a high speed link between them it will likely be much cheaper if you can bid this out to multiple ISP’s available at both datacenters.  High speed circuits between datacenters often cost less than circuits to customer premises.
  • One of the most important technological and financial factors in your decision making process needs to be:  “How will I get a high-speed connection from my office to my servers?”  This new cost must be factored in and the potential for the connection going down must be considered in your evaluation.  This is perhaps the most negative technical factor in the argument for moving your servers into a colo facility.

Benefits

  • You don’t have to staff in house for your physical infrastructure, or spend time/effort/money managing it.  That is taken care of for you by the colocation provider.
  • You don’t have to spend capital on UPSs, generators, floor space, fire control, and cabinets.  Granted, you do pay the colo provider for this somehow on an ongoing basis.
  • Uptime can be improved, as you will have fewer power/cooling/communication failures, and your servers will have fewer hardware problems because they will be operating in a more stable environment.  (Seriously, MTBF in a good colo facility will be increased.)
  • You don’t have to spend your nights and weekends worrying about the air conditioning failing!  Even if there is an issue, it is someone else’s problem.  Also, you can go out of town, and even if a server fails you may be able to have someone else go push buttons for you.  You could even have a vendor’s tech dispatched to the site to work on your server without you needing to be there.
  • You can buy Internet access from the datacenter (make sure you negotiate well on this and make sure the price per megabit keeps falling over time).  If you get upstream Internet from a solid in-building provider (that has good quality upstreams), you may not even need to purchase Internet routers.  All you need is a firewall tier and switch tier.
  • You can grow incrementally in a colocation facility as your needs grow, or cut back as they shrink (assuming you are not under contract).  When you run your own fixed equipment, you are most likely not running your UPSs near full load, where they are most efficient.  In your own facility, once you run out of capacity you are artificially constrained, as that next server will require you to make another major investment.

Downsides

  • If you already have a server room with a dedicated cooling unit and enough power, etc.. it may simply not make financial sense to move.  You may want to pursue a split model for the things that have higher uptime requirements.
  • You need to pay for connectivity from your office to the datacenter.  This is a new cost that did not exist before.
  • Connectivity from the colo facility to your office could be interrupted, bringing work to a halt (whereas previously, even if cut off from the Internet, employees could still access the servers that were in-house).
  • Touching your servers requires traveling to the datacenter.  This travel time takes away from productivity (though getting out of the office once in a while can be nice!)
  • The monthly cost of a datacenter can cause sticker shock when looking at it simply from a “new cash cost” standpoint; however, this can be offset by savings on network circuits (if negotiated at the same time).  More importantly, you must consider the “total cost of ownership” of running your servers in house, in terms of both hard and soft costs (a rough sketch of that comparison follows below).
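
To make that last point concrete, here is a minimal back-of-the-envelope comparison, written as a small Python sketch.  Every figure in it is a hypothetical placeholder (cabinet rates, power rates, staff time, and so on are not from any real quote); the point is simply to show the cost categories you should be adding up on each side.

    # Rough total-cost-of-ownership comparison: in-house server room vs. colocation.
    # All figures are hypothetical placeholders -- substitute your own quotes and
    # internal cost estimates before drawing any conclusions.

    def annual_in_house_cost(power_kw):
        ups_generator_amortized = 12000   # UPS/generator capital spread over its life, per year
        cooling_maintenance = 4000        # HVAC service contracts, filter changes, repairs
        office_space = 6000               # square footage the server room occupies, at your lease rate
        electricity = power_kw * 24 * 365 * 0.09   # assumed utility rate in $/kWh
        staff_time = 8000                 # hours spent babysitting facilities, fully loaded
        return ups_generator_amortized + cooling_maintenance + office_space + electricity + staff_time

    def annual_colo_cost(power_kw):
        cabinets = 2 * 700 * 12           # two cabinets at an assumed monthly rate
        metered_power = power_kw * 24 * 365 * 0.12  # colo power is often marked up
        office_circuit = 500 * 12         # the new office-to-datacenter connection
        telecom_savings = -9000           # cheaper Internet/WAN negotiated in the building
        return cabinets + metered_power + office_circuit + telecom_savings

    kw = 5.0
    print(f"In-house: ${annual_in_house_cost(kw):,.0f}/yr")
    print(f"Colo:     ${annual_colo_cost(kw):,.0f}/yr")

The soft costs (staff time, nights, and weekends) are the easiest items to underestimate on the in-house side, and the telecom savings are the easiest to forget on the colo side.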

Example designs

Depending on your specific business model, there may be a few different reference designs you could choose from:

  • I have worked with a medical practice where we literally took their entire closet of servers one weekend and moved it into a colo facility.  All that was left behind was three network switches.  I would call this the “full colo” model.
  • If you have significant amounts of remote users but still have some in house users that require high speed access to file servers or application servers, you may want to consider a “split model” where most of your equipment (and WAN core) is located in the colo facility, but certain high-bandwidth servers stay on-site (like your file server).
  • An organization with all their employees in house may choose to keep all their corporate IT servers at the office, but put any Internet facing servers (like hosted applications or the corporate web site) at a datacenter.  I have implemented this model several times at various software companies in the past.  You must consider what the uptime requirements of your various services are.  Generally, Internet facing services will have a larger audience  and so they need a higher level of reliability than internal IT services.
  • Another variation on this may be to just keep your WAN routing equipment at a datacenter and then have a single backhaul connection to the office where the servers are located.  If you have many remote sites that get connectivity from different providers, it may make more sense to terminate them in a cabinet at a colo facility and then use a single metro ethernet connection back to the office.

Final thoughts

So you’re convinced?

Great!  Now check out my post on how to choose a colocation facility, and if you are based in Portland, check out my list of all the available facilities, plus the Google map I put together of where they are all located!  These resources also outline your telecommunications provider options.

As always, please email me with feedback or post a comment!

-Eric

Categories: Colocation, Network, Telecom

How to choose a colocation facility

April 7th, 2009

Choosing a colocation facility is one of the most important decisions an IT professional can make.  It will have repercussions for years down the road, as there is generally a contract term associated, and it becomes difficult/costly to move.  At the same time, unless you are a facilities professional, it is hard to tell the difference between the quality of one facility vs. that of another without knowing the right questions to ask.  I have developed this list in the hopes that it will be a reference to folks evaluating datacenter options.  This has been written using the assumption that you need a local datacenter rather than a DR facility (which can have very different needs), however, many of the same concepts will apply.

Location

  • When it comes right down to it, there are still certain things you have to do physically in person. You can’t run a network cable through SSH or RDP. Having a datacenter close by makes a huge difference, especially when you lose remote connectivity and must go push a button in an emergency (we all have done this once or twice). In general, the newer, more high-end, and redundant your equipment is, the less you should have to touch it in person. Things are getting much better with out of band remote access controllers, but sometimes being there is worth a lot. You can’t hear that fan making funny noises from your office.
  • Does the facility have good access to transportation such as freeways and airports? Are there hotels nearby if you will have out-of-town contractors visiting? How close are you to parts depots for your vendor of choice (Cisco, Dell, HP, etc…)?
  • Does the facility have adequate parking that is close to the building, does it cost money? Is it somewhere you want to leave your car in the middle of the night while you are inside working?
  • Do you have line-of-sight to the datacenter? If you can manage to get a wireless link to your datacenter this can be an extremely cost-effective option for high speed connectivity. There is something to be said for controlling your own destiny when it comes to your connectivity rather than being at the mercy of a telecom provider. Will the facility allow you to put a wireless antenna on the roof and how much will they charge?

Staffing

  • Do they have on-site staff 24×7 to respond to emergency situations, to secure the facility, and to provide access when you forget/lose your badge (or have to stop by on your way home from the gym)?
  • If they do not have staff on site 24×7, what is their on-call policy? How long would it take them to respond to a power failure, a UPS exploding, a transformer catching fire in the parking lot, an Internet outage, an FM-200 fire suppression system going off, an HVAC system failing, or any other major malady (yes I have had all of these things happen to me in facilities I have worked in, and I am still waiting for the day a fire sprinkler goes off or there is a real fire in a datacenter).
  • What level of professional services can they provide? Basic remote hands (please press the power button)? More advanced troubleshooting (help diagnose a failed network switch)? Or even managed services (i.e. they take care of backups).
  • How competent are their NOC engineers, facilities folks, etc… What quality of vendors do they use to do electrical work, HVAC maintenance, network cabling? This can be hard to tell, but there are lots of small clues you can pick up on.
  • Does their staff speak English fluently and without heavy accent? It is extremely difficult to communicate on the phone with someone in a loud datacenter environment about complex technical issues when both of you are having a hard time understanding each other. This dramatically slows down the troubleshooting process and increases the chance of error.

Connectivity options

  • Do they provide Internet access themselves, or do you need to contract with other providers (à la the Pittock Block)? Having the datacenter provide Internet connectivity (if they give you a reasonable rate) can be more cost effective than running your own routers with multiple ISPs (assuming you don’t have special routing needs that require it). You do need to make sure your datacenter has good upstream providers, good quality routers, and competent staff to run them. Be careful to ensure your provider can absorb moderate-sized DDoS attacks without equipment failure or running out of bandwidth. You don’t want your neighbor’s online dating site to come under attack and impact your Internet connectivity.
  • Are they “carrier neutral”? Will they allow you to bring in your own connectivity (Internet/WAN)? Or do they want a piece of the pie on everything (i.e. resell you everything)? Are they charging your chosen provider ridiculous fees for “right of entry” into the building (which drives up your end-user costs)?
  • What fiber providers do they have available? – The more connectivity options you have available, the harder bargain you can drive with providers to get the best deal possible. If you need connectivity to many different sites, it is likely that some sites will be cheaper/better/faster to connect with one provider, and others will be cheaper/better/faster with another. A good example would be TWTelecom and Integra Telecom here in Portland Oregon. They each have extensive fiber optic networks around the metro area, but if you are trying to get from Infinity Internet to various locations around town, whichever has fiber closer to your destination will have a price/technical advantage to provide you service.
  • Who is the local exchange carrier? You might need a POTS (Plain Old Telephone Service) line or two for paging access, etc…
  • What do they charge for cross connect fees? If you order a $300/mo T-1 are they going to charge you $100/mo cross connect fee for the two pairs of phone wire to get it to your cage/cabinet?

Power Infrastructure

  • What type of power grid design are they on? Radial or interconnected? On a Radial system (such as you would find out in the suburbs), if a car crashes into a pole, or a backhoe takes out a single conduit, power will be lost. In an interconnected system there are multiple “primary” feeds connected to multiple transformers which energize a “secondary” bus that actually feeds power to the facility. This type of design significantly reduces single points of failure and allows entire transformers to be taken offline for maintenance without service interruption.
  • Is the power grid in the area above ground or below ground? Above ground systems are susceptible to windstorms, lightning, trees, etc… Below ground systems fall prey to backhoes, horizontal boring machines, water penetration, etc… In general, below ground is going to be more reliable.
  • If on a Radial system, do they at least have multiple transformers (preferably off of separate primary feeds) even if they are not tied together on the secondary bus? Often you will see two transformers with each feeding a separate power distribution system within the datacenter.
  • Are the transformers well protected from vehicles in the parking lot?
  • What type of electrical transfer switches does the facility have to switch between main power and generator power? Are they capable of “make before break” operation when switching to the generator during test cycles or planned outages? Can they operate as “make before break” when switching back to grid power after an outage? This is important as the most likely time for a UPS to fail is during switching. If you can minimize the number of voltage-loss events it will reduce the likelihood of UPS failure.
  • How many generators does the facility have? If multiple, is their distribution system set up in such a way that you can get separate power feeds in your cage/rack that come from completely independent PDUs, UPSs, generators, and transformers? Just because a facility has multiple generators/UPSs/transformers does not mean they are redundant for each other; they could just be there to increase capacity.
  • Does the facility regularly test their generators *with* load applied (either the actual datacenter load, or a test load)?
  • Has the facility designed, and more importantly *operated*, their system such that a failure of one UPS/transformer/generator does not cause an overload on other parts of the distribution system?
  • Does the facility participate in programs that allow the power utility to remotely start the generators and switch the facility over to Generator power to reduce grid loading? While this is good for the overall health of the power grid (and possibly the environment), it can be a liability to your equipment at the datacenter since more power transfer events will be occurring.
  • How much fuel is stored on site – how many hours does that represent? Does the facility have contracts for emergency refueling services?
  • Can the generator be re-fueled easily from the road, or is it located on the roof?
  • What type of UPS systems do they have? How old are they? How often are the batteries tested and replaced? Can they take their UPS offline for maintenance without impacting customer power?
  • Can they provide you custom power feeds for equipment such as large Storage Area Networks or high power blade enclosures? (i.e. you need a 3 phase 208 volt 30 amp circuit)

Cooling

  • Do they use many direct expansion cooling units, or do they have a water/glycol loop with a cooling tower? Or do they use chilled water? Each of these has its pros and cons; however, the multiple direct expansion model is very simple and redundant in that you likely have many individual units (it is not as energy efficient, though). The trick is controlling the HVAC units so they do not “fight” each other, causing short-cycling on the compressors.
  • Are the cooling units designed for datacenter usage (running 24x7x365), with the ability to control humidity within reasonable levels, or are they made for office cooling applications with expected usage of 10 hours a day?
  • If the facility uses cooling towers for evaporative cooling, do they have on-site water storage to provide water during utility outages (such as after an earthquake)? Are all parts of the cooling loop system redundant (including the control system)?
  • Does the facility maintain and enforce hot/cold aisle design? This is becoming critical as power densities increase and power efficiency becomes critical.
  • Does the facility have an outside air exchange system to provide “free” cooling during the months of the year that outside air is of appropriate temperatures? While good for the environment, you must be careful about the outside air’s humidity as well as the dust/pollen that could come in with outside air. There is a dramatic difference between servers that have been in a quality datacenter for a few years, vs. ones with poor HVAC systems for a few years. I have removed servers from facilities before that have not gotten a speck of dust on them and others that are caked in black dust (depending on the facility they were in).
  • Is the entire cooling system on a single generator, or is it spread across multiple units for redundancy?

Cages/Racks

  • Does the facility provide Cages? Cabinets?  Or both?  These days most everything will fit in standard square hole cabinets, however, in some cases if you buy large enough equipment it might come with its own racks or as a freestanding unit that cannot go in cabinets provided by the facility. If you go with a cage you must carefully plan how much space you are going to need ahead of time. Adding additional cabinets as needed can be an effective growth strategy, though you must plan for network and SAN cabling between them.
  • If you get a cage (or just custom cabinets) make sure to agree upon who will bolt down your cabinets and how much it will cost.  Properly securing them in the event of an earthquake can be particularly tricky on raised floors.  Any work must be done properly so as not to throw dust into the air, and to mitigate any potentially harmful vibrations that could impact running equipment.
  • One gotcha I have run into before is that some facilities’ cabinets are not deep enough for modern servers (specifically some Dell servers). I have also been shocked to find many facilities still leasing ancient telco-style cabinets with solid doors on them. Modern equipment requires front-to-back airflow, not the bottom-to-top airflow of the old telco style. Also note that most network equipment still uses side-to-side airflow and is best suited to two-post telecom racks (where possible) rather than four-post server cabinets.
  • When selecting a colo facility make sure to specify exactly what type of cabinet you are expecting in the contract if they have multiple types available.
  • Modern cabinets have built-in mounting holes/brackets for vertical-mount PDUs, which are becoming the standard.  This allows you to use very short (think 2 foot) power cables to attach servers without excess slack.  They also do not take up usable rack space.
  • Modern cabinets should also have a way to cleanly route cables vertically (think about power cables, network cables, fiber SAN cables, etc…)
  • Does the facility provide PDUs in the cabinets for you, or are you responsible for providing them yourself?  It is critical that your PDUs have power meter displays on them, as power in a datacenter is typically very expensive, so you want to load them up as much as possible for peak cost efficiency while not risking tripping a circuit breaker (never load a circuit beyond 80% of its rated capacity – which means 16 amps on a 20 amp circuit, or 24 amps on a 30 amp circuit).  When plugging dual power supply servers into different circuits, ensure that if one circuit blows, the other can handle the entire load without blowing (see the sketch after this list).
  • What type of power plugs will they be delivering in your rack/cage?  I recommend locking plugs like an L5-20 or L5-30 to plug your PDUs into (even though a standard non-locking NEMA receptacle can handle the current capacity on 20 amp circuits).  Also common these days are 208 volt 30 amp circuits with an L6-30 receptacle.  Most everything manufactured in the last 5 years is capable of accepting 208 volt power.  Using the higher voltage allows you to have more equipment in a cabinet with fewer circuits, which also means fewer PDUs.
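
As referenced above, here is a small Python sketch of the 80% continuous-load rule and the dual-feed failover check.  The breaker sizes and amp draws are example values only; measure your actual load with the PDU’s power meter.

    # Sanity-check PDU loading against the 80% rule and a dual-feed failover scenario.
    # Breaker sizes and current draws below are examples only -- measure your own.

    def max_continuous_amps(breaker_amps):
        """Never load a branch circuit beyond 80% of its rating for continuous loads."""
        return breaker_amps * 0.8

    def check_dual_feed(feed_a_amps, feed_b_amps, breaker_amps):
        """With dual-supply servers split across A and B feeds, either feed must be
        able to carry the *entire* load if the other circuit trips."""
        total = feed_a_amps + feed_b_amps
        limit = max_continuous_amps(breaker_amps)
        return total, limit, total <= limit

    # Example: two 30 amp, 208 volt circuits (L6-30) with the load split across them
    total, limit, ok = check_dual_feed(feed_a_amps=11.0, feed_b_amps=10.0, breaker_amps=30)
    print(f"Failover load {total:.1f} A vs. limit {limit:.1f} A -> {'OK' if ok else 'OVERLOADED'}")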

Fire suppression

  • Is the structure made of metal and concrete, or of wood?
  • Does it have traditional “wet-pipe” sprinklers, “dry-pipe” sprinklers, or “pre-action” sprinklers? Or none at all? If an electrician hits a sprinkler head with a ladder in either a “wet-pipe” or “dry-pipe” system, it will immediately release large amounts of water until the fire department shows up to turn it off. Pre-action systems require both a smoke sensing system to alarm and heat setting off a sprinkler head in order to let water flow.
  • What type of fire detection system does the facility have? Standard smoke sensors, and/or VESDA sensors?
  • Does the facility have an inert gas fire suppression system such as FM-200, Inergen, or Halon? An inert gas system will discharge if two smoke sensors go into alarm, and hopefully extinguish the fire before it can set off a water-based system (typically still required to meet fire code). In reality though, I have never seen modern computer equipment really catch fire. Most of it does not burn very well (as long as you don’t store cardboard in the datacenter).
  • Who are your neighbors within the building? Are any of them high risk?
  • How old is the building’s fire suppression system? You might be in a suite within the building that has the latest and greatest fire control, but if the rest of the building has a simple fire panel from 1970 and no sprinklers, it could still burn to the ground. Upgrades to fire control systems are generally not required unless the building owner does a major renovation.

Physical facility

  • What is the risk of water damage to your equipment? Are you right below a poorly maintained roof? Are there non-pre-action sprinklers above you? Is there a domestic water pipe above your cage? Bathroom drains from the tenant above? Storm drain pipes from the roof? Condensate drains from the HVAC system? Cooling loop pipes? Note that if a fire sprinkler goes off several floors up, water can seep down into your equipment through cracks between floors you never knew existed.
  • Is the facility located in a flood plain? Is it below ground level? There are places in Portland that have water mains large enough to cause localized flooding if they break.
  • Does the building have a convenient loading dock for receiving equipment? What is the largest equipment that will fit into the building and up the elevator? This is a problem in many older buildings.
  • How large is the space you are in (by volume) compared to the equipment load? If cooling were lost (say, because the fire alarm inadvertently went off, which shuts down all HVAC), how much thermal buffer is there to keep the temperature from rising too much until the system is reset? (A rough estimate is sketched after this list.)
  • Is there a grid of ceiling tiles above you? If so, it will probably fall down and create dust in an earthquake. I would rather see all of the piping and mechanical systems exposed on the ceiling anyway, rather than have them hidden above a ceiling grid.
  • Is the facility on a slab floor or raised floor? It is easier to effectively bolt things down to a slab floor for seismic purposes, but a raised floor can also conveniently provide space for electrical power and cables. It is becoming less feasible for cooling purposes however, since density is increasing so much.
  • What is the seismic rating of the facility? How much will it shake your equipment in an earthquake and will the building be damaged to the point that it is unsafe to continue operation?
  • Do they have requirements about what types of equipment you can put in the datacenter? i.e. if in a traditional telco facility certain ratings may be required.
  • Is the facility well kept and “clean”?  This can tell you a lot about the quality of the facility.  It is hard to tell whether proper maintenance is being done at scheduled intervals on their power equipment, but if a facility cannot even keep its cables managed properly, that is a likely sign it is skipping other, less visible things as well.
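
As a rough illustration of the thermal buffer question above, the following Python sketch estimates how quickly the room air alone heats up once cooling stops, using standard values for the density and specific heat of air.  It deliberately ignores the thermal mass of the building and the equipment itself, so real rooms warm more slowly; the example room size, IT load, and temperature headroom are hypothetical.

    # Back-of-the-envelope estimate of how fast a room heats up when cooling stops.
    # Ignores the thermal mass of the structure and the equipment, so treat the
    # result as a pessimistic lower bound on your real buffer time.

    AIR_DENSITY = 1.2          # kg per cubic meter at roughly room temperature
    AIR_SPECIFIC_HEAT = 1005   # joules per kg per degree C

    def minutes_until_delta(room_volume_m3, it_load_watts, delta_c):
        """Minutes for the room air to rise delta_c degrees C with no cooling at all."""
        air_mass = room_volume_m3 * AIR_DENSITY
        joules_needed = air_mass * AIR_SPECIFIC_HEAT * delta_c
        return joules_needed / it_load_watts / 60

    # Example: a 10 m x 8 m x 3 m suite with 20 kW of IT load and 10 C of headroom
    print(f"{minutes_until_delta(240, 20000, 10):.1f} minutes of buffer in the air alone")

Even this pessimistic estimate makes the point: a small, densely loaded room has very little buffer on its own, which is why redundant cooling and a quick HVAC reset procedure matter.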

Creature comforts

  • Does the facility have comfortable areas for you to work while on-site (i.e. a conference room) or do you have to spend all your time on the cold/loud datacenter floor?
  • Do they provide “crash carts” (i.e. a portable keyboard, monitor, mouse) to utilize if you don’t have your own KVMs?
  • Do they have vending machines or refreshments when you need that late night pick-me-up?
  • Will they accept deliveries for you? Do they have someone at the facility during business hours? I find this to be *very* important.
  • How good is the cell phone coverage for the specific provider(s) you care about?
  • Do they have a guest wireless network you can jump on while you are working there to easily get Internet access without having to provide it yourself?

Security

  • How do they control access to the facility? Is it manned, or unmanned? If they have an access control system does it have biometric features?
  • Do they have security cameras? How long is the footage kept for?

Pricing

  • How much do they charge you per cabinet, or per square foot of space?
  • How much does power cost? Is it per provisioned circuit, or based on actual usage? What is their pricing model? Note that it is more and more common to need 208 volt circuits, or three phase circuits, with modern blade enclosures and SANs. It is no longer just increments of 20 amp 110 volt circuits. (A simple comparison of the two billing models is sketched after this list.)
  • Will they provide you second power feeds at a reduced price if you are only going to be using them for failover? Note that these second feeds may cost them UPS, Generator, etc… capacity they must plan for, however, you won’t be utilizing electricity from them (which they must pay the utility company for) or loading their total feed capacity from the utility since they are just for redundancy.
  • Can you get price guarantees for future expansion (power costs, cabinet costs, etc…)?
  • Does the facility want to sell you completely managed services and, as such, make colocation costs untenable?
  • Do they provide some amount of basic remote hands service hours each month? How much do they charge for professional services?
  • Does the facility provide service-level agreements (SLAs) that have teeth? Frankly, I don’t put much faith in SLAs, since usually they only involve a credit for the period of time service is unavailable. This is generally nothing in comparison to the amount of money you lose when your datacenter goes down, or your costs in man-hours to bring it back up.
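
As mentioned in the power-cost item above, here is a minimal Python sketch comparing the two billing models you are most likely to see.  The rates are hypothetical placeholders; plug in the numbers from the facility’s actual quote.

    # Compare two common colo power billing models using hypothetical rates:
    # (a) a flat fee per provisioned circuit, regardless of actual draw, and
    # (b) metered billing on actual kWh consumed.

    def per_circuit_monthly(num_circuits, fee_per_circuit=250.0):
        return num_circuits * fee_per_circuit

    def metered_monthly(avg_kw_draw, rate_per_kwh=0.13):
        return avg_kw_draw * 24 * 30 * rate_per_kwh

    # Example: two provisioned circuits but only ~2.5 kW of average draw
    print(f"Per-circuit: ${per_circuit_monthly(2):,.2f}/mo")
    print(f"Metered:     ${metered_monthly(2.5):,.2f}/mo")

If your average draw is well below what you provision (which is common, since circuits are sized for peaks and redundancy), metered billing can work in your favor.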

Switching Costs

  • Once you move into a facility there can be significant (if not astronomical) switching costs. They may offer you a smoking deal to get you in the door, and then make it up by charging higher-than-market-rate for add on services down the road. Realize that you are inevitably likely to need more power down the road, and more bandwidth. Also realize that bandwidth costs fall steadily so you don’t want to get locked in for long term rates on telecommunications circuits. It is also possible in the long term for your needs to go down in the future as virtualization gets more popular, “cloud computing” becomes a reality, and computers become more efficient.
  • Contracts are normally in place to protect the provider, but they can also protect you. If you get a smoking deal on something, locking it in for a term commitment can be a good idea. It is reasonable for a provider to require a contract term as they do have significant capital and sales costs that they need to cover. Also, realize that the average lifespan of a datacenter is not all that long these days. A datacenter built 7 years ago has nowhere near the cooling capacity required in a modern datacenter.
  • Think about your growth pattern. You don’t want to be paying ahead of time for service you don’t need/use, but you also don’t want to get hit for huge incremental costs to add cabinets/power down the road. Contracts with “first right of refusal” clauses built into them (on additional space/capacity) are common.
  • Think about how difficult it will be for you to pick up and move at a later date. Some of the most “sticky” items are storage area networks. It might be easy to move a few servers at a time, but if you are all dependent on a single Storage Array, everything connected to it must move at once.
  • Telecommunication circuits also increase your “stickiness”. They are generally under term commitments and can be difficult to coordinate a move at a specific time. If you have a circuit from XO and move to a facility that does not have XO fiber, you might have to switch providers, or pay someone else for the local loop.
  • If you are purchasing Internet connectivity from your datacenter you are most likely being assigned IP addresses from their address space. When you move or change providers you will need to re-number. Depending on your network design and use cases, this might be easy, or an extremely difficult task.

Final Words

While there are numerous factors to consider, the reality is that there are likely a number of providers in town that can meet your needs successfully.  The reliability of Portland’s power grid and of datacenter equipment is getting so high that we are really “chasing nines” to get ever so slightly more uptime (for dramatically higher cost).  For most organizations, being in a datacenter with only a single generator provides plenty of uptime.  Is that extra 0.009% of uptime really worth it to go from “four nines” to “five nines”?  That is an increase of roughly 47 minutes of uptime per year.  Is that worth doubling your costs?
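
For reference, the arithmetic behind that 47-minute figure is easy to check yourself; here is a quick Python sketch:

    # Allowed downtime per year at a given number of "nines" of availability,
    # and the gap between four and five nines quoted above.

    MINUTES_PER_YEAR = 365.25 * 24 * 60

    def downtime_minutes(nines):
        availability = 1 - 10 ** (-nines)
        return MINUTES_PER_YEAR * (1 - availability)

    for n in (3, 4, 5):
        print(f"{n} nines: {downtime_minutes(n):7.1f} minutes of downtime per year")

    print(f"Four-to-five-nines gain: {downtime_minutes(4) - downtime_minutes(5):.0f} minutes per year")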

Perhaps one of the most important aspects to your decision is the relationship you build with the owners, management, and staff of your colocation facility.  You want to have as much of a “partnership” as possible, and not merely a buyer/seller relationship.  Finding a facility with a long history of treating their customers well will increase your chances of success.

If you have any comments/questions feel free to post below, or shoot me an email.

-Eric

Categories: Colocation, Network, Telecom, Wireless

Map of all Portland Telecommunication and Colocation Facilities

April 6th, 2009

Over the past couple weeks I have been working on something special to go along with my previously posted list of all the telecom and colocation facilities in town.

After *way* too many hours looking at satellite photos, I have created a Google map of every telecom and colocation facility that I know of in the Portland Metro Area.  This includes colocation facilities, Verizon Central Offices, Qwest Central Offices, CLEC facilities, long-haul facilities, major fiber splice points, wireless provider COs, and even some notable private datacenters.

You can even download the .kml file and fly around the map in Google Earth (though Google Maps is probably a better interface for this).

I am *sure* that I am missing some facilities, so please, as always, email me with updates.

-Eric

Categories: Colocation, Network, Telecom

Portland OR Telecom and Colo Providers

March 7th, 2009

I have created a permanent page on the site that I will keep updated as a reference to all of the various Colocation and Telecommunications options in Portland Oregon.  This is all information that I have acquired over the years that I suspect may come in handy for others.  If you have any questions/comments/corrections feel free to post them on the site or shoot me an email!

-Eric

Categories: Colocation, Telecom