Archive for the ‘Uncategorized’ Category

Dark Fiber in the Metro for Enterprises

May 17th, 2016 No comments

As competition increases in the metro fiber transport market, exciting new options are becoming available for “Clueful Enterprises”™ from forward-thinking, non-traditional telcos.

Fiber to the tower is driving huge buildouts of fiber networks into areas that were previously economically infeasible. That, coupled with high-count fiber cables (288 to 864 count) being deployed cost-effectively, means that fiber is no longer such a scarce resource.

The trend I am seeing is for enterprises to lease a pair of dark fiber from their offices back to a datacenter within the same metro area (usually on both sides of a ring for redundancy) and then to purchase all other telecom services out of the datacenter, where a competitive market exists (e.g. IP transit, MPLS, SIP trunks, etc.).

Once you have dark fiber on a metro path, you can “light it” with as much bandwidth as necessary and upgrade capacity over time as needed, without having to re-contract from a disadvantaged negotiating position (due to remaining term length). Optics to push 1 gigabit down 20km of fiber can be purchased for as little as $7 each (you read that correctly!). 10 gigabit over 100km can be as low as $350 each. Stepping up modestly in cost, you can push 40x 10G waves over that same pair of fiber for 400 gigabit total (on each side of the ring).
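The capacity and cost-per-gigabit arithmetic above can be sketched in a few lines. The prices are the article's example figures, not quotes:

```python
# Back-of-the-envelope math on lighting dark fiber, using the example
# figures above (these are illustrative prices, not vendor quotes).

def dwdm_capacity_gbps(channels: int, rate_per_channel_gbps: int) -> int:
    """Total capacity of a DWDM system on a single fiber pair."""
    return channels * rate_per_channel_gbps

# 40 x 10G waves on each side of the ring
capacity = dwdm_capacity_gbps(channels=40, rate_per_channel_gbps=10)
print(capacity)  # 400 (Gbps per ring side)

# Cost per gigabit for the cheap long-reach optics mentioned above
cost_per_gbit_1g = 7 / 1      # $7 optic pushing 1 Gbps over 20km
cost_per_gbit_10g = 350 / 10  # $350 optic pushing 10 Gbps over 100km
print(cost_per_gbit_1g, cost_per_gbit_10g)  # 7.0 35.0
```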

One thing that makes this kind of deployment so tenable is the lack of need for specialized transport equipment. You can now put long distance (1550nm) and even DWDM optics in regular old desktop switching platforms. The only equipment needed at an office building is now basic network switches – 100% of the servers can be located in datacenters or in the cloud.

When making use of dark fiber (with diverse paths) there are actually significantly fewer single points of failure than if that exact same fiber were “lit” with Ethernet Private Line services from a provider. In a classic EPL scenario the CPE device is a single point of failure, the client-facing optic on that CPE device is a single point of failure, and the aggregation router back at some hub site is a single point of failure. Additionally, you are dependent on power reliability at that hub site, and, depending on ring architecture, potentially exposed to dual-failure scenarios involving many other customer premises on either side of the ring (e.g. during a widespread power outage).

With dark fiber, no amount of typos from someone in the NOC, software upgrades, or automation gone wrong can take down your connectivity (your fiber has to actually be physically broken to make it stop working).

While many see dark fiber deals as a negative for telecom companies, I actually see many upsides:

  1. Contract terms are longer – I advise enterprises to co-terminate fiber leases with their office space leases – typically 5-7 years vs. a maximum of 3 years for lit services.
  2. Deal size is typically larger as Enterprises do see the value in having effectively unlimited bandwidth that allows them to access other services in the datacenter where market economies exist. This architecture can allow the elimination of traditional “server rooms” which are large portions of many Tenant Improvement budgets and take away expensive office square footage.
  3. CAPEX and OPEX for equipment are ZERO. Splicing costs may be higher than for a traditional lit services deal, but that is easily offset by the longer term length and the lack of need for expensive equipment.
  4. Building right of entry agreements may be easier to come by and less costly when you have no powered equipment on-site that requires 24×7 access.
  5. Competition is lower when offering dark fiber solutions. In a traditional lit services transport deal there are often 5-6 carriers available, but due to fiber and policy constraints taking it to a dark fiber deal may reduce that to 2-3 options (or fewer!).


Categories: Uncategorized Tags:

Motorola Droid Bionic Review

September 9th, 2011 2 comments

So my wait for a new phone finally came to an end today as I was able to snag a new Droid Bionic on Verizon Wireless.  I had been increasingly frustrated with the slowness of my Droid version 1 phone that seemed to get slower and slower.  A special thanks to my corporate data rep who went the extra mile to make sure I got one on launch day after having my morning wasted by the wireless kiosk idiots at Costco who claimed to have them in stock the evening before when I stopped by.

After only having it for about four hours I must say, thus far I am impressed.

Things I like:

  1. It is fast (responsiveness wise).  It lives up to my expectations so far.
  2. It is fast (network wise). Verizon Wireless LTE is absolutely amazing (I am in Beaverton, OR currently). It is also fast on my 802.11g wireless via Frontier FiOS.  Using Google Maps is no longer frustrating.
  3. The screen seems very good. (not like strikingly great, but certainly good)
  4. The touch screen seems more accurate than my Droid V1.
  5. The OS is Gingerbread of course (which I did not have on my Droid V1)
  6. The camera seems to be better quality, though it’s odd that the app does not rotate its menus.  Also, the location tagging outputs number-coded errors on the screen while taking pictures, which seems like poor spit and polish (though it’s way faster than the Droid V1, so I am happy).
  7. The form factor works well for me thus far.  I think it is lighter than my Droid V1 and certainly much, much thinner.  I tend to carry it in my front shirt pocket, and while it does stick out the top a bit, the reduced weight means it doesn’t make my shirt look as funny.

Things I don’t like:

  1. Any kind of crap installed by the carrier (i.e. Verizon Wireless) – vCast, their paid navigation app, their paid visual voicemail app, etc…  C’mon people, Google does most of these way better than you, and they made them free.  Deal with it and go on with life.
  2. Just about anything written by Motorola (i.e. Motoblur).  I think that having all the phone manufacturers write their own UIs is stupid in many ways.  I find that they don’t do it any better than Google, and it just makes software upgrade cycles slower and user training a pain due to the differences.  I understand that the manufacturers don’t want to become commodities (and this is a way to provide unique “value”), but as an educated consumer, I would 100% of the time buy a vanilla Android device over one with an aftermarket UI (if one were available).
  3. Verizon has some “backup” application for local contacts.  That’s dumb.  Google provides that feature for all my contacts and settings.  Perhaps it makes sense in the context of moving from “feature phones” over to Android based smart-phones.
  4. I have only made one or two calls so far and quality was good- Though I did get some feeling that the earpiece does not get incredibly loud, and it started clipping a bit at max volume.  This may not bode well for use in datacenters (TBD).
  5. So far the car mount kit I bought seems a bit flaky at detecting that the phone should launch the handsfree app, likely because I bought a rubber cover for the phone.  It’s supposed to handle covers OK after removing an insert.
  6. Some of the rumors had made me think it would support GSM for international roaming.  That would have been very nice.  Also, it sounded like wireless charging was a default feature, but instead, it sounds like it requires a special backplate that is not yet available.
  7. All the notifications and ringtones come set to incredibly annoying “DROID” sounds.  It’s totally a branding thing, I get it, but it is awful.

Things I am worried about:

  1. As mentioned before, we will see how well it works in the datacenter audio-wise (though with the extra mics for noise canceling, perhaps it will do well for the caller on the remote end).
  2. Battery life.  It gets warm under heavy use, which can’t bode well for battery consumption.  Also, it would appear that Verizon has only installed LTE on a patchwork of the towers in the metro area and skipped a bunch in between.  Since LTE propagates well at 700 MHz they can somewhat get away with this (as LTE device density is pretty low right now), though I am sure this is a massive contributor to battery drain!  i.e. your phone must communicate with a tower farther away because your closest tower has no LTE panels/sectors/gear…

Overall this phone is a great win for me, though I suspect I will be unhappy with it before 24 months is up at the current innovation pace.  If for no other reason than for the fact that more efficient LTE chipsets will come out.

If you’re looking for a new phone on Verizon Wireless right now (and you’re not deeply entrenched in the Apple ecosystem) I don’t think there is really any question that the Droid Bionic is the way to go.



Cogent Eastbound route out of Portland to Boise and a new POP

July 9th, 2010 3 comments

It would appear that Cogent finally has a long-awaited route Eastbound out of Portland.  I just noticed it on their web site today and a quick traceroute confirms there is now connectivity to Boise.

Translating “”…domain server (xx.xx.xx.xx) [OK]
Type escape sequence to abort.
Tracing the route to (
1 (38.104.104.xx) [AS 174] 0 msec 1 msec 0 msec
2 ( [AS 174] 0 msec 1 msec 1 msec
3 ( [AS 174] 11 msec ( [AS 174] 12 msec *

I then noticed that traffic Eastbound beyond Boise to Salt Lake still prefers going through Sacramento.

Translating “”…domain server (xx.xx.xx.xx) [OK]
Type escape sequence to abort.
Tracing the route to (
1 (38.104.104.xx) [AS 174] 1 msec 1 msec 0 msec
2 ( [AS 174] 0 msec 1 msec 1 msec
3 ( [AS 174] 12 msec ( [AS 174] 13 msec 12 msec
4 ( [AS 174] 25 msec ( [AS 174] 25 msec *

Further investigation using Cogent’s looking glass tool from Washington DC shows me that either their network map is incorrect, or they currently have a circuit down from Boise to Salt Lake (or for some weird traffic engineering reason my traceroutes are not hitting it).  Routing from DC to Boise through PDX is not exactly what I would consider “optimal”.  😉

Looking Glass Results: Washington, DC
Query: trace
Type escape sequence to abort.
Tracing the route to (
1 ( 4 msec 4 msec 4 msec
2 ( 4 msec 4 msec 0 msec
3 ( 0 msec 0 msec 4 msec
4 ( 28 msec ( 20 msec ( 24 msec
5 ( 32 msec ( 32 msec 32 msec
6 ( 48 msec ( 44 msec 44 msec
7 ( 56 msec ( 60 msec 64 msec
8 ( 64 msec ( 72 msec ( 76 msec
9 ( 100 msec 80 msec 80 msec
10 ( 84 msec ( 84 msec 84 msec
11 ( 96 msec * ( 92 msec

Cogent has talked about an Eastbound route for some time now, so I am jazzed to see it finally happening!  Here’s hoping that Boise <-> Salt Lake link comes online very soon!

Whoa, hold the phone- I just noticed that my route to Boise is leaving via a different Portland router.  That’s new!  Previously they only had a single router in Portland, in the Pittock building.  A quick check of their POP list in Portland reveals that they now list 707 SW Washington St as a POP, which is the Bank of California building.  That’s even better news, as they are now one of (or perhaps the only?) carriers with multiple routes out of town and core routers in multiple facilities.  I can’t say I know for sure of any other carrier in town with core routers in more than one facility.



Upgrading Qwest DSL to 12 megabit ADSL2+

July 6th, 2010 No comments

Last week I upgraded a Qwest Business DSL (err, High Speed Internet) line in downtown Portland from 7 meg to 12 meg, as they are finally offering speeds above 7 meg (though 12 was the max).  It was a nominal additional monthly cost, and the upgrade was free (they even gave a month of free service).

Some interesting notes:

I had previously set my modem to do PPPoA (PPP over ATM) so that it could support full 1500-byte MTUs (rather than the PPPoE that they have been recommending for quite some time in anticipation of the transition away from ATM).  When you do PPP over Ethernet there is an 8-byte PPPoE/PPP header that cuts your max payload down to 1492.  In order to take advantage of the new service, however, I was forced to reconfigure to PPPoE (the 1492-byte max MTU is not a big deal and is pretty common on residential/small-biz internet connections these days).  This, in combination with the fact that they told me they had to make a wiring change in a “cross box” somewhere, tells me that I got moved to a new DSLAM that is not fed by ATM anymore (thank goodness!).
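The MTU arithmetic above is simple but worth making explicit: the 8 bytes of PPPoE overhead come out of the standard 1500-byte Ethernet payload.

```python
# PPPoE overhead arithmetic: the 8-byte header mentioned above is a 6-byte
# PPPoE header plus a 2-byte PPP protocol field, carried inside the
# 1500-byte Ethernet payload. What's left is the usable IP MTU.

ETHERNET_PAYLOAD = 1500  # standard Ethernet MTU
PPPOE_HEADER = 6         # version/type, code, session ID, length
PPP_PROTOCOL = 2         # PPP protocol field

ip_mtu = ETHERNET_PAYLOAD - PPPOE_HEADER - PPP_PROTOCOL
print(ip_mtu)  # 1492 -- vs. the full 1500 available over PPPoA
```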

I am particularly happy about this because I am guessing a lot of the ATM-based DSLAMs out there are likely fed by NxT-1 backhaul setups (i.e. a bunch of bonded T-1s), which seriously limits the aggregate bandwidth available to all the users.  If you’re providing 100 7-meg DSL lines and you only have 8 T-1s for backhaul, that’s some serious oversubscription!  I would recommend that anyone out there with Qwest DSL do what you can (i.e. upgrade service tiers) to get hooked to one of the new DSLAMs, even if you later switch back to a lower-speed service offering, as the newer DSLAMs are likely loaded nowhere near as heavily (i.e. they likely have 1-gig Ethernet fiber backhaul connections).
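To put a number on that hypothetical oversubscription scenario (100 lines at 7 meg over 8 bonded T-1s):

```python
# Oversubscription sketch for the hypothetical ATM DSLAM scenario above.
# A T-1 carries 1.544 Mbps.

T1_MBPS = 1.544

sold_mbps = 100 * 7.0         # 700 Mbps of downstream sold to subscribers
backhaul_mbps = 8 * T1_MBPS   # ~12.35 Mbps of actual backhaul
ratio = sold_mbps / backhaul_mbps
print(round(ratio))  # roughly 57:1 oversubscription
```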

Here is a speedtest from Qwest’s speedtest site:

Qwest DSL Speed Test After Upgrade to 12 Megabit

Anecdotally, it would seem that ping times are faster on the new DSL, though I can’t say I actually plugged into the network (wired rather than wireless) and ran the same test before making the change:

erosenbe-mac:~ erosenbe$ ping
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=56 time=38.520 ms
64 bytes from icmp_seq=1 ttl=56 time=38.820 ms
64 bytes from icmp_seq=2 ttl=56 time=39.110 ms
64 bytes from icmp_seq=3 ttl=56 time=39.335 ms
64 bytes from icmp_seq=4 ttl=56 time=39.174 ms
64 bytes from icmp_seq=5 ttl=56 time=39.575 ms
64 bytes from icmp_seq=6 ttl=56 time=38.693 ms
64 bytes from icmp_seq=7 ttl=56 time=38.723 ms
64 bytes from icmp_seq=8 ttl=56 time=39.066 ms
64 bytes from icmp_seq=9 ttl=56 time=39.227 ms
64 bytes from icmp_seq=10 ttl=56 time=39.550 ms
— ping statistics —
11 packets transmitted, 11 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 38.520/39.072/39.575/0.333 ms
erosenbe-mac:~ erosenbe$

It is worth noting that the service is indeed ADSL2+ (even though I think technically ADSL2 can go to 12 megabit under the right conditions).  The upload speed is still extremely pitiful.  I expect more in this day and age.  My FiOS can do 25 megabit down and at least 10 megabit up (I think some plans include up to 25 megabit upload).

In this case, I am only a couple blocks from the downtown Portland Central office (PTLD69) so I am able to sync at the full line rate of 12 megabit:

Qwest DSL Modem Status Linked at 12 Megabit

So overall, it is cool that Qwest is finally offering service over 7 megabit, but I am disappointed that 12 megabit is the top end they are offering downtown.  I have heard they are offering 20 megabit elsewhere (perhaps out in the suburbs where they are competing with cable modems).  I have also heard that they are offering VDSL rather than ADSL2+ in some areas.  I cannot think of any reason not to offer speeds in excess of 12 megabit downtown, other than to avoid competing with their metro Ethernet over copper/fiber products and other high-margin services.

Integra Telecom offers pretty high-speed DSL these days, and they are even now offering MPLS over DSL (or at least that is what I have heard).  Qwest needs to catch up.

It is still disappointing that Qwest can’t muster the cash to deploy a real broadband network (i.e. fiber to the home/business).  They are getting their butts kicked by Comcast in residential, and by all the CLECs in commercial.  Hopefully when they get taken over by CenturyLink things will change, but at the moment I am not holding my breath.  I am glad to be out in Verizon (err, Frontier) FiOS land.  We shall see how that transition goes as well…



What type of server rack/cabinet should I buy?

October 17th, 2009 2 comments

Over the years I have run into any number of problems physically mounting servers and equipment into racks/cabinets/enclosures.  This is often a major headache, as it is not easy to change out your enclosure without taking everything offline, and oftentimes (in the case of colocation facilities) it is simply not an option.

I was asked yesterday by a colocation facility for my advice on what types of new cabinets they should buy, which started me thinking.  I decided to post my current recommendations online in the hopes that it is of use to others:

  • Any server cabinet you buy absolutely 100% must have front to back cooling with fully perforated doors (none of this lexan crud with small holes and fans).  Every last square inch of the front and back needs to be perforated.  Period.
  • Don’t even think about using a bottom-to-top cooling setup cabinet.  If your colo provider tries to give you one of these, run away screaming.  This type of design was intended for telco style equipment.  What happens is the gear in the top of the rack bakes.  With modern gear you need much more airflow than that model can provide.
  • Make sure the cabinet is deep enough for everything you intend to put in it (including bezels on the front and cables on the back).  This is the largest problem I run into with colocation facilities that have old racks.  The equipment has gotten longer, but a lot of facilities don’t want to spend the money to upgrade (and the longer racks take up more floor space).  From a quick survey of rack manufacturers, it looks like 42″ is the new standard depth that should work with pretty much everything.
  • Make sure any cabinet you buy has standard mount points for vertical-mount PDUs (Power Distribution Units).  With density increasing, vertical PDUs are the only way to go (and they put the power strip right where you need it so you can use extremely short power cables).
  • The industry standard is now to have square holes rather than round holes.  This keeps you from stripping out threading and ruining an entire rack rail.  You can put cage nuts in the holes if you need threads.  (As a side note, I have seen at least three types of round hole racks, two with different types of threading, and one with no threading at all – I am glad these are all going away – except in two post racks where threaded holes are still standard)
  • Vertical wire management channels, chases, brackets, are a plus.  Think about how you are going to run your power, network, fiber, etc… cables.
  • Make sure the cabinet is built heavy-duty enough to handle the increased density of modern equipment.  Older cabinets were not designed for today’s weight loads.
  • The cabinets you get need to have proper heavy-duty bolt down points for earthquake and stability purposes (so they don’t tip over on you when you pull out servers).  Think about how this will work in the context of raised floors (if you have raised floors).
  • Decide if you want combo dials on the cabinets or keyed entry.  I personally think colo facilities should offer both and let the customer decide.
  • Your standard-height cabinet is 42 rack units.  I don’t see any reason to deviate from this unless you can’t get something that tall into the building.  They also make taller ones, but who really thinks lifting servers above your head is a good idea, OSHA-wise?
  • Standard width these days is 24 inches.  If this is your own personal datacenter you could consider wider cabinets to provide a little more wiring space, but 24 is the industry norm (note that regardless of cabinet width, the rail width needs to be 19″ which is the standard).
  • Some cabinets come with split rear doors for reduced clearance which I find to be very convenient in many cases.  I really like the Dell ones.
  • The doors need to be very easy to remove and put back on (by ONE person) without hassle (like little nylon washers that fall out and get lost).  The doors should not bow or flex such that lining up the pins is a pain in the butt.  Dell gets good marks here too.
  • When you go to put equipment in your cabinet, if it has adjustable rails, make sure to adjust them properly BEFORE you install all your equipment.  Most server equipment can accept a certain range of depths these days so pick a depth that fits all your gear.

As usual, please post below if you have any comments/questions or shoot me an email!



Review of ATT Wireless HSDPA and Verizon Wireless EVDO Rev A.

September 16th, 2009 2 comments

Today I brought home a new ATT USBConnect Mercury card to test out the service in comparison to my trusty Dell Wireless 5720 EVDO Rev-A card built into my Latitude D630 (which is about 18 months old now).

Not wanting to pollute my primary work laptop with extra cruft, I installed the ATT card software on a spare Dell D800 I had lying around for test purposes.

Here is a screenshot of the ATT Communication Manager as connected from the master bedroom of my house in Beaverton/Hillsboro Oregon:

ATT Communication Manager

As you can see, the signal is decent, but not right under a tower.

The first test as always is to ping my favorite IP address.  Sorry for the long paste here, but it is important to see how the latency varies over time.  From this test (and numerous others earlier in the day from work) I was not very impressed with the latency of the card or the consistency in the latency (jitter).

C:\Documents and Settings\Administrator>ping -t
Pinging with 32 bytes of data:
Reply from bytes=32 time=318ms TTL=53
Reply from bytes=32 time=388ms TTL=53
Reply from bytes=32 time=386ms TTL=53
Reply from bytes=32 time=345ms TTL=53
Reply from bytes=32 time=384ms TTL=53
Reply from bytes=32 time=422ms TTL=53
Reply from bytes=32 time=311ms TTL=53
Reply from bytes=32 time=339ms TTL=53
Reply from bytes=32 time=338ms TTL=53
Reply from bytes=32 time=336ms TTL=53
Reply from bytes=32 time=365ms TTL=53
Reply from bytes=32 time=323ms TTL=53
Reply from bytes=32 time=362ms TTL=53
Reply from bytes=32 time=320ms TTL=53
Reply from bytes=32 time=409ms TTL=53
Reply from bytes=32 time=397ms TTL=53
Reply from bytes=32 time=276ms TTL=53
Reply from bytes=32 time=285ms TTL=53
Reply from bytes=32 time=353ms TTL=53
Reply from bytes=32 time=322ms TTL=53
Reply from bytes=32 time=350ms TTL=53
Reply from bytes=32 time=309ms TTL=53
Reply from bytes=32 time=417ms TTL=53
Reply from bytes=32 time=266ms TTL=53
Reply from bytes=32 time=304ms TTL=53
Reply from bytes=32 time=515ms TTL=53
Reply from bytes=32 time=94ms TTL=53
Reply from bytes=32 time=102ms TTL=53
Reply from bytes=32 time=101ms TTL=53
Reply from bytes=32 time=99ms TTL=53
Reply from bytes=32 time=1056ms TTL=53
Reply from bytes=32 time=369ms TTL=53
Reply from bytes=32 time=353ms TTL=53
Reply from bytes=32 time=431ms TTL=53
Reply from bytes=32 time=390ms TTL=53
Reply from bytes=32 time=308ms TTL=53
Reply from bytes=32 time=337ms TTL=53
Ping statistics for
    Packets: Sent = 37, Received = 37, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 94ms, Maximum = 1056ms, Average = 345ms
C:\Documents and Settings\Administrator>
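The jitter complaint above can be quantified directly from the raw samples; here are the first ten AT&T RTT values from the paste run through Python's statistics module:

```python
# Quantifying latency and jitter from the first ten AT&T RTT samples
# pasted above (values in milliseconds).

import statistics

rtts_ms = [318, 388, 386, 345, 384, 422, 311, 339, 338, 336]

print(min(rtts_ms), max(rtts_ms))    # 311 422
print(statistics.mean(rtts_ms))      # 356.7
# Population standard deviation: a rough single-number jitter figure
print(statistics.pstdev(rtts_ms))
```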

The next test up was to see how fast the download/upload performance would be.  For this purpose I used a speed test run from home:

ATT Test From Home

Hmm, that is certainly nothing to write home about, but not absolutely horrible for a mobile broadband card.

After checking latency and bandwidth, I moved on to test the maximum MTU size the connection would support.  Some applications are finicky about MTU sizes (especially UDP-based ones).  I found that on ATT Wireless, the largest packet size it would support was 1450 bytes (note that in the Windows ping tool below you specify this as the ICMP payload size of 1422, to which it adds 8 bytes of ICMP header and 20 bytes of IP header).

C:\Documents and Settings\Administrator>ping -f -l 1422
Pinging with 1422 bytes of data:
Reply from bytes=1422 time=1021ms TTL=53
Reply from bytes=1422 time=167ms TTL=53
Reply from bytes=1422 time=178ms TTL=53
Reply from bytes=1422 time=167ms TTL=53
Ping statistics for
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 167ms, Maximum = 1021ms, Average = 383ms
C:\Documents and Settings\Administrator>
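The packet-size bookkeeping behind that test is worth spelling out, since Windows ping's -l flag sets only the ICMP payload:

```python
# Packet-size arithmetic for the MTU test above: ping -l sets the ICMP
# payload only; the ICMP and IP headers are added on top of it.

ICMP_HEADER = 8
IP_HEADER = 20

payload = 1422
on_the_wire = payload + ICMP_HEADER + IP_HEADER
print(on_the_wire)  # 1450 -- the largest size that passed on ATT Wireless
```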

The Verizon test

Here is the same ping test from the verizon card in my Dell Latitude D630:

C:\Users\eric.rosenberry>ping -t
Pinging with 32 bytes of data:
Reply from bytes=32 time=87ms TTL=49
Reply from bytes=32 time=87ms TTL=49
Reply from bytes=32 time=90ms TTL=49
Reply from bytes=32 time=87ms TTL=49
Reply from bytes=32 time=85ms TTL=49
Reply from bytes=32 time=92ms TTL=49
Reply from bytes=32 time=88ms TTL=49
Reply from bytes=32 time=86ms TTL=49
Reply from bytes=32 time=90ms TTL=49
Reply from bytes=32 time=84ms TTL=49
Reply from bytes=32 time=89ms TTL=49
Reply from bytes=32 time=88ms TTL=49
Reply from bytes=32 time=86ms TTL=49
Reply from bytes=32 time=91ms TTL=49
Reply from bytes=32 time=87ms TTL=49
Reply from bytes=32 time=113ms TTL=49
Reply from bytes=32 time=97ms TTL=49
Reply from bytes=32 time=95ms TTL=49
Reply from bytes=32 time=85ms TTL=49
Reply from bytes=32 time=90ms TTL=49
Reply from bytes=32 time=88ms TTL=49
Reply from bytes=32 time=88ms TTL=49
Reply from bytes=32 time=92ms TTL=49
Reply from bytes=32 time=92ms TTL=49
Reply from bytes=32 time=87ms TTL=49
Reply from bytes=32 time=85ms TTL=49
Reply from bytes=32 time=89ms TTL=49
Reply from bytes=32 time=86ms TTL=49
Reply from bytes=32 time=89ms TTL=49
Reply from bytes=32 time=86ms TTL=49
Reply from bytes=32 time=89ms TTL=49
Reply from bytes=32 time=85ms TTL=49
Ping statistics for
    Packets: Sent = 32, Received = 32, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 84ms, Maximum = 113ms, Average = 89ms
Control-C
^C
C:\Users\eric.rosenberry>

Note how much lower that is than the ATT HSDPA card!  An average of 89ms vs. 345ms!  Verizon’s maximum ping time (113ms) is not far above ATT’s minimum (94ms)!

How about the speed test?

Verizon Wireless Speedtest From Home

It is still not great, but it is a bit better than ATT.  I often wonder how many of these speed test sites actually have enough bandwidth to provide realistic tests?

When testing the MTU size capabilities, I was pleasantly surprised to find that the Verizon EVDO card allowed full 1500-byte packets!

Wrap up

I feel I must point out the issues with my testing before drawing any conclusions:

  • My EVDO card is built into my Dell D630 and has diversity antennas spatially separated in the screen that are probably larger than the one in the ATT dongle.  I could get a card that would work with ATT for my D630 to do more apples-to-apples testing.
  • I only tested from a single location.  Wireless service is incredibly location dependent.  ATT could happen to have weaker signal at this particular location than Verizon (though I actually got similar results at my office earlier in the day in downtown Portland which provides a second data point).
  • Load on my particular tower may happen to have been heavy at the time of my testing, so these types of results are only useful in aggregate when enough samples are collected to be statistically significant.

It is also worth noting that anecdotally, browsing on the ATT card was painfully slow, where browsing on the Verizon card was pretty good.  I started this blog post on the ATT card and then switched over to the Verizon card to finish it.

Other things that can impact performance include:

  • Type of gear deployed on the tower you are connecting to (is it HSDPA capable)
  • Signal strength
  • Number of carrier channels deployed on the tower in question
  • Number of other users on the tower (density)
  • The time of day (rush hour gets a lot of use)
  • The back-haul capacity from the tower (is it connected by one or two T-1’s, or do they have DS-3/OC-3 back-hauls?)

So my conclusion?  If purchasing a broadband card for myself I would definitely choose the Verizon card over the ATT card, no questions asked.  In Portland, Oregon, Verizon Wireless has a network that is extremely difficult to beat!



Host/System and Device/Router Naming Standards

May 21st, 2009 No comments

At each organization I am exposed to, it is interesting to see the various naming schemes that have been employed over time.  I most often find a hodgepodge of different naming standards that have been poorly followed.  Well-thought-out naming standards make a huge difference in the ease of maintaining your environment.

So how should you come up with a device naming standard?  I won’t profess to give you a one-size-fits-all solution, but instead I will outline a number of the pitfalls to device naming that I have run into in order to help you devise your own convention.

Uses for a name

In IT, device names serve three primary roles:

  • They are a unique identifier used to define a device (note that a MAC address or serial number could be used as a unique ID, though it provides no other information about the device and is difficult for humans to work with).
  • When entered into DNS, they provide an easy way to connect to a given device by typing its name from scratch, or device names may be selected from a list in a program such as an SSH client.
  • When you see a device name in a log or a document, it should be obvious what the device in question is, and the name should convey critical information about the device.

Naming goals

  • Names should be as short as possible, easy to type and read, but with enough information to be unique and descriptive.
  • Make things as intuitive as possible.  If you have an IT contractor working in your environment, it should be pretty obvious to them what the various servers do based solely on the machine names.
  • Your naming system should be flexible enough to allow for growth.

Naming structure

  • Generally you should start the name with the most significant identifier, and work your way through to the least significant identifier.   This makes sorting useful.
  • Think about how long each field in the name should be.  It needs to be long enough to hold unique entries for as many items of that type as will likely be utilized, given the character set defined for that field (i.e. if you have a two-character alpha field for site code, you can have a max of 676 sites, though if you want them to be intuitive you probably don’t want to use the XZ designator).  A numeric-only field has fewer options; 0-9 yields only 10 possibilities per digit.
  • Within a name you might choose to include delimiters between fields in order to separate them, or just for stylistic reasons.  This makes names longer to type (and sometimes too long to fit in documentation, etc.), but delimiters are often worthwhile from a readability standpoint.  PRF5A is a lot harder to read than PR-F5-A.  Most special characters are banned from device names, though the dash “-” seems pretty well supported.
  • You can only have one variable-length field in a name, unless you are using delimiters, or adjacent fields are obviously separate because some are alpha-only and others are numeric-only.
  • Note that not everything needs to have names of the same length – It is ok to name one server PDXFILE1 and another PDXSAN1.
  • Not everything needs to follow exactly the same nomenclature – routers and network hardware can follow one standard, while servers may follow another.  THIS IS OK!  As long as they don’t conflict…
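The structure rules above can be made concrete with a small sketch. This parses names built as site code + role + ordinal; the site and role values are hypothetical examples, not a prescribed standard:

```python
import re

# Site code: fixed-length alpha; role: variable-length alpha; ordinal: numeric.
# Two adjacent variable-length fields are unambiguous here because one is
# alpha-only and the other numeric-only (per the rule above).
NAME_RE = re.compile(r"^(?P<site>[A-Z]{3})(?P<role>[A-Z]+)(?P<num>\d+)$")

def parse_name(name: str):
    """Split a device name into its fields, or return None if nonconforming."""
    m = NAME_RE.match(name.upper())
    return m.groupdict() if m else None

print(parse_name("pdxfile1"))  # {'site': 'PDX', 'role': 'FILE', 'num': '1'}
print(parse_name("pdxsan1"))   # {'site': 'PDX', 'role': 'SAN', 'num': '1'}
print(parse_name("bad name"))  # None
```

Note that PDXFILE1 and PDXSAN1 parse fine despite being different lengths, which is exactly the point of the "not everything needs the same length" rule.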

Know your organization

  • Think about how your company will grow.  Might you ever have more than one VMWare server?
  • Unless there is no way your business will ever have more than one site (what if you were acquired) I highly recommend your names start with a site code (more on this below).
  • Not everybody has the same needs!  You don’t have to force the same scheme on every organization!  A small manufacturing company has different needs from a global multinational, and you can get away with much simpler names in the small company.

Who is your audience?

  • Names should be descriptive to your audience.  Who is your audience?  Users?  IT staff?
  • In an optimal world, machine names should not be seen by users.  In end-user facing situations I recommend using CNAMEs wherever possible to alias “service names” to “server names” (e.g. a name like mail.example.com could be CNAME’d to the real server name).  Note that this often falls down in Windows, since Outlook, for instance, insists on showing the user the *real* server name…  The same goes for file server names.
  • Internet-facing services should never expose machine names to users.  Users are likely connecting to a firewall and/or load balancer first anyway, so this is easy to hide.
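The CNAME aliasing described above might look like this in a BIND-style zone fragment (all names and addresses are hypothetical):

```
; example.com zone fragment (hypothetical names and addresses)
pdxmail1    IN  A         ; the real machine name
mail        IN  CNAME  pdxmail1       ; the service name users configure
pdxfile1    IN  A
files       IN  CNAME  pdxfile1
```

If the mail service later moves to a different box, only the CNAME target changes; users keep pointing at the same service name.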

High-level recommendations

  • Don’t give things nonsensical names – this is not 1990 (yeah, I know I broke this rule when naming …)
  • Avoid putting unnecessary junk in server names – I don’t really care what the model number of the server is (in most cases), or even whether it is a VMWare guest or a physical server (this matters less and less as time goes on).
  • Don’t put a version number of software in the name as you will likely upgrade it! (I have seen servers named Win2k that are running Windows 2003 Server)
  • If the server might end up running multiple applications don’t put the name of one piece of software in the name, call it an application server or something…  (I have seen a server named backupexec that was running netbackup…)
  • In a software development shop (or even a non-software shop), you will likely have multiple copies of similar environments for testing purposes: PRODUCTION, QA, DEVELOPMENT, STAGING, etc…  This is a good thing to include in the name, as you typically have similar server names in each and you don’t want to inadvertently make a change in Production when you intended to make it in QA.
  • Usually it makes sense to end service names with a number, as you might have multiple servers performing the same function – and even if you only have a single server in that role today, you might later move it to another physical server, which you designate with a different number on the end.  Many environments put two digits on the end of server names, but how often do you really have more than 9 servers of the same type at one site?  It may be OK for some servers to have a single-digit number on the end while others have two digits.

Site codes

In most organizations I recommend the use of site codes as even single-site companies often end up with remote sales offices, disaster recovery datacenters, etc…

The goal with site codes is to choose an identifier that both people from the site in question and others far away can easily identify as belonging to a given location.  I have often struggled with this, as there is no standard and there is lots of potential for confusion and overlap.

You must decide how long you want your site codes to be.  I know Intel used to use two-character codes.  Many organizations choose three-character codes, which conveniently enough correspond with airport codes.

There are a couple of issues with airport codes, however:

  • It is not obvious which city some airport codes refer to
  • You will often have multiple sites within the serving area of a single airport

Note that not all site codes have to be the same length (depending on your name structure).  At the last company I worked for, I gave the large headquarters site in each region a three-character code, and the smaller satellite sites got five-character codes that began with the code of the region in which they were located: i.e. PDX was the headquarters site and PDXPC was the Pacific Center satellite site.
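That two-tier scheme can be enforced with a few lines of code.  A sketch, with made-up region codes:

```python
import re

# Hypothetical validator for the two-tier site code scheme described above:
# three-letter codes for regional headquarters sites, and five-letter codes
# for satellites that must begin with the code of their region.
HQ_CODES = {"PDX", "LON", "SIN"}  # example regions, not real data

def valid_site_code(code: str) -> bool:
    if re.fullmatch(r"[A-Z]{3}", code):
        return code in HQ_CODES          # a headquarters site
    if re.fullmatch(r"[A-Z]{5}", code):
        return code[:3] in HQ_CODES      # a satellite under a known region
    return False

assert valid_site_code("PDX")        # headquarters
assert valid_site_code("PDXPC")      # Pacific Center satellite under PDX
assert not valid_site_code("XXXPC")  # unknown region
assert not valid_site_code("PDXP")   # wrong length
```

A check like this in a provisioning script keeps typo’d site codes from quietly entering the namespace.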

A few other notes

Two situations to consider: a device named after a department, where the department moves elsewhere physically but the device stays; or a device named after a building, where the company moves to another facility and takes the device (and its name) along.  Sometimes you must decide whether a name stays sticky with the company/department or with the physical facility.

What is the timespan that your naming scheme must be good for?  I doubt a single site company is going to become a multinational overnight…  Your average IT device lasts 3-7 years so your naming scheme can easily change at replacement time to handle growth.

You might need to consider naming of devices with multiple network interfaces, each with different IP’s.

  • Windows is dumb: by default it wants to register every interface under the same name in DNS.  This can lead to issues if not all networks are directly reachable by all hosts accessing the device.
  • Solaris is interesting in that it wants each interface named differently.  In this case I recommend making the main server name map to the “primary” interface (i.e. probably the one you set the default gateway on) and then use <hostname>-xx for additional interfaces where -xx is something like -bk for backups, etc…
  • Routers should have different forward and reverse names for each interface, plus forward and reverse names for a loopback IP (e.g. a per-interface name like ge-0-0.router1.example.com, and just plain router1.example.com for the loopback IP).

In one environment I have worked in, we name all of our iLOs, ILOMs, DRACs, etc… <hostname>-SC (SC = service controller).  This makes it easy to log in to one in an emergency.  Just don’t accidentally cross the DNS entries, or else you might power cycle the wrong box!

You must be careful not to use special characters in device names.  Note that different devices and directory systems may have different “special characters”.  Think about Windows names, Unix names, router names, DNS names, WINS names, etc…  Each type of name has different restrictions on which characters and symbols are allowed, and on the minimum and maximum lengths.  Some names may be case sensitive, but most are not.

I personally find uppercase names easier to read in documentation and on screen, but that is in many cases a matter of personal preference, and in others may be enforced by the system in/on which the name is set (i.e. DNS).

IP addressing in relation to names

This is a topic worthy of another complete blog post, but I will point out just a couple of key recommendations here.

Since private IP address space is “free” and “plentiful”, I generally build my subnets with plenty of IP space so that I can space machines widely and align their last number with their server number.  Most often I will use /23 subnets for servers and clients, which gives me 512 IPs (minus a few for network, broadcast, and default gateway).  As an example, you could have a server called PDXESX1 with a last octet of .1, PDXESX2 at .2, PDXESX3 at .3, etc…

On a somewhat unrelated note, in my opinion the default gateway should always be the lowest usable IP in the range, because that is intuitive for anyone who follows after you.  Along the same lines, I am a fan of always making my DNS servers .11 and .12 in a given subnet (or .11 in one subnet and .11 in another subnet).

Is this the right time to change?

Is change really needed?  Or is it simply change for change’s sake?

The natural tendency for each new “owner” of a network is to want to do things their way with a naming standard that makes sense to them.  Don’t keep changing your naming schemes!  Even if the existing one is not perfect, it may be better overall just to leave it as is!

You generally want to avoid changing a machine’s name after it has been set – the name gets referenced all over the place, and unless your process to change it is perfect, it will get missed somewhere and cause confusion down the road…  Think about all of the places you might have to change the name:

  • On the machine itself (hostname, hosts files, application configurations…)
  • In your IP address spreadsheets
  • In your inventory system
  • In DNS entries (including CNAME’s that reference the host name)
  • On the labels stuck to the machine physically
  • Your labels in the network switch (and supporting documentation)
  • Labels on the cables attached to the server – network, power, etc…
  • In your monitoring software
  • On your KVM switch
  • In description fields on your remote power cycle devices (PDUs)
  • On your network diagrams and documentation
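If you do end up renaming a machine, a sweep of your documentation and configuration trees for the old name catches many of the spots above.  A minimal sketch (the paths and names are hypothetical):

```python
import pathlib

# Hypothetical helper: before (or after) renaming a server, scan a
# directory tree for lingering references to the old name.
def find_references(root: str, old_name: str) -> list[str]:
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        for lineno, line in enumerate(text.splitlines(), 1):
            if old_name.lower() in line.lower():
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits

# e.g. find_references("/srv/docs", "PDXFILE1")
```

This only covers text files on disk, of course – monitoring systems, DNS, and physical labels still need to be checked by hand.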

Final thoughts

While this may be a bit overwhelming, it is crucial to consider all of these aspects ahead of time in order to avoid needing to change your standard down the road.  I hope this has given you an overview of many of the pitfalls of naming I have run into during my career such that you can avoid the same mistakes!

As always, if you have any additional comments, feel free to post them here, or shoot me an email and I may include them in a future post.


Categories: Uncategorized Tags:

Sun X4100 and X4200 Lower Non-critical going low

April 29th, 2009 4 comments

For over a year now our team of oncall engineers has been tortured by an error generated periodically by our racks of Sun X4100 and X4200 servers.  These alerts come from the integrated ILOMs which we have set to syslog to our EM7 monitoring platform.  Usually about once a week one of our many servers will report something along the lines of the following error:

FIRST REPORTED: 2009-04-29 14:50:33
LAST REPORTED: 2009-04-29 14:50:34
SOURCE: Syslog
DEVICE: prsun1-sc

Full message text for most recent occurrence:

<130>logmgr: ID = 343 : Wed Apr 29 14:52:39 2009 : IPMI : Log : critical : ID =   7f : 04/29/2009 : 14:52:39 : Voltage : mb.v_+12v : Lower Non-critical going high : reading 12.16 > threshold 10.96 Volts

This event has not been acknowledged

Sent by notification policy: Major/Critical Events

The EM7 has received a CRITICAL syslog notification from this server.

If you go look at the event log on the ILOM it looks more like this:

04/29/2009 : 14:52:39 : Voltage : mb.v_+12v : Lower Non-critical going high : reading 12.16 > threshold 10.96 Volts
04/29/2009 : 14:52:38 : Voltage : mb.v_+12v : Lower Non-critical going low : reading 7.37 < threshold 10.96 Volts

Looking at the event log of another server with the same type of issue, the error is for a different sensor, yet it has the same behavior:

02/21/2009 : 06:25:01 : Voltage : p1.v_vddio : Lower Non-critical going high : reading 1.85 > threshold 1.60 Volts
02/21/2009 : 06:24:55 : Voltage : p1.v_vddio : Lower Non-critical going low : reading 0.97 < threshold 1.60 Volts

I should note that these errors *never* seem to turn out to be anything but noise…  We all just acknowledge the alarm and go back to bed.
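Since the “going low” event is followed a second later by a matching “going high”, one way to cut this kind of pager noise is to suppress low-events that clear within a short window.  A rough sketch – the line format below matches the ILOM lines quoted above, and the five-second window is an arbitrary choice:

```python
from datetime import datetime, timedelta

# Parse an ILOM event-log line of the form quoted above into
# (timestamp, sensor, event description).
def parse(line: str):
    parts = [p.strip() for p in line.split(" : ")]
    ts = datetime.strptime(parts[0] + " " + parts[1], "%m/%d/%Y %H:%M:%S")
    return ts, parts[3], parts[4]

# Return the sensors whose "going low" cleared ("going high")
# within the window -- i.e. events that are probably just noise.
def transient_sensors(lines, window=timedelta(seconds=5)):
    events = sorted(parse(l) for l in lines)
    noise = set()
    for i, (ts, sensor, event) in enumerate(events):
        if "going low" in event:
            for ts2, sensor2, event2 in events[i + 1:]:
                if (sensor2 == sensor and "going high" in event2
                        and ts2 - ts <= window):
                    noise.add(sensor)
    return noise

log = [
    "04/29/2009 : 14:52:38 : Voltage : mb.v_+12v : Lower Non-critical going low : reading 7.37 < threshold 10.96 Volts",
    "04/29/2009 : 14:52:39 : Voltage : mb.v_+12v : Lower Non-critical going high : reading 12.16 > threshold 10.96 Volts",
]
assert transient_sensors(log) == {"mb.v_+12v"}
```

A filter like this in front of the monitoring platform would page only on voltage events that actually persist, rather than one-second blips.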

This week I finally got annoyed enough to look further into this issue, as I do participate in the on-call rotation which covers these systems (even though I don’t *own* them).

After doing some digging, I found the following obscure note in the release notes for some firmware update bundle which includes ILOM firmware:

ILOM Service Processor firmware
  * Fixed the bug of lower non-critical voltage sense issue.

So I have gone ahead and upgraded a couple of my servers thus far.  Hopefully this will resolve the issue!

I have to get in a couple of jabs at Sun here since I burned an entire day today messing with their servers:

  • When you upload the ILOM firmware (which also includes a system BIOS upgrade), your server may get powered off during the upgrade without any warning.
  • When you upgrade to a 2.0 BIOS from a 1.x version, you have to manually clear the CMOS according to their release notes (the update utility seriously could not do this for us?)
  • And my personal favorite, their documentation makes some obscure reference to some bug you might run into and so they tell you that you must upload the new firmware *twice* in order to ensure it applied properly.  Mind you they don’t tell you what the problem you might run into is, and they give you no way to tell if the person that upgraded the firmware for you previously did the double firmware update properly.
  • After the ILOM firmware and system BIOS updates I did today, the servers somehow managed to change the device IDs (or something) on the onboard NVIDIA NICs in such a way that Windows recognized them as new NICs (5 and 6).  This caused them to lose all IP settings, and I had to log in through the ILOM and reset them.  This happened on both of the servers I upgraded.
  • To upgrade the RAID card firmware/BIOS you must boot the server from a CD that runs DOS.  Note that on a Dell box you drop in the Openmanage CD, it scans your system to determine what needs updating to get you to a “known good set” of drivers, and you click the go button.  It takes care of all Firmware/Drivers/Software for you.
  • The LSI software for Windows to monitor the built in RAID card is a joke.  It looks like an intern wrote it.
  • At least Sun does provide a streamlined Windows driver installer package, this did work well.

Overall, I am not completely thrilled with Sun’s x86 hardware lines, though I suppose things may be better if you are a Solaris-on-x86 shop.


UPDATE 5/13/09

I got another voltage error on one of my fully updated servers.  I have called Sun and opened another case on this, though so far Tier 1 and Tier 2 techs do not seem to have any ideas as to what is causing this issue.  I sent them a bunch of output from the ipmi tool that they are looking through.

ID = 1 : 05/10/2009 : 23:58:42 : Voltage : p1.v_vtt : Upper Non-critical going high : reading 1.79 > threshold 1.00 Volts

I should also note that after the firmware updates, one of the machines is now reporting ECC errors.  This makes me wonder if the previous firmware was not properly reporting them.  We have had almost zero RAM problems with our dozens of Sun x86 servers, which makes me worry that they have just been hiding their problems.  I must say the server handled the failure gracefully: it was getting double-bit (uncorrectable) ECC errors, and so upon boot it disabled the two (of four) offending DIMMs.  Very nice.

Also, I would like to take a moment to comment on Sun’s build quality in the x4100 and x4200 servers.  I opened a couple of them up today for the first time and I must say, I am *very* impressed with the physical build quality.  Sun has some very talented hardware engineers (almost over-built I would say).  The servers are made from some heavy gauge metal among other things.

So while I have changed my mind a bit on Sun’s build quality, they are certainly lacking some of the finer touches needed for x86 servers.  Their out-of-band management controllers (previously ALOMs, now iLOMs) have been quite the fiasco for us.  It is also a royal pain to bring all the different firmwares/drivers up to “known good sets” – Dell has quite a nice tool for this.

One of the techs also mentioned that there is a firmware update for the power supplies to keep them from powering the machine off in the event of a momentary power loss (such as when a UPS kicks in).  Apparently they are programmed to power down after 20ms of lost power, when they should be able to ride through more than 100ms.

Categories: Uncategorized Tags:

Verizon and Verizon Business don’t peer in Portland

April 28th, 2009 1 comment

I discovered last night that Verizon Business (aka UUNET/MCI, AS701) and Verizon proper (i.e. the Local Exchange Carrier here in Portland, AS19262) don’t appear to peer here.  That is a major shame, since I am on Verizon FiOS and I can’t even reach other businesses that use Verizon Business as their ISP here in Portland without bouncing off Seattle.

Check out this traceroute from my router on my FiOS connection to SilverStar Telecom who uses Verizon Business as one upstream:


Type escape sequence to abort.
Tracing the route to (

  1 ( 4 msec 4 msec 4 msec
  2 ( 4 msec 4 msec 4 msec
  3 ( 8 msec 8 msec 8 msec
  4 ( 8 msec 8 msec 8 msec
  5 ( 12 msec 16 msec 12 msec
  6 POS6-0-0.GW9.POR3.ALTER.NET ( 12 msec 16 msec 12 msec
  7 ( 12 msec 16 msec 12 msec
  8 ( 12 msec 16 msec 12 msec
  9 ( 12 msec 16 msec 16 msec

What a bummer.  I hope they rectify this situation soon!


Categories: Uncategorized Tags:

WordPress running on DreamHost

March 7th, 2009 No comments

I finally got around to putting this domain to use this evening.  I have long been looking for an outlet to document the IT challenges I run across on a daily basis, in a place where Google can index them.  I frequently make use of the postings of others, and I intend to use this blog as a way to give back to the community.

Not wanting to bother with hosting my own server anymore, I went ahead and signed up for a DreamHost account and loaded WordPress on it.  I must say I was pleasantly surprised at how painless the entire process was.  Kudos to DreamHost and WordPress for understanding the importance of workflow design.  I even installed WordPress by hand (they do have a 1-click installer available if you prefer), which was incredibly simple.


Categories: Uncategorized Tags: