Telecom Informer

by The Prophet

Hello, and greetings from the Central Office!

I'm on Shaw Island, one island over from the home of ToorCamp, and it is snowing.  That's rare here, but it has been an unusually cold winter so far.  There have been windstorms and snow and ice, all of which have wreaked havoc on our aging and rickety outside plant.  I'm here tasked with figuring out what, exactly, we're going to do about it.  Who am I kidding, though?  The answer is probably nothing.

A phone line, conceptually, used to be a single copper pair, which ran all the way from your phone to the frame in the central office.

In reality, it was far more complicated than that: your inside wiring (which was your responsibility) interfaced with the telephone company's outside plant at the SNI or TNI (the Standard or Telephone Network Interface, typically a box on the side of your house).  On the telephone company's side, a drop cable ran from your premises to a splice enclosure (these usually look like a post, serving multiple houses in a neighborhood), which connected via a distribution cable to a serving area interface (the green boxes you typically see at the entrance to a neighborhood), which in turn connected via a feeder cable to the central office frame.

So, it wasn't really just one cable - it was a patchwork of cables spliced together, which formed one continuous circuit between your telephone and the central office.
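
To make that concrete, here's a toy model of the loop in Python.  The segment names follow the description above, but the lengths are made-up illustrative values, not measurements from any real route:

    # A toy model of the loop described above: each hop is a cable
    # segment, and every boundary between segments is a splice or
    # terminal where a fault can hide.
    loop = [
        ("inside wiring",      0.05),  # miles; your side of the SNI/TNI
        ("drop cable",         0.10),  # SNI to the splice enclosure
        ("distribution cable", 1.00),  # splice enclosure to the SAI
        ("feeder cable",       4.00),  # SAI to the central office frame
    ]

    total_miles = sum(miles for _, miles in loop)
    # Every segment boundary, from the jack to the CO frame, is an
    # interconnection point.
    interconnection_points = len(loop) + 1

    print(f"End-to-end loop: {total_miles:.2f} miles")
    print(f"Interconnection points: {interconnection_points}")

Four cables, five places for trouble to hide - and that's a simple loop.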

That's a long way to push electrons.  It might be five miles or more.  Over that distance, both resistance and capacitance add up, attenuating the signal.  The pairs are twisted, too, which makes the electrical path about three percent longer than a straight run and adds yet more attenuation.  We're also, due to runaway global warming, seeing higher temperatures in the Pacific Northwest than networks were designed for (and this goes for every network, from electricity to cable TV to telephone to wireless).
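
For a feel of the numbers, here's a rough back-of-the-envelope sketch in Python.  The constants are nominal textbook values for 24-gauge exchange cable (roughly 51 ohms of loop resistance per thousand feet and 83 nF of mutual capacitance per mile), not measurements from any particular route:

    # Back-of-the-envelope electrical characteristics of a long rural loop.
    LOOP_OHMS_PER_KFT = 51.4   # 24 AWG, both conductors, around 20 C
    CAP_NF_PER_MILE = 83.0     # nominal mutual capacitance
    TWIST_FACTOR = 1.03        # ~3 percent extra length from twisting

    route_miles = 5.0
    electrical_miles = route_miles * TWIST_FACTOR
    loop_feet = electrical_miles * 5280

    resistance_ohms = (loop_feet / 1000) * LOOP_OHMS_PER_KFT
    capacitance_nf = electrical_miles * CAP_NF_PER_MILE

    print(f"Electrical length:  {loop_feet:,.0f} ft")
    print(f"Loop resistance:    {resistance_ohms:,.0f} ohms")
    print(f"Mutual capacitance: {capacitance_nf:,.0f} nF")

That five mile loop lands around 1,400 ohms - past the classic 1,300 ohm design limit for plain telephone service, which is why long loops historically got coarser gauge wire or loading.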

In our case, higher temperatures accelerate the deterioration of cable sheathing.  It used to be that five percent of the time, a fault was in the aerial or underground portion of a cable, and 95 percent of the time, it was at an interconnection point.  That's no longer true; it's now closer to ten percent of the time that the fault is up a pole, or somewhere underground.  These faults are harder to find and much harder to fix.  Why?  Nearly every outside plant component of the telephone system - from poles to cables and beyond - is past its useful life.

What's more, there are capacitors inline throughout the network and, if you own an old electronic device, you have probably experienced a capacitor that has failed from age.  This happens in the telephone system too.  Fortunately, these failures are easier to find, but due to supply chain disruptions, the parts can be very difficult and expensive to come by.  Naturally, phone companies dramatically reduced their inventories of spare parts to save money starting in the early 2000s.  I have heard of outside plant technicians who, when faced with a shortage of spare parts, will borrow capacitors from a part of the network with fewer subscribers (potentially causing an outage there) in order to restore service in an area with more subscribers.

Overall, it's really complicated to run a network that sits outside in the weather, with trees falling on it, deer pissing on it, and the occasional meth head stealing copper from it (yes, this really happens).  Funny story about that.  One genius thought it was a good idea to cut a 2400-pair PE-89 cable.  These are 24-gauge, filled with icky-pic, and weigh over ten pounds per foot.  When his buddy made the cut, the cable dropped straight down and clocked him in the head, splitting his forehead open.  The techs found him in the gutter, bleeding and out cold.  Miraculously, he survived.

If all of this sounds unsustainable, it is.

And naturally, the company doesn't really want to invest much (if anything) in fixing problems, because the network is obsolete.  They don't make money selling traditional telephone service.  All of the money these days is in selling broadband, which is not only unregulated but mostly relies on different technologies.  Fiber to the node is going in everywhere.  The way this works is that new fiber optic cable is run from the central office to each Serving Area Interface (SAI), where a Digital Subscriber Line Access Multiplexer (DSLAM) and a SIP gateway are installed.  The existing copper cabling is kept only for "last mile" connectivity to nearby buildings.  It sounds great in theory, but that copper is still decades old and has been deteriorating as much as the rest of the network (arguably even more), so it's a stopgap at best.
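
The appeal is easy to see on paper.  For uniform cable, attenuation in decibels scales linearly with loop length, so shortening the copper cuts the loss proportionally.  Here's an illustrative sketch - the distances and the per-thousand-feet loss figure are assumptions for the sake of the arithmetic, not data from any real route:

    # Why fiber to the node helps at all: dB loss is proportional to
    # length for uniform cable, so less copper means less loss.
    DB_LOSS_PER_KFT = 1.5   # hypothetical loss per 1,000 ft at a DSL frequency

    full_loop_kft = 18.0    # all-copper loop, CO frame to the house
    fttn_loop_kft = 3.0     # copper remaining once fiber reaches the SAI

    for label, kft in [("All copper", full_loop_kft), ("FTTN", fttn_loop_kft)]:
        print(f"{label:10s}: {kft * DB_LOSS_PER_KFT:.1f} dB of copper loss")

Less loss leaves headroom for higher DSL rates - but every foot of the copper that remains is the same aging distribution plant described above.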

Then again, telecom executives seem to be treating fiber optic cable as a future-proof technology that will never wear out or require maintenance, which is completely inaccurate.  The lifespan of fiber optic cable is shorter than that of copper cabling!  All of this stuff will need to be replaced in 30 years, if technology hasn't passed it by before then.

What's the future?

Well, right now, in places like Shaw Island, it's the present, which is essentially the past.  This will be among the last places to get much additional investment.  There's not much here: a couple of convents, a community center, a ferry dock, a general store, and some houses.  It'll be far down the priority list.  As beautiful as the place is, I really wonder what I'm even doing here.

A more fun outside plant implementation was operated by the phreaks of Shadytel at this year's ToorCamp hacker camp on neighboring Orcas Island.  These folks show up and operate a virtual phone company, offering free landline telephone service that lets hackers make phone calls from their campsites (assuming they solve the puzzle required to initiate service).

Two AT&T Definity PBXs were installed, serving as central offices, one each at upper and lower camps.  Two trunks ran between them (T1, digital, for switching between exchanges, using High-Bit-Rate Digital Subscriber Line [HDSL] as transport).  This provided both redundancy and spare capacity in the event of anticipated hacker shenanigans (such as attempting to ring every telephone at ToorCamp at once).

For local distribution, T1s were run to multiplexers distributed throughout the camp.  Each mux terminated two T1s and held up to six line cards, with each line card supporting up to eight subscriber lines.  The lines were then run over Category 5 cable to distribution points throughout each camping area, and from there, campers would run their own phone lines to connect to the network.
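
If you run the numbers on that layout (assuming standard T1 framing of 24 DS0 voice channels per span, which is the usual arrangement), the capacity works out exactly:

    # Sanity check of the ToorCamp distribution capacity.
    DS0_PER_T1 = 24          # standard T1 framing: 24 voice channels
    t1s_per_mux = 2
    cards_per_mux = 6
    lines_per_card = 8

    channels = t1s_per_mux * DS0_PER_T1               # DS0s back to the PBX
    subscriber_lines = cards_per_mux * lines_per_card

    print(f"Channels toward the switch: {channels}")          # 48
    print(f"Subscriber lines served:    {subscriber_lines}")  # 48

48 lines onto 48 channels: every subscriber line gets its own DS0, so a fully loaded mux is non-blocking even if someone does try to ring every phone at once.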

Naturally, many of the same problems that occur in the real world occurred at ToorCamp.

Splicing caused constant headaches.  Splices could get damp, or contaminated with dirt, or attacked by raccoons, and connectivity would be lost - potentially to large parts of camp.  Power could be interrupted (occasionally by campers unplugging network equipment to run kitchen appliances).  Aging equipment sometimes failed under the extreme conditions.  Fortunately, the HDSL equipment had status LEDs and was colocated with Shadytel operations, so when a connection dropped, it was visibly evident: the LED would turn red.  At least copper theft was never the root cause; hackers are a friendly crowd.

And with that, it appears I have a new nemesis: a squirrel.

One has evidently packed a splice enclosure full of nuts, and this is the root cause of the outage I'm dealing with.  Have a happy and safe winter, don't forget to check your tires, and let the gentle hum of a dial tone be your spirit guide.

I'll see you in the new year.
