ATM Failure Modes

This document describes some of the more interesting and sometimes subtle failure modes that I and my colleagues have encountered while debugging ATM device software and troubleshooting ATM networks. The equipment included ATM switches, multiplexers, and end stations from a variety of vendors, including Agile Networks, Ascend, Avaya, Fore, Lannet, Lucent Technologies, Marconi, and Yurie.

ILMI Registration Fails Following Clearing of LOS

Symptom

The end station does not reregister with the ATM switch following the clearing of a Loss of Signal (LOS) alarm. All subsequent SETUP attempts fail, typically with a Cause Code 3, “No route to destination”.

Abstract

The ATM Forum ILMI 4.0 specification requires that upon the alarming and subsequent clearing of LOS, the IME state machine, which implements the ILMI protocol, must restart. Specifically, an Event 6, Cold Start, should be injected into the state machine. However, the ATM switch did not feed LOS back into the state machine. So an LOS (e.g. a fiber break) between the end station and the ATM switch causes the IME state machine on the end station to transition through the Link Failing and Link Down states and into the Establishing state, while the switch IME state machine remains in the Verifying state.

Resetting the end station may work around this bug. The reset takes long enough that the IME in the switch times out and cold starts. The end station IME comes up in the Establishing state, and the two IME state machines are once again in sync.

ILMI sysUpTime Rollover Drops All SVCs

Symptom

All SVCs are RELEASEd, typically with a Cause Code 27 “Destination out of order” or Cause Code 41 “Temporary failure”. Subsequent SETUP attempts may fail with a Cause Code 3 “No route to destination” until the IME state machine in the end station reregisters with the ATM switch.

Abstract

The ATM Forum ILMI 4.0 specification calls for a MIB variable, sysUpTime, to be periodically transmitted between ILMI peers. This variable contains the duration of time, in 1/100ths of a second, that the IME has been operational. This is part of the connectivity verification process in the ILMI protocol. If the value of this variable ever "runs backwards", that is, is less than the prior value, the peer is to assume that the other side has restarted, and must itself restart. Restarting ILMI requires that all SVCs set up over the link managed by ILMI be torn down.

sysUpTime is a per-port MIB variable, not a system-wide variable. On an ATM switch, the sysUpTime epoch begins at zero when the ILMI link on that port is first instantiated. So it is possible that different ports on the same ATM switch have different notions of when sysUpTime rolls over.

The sysUpTime MIB variable is defined in RFC1213 to be of type TimeTicks, which is defined in RFC1155 to be a non-negative integer in the range of 0 to 4,294,967,295 (0xFFFFFFFF). In programming terms, this would be an unsigned 32-bit integer. On many (but not all) computers in C or C++ that would be an unsigned long variable.

Even if ILMI is implemented exactly according to the specification, sysUpTime will roll over from its maximum possible value 0xFFFFFFFF back to 0x00000000 after 497 days, 2 hours, 27 minutes, 52.95 seconds; that's 0xFFFFFFFF or 4,294,967,295 hundredths of a second.

ILMI is based on the SNMP protocol, developed for and typically used in the Internet Protocol (IP) domain, but adapted for ATM. The IP community acknowledges the problem with this rollover and conventional wisdom is that ILMI peers should recognize this rollover and not restart. However, how to recognize rollover is not currently required (or even discussed) in any standard, IP or ATM.

I have implemented rollover awareness on ATM end stations, but my experience is that it is not generally implemented on ATM switches. Until the standards change and those changes are disseminated in new software for all the ATM switches in the world, you can plan on losing all of your SVCs every 497 days on the typical ATM switch that can be said to conform to the ILMI standard. An ATM device that truly conforms to ILMI 4.0 (and RFC1213 and RFC1155) must interpret the "less than" clause in ILMI 4.0 to take counter rollover into account.

A typical bug in protocol stacks is a time conversion from some real-time clock in the underlying real-time OS to sysUpTime 1/100ths of a second. A little back-of-the-envelope calculation will convince you that it is almost impossible to do such a conversion without losing bits at the top or the bottom of the 32-bit unsigned integer.

Losing bits at the bottom (low order) just affects the granularity of the variable. For example, an end station may correctly report sysUpTime in 1/100ths of a second (10 ms units), but deliberately do so with one-second granularity, meaning that sysUpTime changes in increments of 100 units. This is generally not a problem.

Losing bits at the top (high order) causes the converted sysUpTime value to roll over prematurely. For example, suppose your ATM switch keeps native time not in 1/100ths (0.01) of a second, but instead in milliseconds, 1/1000ths (0.001) of a second, and converts to 1/100ths of a second to provide a value for sysUpTime. The value 0xFFFFFFFF (4,294,967,295) in milliseconds, when converted into the sysUpTime units of 1/100th of a second, becomes the value 0x19999999 (429,496,729). So your millisecond counter has 1/10th the "dynamic range" of a sysUpTime counter: it will roll over in 1/10th the time of a sysUpTime counter that keeps time in 1/100ths of a second, and any ATM device you talk to will infer a rollover in 1/10th the time that it should.

Worse, some ATM devices (like the ones for which I wrote software) are smart enough to infer a legitimate rollover event by recognizing a transition from a value on or near 0xFFFFFFFF to a value on or near 0x00000000. This false rollover makes the sysUpTime value go from 0x19999999 to 0x00000000, and hence defeats any such mechanism to avoid dropping all circuits during a legitimate rollover.

My personal software development experience is that it is nearly impossible to do a time conversion to sysUpTime from units other than 1/100ths of a second and not get into trouble. The applications I have developed for end stations keep time in exactly the same units and range as those required for the sysUpTime MIB variable, so no conversion is needed. The rollover-aware algorithm I developed uses unsigned computations and a comparison against timestamps taken from the internal system clock.

This rollover issue seems to crop up several times a year, in contexts outside of ATM or even SNMP as well.

Examples

An ATM switch had a bug that lost the top two bits, such that sysUpTime rolled over every 124 days, 6 hours, 36 minutes, 58.23 seconds, or 0x3FFFFFFF hundredths of a second.

An ATM switch had a bug which rolled the sysUpTime value back to zero about every 12 days: the infamous "cell stall" problem. The switch reported sysUpTime in units of 1/200ths of a second (a 2x factor). It did a time conversion from units of 1 ms instead of 10 ms (a 10x factor). And it used a signed variable instead of an unsigned variable (a 2x factor). So altogether, sysUpTime on the switch rolled over 2x10x2 or 40 times faster than normal.

A commercial third-party UNI protocol stack did not handle sysUpTime correctly due to a time conversion problem, which produced rollover about every 298 days. We fixed our application using this stack, but the bug is likely to exist in other ATM products which use this same stack.

A commercial third-party ILMI protocol stack could not decode the correctly encoded ASN.1 value for sysUpTime when the top bit was set (0x80000000 through 0xFFFFFFFF). We strongly suspect that it also emitted an incorrectly encoded sysUpTime value when it sent sysUpTime to its ATM switch. However, it could decode its own incorrectly encoded value, which is how we missed this in our own testing. Specifically, the problem is that the sysUpTime values 0x80000000 through 0xFFFFFFFF are correctly encoded in ASN.1 as 0x0080000000 and 0x00FFFFFFFF -- that's right, as five bytes. The ILMI stack failed to decode these ASN.1 values and tossed the SNMP GetResponse sysUpTime message as invalid. It then continued to send SNMP GetRequest sysUpTime messages in a vain attempt to get a value it liked. After thirty seconds, ILMI timed out and cold started, and all hell broke loose.

Asynchronous RELEASE Processing Causes SETUP Failures at High Call Volumes

Symptoms

High call volumes result in SVC SETUP failures with Cause Code 35 “Requested VPCI/VCI unavailable”.

Abstract

The ATM Forum UNI 3.1 specification states that when an ATM switch sends a RELEASE COMPLETE message acknowledging a RELEASE of an SVC, all resources associated with that SVC -- including the VCI -- should be available for reuse. Our experience with virtually every ATM switch is that this is not what happens. The switch sends a RELEASE COMPLETE while asynchronously releasing the resources associated with the SVC. Vendors do this to speed up their claimed rate of SVC SETUPs.

At low call volumes this isn't a problem, but at high call volumes the ATM switch serving as the network side of the IISP or PNNI inter-switch connection may reassign the VCI before its partner ATM switch serving as the user side has released it. The result is that the SETUP fails with Cause Code 35 “Requested VPCI/VCI unavailable”. (Note that even though PNNI is a symmetric protocol, there is still a user side and a network side. Unlike IISP, however, this is not chosen directly by administration but rather by which ATM switch has the higher PNNI node address.)

Examples

Some ATM switches assign VCIs round robin, others reuse VCIs immediately. The latter allocation algorithm exacerbates this problem.

Sometimes a fast ATM switch can cause this to occur when peered with a slower ATM switch, when the faster switch is the network side of the PNNI or IISP connection. We've had some luck fixing this by making the ATM switch with the slower CPU the network side.

PNNI Crank Back Not Implemented Causing SETUP Failures

Symptoms

SVC SETUPs fail with Cause Code 37 “User cell rate unavailable” even though bandwidth is available on other paths between the same two end points.

Abstract

The ATM switch implemented PNNI, but not PNNI Crank Back. Crank Back means that if an SVC SETUP fails due to lack of bandwidth (or indeed any other reason) on one path, PNNI will search for another path. Without Crank Back, it is possible to have a switch routing SVC SETUPs across multiple DS1s to the same destination, the intent being to load balance across the DS1s, and have a SETUP fail due to lack of bandwidth on one DS1 even though there was enough bandwidth on a parallel DS1.

IMA Induces Excessive Cell Jitter Leading to Cell Policing

Symptom

End station experiences cell loss. Data applications will experience excessive retransmissions. Voice applications will hear audible artifacts in the talk path. Modem connections using voice applications will fail.

Abstract

Inverse Multiplexing for ATM (IMA) is an ATM feature in which cells are multiplexed across multiple physical links and demultiplexed in the correct order on the far end. This allows multiple physical links (for example, several DS1s) to act as a single cell pipe. The multiplex/demultiplex process on an ATM multiplexer jittered the resulting demultiplexed cell stream so badly -- that is, introduced so much variation into the intercell spacing -- that the cells no longer conformed to the traffic contract imposed on the SVC to which they belonged. We have seen downstream ATM switches correctly police out cells that passed through IMA between two ATM multiplexers.

Problems with Point-to-Multipoint

Symptom

ADD PARTY and DROP PARTY requests fail, typically with Cause Code 89 “Invalid endpoint reference” or Cause Code 16 “Normal call clearing”. For voice applications, conference calls experience one-way talk paths following a call transfer, sometimes with no error indication at all.

Abstract

Of all the technology in ATM, the most problematic has been SVCs (versus PVCs). Of that, the most troublesome has been Point-to-Multipoint (versus Point-to-Point) SVCs. Of that, more of our ire has been directed at ADD PARTY and DROP PARTY operations (versus SETUPs and RELEASEs) on PMP SVCs.

One problem is that an Endpoint Reference Value (ERV), which identifies a particular party on a multiparty call, cannot be immediately reused. (However, unlike VCIs after RELEASE COMPLETE, the UNI 3.1 specification does not require that ERVs be available for reuse upon sending a DROP PARTY ACK, so this behavior, while inconvenient, at least conforms to the specification.)

Examples

Some ATM switches do not understand a SETUP with a non-zero ERV. This occurs when an ADD PARTY for a subsequent party is turned into a SETUP by the ATM switch at which the cell stream bifurcates.

Some ATM switches do not correctly implement certain sequences of ADD PARTY and DROP PARTY. Suppose a PMP SVC had two parties or leaves, an ADD PARTY had been received and forwarded to a downstream ATM switch (so the ADD PARTY had not yet completed), and then a DROP PARTY was received that dropped one of the original two parties on the call. The ATM switch would not handle the transient condition in which there was temporarily only one party on the call pending completion of the ADD PARTY; the PMP SVC was RELEASEd with Cause Code 16 “Normal call clearing”. The ATM Forum UNI 3.1 specification says that a received ADD PARTY that has not yet completed (or failed) must cause the PMP SVC to be held in a pending state, even though the SVC temporarily has only one end.

An ATM switch handled DROP PARTY operations incorrectly, dropping the wrong PMP SVC leaf. This resulted in missing talk paths (one party could not hear another) following call conferencing or transfers, with no error indications by the ATM network.

Traffic Policing with a Single Leaky Bucket

Symptoms

End station experiences cell loss when uncorrelated cell streams are aggregated, particularly when voice streams are aggregated with data streams. Data applications will experience excessive retransmissions. Voice applications will hear audible artifacts in the talk path. Modem connections using voice applications will fail.

Abstract

Many ATM switches do not implement the standard dual leaky bucket cell policing scheme, for reasons of cost reduction. Since cell handling is generally done in hardware, this is not a software-fixable issue. These ATM switches typically just police SVCs using the peak traffic contract defined by the PCR and CDVT parameters. This means SVCs may exceed their sustainable traffic contracts defined by the SCR and MBS parameters.

Aggregating uncorrelated data cell streams onto the same inter-switch physical link as voice cell streams may cause the SVCs to emit cells outside of their traffic contracts, leading to switch congestion and lost cells.

It also means that such a switch may hide existing traffic conformance issues in ATM applications, which only come to light when a new ATM switch that correctly polices both peak and sustainable traffic contracts is installed.

Resources Exhausted During Bursts of SETUPs

Symptoms

SVC SETUPs fail, typically with Cause Code 47 “Resources unavailable, unspecified”, Cause Code 49 “Quality of service unavailable”, or Cause Code 63 “Service or option unavailable, unspecified”.

Abstract

Lower-end ATM switches have problems handling quick bursts of SVC SETUPs. We believe this is probably a queue exhaustion problem in the signaling protocol stack inside the switch: SVC SETUPs fail because the protocol stack ran out of space to queue the requests. This is a transient condition that is not necessarily related to SVC SETUP volume, but rather to the distribution of SETUP interarrival times.

During testing this typically happens when a test call generator first cranks up; the first dozen or so SVC SETUP attempts fail. Once the ATM switch "gets on its feet", it seems to do fine with the sustained load. My guess is that this may have to do with dynamic memory allocation or task scheduling.

Ungraceful Recovery from Resource Exhaustion

Symptoms

The ATM switch resets.

Abstract

Some low-end ATM switches panic and reset when internal resources are exhausted. The preferred response to resource exhaustion is to reject SETUPs. However, switches should not take this a step too far and reject SETUPs even when lightly loaded, as we have also seen.

Optional Information Elements Required

Symptoms

SVC SETUP fails with Cause Code 21 “Call rejected”.

Abstract

Carrier-class ATM switches used by a service provider may be administered to reject any SVC SETUP attempt if the SETUP message does not contain the Calling Party Information Element.

Bellcore standard GR-1110-CORE contains recommendations for the use of ATM by service providers. One of these recommendations permits the ATM network to use the Calling Party IE for billing purposes, and to reject any SETUP not containing this IE, even though the IE is optional in the ATM Forum UNI 3.1 specification.

VBR Traffic Contract in which PCR Equals SCR Confuses ATM Switch

Symptoms

The ATM switch issues warnings on its console or in its error log.

Abstract

Having an SVC traffic contract with a Variable Bit Rate (VBR) Class of Service (COS) in which the Peak Cell Rate (PCR) equals the Sustainable Cell Rate (SCR) is perfectly legal, and there are good reasons for doing so. But many ATM switches seem to have problems with it. Some just log errors; others have bugs tickled in their UNI protocol stacks or even in their Connection Admission Control (CAC) algorithms.

ATM cell handling chipsets typically prioritize cell emission based on the relative COS of each SVC. Cells for Constant Bit Rate (CBR) SVCs may be emitted first, followed by real-time VBR (VBR-rt) cells, followed by non-real-time VBR (VBR-nrt) cells, followed finally by Unspecified Bit Rate (UBR) cells. Setting COS to VBR-nrt with PCR equal to SCR should cause the cells in such an SVC to be scheduled behind CBR and VBR-rt cells, but the CAC algorithm should still guarantee the full requested bandwidth of the VBR-nrt SVC.

Based on discussions I have had with switch vendors and service providers, this configuration appears to be unexpected.

Individual SVC Traffic Contracts Versus Aggregate PVP Traffic Contracts

Symptoms

End station experiences cell loss when uncorrelated cell streams are aggregated, particularly when voice streams are aggregated with data streams. Data applications will experience excessive retransmissions. Voice applications will hear audible artifacts such as echo and delay in the talk path, or in extreme cases even missing talk paths. Modem connections using voice applications will fail.

Abstract

This is not a bug but rather a traffic engineering issue.

Suppose you want to aggregate all of your SVCs over a Permanent Virtual Path (PVP) whose traffic contract is less than the line rate of the physical interface over which it travels. An example would be an ATM switch with several OC-3s on one side leading to individual end stations. On the other side would be a single DS-3 provided by your local ATM service provider. You pay for a PVP with a traffic contract of 96,000 cells per second. Emit cells any faster than that and the provider may police your cells in the PVP. (This is a very typical configuration in my experience.)

The end station may explicitly or implicitly traffic shape all of the cells in each individual SVC going to the edge ATM switch. If so, the cells in each SVC conform to that SVC's traffic contract. But when the cells from many SVCs are aggregated into a single PVP, the ATM switch hardware usually places them on the PVP in a first-come, first-served manner if they have the same Class of Service, for example, Constant Bit Rate (CBR) or Unspecified Bit Rate (UBR). When they arrive through the PVP at the far end, they are also placed on the appropriate SVCs in a first-come, first-served manner; existing ATM switches generally do not do traffic shaping.

Now the problem is pretty clear: since the cells in one SVC are not correlated in any way with the cells in any other SVC, there is no way to ensure that the aggregate cell stream will not exceed the traffic contract of the PVP. For example, each of ten SVCs could deliver a cell to the ATM switch more or less simultaneously. The switch puts all ten cells on the outgoing PVP back to back. To the PVP, this looks like a burst of ten cells at the full OC-3 line rate of 353,207 cells per second, a far cry from the DS-3 traffic contract of 96,000 cells per second. The ATM switch at the far end of the PVP would correctly police some or most of the cells from the PVP stream, or at least mark them as not conforming to the traffic contract, to then perhaps be policed by a downstream ATM switch.

Suppose instead you shaped the cells to match the PVP contract. Then taking the cells off the far end of the PVP and placing them on the SVCs may result in ten cell streams that do not conform to the individual contracts of each of the ten SVCs at the far end.

What you need to do is shape the aggregate traffic from the SVCs to the PVP contract, then reshape each of the streams to its individual SVC contract at the far end. This ensures that all contracts are met.

This has its own drawback: obviously you can't send a cell before you receive it, so the only way to shape a cell is to delay it. The worst case is that you delay the cell at the near end so that it conforms to the PVP contract, and then delay it more at the far end so that it conforms to its SVC contract. Each step adds more delay to that already incurred due to cell processing and transmission latency. More delay can lead to perceptible talk-path echo and satellite-like delay for voice applications.

References

ATM User-Network Interface Specification Version 3.1, Annex E, ATM Forum, af-uni-0010.002, September 1994

Integrated Local Management Interface (ILMI) Specification Version 4.0, ATM Forum, af-ilmi-0065.000, September 1996

Broadband Integrated Services Digital Network (B-ISDN) - Digital Subscriber Signalling System No. 2 (DSS 2) - User-Network Interface (UNI) Layer 3 Specification for Basic Call/Connection Control, ITU-T, Recommendation Q.2931, February 1995

Broadband Switching System (BSS) Generic Requirements, Bellcore, GR-1110-CORE, Issue 4, December 2000

M. Rose et al., Structure and Identification of Management Information for TCP/IP-based Internets, IETF, RFC1155, May 1990

K. McCloghrie et al., Management Information Base for Network Management of TCP/IP-based internets: MIB-II, IETF, RFC1213, March 1991

Author

J. L. Sloan jsloan@diag.com 2005-08-16

© 2005 by the Digital Aggregates Corporation. All rights reserved.

The author would like to acknowledge the dozens of engineers at ATM equipment vendors, service providers, and customers who contributed to this base of knowledge.