Using a T1 to extend PRI service, and involuntary fax over SIP

I stumbled across a situation where a fax-to-email service gets inbound calls, from Carrier X, over a PRI into a Cisco 5350, which then relays the calls, via a cross-connect PRI, to a directly attached Linux box. This has worked for years.

About a year ago, it came to light that the above-mentioned PRI connects to the edge of Carrier X's SIP infrastructure. That is, calls start from some PSTN-connected fax machine somewhere in the USA, go through a PSTN-to-SIP gateway, travel as SIP to the building where this Cisco 5350 sits, and then pass through Carrier X's SIP-to-TDM equipment for delivery over the PRI to the Cisco 5350.

This led to some concern, since we all know that fax over SIP can be problematic. But everything was working, and hundreds of faxes a day were pulsing through the system.

Up until September the Cisco 5350 was in the same building as Carrier X's TDM equipment. In late September, a point-to-point B8ZS/ESF T1 was used to extend the in-building cross-connect between Carrier X and the Cisco 5350. The Cisco 5350, and the related servers, are now eight miles away (both endpoints are in Manhattan, the T1 is from Verizon Business, both endpoints are "on-net", and no ILEC is involved).

Since November, maybe half a percent of the faxes fail: they get a communication error at the start, during modem negotiation. The T1 circuit is clean.

Some people think that failures may simply not have been reported or noticed in October, but that they occurred nevertheless. This would suggest that the previous setup was a very delicately balanced system, and that moving the Cisco 5350 eight miles away, which necessitated using a T1 to carry the PRI to the new location, may be the root cause of the failures. Occam's razor supports this.

The grassy-knoll people believe that, in November, Carrier X started an effort to wring more out of their SIP network. Perhaps they started using different peers in various parts of the country. Maybe their PSTN-to-SIP gateways were tuned to use less bandwidth. When asked, Carrier X answered a different question, in the fashion of a politician.

I'm wondering whether any experts here have an opinion to offer?

Thanks,
-mark
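One way to settle the "did failures also happen in October?" question, if CDRs or fax-server logs survive from before the move, is to compare failure proportions across the two periods. A minimal two-proportion z-test sketch in Python; the counts below are hypothetical placeholders, not real data from this system:

    import math

    def two_proportion_z(fail_a, total_a, fail_b, total_b):
        """Is the failure rate in period B significantly different from period A?"""
        p_a, p_b = fail_a / total_a, fail_b / total_b
        # Pooled proportion under the null hypothesis of "no change".
        p = (fail_a + fail_b) / (total_a + total_b)
        se = math.sqrt(p * (1 - p) * (1 / total_a + 1 / total_b))
        z = (p_b - p_a) / se
        # Two-sided p-value via the normal CDF.
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return p_a, p_b, z, p_value

    # Hypothetical: ~6,000 inbound faxes per month, 3 failures in August
    # (pre-move) versus 30 failures (~0.5%) in December (post-move).
    print(two_proportion_z(3, 6000, 30, 6000))

If the pre-move logs show a rate near zero, a jump to half a percent over a few thousand calls is far outside normal variation; if they show a similar rate, the T1 extension is probably off the hook.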

"A year ago", "September", "November"... looks like there is no solid time domain event correlation to be had. What would Chuck Norris do? Bash in everyone's head at the same time, knowing that one of them must be the bad guy. -B On Mon, Jan 24, 2011 at 10:24 AM, Mark Kent <mark at noc.mainstreet.net> wrote:

Chuck Norris would put an STM-1's worth of jitter-free calls on one copper strand without a ground. Your gateway would gain DSP density when he talks.

--
Alex Balashov - Principal
Evariste Systems LLC
260 Peachtree Street NW, Suite 2200
Atlanta, GA 30303
Tel: +1-678-954-0670
Fax: +1-404-961-1892
Web: http://www.evaristesys.com/
"A year ago", "September", "November"... looks like there is no solid time domain event correlation to be had.
What would Chuck Norris do? Bash in everyone's head at the same time, knowing that one of them must be the bad guy.
-B
On Mon, Jan 24, 2011 at 10:24 AM, Mark Kent <mark at noc.mainstreet.net> wrote: I stumbled across a situation where a fax-to-email service gets inbound calls, from Carrier X, over a PRI into a cisco 5350 which then relays the calls, via an x-conn PRI, to a directly-attached linux box. This has worked for years.
About a year ago, it was revealed that the above-mentioned PRI connects to the edge of Carrier X's SIP infrastructure. That is, calls start from some PSTN-connected fax machine, somewhere in the USA, go through a PSTN-to-SIP gateway, travel as SIP to the building where this cisco 5350 is, and then go through Carrier X's SIP-to-TDM equipment for delivery over the PRI to the cisco 5350.
This led to some concern, since we all know that fax over SIP can be problematic. But everything was working, hundreds of faxes a day were pulsing through the system.
Up until September the cisco 5350 was in the same building as Carrier X's TDM equipment. In late September, a point-to-point, B8ZS/ESF T1 was used to extend the in-building cross-connect between Carrier X and the cisco 5350. The cisco 5350, and related servers, are now eight miles away (both endpoints in Manhattan, using VerizonBusiness for the T1, both endpoints "on-net", no ILEC involved).
Since November, maybe half a percent of the faxes fail to work. They get a communication error at the start, at the modem negotiation. The T1 circuit is clean.
Some people think that failures may not have been reported/noticed in October, but they occured nevertheless. This would suggest that the previous set-up was a very delicately balanced system and the moving of the cisco5350 eight miles away, necessitating the use of a T1 to carry the PRI to the new location, may be the root cause of the failures. Occam's razor reasoning supports this.
The grassy knoll people believe that, in November, Carrier X started an effort to wring more out of their SIP network. Perhaps they started using different peers in various parts of the country. Maybe their PSTN-to-SIP gateways were tuned to use less bandwidth. When asked, Carrier X answered a different question, in a fashion similar to a politician.
I'm wondering whether any experts here have an opinion to offer?
Thanks, -mark _______________________________________________ VoiceOps mailing list VoiceOps at voiceops.org https://puck.nether.net/mailman/listinfo/voiceops
_______________________________________________ VoiceOps mailing list VoiceOps at voiceops.org https://puck.nether.net/mailman/listinfo/voiceops

All "Chuckles" aside, it doesn't matter if you wield the razor or the magic bullet on this one. Open tickets with everybody, blame everything in sight, and one of those efforts will pay off. -B On Mon, Jan 24, 2011 at 1:46 PM, Beth Johnson <bethjohnson5060 at gmail.com>wrote:
"A year ago", "September", "November"... looks like there is no solid time domain event correlation to be had.
What would Chuck Norris do? Bash in everyone's head at the same time, knowing that one of them must be the bad guy.
-B
On Mon, Jan 24, 2011 at 10:24 AM, Mark Kent <mark at noc.mainstreet.net>wrote:
I stumbled across a situation where a fax-to-email service gets inbound calls, from Carrier X, over a PRI into a cisco 5350 which then relays the calls, via an x-conn PRI, to a directly-attached linux box. This has worked for years.
About a year ago, it was revealed that the above-mentioned PRI connects to the edge of Carrier X's SIP infrastructure. That is, calls start from some PSTN-connected fax machine, somewhere in the USA, go through a PSTN-to-SIP gateway, travel as SIP to the building where this cisco 5350 is, and then go through Carrier X's SIP-to-TDM equipment for delivery over the PRI to the cisco 5350.
This led to some concern, since we all know that fax over SIP can be problematic. But everything was working, hundreds of faxes a day were pulsing through the system.
Up until September the cisco 5350 was in the same building as Carrier X's TDM equipment. In late September, a point-to-point, B8ZS/ESF T1 was used to extend the in-building cross-connect between Carrier X and the cisco 5350. The cisco 5350, and related servers, are now eight miles away (both endpoints in Manhattan, using VerizonBusiness for the T1, both endpoints "on-net", no ILEC involved).
Since November, maybe half a percent of the faxes fail to work. They get a communication error at the start, at the modem negotiation. The T1 circuit is clean.
Some people think that failures may not have been reported/noticed in October, but they occured nevertheless. This would suggest that the previous set-up was a very delicately balanced system and the moving of the cisco5350 eight miles away, necessitating the use of a T1 to carry the PRI to the new location, may be the root cause of the failures. Occam's razor reasoning supports this.
The grassy knoll people believe that, in November, Carrier X started an effort to wring more out of their SIP network. Perhaps they started using different peers in various parts of the country. Maybe their PSTN-to-SIP gateways were tuned to use less bandwidth. When asked, Carrier X answered a different question, in a fashion similar to a politician.
I'm wondering whether any experts here have an opinion to offer?
Thanks, -mark _______________________________________________ VoiceOps mailing list VoiceOps at voiceops.org https://puck.nether.net/mailman/listinfo/voiceops

You added two more points of interference, one at each side of the T1. That may have something to do with it. It is also likely that the SIP backhaul changed and is now running G.729.
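For what it's worth, the "carrier squeezed its SIP network" theory is at least numerically plausible. A rough back-of-the-envelope in Python, using textbook codec figures rather than anything measured on Carrier X's network:

    # Approximate per-call IP bandwidth at 20 ms packetization,
    # ignoring layer-2 framing. Standard header sizes: IPv4 20 + UDP 8 + RTP 12.
    IP_UDP_RTP_OVERHEAD = 20 + 8 + 12   # bytes of headers per packet
    PACKETS_PER_SECOND = 50             # 20 ms packetization

    codecs = {"G.711": 64000, "G.729": 8000}  # codec payload bit rates

    for name, bps in codecs.items():
        payload_bytes = bps / 8 / PACKETS_PER_SECOND
        total_bps = (payload_bytes + IP_UDP_RTP_OVERHEAD) * 8 * PACKETS_PER_SECOND
        print(f"{name}: {total_bps / 1000:.1f} kbit/s per direction")
    # G.711: 80.0 kbit/s per direction
    # G.729: 24.0 kbit/s per direction

That is roughly a 70% bandwidth saving per call for the carrier, but an 8 kbit/s speech vocoder cannot carry a 9.6-14.4 kbit/s V.29/V.17 modem signal, so a quiet codec change on the SIP leg would look exactly like this: faxes dying at negotiation while ordinary voice calls sound fine.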

Most point-to-point T1s that are provisioned are now being backhauled over IP. This means that the T1 has to be encapsulated in an IP stream, and the timing for the T1 is also embedded in the stream. Any slips in the timing may be masked by the backhaul method (you won't see an alarm). It also means that you now have the possibility of a dropped or out-of-order packet, which was not possible if the circuit were T1/T3 all the way through.

The possibility of timing issues is also increased by the fact that you are now encapsulating twice: once in the point-to-point T1, and again at the SIP-to-PSTN interface (assuming the PSTN link is TDM). The fax-to-email service itself then has to demux the data and interpret the result as an analog stream in software. You are stacking cards, and eventually they will fall.

Personally, I think half a percent is a pretty good result. I see about the same rate for inbound to my fax servers. We also have outbound fax service using one of my fax servers, and I see a 4% failure rate on those. I believe the difference between the two is that we use SIP on the termination trunk for a large number of the calls (anything that is not in our local areas). So, we have TDM in, Ethernet transport over OC3 through our network, and PRI to the fax server (total errors less than 1%). For the outbound, it is PRI from the fax server, Ethernet over OC3 through our network, terminating in T1 for calls that are local to our POPs, and SIP to a long-distance provider for any non-local terminations (total errors ~4%).

Lonny Clark
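Lonny's "stacking cards" point can be made concrete with a toy model: assume the fax negotiation occupies roughly a three-second window at 50 RTP packets per second on the carrier's SIP leg, and that any packet lost (or badly slipped) inside that window kills the handshake. All three numbers are illustrative, not measurements:

    HANDSHAKE_SECONDS = 3.0      # assumed length of the V.21 negotiation window
    PACKETS_PER_SECOND = 50      # 20 ms packetization on the SIP leg
    packets_at_risk = int(HANDSHAKE_SECONDS * PACKETS_PER_SECOND)

    for per_packet_loss in (1e-5, 3e-5, 1e-4):
        # Probability that at least one packet in the window is lost.
        call_failure = 1 - (1 - per_packet_loss) ** packets_at_risk
        print(f"loss {per_packet_loss:.0e}/packet -> "
              f"{call_failure * 100:.2f}% of faxes fail at negotiation")
    # loss 1e-05/packet -> 0.15% of faxes fail at negotiation
    # loss 3e-05/packet -> 0.45% of faxes fail at negotiation
    # loss 1e-04/packet -> 1.49% of faxes fail at negotiation

A per-packet impairment rate of a few in a hundred thousand would never show up as a circuit alarm, yet it lands right around the half-percent call-failure rate being observed.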

On 01/24/2011 05:08 PM, Lonny Clark wrote:
Most point-to-point T1s that are provisioned are now being backhauled over IP. [...]
Are you really seeing that much PWE3 pseudowire out there? Every time I've looked, it has had no benefit over SONET other than more trouble tickets, unless you really, really have decent QoS and almost no need for TDM service in an area you simply can't get SONET to.

That being said, on the CLEC side, I have seen no real demand for PWE3 pseudowire out there at all. It's just more expensive and problematic than the time-tested, more widely available TDM gear.

Not saying there isn't a huge demand for IP to the edges, but replacing perfectly good SONET with IP, just to carry less traffic due to IP and PWE3 overhead? I'm just not seeing it.

-Paul

My company is a CLEC + ISP, and we are using a lot of pseudowire. Any customer that wants high-speed IP and is not in range of the CO (about 2 miles) gets pseudowire. We split some of them into voice/data channels using Adtran gear, and mux others together for higher bandwidth.

Customers that are in range of the CO get DSL/bonded DSL/EOC. We require one of the latter two options for SIP.

We do our business in mostly rural areas, with no fiber except to the city core. We have leased fiber connecting our POPs for our IP backbone. We have good QoS inside our network, but not on the outbound long-distance trunks. That is the reason we see more errors there, which we don't see inside our footprint.

Once upon a time the telecom network was the heart of this company, and data services plugged into it. Now the exact opposite is true: the IP network is the heart, and the telecom circuits are appendages to it. The time-tested, widely available TDM gear is still used, at the customer's premises, but our core network is IP-based.

Lonny Clark

I've got to agree with Lonny. I spent some time inside one of the Bells, and they were installing MPLS in their core like crazy...

-B

On Jan 24, 2011, at 4:08 PM, Lonny Clark wrote:
Most point-to-point T1s that are provisioned are now being backhauled over IP
I'm curious about this assertion. I'm currently involved with a good-sized VoIP project, and it sure looks like all of the T-1s are coming off OC-level TDM muxes. Not saying that it won't be true, but I'm just not seeing it yet.

--Chris

Me too. I haven't seen any evidence of this in the CLEC crowd down here. Nobody thinks TDMoE gear or other pseudowire approaches work worth a damn...

--
Alex Balashov - Principal, Evariste Systems LLC

The people buying the TA5000 platform may be using it for that, for example. They have the right cards to do it, and the platform is pretty popular. We only used the gear for testing and light production use, but I see the chassis in every major CO around here with a good mix of cards. They chain them together via GigE.

I have also used media converters to carry TDM DS3s in larger buildings.

We still have a large TDM network, though, but I keep thinking about TDM over IP for the near future. Sticking a DS3 SFP in a switch and putting M13 muxes against it sounds pretty appealing to me when you have solid QoS to go with it.

matt

I think about this, but then think, "Why would I use a technology that can't put 28 T1s in 45 megabits of bandwidth?" I get it for stuff like G.SHDSL metro-E, but in-building or on glass, you might as well keep it TDM and save the capex and tickets. I'm sure the technology may mature down the road, but for now I just don't see the payoff, unlike VoIP, where you actually save bandwidth by doing it as VoIP.
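Paul's 28-T1s-in-45-Mbit point is easy to put numbers on. A rough sketch assuming a structure-agnostic (SAToP-style) pseudowire with 192 payload bytes per packet and IPv4/UDP/RTP encapsulation plus a 4-byte control word; real overhead varies with the configured payload size and with whether MPLS is used instead:

    # Rough TDM-over-IP overhead estimate for a single T1 pseudowire.
    T1_BPS = 1_544_000
    PAYLOAD_BYTES = 192                  # assumed TDM payload per packet
    IP_UDP_RTP_CW = 20 + 8 + 12 + 4      # IPv4 + UDP + RTP + control word
    ETHERNET_FRAMING = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap

    packets_per_sec = T1_BPS / 8 / PAYLOAD_BYTES
    ip_bps = packets_per_sec * (PAYLOAD_BYTES + IP_UDP_RTP_CW) * 8
    wire_bps = packets_per_sec * (PAYLOAD_BYTES + IP_UDP_RTP_CW + ETHERNET_FRAMING) * 8

    print(f"one T1: {ip_bps/1e6:.2f} Mbit/s at IP, {wire_bps/1e6:.2f} Mbit/s on Ethernet")
    print(f"28 T1s: {28*ip_bps/1e6:.1f} Mbit/s at IP vs 44.7 Mbit/s for a channelized DS3")
    # one T1: 1.90 Mbit/s at IP, 2.20 Mbit/s on Ethernet
    # 28 T1s: 53.1 Mbit/s at IP vs 44.7 Mbit/s for a channelized DS3

So even before QoS and jitter-buffer headaches, a full channelized DS3's worth of T1s needs something like 50-60 Mbit/s of packet capacity.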

On Jan 25, 2011, at 9:21 AM, Paul Timmins wrote:
I think about this, but then think 'Why would I use a technology that can't put 28 T1s in 45 megabit of bandwidth?'
I looked at a system to do TDM over IP a year or so ago. The advantages:

Ease of provisioning - go to a box and tell it to take this T1 over there, and do the same at the other end. No mucking about with intervening muxes.

Ease of management - you only need to keep track of a few huge pipes over the optical network, instead of all those tiny ones.

Restoration - you have an additional layer of IP route restoration above any optical redundancy you have.

It's worth looking at for brand-new networks where you have lots of capacity to spare. I sure do get a weird deja vu with all this MPLS, VoIP, and ToIP. ATM was gonna do all this stuff too :-)

--Chris

What also happens when your preferred vendor of SONET gear decides to no longer make that product line? (Huawei, for example: the M1600 was discontinued, and the rest goes in 2012.) Is it time to find a new SONET vendor and run a mix of gear, or just go all IP, which is where the industry seems to be headed? As in, bring in the WDMs and just build another network using your existing dark fiber from CO to CO.

On top of that, an OC48 sure ain't what it used to be bandwidth-wise when you have key locations that use it up so quickly. Sell a couple of OC3s/OC12s, provision your own data needs, and you have this little chunk left for the future... 10 gig has to be considered pretty quickly, and I really have no wish to replace OC48s with OC192s. Not only would most of our chassis need to be replaced to support the 192 cards, we have OC48 cards just sitting and collecting dust. Give me 10G Ethernet instead.

matt
participants (8)
- abalashov@evaristesys.com
- bethjohnson5060@gmail.com
- cboyd@gizmopartners.com
- lclarkpdx+voiceops@gmail.com
- mark@noc.mainstreet.net
- myaklin@g4.net
- paul@timmins.net
- peter@4isps.com