During the past few years, optical packet switching has been identified as a promising technology for a new generation of systems and networks. However, the lack of optical memory, and the complexity of optical packet switching systems offering functionalities comparable to fully electronic systems, have motivated equipment manufacturers to consider new directions: reconfigurable optical add/drop multiplexers, currently deployed in the field, or hybrid packet switching systems. In the early 2000s, the hybrid approach, combining the best of optics and electronics, was identified as a promising new direction for optical packet switching. Several research laboratories have investigated new concepts limiting this technology to fast space switches [based on a pure spatial technique or exploiting the combination of tunable lasers and arrayed waveguide gratings (AWGs)] and some processing functions. The capability of optical technology to switch ultrahigh capacities concentrated efforts on systems for the metro core or backbones, where capacity was the main challenge. But the recent reorientation of information and communication technologies (ICT) toward cloudification forces a repositioning of this technology in new network segments, closer to the users, where new key performance indicators (KPIs) are demanded. The challenge is then to find a technology offering ultralow latencies, with systems that are simpler, cheaper, and less power consuming than existing products. This new direction creates a need for new low-cost systems, ecodesigned at the convergence of new network concepts and of the emerging technologies driven in particular by the GPON2 and DATA COM communities.
This chapter positions optical switching technologies in the next generation of systems and networks. After outlining introduction scenarios for the metro and for the backbone, it addresses feasibility issues. For the metro part, three introduction scenarios are presented. The first exploits well-known circuit switching techniques that can be introduced rapidly on the market. The second proposes a packet switching technique for better bandwidth exploitation. Finally, with the emergence of new optical functions/devices, the third scenario describes how a fully flexible packet ring network can be built that is truly competitive with electronic alternatives. For the backbone part, the first scenario will probably be in the core of large routers, competing with current smart routers (router + cross-connect). The second scenario is a new network concept, disruptive with respect to what exists, pushing toward a transparent network compliant with a multiservice environment and fully scalable in capacity. Finally, the last scenario describes how to move to an all-optical approach, through a description of the key optical functions required to make this concept realistic at a lower cost.
With the introduction of the Internet protocol in the network, the telecommunication domain has turned a corner. Broadband access to this new technology, opening the way to many residential applications, is driving a revolution in the next generation of switching systems. The first revolution is the traffic volume: personal computers, becoming more and more powerful, generate file-based traffic that could not have been envisaged even two years before. The second revolution is the evolution of the traffic profile, moving from a constant bit rate to a variable bit rate, again driven by personal computer capabilities (video applications, high-definition TV (HDTV), net shopping, net courses, games, etc.).
Optical technologies could emerge in the next four years as an important means to grow the capacity of systems while preserving their simplicity, reliability, and performance. More importantly, optical packet switching could become an efficient technique for matching the statistical behavior of the traffic profile, preserving bandwidth utilization as much as possible. One of the key issues in such packet-switched networks is the identification of the best packet format (variable or fixed packets). Several European projects have concentrated their efforts on this important topic, such as the RACE 2039 ATMOS project, the ACTS 043 KEOPS project, and more recently the IST DAVID project.
Thus, in this chapter, after positioning optical switching in the next generation of systems and networks, the benefits of multiplexed architectures will be presented. Solutions for a progressive introduction of this technology in the metro are described, highlighting the required technology and addressing physical feasibility as well as performance issues. Opportunities for the backbone are also presented, with the objective of highlighting the most promising approaches. For a pragmatic approach, the criteria for introducing this technology on the market are listed and, more importantly, a basic cost approach leading to the winning solution is outlined. Finally, a conclusion is drawn.
In this section, we will present the advantages of optics with respect to electronics and, more importantly, show how optics can complement electronic technology to make the most of both.
To give some arguments, we must list the advantages and drawbacks of optics.
The main advantages of optics are as follows:
The main drawbacks of optics are as follows:
In summary, optics is very interesting when the switching granularity is high, exploiting the WDM dimension to make simple structures. In electronics, we need to demultiplex at the wavelength level and then at the bit rate level; in optics, we simply need one device.
To switch at the WDM granularity, commercially available devices include optomechanical switches, thermo-optical switches, electro-optical switches, and microelectromechanical systems (MEMS) for slow switching applications, and digital optical switches or semiconductor optical amplifiers (SOAs) for fast switching applications.
In optics, we can switch wavebands (group of wavelengths), wavelengths, or optical packets.
The switching of wavebands is particularly interesting in the following cases:
Waveband switching really exploits the potential of optics because it reduces the complexity of the switching process with respect to electronic techniques. This technique should be exploited as much as possible to make a system or network concept truly competitive with electronic techniques, but only when the traffic matrix is stable enough not to penalize the average load of the waveband.
The switching of wavelengths is particularly interesting in the following cases:
The switching of optical packets can be processed at the wavelength level or at the waveband level. The only difference with respect to wavelength or waveband switching is that the ON state is relatively short, on the order of a few hundred nanoseconds or a few microseconds. The switching of optical packets is particularly interesting in the following cases:
Among the different switching techniques, optical packet switching is probably the most promising technique for the next generation of networks. The main indicator is the natural evolution of the traffic profile toward packet techniques. Driven mainly by the Internet protocol, we now have to cope with a traffic profile rather than a traffic matrix, as was the case in the past. The main reason is the drastic change of telecommunication applications, moving from telephony to data. In addition, the rapid introduction of personal computers (PCs) at home as multimedia machines pushes telecom companies to find solutions offering a higher quality of service at a lower cost. This new form of traffic imposes new infrastructures capable of handling the required capacity while providing the flexibility needed to offer low-cost connections.
When analyzing the traffic profile at the output of a local area network (LAN), the sporadic behavior of the traffic, often modeled with self-similar functions, clearly illustrates the problem. We then need to adapt the network concepts to the nature of the traffic coming from the access. How can such a variable traffic profile be accommodated with circuit connections while maintaining good efficiency? One answer lies in a high aggregation level, which requires grouping different LANs, and this is not always possible. Another solution is to cut the circuit into pieces (packets), trying to follow the traffic evolution at a scale comparable to that of the incoming traffic. Even if this technique is more complex to manage, it is undoubtedly the most efficient way to optimize bandwidth utilization, more in the time domain today than in the volume domain in the recent past.
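The aggregation argument above can be sketched numerically. The on/off model and its parameters below are our own illustrative assumptions (a simple Bernoulli on/off source, not the self-similar models mentioned in the text), chosen only to show how the peak-to-mean ratio of the combined rate drops as sources are aggregated:

```python
import random

random.seed(1)

def source_trace(slots, p_on=0.1, burst_rate=1.0):
    """One bursty on/off source: ON with probability p_on in each slot."""
    return [burst_rate if random.random() < p_on else 0.0
            for _ in range(slots)]

def peak_to_mean(n_sources, slots=4000):
    """Peak-to-mean ratio of the aggregate rate of n_sources sources."""
    agg = [0.0] * slots
    for _ in range(n_sources):
        agg = [a + x for a, x in zip(agg, source_trace(slots))]
    mean = sum(agg) / slots
    return max(agg) / mean

# Aggregation smooths the profile: the more sources, the easier it is
# to carry the traffic efficiently on a fixed-capacity circuit.
for n in (1, 4, 16, 64):
    print(n, round(peak_to_mean(n), 2))
```

A single source needs a circuit provisioned at roughly ten times its mean rate, whereas a large aggregate needs only a small margin, which is exactly why circuits are efficient only at high aggregation levels.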
There is still a debate on the choice of the packet size. The main argument in favor of variable packets is that the optical packet size always follows the incoming packet size, whereas for fixed packets the optical packet format contributes to a better management of the performance. In both cases the arguments are acceptable, but the reality is more complex.
Clearly, where contention can be managed easily (small or simple topologies), the variable packet has to be envisaged. But in a large topology (meshed or other), contention resolution is local to each node, and control of the traffic profile inside the network becomes fundamental. In that case, the fixed packet format is the only reasonable format. Another alternative, probably the best, is the adoption of a concatenated packet. For best-effort traffic, the concatenation is created to closely follow the incoming packet profile. For high-priority traffic, where delay is fundamental, small packets will always experience the smallest delay in the network. In this way, the technique can be adapted to a multiservice environment (Figure 3.1).
Figure 3.1 Different optical packet types that can be adopted.
Figure 3.2 Comparison: variable optical packets versus fixed optical packets versus concatenated optical slots.
Figure 3.2 gives a comparison among fixed, variable, and concatenated packets.
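The padding side of this packet-size trade-off can be illustrated with a toy calculation. The slot payload sizes and the packet mix below are our own illustrative assumptions, not figures from the chapter; the point is only that segmenting client packets into fixed slots wastes capacity in the padded last slot of each packet, and that the waste depends strongly on the slot size:

```python
import math

def padding_overhead(pkt_sizes, slot_payload):
    """Fraction of capacity wasted when each client packet is segmented
    into fixed slots and the last slot of each packet is padded."""
    carried = sum(math.ceil(p / slot_payload) * slot_payload
                  for p in pkt_sizes)
    return (carried - sum(pkt_sizes)) / carried

# Illustrative trimodal Internet mix: many 40-B ACKs, some 576-B and 1500-B.
mix = [40] * 50 + [576] * 30 + [1500] * 20

for slot in (64, 256, 500, 1500):
    print(slot, round(padding_overhead(mix, slot), 3))
```

Small slots keep padding low but multiply per-slot headers and control load; large slots do the opposite. Concatenated slots behave like the small-slot case for long packets while still giving the network a fixed slot rhythm.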
The WDM technique is often associated with the transmission domain. But, in fact, this technique is very useful in optics for many purposes.
The WDM technique can be exploited for the following objectives:
In multiplexed architectures, the WDM dimension is exploited for different purposes.
In the following, we will describe where the WDM dimension can be exploited efficiently and what kind of benefit we can expect.
In the case of optical rings, the WDM dimension can be exploited first to provide upgradability of the network in terms of allocated resources, simply by progressively allocating bands of wavelengths. The main advantage of the band is to relax the filtering constraints in a cascade of several nodes. This advantage was raised many times in circuit switching platforms. If we want to exploit the same WDM infrastructure while introducing packet techniques, the notion of band can then be advantageously exploited to reduce the latency in the transmitting parts. In fact, we can exploit the statistical time multiplexing of packets over a group of wavelengths: the chance of inserting a packet on the line increases when we have access to a group of wavelengths instead of one.
In summary, we can see that for optical rings, the WDM dimension can relax physical constraints and, in addition, improve the performance in packet rings in terms of latency.
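The latency benefit of accessing a band rather than a single wavelength can be checked with a one-line probability model. Assuming, for illustration only, that each wavelength of the band is busy independently with probability p, a node that may transmit on any of the k wavelengths is blocked in a given slot only when all k are busy:

```python
def p_blocked(p_busy, k):
    """Probability that all k accessible wavelengths are busy at once,
    assuming each is busy independently with probability p_busy."""
    return p_busy ** k

# At 50% per-wavelength load, a band of 8 wavelengths turns a 50%
# insertion failure rate into well under 1%.
for k in (1, 2, 4, 8):
    print(k, p_blocked(0.5, k))
```

The independence assumption is ours and optimistic (real loads are correlated), but the exponential decay with band size is the mechanism behind the latency reduction claimed above.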
To illustrate the benefit of the WDM dimension, the IST DAVID project proposes a multiring optical packet MAN exploiting the WDM dimension for these two main aspects.
In backbone networks where the topology is generally meshed, the WDM dimension is exploited for four main reasons:
To take a pragmatic approach, it is fundamental to identify the likely introduction scenario of a technology over time. The following paragraphs position optical switching technology for the metro on a timescale.
Before developing these scenarios, it is also important to list the specificities of the metro part.
In the metro, it is clear that the cost is probably the most important parameter together with the performance. The cost addresses the hardware part but also the means adopted to exploit the bandwidth in the best way. Due to the traffic profile coming from the access, the important point is really to preserve the bandwidth. Thus, packet techniques will be preferred in this part of the network.
The WDM dimension is expensive in the metro, but there are also some arguments in favor of the introduction of the WDM in this part of the network.
Because the current traffic profile does not constrain bandwidth utilization too much (the native bit rate is still quite low), a circuit platform is interesting. To reduce the cost of the WDM dimension while having enough resources to cope with the traffic volume and profile, the waveband approach is probably the best one. It guarantees upgradability (sub-band per sub-band), physical performance (relaxed filtering constraints), and simplicity (circuit switching) with an existing infrastructure (fibers already installed). But if the native bit rate increases together with the variance, there will be a need to move to a lower granularity to achieve better utilization of the optical bandwidth. The optical packet technique could then be introduced easily, making use of the existing infrastructure. The migration becomes natural.
Thus, in the following presentation, we will introduce the circuit switching technique as a first step, with a progressive migration toward optical packet switching techniques, leading to a really efficient platform.
Due to the current traffic profile, optical circuit switching could be rapidly introduced in the market for many reasons:
The particularity of this approach lies in the optical add/drop multiplexing structure. The main structures are as follows:
The passive structure can be used when the traffic matrix is very stable, whereas the dynamic structure allows some adaptation of the allocated resources, to follow at least the envelope of the traffic matrix. This latter, active structure is particularly interesting when the time constants are in the range of a few hours.
Figure 3.4a and b describes the two main structures of add/drop multiplexers.
In the metro part, the WDM granularity for upgrading the network capacity will depend on the cost of the intervention, and in some cases an upgrade by one wavelength is not the most cost-effective solution. For that purpose, it is important to identify the minimum WDM granularity for an upgrade. This minimum granularity, which can be on the order of a few wavelengths (2 or 4), can dramatically impact the network infrastructure. Taking into account the physical limitations in the cascade of filters, the sub-band approach is clearly an interesting one. Figure 3.5 gives an overview of a ring MAN adopting a sub-band strategy.
The optical packet switching technology could be introduced as a second step simply to face the traffic profile evolution. In that case, a packet technique will be required on the basis of the infrastructure already installed. The upgrade is made simply by changing the optical ring access node. Two new sub-blocks are then mandatory: the opto-electronic part and the electronic interface compliant with different classes of services.
The most pragmatic approach is the adoption of a commercially available technology. Several concepts have been proposed, all based on the adoption of ILMs and Synchronous Optical NETwork (SONET)-like receivers. The resilient packet ring (RPR) concept is probably the most representative one.
Figure 3.3 (a) Conventional ring topology. (b) Multiring topology.
In the case of long-term approaches, the adoption of an advanced technology is then mandatory.
In particular, two important components are a fast tunable source and a fast wavelength selector. Both components have already been demonstrated to be feasible (e.g., Agility for the tunable sources, Nippon Telegraph and Telephone (NTT) for the wavelength selectors).
These components are:
Figure 3.4 (a) Fixed optical add/drop multiplexer. (b) Dynamic optical add/drop multiplexer: an optical switching element guarantees the dynamic behavior.
Figure 3.5 Sub-band introduction in an optical ring.
The technology required to introduce these concepts can be split into two categories:
In the following, we will list the required commercial devices, and we will describe in more detail the potential new advanced technologies that open the way to really attractive system functionalities.
There are two types of devices: passive devices and active devices.
This is the most promising technology to really propose something new and disruptive with respect to what exists today.
Figure 3.6 (a) Structure of the ring access node for an upgrade of the circuit switching platform into a packet switching platform. (b) Structure of the optical add/drop multiplexer including the required functionality to fully manage the transit traffic.
In the following, we will illustrate the four main components:
A SOA is basically a laser whose facets have been treated to eliminate the resonant cavity; only the amplification medium is exploited. To be used as an optical gate, the SOA requires a high-frequency driver interconnected to it. The driver sends a control signal, which can be forced to the ON or OFF state. Because of the short carrier lifetime, a SOA can be switched with response times on the order of a few tens of nanoseconds. At present, this component is exploited by many laboratories and has been demonstrated to be feasible for many applications.
In the metro area, and according to the two network models described previously, different SOA structures are particularly interesting.
There are roughly three types of SOAs: SOAs with a high confinement factor or a long active section, to achieve wavelength conversion at high bit rates; SOAs with a low confinement factor or a short active section, to exploit the linear characteristic of the gain; and SOAs with internal gain clamping, to obtain a strictly linear response that does not distort the signal crossing the component.
As for fiber amplifiers, there are three classes of SOAs: preamplifiers, in-line amplifiers, and boosters.
A SOA interconnected at the output of a modulated source is the basic schematic we can envisage.
The advantage of this solution is that it is commercially available today. The main drawback is that there is a need to have control of the cross-gain modulation. One interesting solution is the use of a clamped-gain SOA to avoid any cross-gain modulation. The SOA used only in its linear characteristic will provide enough gain to guarantee a sufficiently high ON/OFF ratio with no degradation of the pulse shape. This solution is currently studied in different laboratories to analyze network concepts based on a packet transmission.
The SOA gate array is located in front of a laser array. At the output, a multiplexer and an integrated modulator are interconnected. The SOAs see only continuous waves and can be switched, thus selecting the wavelength that must be transmitted. The SOAs are in a stable regime because the input power is a constant. Therefore, they do not experience any cross-gain modulation as could be the case in the previous use. The preservation of the signal quality (no degradation of the extinction ratio and no distortion of the bits) makes this SOA array an important device for the building of hybrid tunable sources.
Figure 3.7a illustrates the structure of a tunable source based on a gate array, and Figure 3.7b shows an integrated four gate-array (OPTO+ realization).
Another solution to build a fast tunable source is to use a sampled-grating distributed Bragg reflector (SG-DBR) laser integrating a SOA section and an electro-absorption section. The following schematic illustrates the structure of the source. As for the hybrid tunable source, the SOA sees only a continuous wave, which again prevents cross-gain modulation. The structure is currently studied by Agility (Figure 3.8).
Figure 3.7 (a) Structure of a hybrid tunable source using a SOA gate array. (b) Photo of a four gate-array, key building block of the hybrid tunable source.
Figure 3.8 Schematic of an integrated tunable source integrating a SOA section for amplification of the signal or optical gating.
The wavelength selector is probably one of the other key devices because it is comparable to a fast tunable filter. The principle of operation is very simple. A first demultiplexer separates the wavelengths; each wavelength is then selected or not, depending on the orders coming from the control part; finally, an output multiplexer regroups the selected wavelengths and helps reject the wideband amplified spontaneous emission coming from the SOAs. In principle, only one wavelength is selected among a group in a normal scenario. However, in the case of the third network scenario, the number of output wavelengths can vary from 1 to N (N being the total number of wavelengths at the input of the device).
Figure 3.9a shows the basic structure of a wavelength selector.
Figure 3.9 Structure of a wavelength selector.
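As a purely behavioral sketch (the names and data structures below are ours, not the chapter's), the selector can be modeled as a set of per-wavelength ON/OFF gates sitting between a demultiplexer and a multiplexer:

```python
def wavelength_selector(wdm_input, gate_states):
    """Pass only the wavelengths whose SOA gate is ON.

    wdm_input:   dict mapping wavelength -> payload (the demultiplexed comb)
    gate_states: dict mapping wavelength -> True (ON) / False (OFF)
    """
    return {lam: payload for lam, payload in wdm_input.items()
            if gate_states.get(lam, False)}

wdm = {"l1": "pktA", "l2": "pktB", "l3": "pktC"}
# Normal scenario: a single gate ON selects one wavelength.
print(wavelength_selector(wdm, {"l2": True}))  # -> {'l2': 'pktB'}
# Third network scenario: several gates ON pass from 1 to N wavelengths.
print(wavelength_selector(wdm, {"l1": True, "l3": True}))
```

The model ignores the physical layer (gain, amplified spontaneous emission rejection) and captures only the logical selection function that the control part drives packet by packet.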
In the case of the third scenario, another important technology is required, not at the optical level but more at the electronic level: the packet mode receiver.
The packet mode receiver has to handle a continuous or noncontinuous packet stream exhibiting different packet phases (when aligned to a common reference clock) and suffering from packet-to-packet power dispersion. Such receivers are currently studied in different laboratories; we can cite NTT, Nichiden (NEC), Lucent, Alcatel, etc.
Currently, this technology has been demonstrated to be feasible for different coding formats: Manchester, but also return-to-zero (RZ) and non-return-to-zero (NRZ).
Figure 3.10a and b shows the performance obtained with a packet mode receiver operating at 10 Gbit s−1.
The performance will depend on the packet format adopted.
Figure 3.10 (a) Characteristics of a packet mode receiver. Large power dynamic ranges and fully transparent to the packet phase fluctuation. (b) Eye diagram recorded: before and after the 10 Gbit s−1 packet mode receiver. The phase is preserved and the amplitude completely equalized.
Therefore, the problem is not the same for both cases, and the impact on the performance is not the same.
Therefore, concerning the performance, the variable packet format exhibits a poorer performance compared to the fixed packet. This is the reason why the concatenated approach is probably the optimum solution, providing performance and reliability (packet rhythm present in the ring to ease monitoring aspects).
In this part, the objective is to draw a progressive introduction of optical switching techniques for the backbone. For the short term, two products can be envisaged: a cross-connect interconnected to a router (smart router), to reach high throughputs, and high-throughput routers. Both approaches are of prime importance because they correspond to two real cases and two classes of products required to cover different network specificities. As a medium-term approach, we will present a multiservice opto/electro/opto (O/E/O) network concept based on new features. Finally, we will describe how an all-optical packet switching network could become a reality in the longer term, thanks to attractive features.
Short-term introduction: smart router (router + cross-connect) versus multi-terabit class routers/switches.
Smart routers are required for the backbone network, where aggregation is forced by the poor connectivity of the nodes and by the huge amount of traffic that must be transported. This type of product is particularly interesting when the traffic matrix is stable enough to make semipermanent connections realistic and efficient. This is particularly the case in the United States, where connections have to be established between states. The router is mandatory to collect the traffic coming from regional or national networks.
High-throughput routers are required when it is not possible to process a part of the traffic with semi-permanent connections through optical cross-connects. It can be the case for metro core networks, where the dynamicity of the traffic could require a transfer mode at the packet level to increase the network efficiency while optimizing the resource cost.
Figure 3.11 shows the global structure of a smart router. The router collects the traffic at the packet granularity. Packets are put into queues and sent on a specific wavelength. The optical cross-connect has to manage high throughputs; its structure can be based on MEMS technology. The approach is very interesting when the traffic is heavy enough to open large pipes, thus enabling the establishment of a waveband. Cross-connection at the waveband level is the best guarantee of simplicity and reliability, without creating a discontinuity between the transmission system and the switching cross-connect.
Figure 3.11 Smart router schematic.
Figure 3.12a shows the generic structure of a high-throughput router (multi-terabit-class router) exploiting an optical core in its center part, whereas Figure 3.12b and c shows a prototype realized and the bit error ratio (BER) curve. The optical matrix is basically a fast space switch, creating connections at the packet level. The burst card is responsible for the packet format adaptation, whereas the line card is used to manage the incoming traffic. Buffers are at the input and the output of the optical matrix, and in the burst and line cards. An internal speedup can be exploited to guarantee the full functionality even in the case of failure of one of the switching planes. This approach is currently adopted by many constructors.
Both approaches are really important and demonstrate the potential of optical switching for basic functions, mainly focused on space switching: slow or fast.
In the previous cases, the solutions are based on a traffic profile assumption that favors the circuit switching technique for long-haul networks or the packet technique for small-scale backbones. However, a problem will occur when the application bit rate increases together with the variance. This can rapidly happen when merging low-bandwidth connections coming from mobile phones with high-bandwidth connections coming from more and more powerful PCs, reinforced by optical access giving very high bit rates per user. In this case, the variance can increase, leading to huge problems for efficient aggregation and forcing telecommunications companies to think differently, toward an optical packet platform for the backbone.
This means that, due to the traffic profile evolution, the packet will be used extensively. And the key question could be as follows: how to realize an efficient network capable of handling the required capacity while providing a mandatory flexibility?
In the network concept, we mainly exploit the edges to prepare the traffic in such a way that the traffic constraints inside the core of the network are relaxed. This means that all the complex functions are located in the edges such as aggregation, switching per destination, classification, packetization, traffic shaping, load balancing, and admission control. The traffic profile, having a better shape, is then sent to the network. The core nodes will be responsible for the synchronization, the contention resolution, and the switching. In this case, the packet being created in the edges only simplifies the structure of the core router.
Figure 3.12 (a) High-throughput router schematic. A fast optical switching matrix could be adopted in the center of the architecture. (b) Photo of a 640 Gbit s−1 throughput optical matrix. (c) BER performance.
As a first introduction, and if possible, an existing packet format can be adopted. The G.709 framing is currently being investigated to identify the potential of the concept, but other packet formats could be considered.
For a second introduction, it seems clear that a smaller packet size is required to relax the aggregation problems. This also calls for a packet standard that does not exist today.
It must be noted that some universities and laboratories are currently studying the possibility of managing variable packets called bursts. The advantage of this approach is that the edge part is simplified in its functionality, but the core nodes are more complex to control, and the overall performance is affected by a highly sporadic traffic profile.
Figure 3.13 Schematic of a core router in the concept of a packet network.
The core node is strongly simplified with respect to the first approach because all the complex functions are located in the edge nodes.
Figure 3.13 describes a representative structure of such a core router.
It can be noticed that the particularity of this architecture is to have synchronization stages and memory stages before and after the optical matrix in the core of the high-throughput routers. This optical matrix was introduced in the first introduction scenario, so the step is quite easy to take. The challenge is greater at the management level than at the node level.
In the previous scenario, we still needed many costly O/E conversions. One key question is as follows: can we efficiently reduce the number of O/E conversion stages in the core routers?
For this, we need to solve three key problems:
In electronics, we have bit memories making the synchronization process very simple. The information is stored at the distant clock rhythm, and it is extracted at the local clock rhythm. However, the structure is quite complex because the process is done at low bit rate, thus imposing one stage of WDM demultiplexing and one stage of bit rate demultiplexing.
In optics, we do not have bit memories, and it is therefore important to think differently. By imposing a sufficiently large guard band in the packet format, we simply need to preserve the phase between consecutive packets in order to avoid collisions. This means that we need simple synchronization structures with a resolution fine enough to avoid the problem mentioned. Typically, the achievable resolution is in the range of a few nanoseconds, and this is exactly what we will adopt for the synchronization.
The first problem is the thermal effect in the fiber, which modifies the index and creates variable delays during the propagation of data depending on the average temperature of the fiber. Since all the wavelengths of a WDM multiplex are affected in the same way, a synchronization structure operating at the WDM level can solve this problem.
The second problem is that we do not have control of the time jitter created in any optical switching fabric. This packet jitter can be a blocking point when several nodes are cascaded. We need to control the jitter packet per packet, which means that the control must be done at the wavelength level, not at the WDM level.
Therefore, in summary, we can easily solve the synchronization problem by using one or two stages combining processing at the WDM level and processing at the wavelength level. In both cases, we operate at the line bit rate. The gain is in the simplicity of both the synchronization process and the structure, making this synchronizer reliable.
The regeneration stage is mandatory if we want to exploit the maximum throughput of the optical switching matrix. The regeneration decouples the two systems, transmission and switching, in order to reach the maximum throughput of the nodes. It is also the only way to cascade nodes when the line bit rate is high. Once again, as the processing is done at the line bit rate, the structure of the optical regenerator is really simple, a guarantee of reliability and robustness.
Contention resolution is still an issue in optical architectures because we do not have any efficient optical memory. To solve the problem, we will exploit the WDM dimension and, more particularly, statistical multiplexing over the different available wavelengths. The technique adopted is therefore to avoid collisions by reassigning the wavelength of each packet at the output of the switching fabric. As the number of wavelengths per fiber can be limited, a recirculation buffer is then mandatory to solve the contention properly. By combining both techniques, the performance can easily reach that of a classical electronic switch, while offering all the switching capacity in one unique stage.
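The benefit of combining wavelength reassignment with a small recirculation buffer can be illustrated by a toy slotted simulation; the model and all numbers are assumptions for illustration, not results from the projects cited:

```python
import random

def simulate_contention(n_slots, arrival_prob, n_inputs,
                        n_wavelengths, buffer_size, seed=1):
    """Monte Carlo sketch for one output fibre: up to n_wavelengths
    packets leave per slot (wavelength reassignment), the excess enters
    a recirculation buffer, and buffer overflow is counted as loss."""
    rng = random.Random(seed)
    buffered = lost = offered = 0
    for _ in range(n_slots):
        # Each input carries a packet for this output with prob. arrival_prob.
        arrivals = sum(rng.random() < arrival_prob for _ in range(n_inputs))
        offered += arrivals
        total = buffered + arrivals              # recirculated packets compete too
        excess = max(total - n_wavelengths, 0)   # one packet per output wavelength
        buffered = min(excess, buffer_size)
        lost += excess - buffered
    return lost / offered if offered else 0.0

# 16 inputs at 40% activity onto 8 wavelengths (0.8 load per wavelength):
loss = simulate_contention(n_slots=200_000, arrival_prob=0.4,
                           n_inputs=16, n_wavelengths=8, buffer_size=16)
print(f"packet loss rate = {loss:.2e}")
```

Increasing either the number of wavelengths or the buffer depth drives the loss down, which is the combined effect the text describes.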
Photodiodes can operate at very low input powers, but this is not the case for all-optical interfaces. Therefore, the optical switching fabric must be adapted to these optical interfaces by providing the required power. Once again, different techniques can be proposed to achieve this goal.
A generic structure for an all-optical packet core is described in Figure 3.14. The particularity of this architecture is that there is no O/E conversion except for the electronic control and the memory, making this architecture cost-effective. Based on the previous concept, this architecture is simply an evolution of the core node exploiting optical functions for a better efficiency and a potential line bit rate increase at a lower cost. This approach is fully compatible with future point-to-point transmission systems at 40 Gbit s−1.
For the short- and medium-term approaches, we need the following:
Figure 3.14 Structure of an optical core router exploiting the optical resources but including an electronic memory stage in recirculation to guarantee the performance.
To realize compact systems, there is first a need for an integrated technology.
To realize the key sub-blocks presented previously, we need the following:
In the case of opto-electronic interfaces and a fast optical switching matrix (the introduction scenario for the short and medium terms), the constraints are quite relaxed because the sensitivity of the receivers allows the design of switches with small output powers. Typically, reception powers as low as −10 dBm can be considered at the output of the switching fabrics. This also means that the amplification is limited in the core of the switch, leading to very compact and less power-consuming architectures.
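The power-budget reasoning can be sketched in a few lines of dB arithmetic. Only the −10 dBm sensitivity comes from the text; the input power, SOA gain, port count, and excess loss are illustrative assumptions:

```python
import math

def splitting_loss_db(n_ports):
    """Ideal 1:N splitter loss in dB."""
    return 10 * math.log10(n_ports)

def output_power_dbm(p_in_dbm, soa_gain_db, n_ports, excess_loss_db=2.0):
    """Power reaching one output after pre-amplification and 1:N splitting.
    excess_loss_db is an assumed lumped figure for connectors/waveguides."""
    return p_in_dbm + soa_gain_db - splitting_loss_db(n_ports) - excess_loss_db

p_out = output_power_dbm(p_in_dbm=0.0, soa_gain_db=20.0, n_ports=32)
sensitivity_dbm = -10.0  # value quoted in the text
print(f"output power {p_out:.1f} dBm, margin {p_out - sensitivity_dbm:.1f} dB")
```

With relaxed receiver sensitivity, even a 32-way split leaves a comfortable margin, which is why the amplification inside the switch can stay limited.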
One typical matrix is the SOA-based broadcast-and-select architecture, which simply requires an amplification stage before the splitting stage. Another one uses tunable lasers and a wavelength router in the center. Both are represented in Figure 3.15.
The first architecture (Figure 3.15a) takes advantage of broadcasting functionalities and exploits robust devices such as ILMs or SOAs. However, it is limited in capacity, mainly due to the large losses that have to be compensated by amplification. The resulting degradation of the optical signal-to-noise ratio (OSNR) is the main limit on the capacity.
The second architecture (Figure 3.15b) has a priori a larger potential in terms of capacity because the architecture simply includes a tunable source and a passive wavelength router. However, this architecture is not adapted to the broadcast of the packets, and the fast tunable laser is probably the most challenging switching element.
In the perspective of the long-term scenario, with optical interfaces, the constraint comes from the output power, which must be high enough to be compatible with optical interfaces. In addition, since polarization causes problems in the optical regenerative structures, it is fundamental to transform a switched packet stream into a packet stream in a transmission-like configuration. This is the reason why a wavelength conversion is mandatory in the switching matrix.
Figure 3.15 (a) Optical matrix-based on SOAs. (b) Optical matrix-based on tunable lasers.
Figure 3.16 SOA-based matrix compatible for optical interfaces.
Figure 3.16 shows an optical matrix based on SOA technology but including a new element, the wavelength selection/conversion stage, as studied in the frame of the IST DAVID project.
Figure 3.17 A 32 SOA gate-array module.
SOAs have been used in different system applications for amplification, but also for wavelength conversion or optical gating. To realize large systems as described previously, a large number of components is needed. Integration is then mandatory to make such a matrix very compact. OPTO+ has designed and realized 32-SOA gate-array modules. The module shown in Figure 3.17 includes 32 SOAs and their respective drivers. It has been used to realize a 640 Gbit s−1 switching matrix.
The tunable source is a key component for many system applications.
In the case of slow switching, we can identify tunable wavelength conversion to provide the flexibility required to achieve the best utilization of the wavelengths in a network. Another evident application is the replacement of ILMs with tunable sources. The advantage is mainly in the sparing cost: instead of duplicating all the sources, the objective is to have only one source capable of emitting at any wavelength of the comb exploited in the WDM system. Finally, another application is the monitoring of optical switching systems. In this particular case, we need a compact structure capable of testing the different wavelengths and paths of a switching system. To be compatible with the system constraints, the requirements are switching times in the range of milliseconds or more (for monitoring or for sources), large tunability, high output power, and a good extinction ratio.
In the case of fast switching, the main applications are for the metro and the backbone. The tunability is fundamental in providing the flexibility required to exploit the WDM dimension in optical packet switching network concepts. The requirements are a fast switching time in the range of a few nanoseconds, small tunability (four or eight channels), high output power, a good extinction ratio, and a high ON/OFF ratio to guarantee that cross talk has no impact on the signal quality.
For slow structures, a DBR laser has been tested by different laboratories, and feasibility is not an issue.
For fast structures, the main problem is the stability of the wavelength. A DBR laser can be considered if the tunability is small. These components have been demonstrated to be feasible, with switching times in the range of a few tens of nanoseconds. Another alternative is the selective source: based on the cascade of a laser array, a SOA gate array, a phasar, and an integrated modulator, this structure has also been demonstrated to be feasible.
Optical synchronization is probably the most challenging function. The objective is to process the signal, if possible, in the WDM regime or at the wavelength level. The second important point is the lack of digital memory, which forces designers to think differently. In that context, the synchronization cannot be done at the bit level; we assume that a resolution of a few nanoseconds is sufficient. That said, the other point is to identify the sources of desynchronization with respect to a reference clock.
The first source of loss of synchronization is the thermal effect in the transmission fiber. With a value between 40 and 200 ps km−1, depending on the mechanical protection scheme adopted for the fiber, thermal effects can dramatically affect the phase of the packet streams. The WDM dimension can be advantageously exploited to make the synchronization stage compact and low cost.
The second source is the loss of synchronization in switching fabrics due to a nonideal path equalization. This occurs at the packet level, imposing a synchronization at the wavelength level.
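Taking the 40–200 ps km−1 figures quoted above for the first source, a one-line calculation shows why a synchronizer resolution of a few nanoseconds is the right order of magnitude (the 100 km span length is an illustrative assumption):

```python
# Order-of-magnitude check of the thermally induced delay drift quoted
# above (40-200 ps/km, depending on the fibre's mechanical protection).

def thermal_drift_ns(coeff_ps_per_km, length_km):
    """Worst-case delay drift over a span, converted to nanoseconds."""
    return coeff_ps_per_km * length_km / 1000.0

for coeff in (40, 200):  # ps/km bounds quoted in the text
    print(f"{coeff} ps/km over 100 km -> {thermal_drift_ns(coeff, 100):.0f} ns")
```

Even a 100 km span can thus drift by 4–20 ns, well beyond a packet guard band, which is why the thermal effect must be compensated at the WDM level.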
The structure adopted is shown in Figure 3.18.
Optical regeneration is one of the fundamental functions to make the approach realistic. To build all-optical networks while having optical switches capable of handling terabits/second throughputs, optical regeneration is mandatory at the periphery of switching architectures. The main functions are the total reshaping of the pulses in the amplitude and time domains. To achieve this reshaping, several techniques can be adopted. We will retain one particularly adapted to the characteristics of switching fabrics, which create strong impairments between pulses or between groups of pulses. The technique adopted is a total reshaping of the pulses using nonlinear elements such as Mach–Zehnder or Michelson structures.
Figure 3.18 Optical synchronizer as proposed in the frame of the IST DAVID project.
The main distortions identified are the following: bits affected at the periphery of packets due to the switching regime, nonlinear effects such as cross-gain modulation and four-wave mixing, cross talk (in-band and out-of-band), patterning effects by crossing active devices, noise accumulation, and jitter accumulation.
To overcome these effects, a structure has been proposed in the frame of the DAVID project. This structure, presented in Figure 3.19, has the following characteristic: by cascading two nonlinear elements, the convolution of their transfer functions creates a much more nonlinear overall transfer function, thus limiting the noise transferred through the first stage. This has an important impact on the OSNR specification, which can remain close to the back-to-back value (before the first stage) even for a large number of cascaded nodes.
Figure 3.19 Structure of the Re-amplification–Reshaping–Retiming (3R) regenerator as it is studied in the frame of the DAVID project.
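The noise-compression effect of cascading nonlinear elements can be illustrated with a toy model; the tanh-shaped transfer function and all parameters are assumptions for illustration, not the DAVID design:

```python
import math
import random

def stage(x, steepness=8.0, threshold=0.5):
    """Tanh-shaped decision-like transfer of one nonlinear element (toy model)."""
    return 0.5 * (1.0 + math.tanh(steepness * (x - threshold)))

def cascade(x, n_stages):
    """Apply n_stages nonlinear elements in series."""
    for _ in range(n_stages):
        x = stage(x)
    return x

def std(vals):
    m = sum(vals) / len(vals)
    return math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))

rng = random.Random(0)
# Noisy '1' rail: mean 1.0, sigma 0.1 (illustrative numbers).
ones = [rng.gauss(1.0, 0.1) for _ in range(10_000)]

after_one = [cascade(v, 1) for v in ones]
after_two = [cascade(v, 2) for v in ones]
print(f"std on the '1' rail: in={std(ones):.3f}, "
      f"1 stage={std(after_one):.4f}, 2 stages={std(after_two):.5f}")
```

Because the slope of the transfer function is far below 1 around the signal levels, each stage contracts the amplitude noise, and two stages compress it further than one, mirroring the OSNR argument above.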
To fully reshape the pulses, different techniques will be adopted. We can retain an amplitude and a phase modulation creating an amplitude modulation in interferometric structures to enhance the extinction ratio and remove the noise. The second technique adopted is a sampling of each pulse with a clock to remove the jitter. A wavelength conversion technique will then be preferred to reallocate the wavelength within the correct wavelength comb of the new system.
The feasibility of the approach was demonstrated for the first time in 1998, at the end of the ACTS KEOPS project.
In this project, we cascaded 40 network sections error free at 10 Gbit s−1 per wavelength, demonstrating for the first time the possibility of building an all-optical network at a backbone scale.
Figure 3.20 gives the network section tested and put in a loop to demonstrate the concept.
The performance is probably one of the most important indicators for the feasibility of such concepts. When the physical aspects are verified, the challenge becomes the performance in a real traffic environment.
While in the metro the capacity required is limited, in the backbone capacity is the major characteristic. To provide this capacity with a technology limited today to 10 Gbit s−1, the only solution is the exploitation of the WDM dimension.
Figure 3.20 (a) Network section put in a loop to test the feasibility of an all-optical network. (b) Photo of the demonstrator. (c) BER curves giving the physical performance.
Therefore, the WDM dimension will be fully exploited to provide the required capacity but also to avoid collisions due to the natural statistical multiplexing of packets on the wavelengths.
The second particularity is the aggregation. Depending on the traffic profile, circuit switching or packet switching will be preferred.
Circuit switching techniques can be envisaged in the first scenario as a transport layer providing the transport capacity.
To be compatible with data traffic, the coupling of a cross-connect with a packet router is, even today, the most pragmatic approach, and it is the subject of strong interest for current products.
However, this solution is not really cost-effective because only two alternatives can be adopted:
In this case, what could be the benefit of the concept proposed?
Figure 3.21 shows a table summarizing the performance in terms of packet loss rate established in the frame of the Réseau Optique Multiservice (ROM) project. It appears that for the three classes of service considered, the end-to-end performance can be obtained. From dimensioning studies, it appears that for the WAN, a combined load of 30% for Class of Service 1 (CoS 1) and CoS 2, with best effort (BE) lower than 80%, is tolerated.
Figure 3.21 Performance of the all-optical packet switching network concept.
This demonstrates the viability of an all-optical concept and as a consequence the viability of the opto-electronic scenario.
Selecting a technology for this task is not easy, but we can draw some conclusions:
For the cost approach, everything will depend on the aggregation efficiency. In the following, we have computed the relative cost of different approaches, comparing mainly packet switching and circuit switching.
If the average load of a wavelength is high enough, thanks to an efficient aggregation process, then circuit switching is probably viable. However, if the load is low, below 20%, then even if packet switches are more expensive, the gain in statistical multiplexing creates a real opportunity for packet techniques, making them less expensive than circuit switching techniques.
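This trade-off can be made concrete with a toy cost model: a packet port that costs twice a circuit port still wins if statistical multiplexing raises the useful load from 20% to 80%. The port-cost ratio and loads are illustrative; only the "below 20%" threshold echoes the text:

```python
# Toy comparison of circuit vs packet switching: cost of one switch
# port normalized by the useful traffic it actually carries.

def cost_per_carried_unit(port_cost, wavelength_load):
    """Port cost divided by the average load actually carried."""
    return port_cost / wavelength_load

circuit = cost_per_carried_unit(port_cost=1.0, wavelength_load=0.2)  # poor grooming
packet = cost_per_carried_unit(port_cost=2.0, wavelength_load=0.8)   # stat. muxing

print(f"circuit: {circuit:.2f}, packet: {packet:.2f} per carried unit")
```

Under these assumptions the packet port carries traffic at half the cost per useful unit, despite being twice as expensive, which is the mechanism behind the breakeven curves of Figures 3.22 and 3.23.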
The main reason for this gain is probably the high cost of a wavelength, due to expensive infrastructure costs, which pushes all telecommunications companies to prefer an increase of the bit rate rather than an exploitation of the WDM dimension.
Therefore, the tendency is probably toward packet techniques, to decouple the switching granularity from the bit rate, and toward high bit rates, to adopt the cheapest technology while providing the required capacity.
Figure 3.22 shows the areas where optical packet switching is better than circuit switching.
In the figure, the numbers 2, 4, 6, and 8 indicate the ratio in cost per port (wavelength) between an optical packet switch and a cross-connect of the same size (256 × 256) and the same technology.
The load of a wavelength refers to its average load.
Figure 3.22 Need to exploit packet techniques in the near future.
The ratio on the horizontal axis is the ratio between the wavelength transmission cost (including installation costs) and the cost of a cross-connect port.
For example, if the ratio between the cost of a wavelength in the transmission system and the cost of a cross-connect port is equal to 1 (red bar), optical packet switching techniques are interesting:
Therefore, the tendency is the following: If the cost of a wavelength in a transmission system is high (the case of the metro where the installation cost is not negligible), or if the aggregation is not efficient enough forcing an average load very low (this is a serious tendency with the increase of the application bit rate and the sporadicity of the traffic profile), packet switching techniques always exhibit a better performance than circuit switching.
As an example, we have computed, for two levels of aggregation, the load of a network with respect to the distance of the network (from the access to the backbone). It appears that in most cases, if the aggregation process is not efficient enough, the packet switching technique is the cost-effective solution, even in the backbone.
Figure 3.23 shows the importance of packet switching techniques, also for the backbone. This is one of several curves that could be drawn. However, it once again shows a tendency.
The grooming or the aggregation efficiency depends on the traffic profile in large part. Therefore, we plot two indicative curves:
Figure 3.23 Curve showing strong interest to introduce packet switching techniques even in the backbone.
The vertical axis indicates the required average load of a wavelength for circuit switching to be cost-effective. The horizontal axis indicates the average distance of a transmission system with respect to an average network section representative of the network considered. Therefore, the WAN starts for transmissions longer than 100 km. The calculations show the importance of time multiplexing. If the distance is long, a large number of clients share the same network infrastructure, so the cost per client is reduced. In addition, the cost of installing a wavelength is considered lower than in the metro, because in the WAN natural infrastructures (such as highways or railways) are exploited to reduce the installation costs. Below the curve in bold, there is an advantage in introducing packet switching techniques. The grooming tendency gives the values of the load required for circuit switching.
For example, in the case of a good grooming efficiency, if the average propagation distance of a representative network section is lower than 300 km, there is a cost gain in exploiting packet techniques. In the case of a low grooming efficiency, packet switching techniques are always more efficient.
In this chapter, we have presented optical switching as a potential technique for the next generation of systems and networks. More importantly, an evolution scenario is given for the metro part and the backbone part, describing what could be the most promising solutions. Optical packet switching techniques appear very attractive because they really offer a solution compliant with the traffic constraints.
Circuit switching techniques will be introduced as a first step, but we must not forget optical packet switching techniques that will improve the bandwidth utilization.
We have seen that there is no fundamental obstacle to building any of the network concepts proposed, because all the functions have already been demonstrated to be feasible. The issue is now the availability of the technology and its cost. Progress on this integrated, low-cost optical technology will be fundamental for future systems and could really provide new advantages with respect to classical solutions exploiting electronics only.
Today, we can imagine two scenarios:
The first one consists of the introduction of a circuit switching platform to give a concrete answer to an immediate need at a lower cost. Circuit switching is probably the best answer today. However, we cannot forget the evolution of the traffic profile and the increase of the bit rate at the access part. Therefore, the migration scenario is an important argument to propose solutions that can be rapidly adapted to packet switching techniques with the best flexibility and upgradability.
The second scenario is the adoption of packet switching techniques such as RPR for the metro or routers for the backbone. We then need to think about competitive solutions with serious added value to justify the introduction of optical techniques in the network. Optical packet switching is probably one technique that can emerge. In the metro part, the benefit is mainly in the exploitation of the WDM dimension and in very simple in-line processing (without any buffer) to reduce the latency and the number of transceivers (TRX). In the backbone part, the benefit is probably in the adoption of large packets, assimilated to containers, in order to exploit techniques to reshape and manage the traffic profile in the edge nodes, and WDM techniques to reduce the latency without constraining the capacity expansion in the core nodes.
However, to build these subsystems, there is also a need for an advanced technology. Without advanced components such as tunable sources or tunable filters, there will be no chance of providing the functionality required to be really competitive. Therefore, the development of this new technology (components and systems) is fundamental and will position an equipment constructor as a leader in the future market.
The author acknowledges his colleagues from Alcatel, the European Commission, and the French ministry for funding the following projects: RACE 2039 ATMOS, ACTS 043 KEOPS, REPEAT, IST DAVID, and RNRT ROM; particularly T. Atmaca from INT and M. Renaud from Opto+, who provided key results in terms of network performance and optical component illustration, and all the partners involved in these projects.