The technical details of media access are generally implemented in OSI layers 1 and 2. Usually, the nature of OSI layer 1 determines the operating conditions and the feasible access mechanisms that can be implemented in layer 2.
Media access on full-duplex media is often a simple task because each station can send data independently of the others. On the other hand, media access on half-duplex media is typically nontrivial and requires solutions to several issues, most importantly the arbitration of the shared channel and the handling of collisions.
Furthermore, a key question for every access mechanism is its media utilization. Typically, there is a natural trade-off between real-time support and maximum media utilization, which can often be resolved only by increasing the complexity of the system (for example, by introducing a base station, as is typical in wireless data/voice communications).
The subsequent sections provide a detailed problem description, especially considering half-duplex media access in more detail, and explain possible solutions. Historical and current examples are also given.
If the communication medium supports full-duplex access, most communication problems are inherently solved. Full duplex can be achieved, for example, by using separate transmit and receive channels.
When full-duplex communication is supported, a station willing to transmit a frame does not need to consider the transmission states of the other station. Instead, the current station can immediately initiate the transmission. Any issues regarding QoS, blocking, or real-time requirements must be solved by the actual network node (e.g., an Ethernet switch) using queuing and scheduling to decide what should be forwarded first to the next link.
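The queuing and scheduling decision mentioned above can be sketched as follows. This is a minimal, hypothetical strict-priority scheduler (queue count and priority values are illustrative, not taken from any particular switch): frames from the highest-priority non-empty queue are always forwarded first.

```python
from collections import deque

class PriorityScheduler:
    """Strict-priority output scheduler: frames from the highest
    non-empty priority queue are forwarded first (0 = highest)."""

    def __init__(self, num_queues=4):
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, frame, priority):
        self.queues[priority].append(frame)

    def dequeue(self):
        # Scan queues from highest to lowest priority.
        for q in self.queues:
            if q:
                return q.popleft()
        return None  # nothing to forward

sched = PriorityScheduler()
sched.enqueue("bulk-data", priority=3)
sched.enqueue("voice", priority=0)
assert sched.dequeue() == "voice"      # high priority is forwarded first
assert sched.dequeue() == "bulk-data"
```

Strict priority is only one possible discipline; real switches may also use weighted round-robin or similar schemes to avoid starving low-priority queues.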
Consequently, half-duplex media deserve closer attention, especially because several modern systems, such as wireless networks, are typically half duplex. Furthermore, half-duplex mode is preferred in many real-time communication systems (e.g., Ethernet Powerlink) in order to avoid bridges, which would introduce additional latency.
Access on half-duplex media requires algorithms that select exactly one station for transmission at any given time. For a particular station, the problem is deciding when to send data. The algorithms can be divided into deterministic and statistical approaches.
Obviously, access arbitration is the central mechanism that must be tuned and controlled to achieve or improve fairness, precedence, QoS, and real-time behavior.
A technically simple approach to handle half-duplex media access is to assign a dedicated timeslot of fixed size to each station. If there are N timeslots within a periodic frame, then the media can support up to N stations (however, usually N − 1 because one timeslot is often used for management purposes).
This mode is also known as time division multiple access (TDMA) and is generally best suited for isochronous services such as telephony, where the communication patterns can be described by calls or circuits (steady data rates, blocking, non-bursty). TDMA is implemented, for example, in GSM, UMTS, DECT, or TETRA.
In some cases (e.g., TETRA), multiple stations that wish to communicate with each other can statistically share a particular timeslot. The major disadvantage is that one station cannot utilize unused timeslots easily (without significant increase of the overall management complexity; an example is GSM). Therefore, TDMA is not a good choice to transmit bursty data traffic.
In order to achieve better media utilization, TDMA can be implemented with dynamically negotiated timeslot sizes. Every additional station registering at the central base station will be assigned a dedicated timeslot at the expense of the timeslot sizes of other existing stations. That is, assuming a fair environment, the base station would create new timeslots by dividing the whole time period (typically called a superframe) by the total number of stations.
Alternatively, timeslot sizes may remain constant but the base station may assign a set of D timeslots for particular stations. That is, a single station may be assigned a single timeslot or D timeslots, assuming the superframe consists of N > D timeslots.
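The two slot-assignment variants described above can be sketched as follows; the superframe length, station names, and demand values are illustrative assumptions, not taken from any particular system.

```python
def equal_timeslots(superframe_us, stations):
    """Dynamically negotiated slot sizes: the superframe is divided
    equally among all registered stations."""
    return superframe_us / len(stations)

def assign_fixed_slots(num_slots, demands):
    """Fixed slot sizes: each station receives its demanded number D
    of the N slots. Returns a slot-index -> station map."""
    assert sum(demands.values()) <= num_slots
    schedule, slot = {}, 0
    for station, d in demands.items():
        for _ in range(d):
            schedule[slot] = station
            slot += 1
    return schedule

# A 10 ms superframe shared equally by three stations:
print(equal_timeslots(10_000, ["A", "B", "C"]))
# Station B is assigned D = 2 of N = 4 fixed-size slots:
print(assign_fixed_slots(4, {"A": 1, "B": 2, "C": 1}))
```

In the first variant, registering a fourth station would shrink every slot from one third to one quarter of the superframe; in the second, slot sizes stay constant and only the slot-to-station mapping changes.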
This solution achieves a mix of both real-time behavior and good trunk utilization and is therefore common in modern wireless cellular network solutions, including WiMAX, GSM, and UMTS.
The DARPA-funded Aloha media access protocol was invented in 1971 to interconnect several minicomputers distributed over the Hawaiian islands via a single wireless channel [Abr70]. Although Aloha is seldom used in practice today, it is essential to understand the Aloha mechanisms (especially their drawbacks) before studying modern concepts such as carrier sense multiple access (CSMA).
All classic Aloha methods described below used fixed frame sizes (cells) and did not support carrier sense (CS).
The basic idea of Aloha is implemented in Pure Aloha: a station transmits as soon as data are available, regardless of any ongoing transmissions; if no acknowledgment is received (i.e., the frame collided), the station retransmits after a random waiting time.
Due to this rather anarchic access method, the collision probability is relatively high. Let T denote the fixed message length in seconds and R the total rate of messages and retransmissions. Assuming that the starts of messages and retransmissions can be described by a Poisson process, the probability of zero messages within time T is exp(−RT). However, a collision occurs if a particular message is sent at time t and another message appears within [t − T, t + T]. That is, a collision occurs if one or more further messages appear within a time interval of 2T seconds. Therefore, the collision probability is 1 − exp(−2TR). Hence the average number of retransmissions per second is R[1 − exp(−2TR)], and adding the new-message rate r leads to R = r + R(1 − exp(−2TR)), or equivalently, r = R exp(−2TR).
Multiplying both sides by T gives a relation between the channel utilization rT and the total channel traffic RT. The channel utilization reaches its maximum value of 1/(2e) ≈ 0.184 when RT = 0.5. That is, the channel is saturated as soon as, on average, one message is sent per two message durations T. At maximum, only 18.4% of the channel bandwidth can be utilized.
Since the message rate r is the product of the number of stations k and the average single-station message rate λ, the maximum number of stations is determined solely by λ and T, that is, kmax = 1/(2eλT).
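The Pure Aloha result can be checked numerically; the function below simply evaluates the utilization rT = RT·exp(−2RT) derived above.

```python
import math

def pure_aloha_utilization(RT):
    """Channel utilization rT as a function of the total channel
    traffic RT, following r = R * exp(-2TR)."""
    return RT * math.exp(-2 * RT)

peak = pure_aloha_utilization(0.5)
print(round(peak, 3))  # 0.184, i.e., 1/(2e)
```

Evaluating neighboring points (e.g., RT = 0.4 or 0.6) confirms that RT = 0.5 is indeed the maximum.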
The throughput of Aloha can be doubled if all transmissions must take place within fixed timeslots of size T, which is usually equal to the frame duration. That is, all stations maintain a synchronized clock and transmissions must start at the beginning of a timeslot. In this situation, a transmission can only collide with transmissions starting in the same timeslot T, not within a vulnerable interval of 2T as in the case of Pure Aloha. Therefore, using the same considerations as above, the maximum channel utilization is 1/e ≈ 36.8%.
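A short Monte Carlo simulation illustrates the Slotted Aloha figure. It assumes the number of transmission attempts per slot is Poisson-distributed (consistent with the model above); a slot is successful iff exactly one station transmits in it.

```python
import math, random

def slotted_aloha_throughput(G, slots=200_000, seed=1):
    """Estimate Slotted Aloha throughput for mean traffic G attempts
    per slot: count slots carrying exactly one transmission."""
    rng = random.Random(seed)
    L = math.exp(-G)
    successes = 0
    for _ in range(slots):
        # Knuth's method for sampling a Poisson(G) variate.
        k, p = 0, 1.0
        while True:
            k += 1
            p *= rng.random()
            if p <= L:
                break
        attempts = k - 1
        if attempts == 1:
            successes += 1
    return successes / slots

est = slotted_aloha_throughput(1.0)
print(round(est, 3))  # close to 1/e = 0.368
```

At the optimum traffic of G = 1 attempt per slot, roughly 36.8% of the slots carry a successful transmission, matching the analytical maximum.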
Today, Slotted Aloha is still used, for example, in RFID networks because the algorithm can be implemented on simple ASIC structures and the throughput requirements are relatively low.
The most important achievement was made by Robert Metcalfe in 1972 when he created Ethernet, based on Carrier Sense Multiple Access with Collision Detection (CSMA/CD). Actually, three important improvements [Cha00] were made compared to the Aloha principle: first, each station senses the carrier and defers its transmission while the channel is busy (CS); second, each station monitors the channel during its own transmission and aborts immediately when a collision is detected (CD); third, after a collision every involved station waits a random backoff time whose range doubles with each successive collision.
This truncated binary exponential backoff algorithm ensures that the probability of repeated collisions is exponentially reduced. The backoff is truncated at k = 10, so up to 2^10 = 1024 potential timeslots can be occupied by stations (theoretically).
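A minimal sketch of the backoff rule in slot-time units (the truncation limit k = 10 is from the text; the function name is ours):

```python
import random

def backoff_slots(collisions, rng=random):
    """Truncated binary exponential backoff: after the n-th successive
    collision, wait a random number of slot times drawn uniformly from
    [0, 2^k - 1] with k = min(n, 10)."""
    k = min(collisions, 10)       # truncation at k = 10
    return rng.randrange(2 ** k)  # at most 1023, i.e., 1024 slot values

assert backoff_slots(1) in (0, 1)
assert 0 <= backoff_slots(16) <= 1023
```

Classic Ethernet additionally gives up and reports an error after 16 failed attempts; the sketch models only the waiting time.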
These improvements enable Ethernet to reach throughputs close to the physical bandwidth, in particular 95% with small frames and close to 100% with large frames [Bog88]. It is important to understand that collisions in CSMA/CD-based network technologies are part of the arbitration process; the occurrence of collisions does not indicate a bad design.
In certain environments, most importantly wireless media within the RF range, the concept of CD is not implementable because a transmitting station cannot listen at the same time (its own signal drowns out all other signals at the local antenna) and because collisions occur at the receiver, where the sender may not be able to detect them.
Therefore, wireless technologies such as IEEE 802.11 WLANs utilize a collision avoidance (CA) method with the following properties: a station senses the channel and defers as long as it is busy; before transmitting, it additionally waits a random backoff time in order to desynchronize competing stations; and every successfully received frame is confirmed by an acknowledgment, so that a missing acknowledgment indicates a collision.
In case of collisions, again a truncated binary exponential backoff algorithm ensures that the probability of repeated collisions is exponentially reduced.
Generally, the throughput of CSMA/CA transmissions is approximately 50% lower than the nominal bit rate because each packet must await an acknowledgment.
Statistical but collision-free medium access methods can be implemented via a token mechanism. The principle is simple: a single “token” (e.g., a bit flag) is handed over from one station to the next, and only the station that currently possesses the token may initiate a data transmission. After a transmission, the station must forward the token to the next station.
Logically, the network must follow a ring topology because the single token must “rotate” through all stations. This method has been implemented in IEEE 802.4 (token bus, where the physical medium is a bus) and IEEE 802.5 (token ring, where the physical medium is also a ring).
Obviously, token-based access arbitration provides statistical access to the medium because idle stations simply forward the token without sending any frame. However, note that the arbitration itself is deterministic, i.e., the order is predictable.
In general, token mechanisms prevent channel capture, i.e., unlike in CSMA/CD-based networks, it is not possible that particular stations are favored by the random backoff algorithm under specific network conditions.
Moreover, the rotating token provides each station periodic transmission opportunities with guaranteed maximum interval duration. This maximum token rotation time (MTRT) depends on the number of stations and the maximum transmission delay per station.
Since the token is always forwarded deterministically and the MTRT is a known constant, token-based access arbitration is well suited in real-time environments. Today, token mechanisms are mainly used in field bus systems, such as Profibus (IEC 61158/IEC 61784).
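The MTRT bound mentioned above can be computed directly from the number of stations and the per-station times; the concrete values below are assumed for illustration.

```python
def max_token_rotation_time(stations, t_hold_max_us, t_pass_us):
    """Worst-case MTRT: every station holds the token for its maximum
    transmission time, plus one token-pass delay per hop."""
    return stations * (t_hold_max_us + t_pass_us)

# 20 stations, 2000 us maximum transmission time, 100 us token pass:
print(max_token_rotation_time(20, 2000, 100))  # 42000 us
```

Because this bound is known in advance, a station can guarantee that it gets a transmission opportunity at least once per MTRT, which is exactly the property real-time field bus systems rely on.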
Polling mechanisms logically resemble token passing; the only difference is that a dedicated central polling station is required (while token mechanisms do not need dedicated stations, except for token monitoring and reestablishment tasks).
Instead of a token that rotates through all stations, a central polling instance asks each station in turn whether it wants to transmit data. If a station does not want to transmit, the central station receives some sort of negative acknowledgment and immediately polls the next station.
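A toy model of one polling cycle (station names and frame queues are invented for illustration): an empty queue models the negative acknowledgment, after which the next station is polled immediately.

```python
def poll_round(stations):
    """One polling cycle: collect one frame per willing station, in
    polling order. Empty queues model a negative acknowledgment."""
    delivered = []
    for name, queue in stations.items():
        if queue:
            delivered.append((name, queue.pop(0)))  # station transmits
        # else: NAK received, poll the next station immediately
    return delivered

stations = {"A": ["a1"], "B": [], "C": ["c1", "c2"]}
print(poll_round(stations))  # [('A', 'a1'), ('C', 'c1')]
print(poll_round(stations))  # [('C', 'c2')]
```

As with token passing, the per-cycle order is fixed, so the maximum access delay is bounded by the cycle duration.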
The polling arbitration mechanism has the following distinguishing properties: access is collision-free; the polling order, and thus the maximum access delay, is deterministic; and the central polling station introduces signaling overhead for idle stations and constitutes a single point of failure.
Polling-based access arbitration has been used in IEEE 802.12 and is still implemented in several real-time systems. The standard IEEE 802.11 defines an optional polling mode for real-time services, though it is seldom implemented because queue-based CSMA/CA is sufficient in practice. Another, more recent example is Ethernet POWERLINK, an Ethernet-based real-time protocol.
Although the problem of access arbitration is the most important issue, this short section shall highlight other issues that must be considered with each implementation.
Regarding the issue of fairness, the two main questions are whether every station obtains, on average, the same share of the medium, and whether particular stations can be systematically disadvantaged by the arbitration scheme.
Obviously, only statistical arbitration methods may result in unfair situations. One example of a fairness-enforcing mechanism is to introduce a post-backoff in CSMA/CA-based systems. This concept requires that every station that successfully occupied the channel start a post-backoff timer after its transmission, so that unsuccessful stations have better chances in the next arbitration cycle.
In many applications, several stations or frames (especially control frames) must be given a higher access priority. If statistical access methods are used (e.g., CSMA), priorities can be implemented by tuning the backoff times. In deterministic access schemes (e.g., token passing), priority information (flags) might be signaled within the token. Obviously, TDM methods provide the same access priority to each station.
Again, the problem of QoS is basically an issue with statistical access schemes because token or polling principles inherently provide constraints on delay. Obviously, QoS is not an issue with (synchronous) TDM techniques.
With statistical access schemes, prioritized queuing and correspondingly tuned backoff is the only method to implement QoS. For example, IEEE 802.11e defines up to eight queues per station (in practice, Wi-Fi Multimedia requires only four queues) to differentiate higher- and lower-priority packets. Each queue can be considered a virtual station with its own backoff constraints. That is, the truncated binary exponential backoff algorithm is implemented for each queue separately, while the range of possible random numbers (the contention window, CW) has a different size in each queue, depending on the priority.
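The per-queue contention can be sketched as follows. The contention-window sizes are purely illustrative (the actual 802.11e/WMM values differ); each queue draws its own backoff, and the smallest draw wins the internal arbitration.

```python
import random

# Illustrative contention-window sizes per queue (the real 802.11e/WMM
# values differ); a smaller CW means a higher priority.
CW = {"voice": 7, "video": 15, "best_effort": 31, "background": 63}

def draw_backoffs(rng):
    """Each queue acts as a virtual station and draws its own random
    backoff from [0, CW]; the queue with the smallest draw transmits."""
    return {queue: rng.randint(0, cw) for queue, cw in CW.items()}

rng = random.Random(0)
wins = {queue: 0 for queue in CW}
for _ in range(10_000):
    draws = draw_backoffs(rng)
    wins[min(draws, key=draws.get)] += 1
print(wins)  # high-priority queues win far more often
```

Running the experiment shows the statistical nature of this scheme: the voice queue wins most arbitration rounds, but lower-priority queues still transmit occasionally.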
This QoS mechanism does not guarantee that high-priority frames are always preferred when low-priority queues are also filled; rather, this mechanism provides statistical QoS—which turned out to be sufficient for practical interactive voice and video services.
In wireless environments, some stations may interfere with sessions between other pairs of stations. This is especially an important issue when the disturbing stations cannot detect the peers of existing sessions.
For example, assume that station A communicates with station B. Another station C can detect A but not B. Therefore, CSMA/CA on C will not take B into account, and station A will suffer from collisions caused by simultaneous transmissions of B and C. In this example, stations B and C are “hidden” from each other.
One (optional) solution defined in IEEE 802.11 (WLAN) is to announce a transmission via a dedicated request-to-send (RTS) frame, which is confirmed by the receiver with a clear-to-send (CTS) frame. Both frames, RTS and CTS, carry an estimate of the total handshake duration. Hidden stations will receive either the RTS or the CTS frame and must remain silent for the announced handshake duration.
That is, the total transaction consists of one RTS, one CTS, one data, and one acknowledgment frame. Clearly, this four-way handshake decreases the throughput of plain CSMA/CA by approximately 50%; only 25% of the bandwidth remains for data throughput.
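The 25% figure is a back-of-the-envelope estimate; under the crude assumption that each of the four frames occupies the channel for a similar total time, the remaining payload share is:

```python
def data_fraction(t_rts, t_cts, t_data, t_ack):
    """Share of the four-way handshake spent on the data frame."""
    return t_data / (t_rts + t_cts + t_data + t_ack)

# Crude equal-duration assumption for all four frames:
print(data_fraction(50, 50, 50, 50))  # 0.25
```

In practice, data frames are usually much longer than the control frames, so the real overhead depends strongly on the frame size.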
In reality, the RTS/CTS mechanism is typically only used in point-to-multipoint wireless bridging scenarios, where the concurrent non-headend bridges use directional antennas (pointing to the single head-end bridge) and therefore cannot see each other.