Chapter 4
Medium Access Control Sublayer
The main theme of the chapter is how to allocate a single broadcast channel
(versus point to point) among competing users. Various approaches using static
and dynamic channel allocation will be examined. Special attention will be given
to Ethernet.
4.1.1 Static Channel Allocation in LANs and MANs - Important to
remember the bursty nature of data communications. WANs are not considered
because they are normally connected via point-to-point channels.
- Frequency Division Multiplexing - The frequency spectrum of the channel is
divided (usually evenly) among n users. Each user appears to have a private
channel with 1/n of the total bandwidth. Wastes bandwidth when a user has
nothing to send or receive, which is often the case in data communications.
Other users cannot take advantage of the unused bandwidth.
- Time Division Multiplexing - Time on the channel is divided (usually
evenly) among n users. Each user appears to have the full channel for 1/n of
the time. Same problems as FDM.
- Analysis of Static Channel Allocation - Static allocation is
intuitively a bad idea: for n divisions of a channel, any one user is limited
to only 1/n of the channel bandwidth whether other users are accessing the
channel or not. By limiting a user to only a fraction of the available
channel, the delay seen by the user is increased over what it would be if the
entire channel were available.
Intuitively, static allocation restricts one user to one channel even when
other channels are idle. Consider the following two diagrams, each with one
user wanting to transmit 400 bits over 1 bit per second channels. The left
diagram has a delay 4 times greater than the right: 400 seconds using 1
channel versus 100 seconds using all 4 channels.
Static Allocation - 1 channel per user
Dynamic Allocation - up to 4 channels per user (one user with the entire channel)
Generally we want delay to be small. More formally, the mean time delay T
for one channel is:
- C = channel capacity in bps (constant). Larger capacity reduces T.
- l = arrival rate in frames/sec. A smaller arrival rate reduces T.
- m = 1/(mean frame length in bits), i.e. frames per bit, so mC is the
service rate in frames/sec. Shorter frames (larger m) reduce T.
- T = 1/(mC - l)   (mean time delay)
For each of the n subchannels, the mean delay using FDM (T_FDM) is:
- C/n = subchannel capacity in bps (constant)
- l/n = arrival rate per subchannel, frames/sec
- m = frames per bit, as above
- T_FDM = 1/(m(C/n) - (l/n)) = n/(mC - l) = nT
Generally, dividing a channel statically into n channels increases
the average delay by a factor of n agreeing with the intuitive result
from the diagrams.
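To make the nT result concrete, here is a small Python sketch that evaluates
T and T_FDM; the capacity, arrival rate, frame size, and n below are
illustrative assumptions, not values from the notes.

# Mean delay for one shared channel vs. the same channel split statically n ways.
# m is frames per bit (1 / mean frame length), so m*C is the service rate in frames/sec.

def mean_delay(capacity_bps, arrival_fps, mean_frame_bits):
    """T = 1 / (m*C - l)."""
    service_rate = capacity_bps / mean_frame_bits   # frames/sec the channel can carry
    if arrival_fps >= service_rate:
        raise ValueError("offered load exceeds capacity")
    return 1.0 / (service_rate - arrival_fps)

C = 100_000_000      # 100 Mbps channel (assumed)
arrival = 5_000      # 5000 frames/sec total arrival rate (assumed)
bits = 10_000        # 10,000-bit average frame (assumed)
n = 10               # number of static subchannels (assumed)

T = mean_delay(C, arrival, bits)
T_fdm = mean_delay(C / n, arrival / n, bits)   # each subchannel gets C/n capacity and l/n load

print(f"T     = {T * 1e6:.0f} microseconds")
print(f"T_FDM = {T_fdm * 1e6:.0f} microseconds  ({T_fdm / T:.0f} times T)")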
4.1.2 Dynamic Channel Allocation in LANs and MANs - Static allocation of a
multi-access channel is clearly undesirable when overall channel usage is
low. Note that channel use is bursty, with long periods of inactivity
punctuated by short bursts of activity.
Dynamic Channel Allocation Assumptions
- Station model - n independent stations each generating
frames for transmission. One frame is generated and successfully
transmitted at a time.
- Single channel - A single channel is shared with other
stations.
- Collision - Overlapping transmissions destroy the frames, can
be detected, and require retransmission. Collisions are the only errors.
- Time
- Continuous time - No master clock dividing time,
transmissions can begin at any time.
- Slotted time - Master clock, time is divided into discrete
intervals (slots), transmissions can only begin at the start of a slot.
A slot may contain 0 frames (idle), 1 frame (a successful transmission), or
2 or more frames (a collision).
- Carrier
- Carrier sense - Stations can detect whether the channel is in
use prior to transmitting. If in use waits for channel to become
available.
- No carrier sense - Stations cannot detect whether the channel
is in use prior to transmitting. Transmits and later determines whether
successful.
4.2 Multiaccess Protocols - Dynamic channel allocation
4.2.1 ALOHA - Stations broadcast by radio on a common frequency.
A figure in the original notes illustrates 4 independent stations each
attempting to transmit at the same time.
Because of the shared frequency, when more than one station broadcasts at
once the signals are effectively destroyed and the frames must be
retransmitted.
Pure ALOHA - The first three assumptions are valid along with the
following two:
- Time
- Continuous time - Each station can broadcast at any time.
- Carrier
- No carrier sense - Before transmitting, a station does not
check whether another station is already transmitting.
Important points
- Station transmits whenever ready.
- Collisions are detected through the channel feedback: a sending station
whose frame collided will not hear its own frame come back intact.
- On a collision, the sender waits a random time before trying again (to
avoid a certain repeat collision if the colliding stations retransmitted
immediately). Because there is no collision detection, a sender whose frame
collides still sends the full frame to completion.
- Contention period - The time interval in which frames can overlap
(i.e. a collision can occur).
- Assuming fixed size frames, for Pure ALOHA the vulnerable period surrounds
the moment one station starts transmitting.
- Because of no carrier sense (i.e. a sender does not check whether the
channel is in use) the contention period lasts for two frame times: a new
station's first bit could overlap the last bit of the transmitting station's
frame, and the transmitting station's first bit could overlap the last bit of
a frame already in progress.
- Efficiency
- S = channel throughput; the mean number of new frames generated per frame
time. If S>1 the offered load exceeds channel capacity, so 0<S<1 is assumed.
Successfully sending one new frame every two frame times would yield
throughput S=0.5.
- G = channel load (i.e. offered traffic); retransmissions per frame time +
new frames S, so G >= S.
- when there are no collisions, retransmissions = 0 and G = S.
- under high load, collisions are likely and G > S.
- P0 = e^(-2G), the probability that a transmission does not collide during
the contention interval of 2 frame times.
- S = G*P0 = G*e^(-2G) (assuming all new frames eventually get sent).
Channel throughput is the offered traffic (retransmitted frames + new frames)
times the probability of no collision.
- Maximum throughput - At G=0.5, meaning that a frame transmission is
attempted by all stations combined in about 1/2 of the available frame times.
Note that 5 stations each transmitting 100% of the possible frame times would
produce G=5.0 offered traffic, far exceeding the channel capacity. G=0.5
makes some intuitive sense given the contention period of 2 frame times;
transmitting in more than 1/2 the available frame times would guarantee that
most transmissions result in a collision.
Then maximum throughput S = G*e^(-2G) = 0.5*e^(-2*0.5) = 0.5*e^(-1) = 1/(2e)
= 1/(2*2.718) = 0.184 new frames per frame time. Channel utilization is
about 18%.
Pure ALOHA Contention Period is 2 Frame Times
Slotted ALOHA - The assumption of continuous time has been changed
from
Pure Aloha:
- Time
- Slotted time - A station can broadcast only at the beginning
of a slot.
- Carrier
- No carrier sense - A station does not detect whether another
station is transmitting before it sends.
Important points different from Pure ALOHA
- Slot beginning defined by a central synchronization signal received by all
stations simultaneously.
- Station transmits whenever ready at the beginning of a slot.
- Contention period - The time interval in which frames can overlap
(i.e. collision can occur).
- Assuming fixed size frames, for Slotted ALOHA the contention period
begins when slot starts.
- Because of a common reference at the slot start the contention period
lasts for one frame time. Two stations that begin transmission
simultaneously will overlap.
- Efficiency
- P0 = e^(-G), a greater probability that a transmission does not collide,
since the contention interval is now only 1 frame time.
- S = G*P0 = G*e^(-G) throughput (assuming all new frames eventually get
sent).
- Maximum throughput S - At G=1.0, meaning that a frame transmission is
attempted by all stations combined in each possible time slot, which makes
intuitive sense because G>1 means all frames cannot be sent while G<1 means
opportunities to send frames are missed.
Then S = G*e^(-G) = 1*e^(-1) = 1/e = 1/2.718 = 0.368 new frames per frame
time. Channel utilization is about 37%, or double that of Pure ALOHA, due to
halving the cost of a collision.
Slotted ALOHA Contention Period is 1 Frame Time
- Expected number of transmissions per new frame actually delivered -
Recall G = channel load (total number of frames attempted, new and
retransmissions, per frame time). The expected number of transmissions
required to send a new frame is E = e^G, growing exponentially with G. Small
changes in the load G can therefore drastically impact a channel's throughput
S, as illustrated by the following graph.
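As a quick check of these formulas, the following Python sketch (illustrative,
using only the expressions above) tabulates S and E for a few values of G;
the maxima at G=0.5 (pure) and G=1.0 (slotted) match the 0.184 and 0.368
figures.

import math

# Throughput S = G*e^(-2G) (Pure ALOHA) and S = G*e^(-G) (Slotted ALOHA),
# plus E = e^G, the expected transmissions per frame for slotted ALOHA.
def pure_aloha(G):
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    return G * math.exp(-G)

for G in (0.25, 0.5, 1.0, 2.0):
    print(f"G={G:4.2f}  pure S={pure_aloha(G):.3f}  "
          f"slotted S={slotted_aloha(G):.3f}  E=e^G={math.exp(G):.2f}")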
Throughput S versus Attempts G of ALOHA
4.2.2 Carrier Sense Multiple Access Protocols - Persistent and
non-persistent. The first three assumptions (repeated below) still hold,
along with the following time and carrier assumptions:
- Station model - n independent stations each generating
frames for transmission. One frame is generated and successfully
transmitted at a time.
- Single channel - A single channel is shared with other
stations.
- Collision - Overlapping transmissions destroy the frames, can
be detected, and require retransmission. Collisions are the only errors.
- Time
- Slotted time - A station can broadcast only at the beginning
of a slot. The slot here is a contention slot, defined as a period when no
station is transmitting; there is no central slot clock.
- Carrier
- Carrier sense - Stations can detect whether the channel is in
use prior to transmitting. If in use a station waits for channel to
become available.
- 2t (minimum time to detect a collision), where t = propagation time
between the farthest points of the channel. Suppose Station 1 below transmits
a frame that takes t time to arrive at Station 2. Just before the frame
arrives, Station 2 detects an idle channel and starts transmitting; a
collision occurs and is detected by Station 2 almost immediately. However,
another t time must pass before Station 1 detects the collision. Stations
must therefore transmit for 2t to guarantee collision detection.
- Slot size - Minimum time to detect a collision, 2t.
- Seized channel - After a station has broadcast for 2t it has exclusive use
of the channel, since all other stations have had time to sense that the
channel is in use and will not transmit until the channel is sensed idle.
Collision of Station 1 frame with Station 2 frame after t time
Throughput S versus Attempts G comparison
- Which protocol has the best channel utilization?
- What about delay?
CSMA Protocol Comparison
- 1-persistent - continuously senses the carrier; sends as soon as the
channel is idle.
- carrier sense: sends when the channel is idle.
- busy: waits for the carrier, sensing continuously.
- collision: waits a random time, then senses the carrier again.
- utilization: above ALOHA, since a station only sends when the channel is
idle.
- delay, low load: small, since a station is always trying to send the
moment the channel becomes idle.
- delay, high load: high, due to collisions.
- nonpersistent - senses the carrier at random intervals; sends when the
channel is found idle.
- carrier sense: sends when the channel is idle.
- busy: waits a random time before sensing the carrier again.
- collision: waits a random time before sensing the carrier again.
- utilization: above 1-persistent, since the waiting stations are not all
sensing the channel (and then transmitting) at the same instant, which in
1-persistent gives a high probability of collision.
- delay, low load: small, since a station sends whenever it finds the
channel idle, but longer than 1-persistent since it only rechecks at random
times when busy.
- delay, high load: longer than 1-persistent, since it only rechecks at
random times when busy.
- p-persistent (for slotted channels) - senses the carrier; if idle, sends
with probability p (or defers to the next slot with probability q = 1-p).
- carrier sense: sends with probability p when the channel is idle.
- busy: treated like a collision.
- collision: waits a random time before sensing the carrier again.
- utilization: depends on p; above nonpersistent, since it always senses for
an idle channel but sends only with probability p, giving fewer collisions.
- delay, low load: large when p is small, since a station will not always
send when the channel is idle.
- delay, high load: large when p is small, since the probability of sending
when idle is low and the channel is rarely idle.
CSMA/CD (Carrier Sense Multiple Access with Collision Detection) -
CSMA improved over ALOHA by transmitting only when the channel is idle,
but a station still transmitted the full frame because it could not detect
that a collision had occurred and stop. A frame can be much longer than the
2t contention slot; for example, Ethernet has a contention slot of 64 bytes
but a maximum frame size of 1526 bytes.
Collision detection allows a station to stop transmitting as soon as a
collision occurs, reducing the time wasted on a collision to a maximum of 2t.
- Problem - What is the minimum frame size (slot size) in bits for:
- L = channel length = 5 km
- p = propagation time per meter = 5*10^-10 sec/m
- b = bit rate = 4 Mb/sec = 4*10^6 bit/sec
- Bit time = 1/b = 1/(4*10^6) = 0.25*10^-6 sec/bit
- 2t = contention time = 2*L*p = 2*5000m*5*10^-10 sec/m = 5*10^-6 sec
- S = contention slot size = 2t/(1/b) = 5*10^-6 sec / 0.25*10^-6 sec/bit
= 20 bits
- Problem - Suppose the bit rate were increased from 4 Mb/sec to 10 Mb/sec?
- Bit time = 1/10^7 = 1*10^-7 sec/bit
- S = contention slot size = 2t/(1/b) = 5*10^-6 sec / 1*10^-7 sec/bit
= 50 bits
- Problem - What is S for the 802.3 standard (Ethernet)?
- L = 2.5 km
- p = 1024*10^-11 sec/m
- b = 10 Mb/sec = 1*10^7 bits/sec
- Bit time = 1*10^-7 sec/bit
- 2t = contention time = 2*2500m*1024*10^-11 sec/m = 512*10^-7 sec
- S = contention slot size = 2t/(1/b) = 512*10^-7 sec / 10^-7 sec/bit
= 512 bit slot
- Problem - Long channel length (L) is desirable, as are higher bit rates
(b); however, large slot sizes increase the cost of collisions.
What happens to S, the slot size, as L or b increases?
S = 2t/(1/b) = 2*L*p/(1/b) = 2*L*p*b, so S grows linearly with both L and b
(see the small calculation sketch below).
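A minimal Python sketch of the slot-size calculation above; the function
simply evaluates S = 2*L*p*b, and the three calls reproduce the worked
problems.

def contention_slot_bits(length_m, prop_sec_per_m, bit_rate_bps):
    """Minimum frame (slot) size in bits: S = 2*L*p / (1/b) = 2*L*p*b."""
    round_trip = 2 * length_m * prop_sec_per_m   # 2t, worst-case collision detection time
    return round_trip * bit_rate_bps

print(contention_slot_bits(5000, 5e-10, 4e6))       # 20.0  bits
print(contention_slot_bits(5000, 5e-10, 10e6))      # 50.0  bits
print(contention_slot_bits(2500, 1024e-11, 10e6))   # 512.0 bits (802.3)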
4.2.3 Collision-Free Protocols - Each of N stations assumed to have an
address 0 to N-1. Examine which station gets channel after successful
transmission.
Basic bit map - A contention (reservation) period is used to reserve the
channel: each station places a 1 in its own reservation slot if it wants to
transmit. For example, stations 1, 3, and 7 make reservations during the
contention period and then transmit in numerical order during the
corresponding data slots. All stations can see the reservations broadcast by
the other stations.
- Utilization is high under heavy load, approaching 100% minus the
contention slots.
- Delay is roughly constant under light load because a lone transmitter must
still wait through the contention slot period.
Binary countdown - Reduces the delay caused by the contention slot
period. A station wanting to send broadcasts its address as a bit string,
starting with the high-order bit. The address bits from all stations are ORed
together as they arrive (e.g. a system with 256 stations would use an 8-bit
address with bits broadcast from position 7 down to 0; if bit 7 of the four
contending addresses is 0, 0, 0, 1, the OR is 1). Any station seeing a 1 in a
position where its own address has a 0 drops out and must wait for the next
contention period. After all address bits have been broadcast, the one
remaining station transmits (see the sketch after this list).
- Utilization can approach 100% under heavy load.
- Delay is a function of the address; station 1111 would enjoy a short delay
compared to station 0000.
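Here is a small, hypothetical Python sketch of binary countdown arbitration:
each contender broadcasts its address bit by bit (high-order first), the bits
are ORed on the channel, and a station drops out when it sees a 1 where its
own bit is 0.

def binary_countdown(addresses, bits=8):
    """Return the winner among contending station addresses (highest address wins)."""
    contenders = set(addresses)
    for position in range(bits - 1, -1, -1):          # broadcast bit 7 down to bit 0
        wire = 0
        for addr in contenders:
            wire |= (addr >> position) & 1            # bits are ORed on the channel
        if wire:
            # stations with a 0 in this position drop out
            contenders = {a for a in contenders if (a >> position) & 1}
    return contenders.pop()

print(binary_countdown([0b0010, 0b0100, 0b1001, 0b1010], bits=4))  # 10 (0b1010)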
Collision-Free Protocols Comparison
Format
- Bit-Map Protocol: Contention slots - each station has a reservation slot
into which it marks (0 or 1) whether it wants to transmit; the slots come
around in order 0, 1, 2, 3, ... Data slots - all stations with a reservation
transmit in reservation order.
- Binary Countdown: A station wanting to send broadcasts its address as a
bit string starting with the high-order bit. The address bits from all
stations are ORed as they arrive (i.e. all bit 7s sent at the same time, OR
of 0,0,0,1 = 1). Any station seeing a 1 in a position where its own address
has a 0 must wait until next time. After all address bits are broadcast, the
last remaining station transmits.
Delay - Low Load
- Bit-Map Protocol: A station wants to send, on average, when the
reservation scan is at the N/2 position. Low-numbered stations are too late
and must wait through nearly another full N-slot scan to send; high-numbered
stations send after about an N/2 delay. Average delay is 1.5N slots for
low-numbered stations, 0.5N for high-numbered.
- Binary Countdown: On the order of log2 N bit times, the number of bits in
an N-station address; with one station sending the delay can be as small as
1 bit time, and the worst case is log2 N. For 256 stations and 8-bit
addresses the delay is 8 bit times.
Utilization - Low Load
- Bit-Map Protocol: d/(d+N), since N reservation bits are spent for every d
data bits whether stations are sending or not.
- Binary Countdown: d/(d + log2 N); the overhead is the log2 N reservation
(address) bits. By sending the reservation as the sender's address at the
front of the message itself, this becomes d/d, or 100%.
Utilization - High Load
- Bit-Map Protocol: d/(d+1), since when every station is sending only 1
reservation bit of overhead is needed per data frame.
- Binary Countdown: Same as low load.
4.2.6 Wireless LAN Protocols
Key problems using CSMA:
- A single frequency creates the potential for collisions.
- hidden station (a) - Station B is in range of stations A and C, but A
and C are out of each other's range and cannot sense each other's
transmissions. C transmitting would collide with A's frame at B, but A cannot
sense it.
- exposed station (b) - With stations A and D out of each other's range,
station B can transmit to A while station C transmits to D without
interference. However, C falsely concludes it cannot transmit because it
senses B's transmission.
- CSMA works on a wire because all stations receive every transmission; it
generally does not work on wireless because only a subset may receive a
given transmission.
MACA
Multiple Access with Collision Avoidance - The sender stimulates the receiver
into outputting a short frame that is detected by stations adjacent to the
receiver, preventing them from transmitting (a small sketch of the handshake
follows this list).
- A sends an RTS (Request To Send) to B, a 30 byte frame containing the
length of the data frame to follow.
- B responds with a CTS (Clear To Send) containing the copied data frame
length, as an acknowledgement.
- Any station other than B receiving the RTS must remain silent long enough
for the CTS to be transmitted by B. In the example, stations C and B receive
the RTS from A; D is out of A's range.
- Any station other than A receiving the CTS must remain silent long enough
for the data frame that immediately follows to be transmitted by A. Stations
D and A receive the CTS from B with the data frame length; C is out of B's
range.
- A receives the CTS and starts sending the data frame. C may also transmit,
after waiting for but not receiving a CTS, since it is out of range of B.
- Collisions can occur if A receives RTS frames from B and C simultaneously
(an RTS collision). A does not send a CTS, causing both B and C to use the
binary exponential backoff algorithm to delay a random time before resending
the RTS.
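A minimal Python sketch of the MACA silencing rule, using an assumed topology
matching the example above (A hears B and C; B hears A and D): stations that
hear the RTS stay quiet for the CTS, and stations that hear the CTS stay
quiet for the data frame.

# Who can hear whom (assumed topology matching the example above).
RANGE = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A"}, "D": {"B"}}

def maca_exchange(sender, receiver, frame_len):
    """Print the RTS/CTS handshake and which stations must stay silent."""
    hears_rts = RANGE[sender] - {receiver}      # silent until the CTS has gone out
    hears_cts = RANGE[receiver] - {sender}      # silent while the data frame is sent
    print(f"{sender} -> RTS(len={frame_len}) -> {receiver}")
    print(f"  silent for CTS : {sorted(hears_rts)}")
    print(f"{receiver} -> CTS(len={frame_len}) -> {sender}")
    print(f"  silent for data: {sorted(hears_cts)}")

maca_exchange("A", "B", frame_len=512)   # C waits for the CTS, D waits for the data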
4.3 IEEE 802 Standards
4.3.1 Ethernet 802.3
Collision detection at sender A can take as long as 2t
Ethernet 802.3
- CSMA/CD 1-persistent.
- 2.5 km maximum network length.
- Contention size is 512 bits = 64 bytes minimum frame size.
- Repeaters - Extend cabling by connecting segments, amplify and
retransmit (physical layer). Maximum length still 2.5 km and 4 repeaters
maximum (each introduces a small delay).
- Bridges - Can connect two or more networks (e.g. Ethernet to
Ethernet). Arbitrary number of bridges possible in theory.
- Hub - Used to connect stations to a common contention
channel when using 10Base-T wiring.
- Switch - Used to connect stations to separate contention channels when
using 10Base-T wiring.
Most common kinds of Ethernet cabling: (a) 10Base5 or Thick (b) 10Base2 or
Thin (c) 10Base-T
Cable topologies
- Manchester encoding - Used to define bit boundaries. With straight
binary encoding, 0000000 would be a long run of low voltage, making it
difficult to find the bit boundaries since the sender and receiver clocks may
drift apart. With Manchester encoding every bit has a voltage transition: a 1
might be sent as high-to-low and a 0 as low-to-high. Differential Manchester
encoding has better noise immunity by relating the start of a bit to the end
of the previous one: a 0 is represented by a transition at the start of the
bit, a 1 by the absence of a transition. Both require double the bandwidth of
straight binary encoding (see the encoding sketch below).
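A small Python sketch of (non-differential) Manchester encoding under the
convention assumed above (1 = high-then-low, 0 = low-then-high); each data
bit becomes two signal levels, which is why twice the bandwidth is needed.

def manchester_encode(bits):
    """Encode a bit string into half-bit voltage levels (H/L)."""
    out = []
    for b in bits:
        out.append("HL" if b == "1" else "LH")   # 1 = high->low, 0 = low->high (assumed)
    return "".join(out)

print(manchester_encode("0000000"))  # LHLHLHLHLHLHLH - a transition every bit, no long flat run
print(manchester_encode("1011"))     # HLLHHLHL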
- Frame format - The frame must be a minimum of 64 bytes due to
collision detection. It contains:
- Preamble - 7 bytes for synchronization (10101010...)
- Start - Framing byte of 10101011
- Destination address - 2 or 6 bytes (unique NIC address) so the
receiver can recognize its own frames.
- Source address - 2 or 6 bytes (unique NIC address) so the receiver can
recognize the sender.
- Data Length - 2 bytes, which would allow data up to 65,535 bytes, but the
limit is 1500.
- Data - 0 to 1500 bytes from the higher layer.
- Padding - 0 to 46 bytes in case the frame needs to be filled out to
64 bytes.
- Checksum - 4 bytes based on a cyclic redundancy check (CRC)
algorithm.
- Example Ethernet frame: Station 1 sending an 'a' to Station 2
- Binary Exponential Backoff - Dynamically adapts to number of
stations trying to transmit. As a station experiences repeated collisions the
probability that it will attempt to transmit when the channel is idle
decreases.
- On a collision each station waits a random number of slots (a slot is the
512-bit, i.e. 64-byte, minimum frame time).
- To prevent all stations from randomly picking the same number of slots,
on repeated collisions each station picks from a progressively larger range
of slots to wait.
- After the nth collision, the range of slots to wait grows to 2^n.
- After the 1st collision a station waits a random number of slots between
0 and 1 (i.e. 2^1 slots). The probability that two colliding stations both
attempt to transmit in the same given slot is 1/2*1/2 = 1/4, as is the
probability that neither attempts; the probability that exactly one attempts
is 1/4 + 1/4 = 1/2.
- After the 6th collision, a station waits a random number of slots between
0 and 63 (i.e. 2^6 slots); the probability that two stations that have
collided 6 times both pick the same given slot is 1/64*1/64 = 1/4096.
- The average wait can become long: the maximum wait interval is reached
after 10 collisions at 0 to 1023 slots (i.e. 2^10 slots). With a contention
slot of 51.2 microseconds, the maximum delay for this attempt is 1024*51.2
microseconds = 0.0524 sec, or about 0.0262 sec on average, in addition to
the previous attempts (see the sketch below).
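The following Python sketch of binary exponential backoff follows the rule
described above: after the nth collision a station picks a wait of 0 to
2^n - 1 slots (the range capped at 2^10 after 10 collisions), each slot
lasting 51.2 microseconds.

import random

SLOT_SECONDS = 51.2e-6   # one contention slot at 10 Mbps (512 bit times)

def backoff_slots(collision_count):
    """Random wait, in slots, after the nth collision (range capped at 2^10)."""
    exponent = min(collision_count, 10)
    return random.randrange(2 ** exponent)        # 0 .. 2^n - 1

for n in (1, 3, 6, 10, 12):
    k = backoff_slots(n)
    print(f"after collision {n:2d}: wait {k:4d} slots = {k * SLOT_SECONDS * 1e3:.3f} ms")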
- Acknowledgments - CSMA/CD offers no acknowledgements; another frame
must be sent as a receipt, creating additional network traffic.
- Performance - Normally each station has a different transmit
probability; the assumption here is that all stations are the same.
- contention interval - The interval during which frames collide.
For a 64-byte slot, a series of 5 collisions before successfully transmitting
a frame results in a contention period of 5 slots * 64 bytes, or 320 bytes.
For a data rate of 10 Mbps and a 51.2 microsecond slot, the contention
interval would be 5*51.2 microseconds = 256 microseconds.
- k - number of stations always ready to transmit
- p - probability a station transmits in a slot
- A - probability some station acquires the slot (no collision)
- A = k*p*(1-p)^(k-1); when k=1, A = 1*p*(1-p)^0 = p (1 station to transmit)
- A is maximized when p = 1/k (see the small calculation below).
For k=4 stations:
p = 1/4 = 0.25
A = k*p*(1-p)^(k-1) = 4*0.25*(1-0.25)^3 = 0.75^3 = 0.422
For k=100 stations:
p = 1/100 = 0.01
A = k*p*(1-p)^(k-1) = 100*0.01*(1-0.01)^99 = 0.99^99 = 0.370
A -> 1/e = 1/2.718 = 0.368 as k -> infinity
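A quick Python check of A = k*p*(1-p)^(k-1) at the optimal p = 1/k,
reproducing the 0.422, 0.370, and 1/e values above.

import math

def acquisition_probability(k, p):
    """Probability that exactly one of k ready stations transmits in a slot."""
    return k * p * (1 - p) ** (k - 1)

for k in (1, 4, 100, 10_000):
    print(f"k={k:5d}  A={acquisition_probability(k, 1.0 / k):.3f}")
print(f"limit 1/e = {1 / math.e:.3f}")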
- A*(1-A)^(j-1) is the probability that a contention interval contains
exactly j slots: (1-A)^(j-1) is the probability of j-1 slots without a
successful acquisition, followed by A, the probability that the j-th slot is
acquired.
The delay is determined by the average number of slots per contention
interval. Summing j*A*(1-A)^(j-1) for j = 1 to infinity gives 1/A, the mean
number of slots per contention interval. The smaller the probability A of
acquiring a slot, the greater the delay.
- w, mean contention interval - For a slot of duration 2t, the mean
contention interval is w = 2t/A. For optimal p the average number of
contention slots is never greater than e, so w <= 2te = approximately 5.4t.
Channel efficiency = P/(P + 2t/A), where P is the mean time to transmit a
frame. As t increases the slot grows and efficiency goes down. Recall that t
depends on the channel length, and that the cost of the 2t slot in bits grows
with the bit rate; both length and bit rate are desirable to increase.
Channel efficiency = 1/(1 + 2BLe/cF). As the frame length F increases,
efficiency increases, but delay also increases (e.g. if the delay for a given
frame size were 1, doubling the frame size would increase the delay to 2).
From the equation, as the bandwidth B or channel length L increases,
efficiency decreases due to larger contention slots and the corresponding
cost of a collision. Increasing F or c increases channel efficiency.
- F - frame length in bits
- B - bandwidth (bit rate)
- L - channel length
- c - propagation speed
- e - the optimal case of e contention slots per frame
(a numeric check of the efficiency formula follows this list)
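The following Python sketch evaluates the 1/(1 + 2BLe/cF) efficiency formula
for a few frame sizes, using assumed 802.3-style parameters (10 Mbps, 2.5 km,
propagation at 2*10^8 m/s); it reproduces the trend described below that
larger frames are more efficient.

import math

def ethernet_efficiency(frame_bits, bandwidth_bps=10e6, length_m=2500.0, c=2e8):
    """Channel efficiency = 1 / (1 + 2*B*L*e / (c*F)) with e optimal contention slots."""
    return 1.0 / (1.0 + (2 * bandwidth_bps * length_m * math.e) / (c * frame_bits))

for frame_bytes in (64, 256, 512, 1024, 1518):
    eff = ethernet_efficiency(frame_bytes * 8)
    print(f"{frame_bytes:5d}-byte frames: efficiency = {eff:.2f}")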
Efficiency of 802.3 with 64 byte contention slots and different size frames:
larger frames are more efficient; smaller frames have a higher percentage of
overhead and require more transmissions, increasing the probability of a
collision and the overall cost of transmitting the frame.
- Switched 802.3 LANs - Provides collision free transmission between
switched elements.
- Common collision domain - Multiple ports connected to common
switch point using normal collision handling. Switch points are
connected collision free using high speed back plane (1 Gbps).
- Individual collision domain - Each port is its own collision domain, so no
collisions can occur. Port transmissions are buffered and sent directly to a
port on the same board or across the backplane, collision free.
- Connected switches - Switches connected over Ethernet would share
a collision domain. For example, A1-F1 can communicate collision free across
the switch back plane but A1-F1 communicates with A2-F2 over the usual
Ethernet collision domain.
4.3.7 Fast Ethernet 802.3u
100Base-TX
- Category 5 UTP wiring
- Full duplex using two twisted pairs
Category 5 UTP - Can handle clock rates of 125 MHz; the maximum distance
for 100 Mbps is 100 m.
4.3.8 Gigabit Ethernet 802.3z
- Backward compatible with original 802.3 (10 Mbps)
- Uses switch or hub, switched only discussed here.
- Point-to-point between two stations or more often a switch (hub not
discussed here)
- Switch creates point-to-point connection with individual collision
domains, no contention so CSMA/CD not required.
Token Ring 802.5
- Topology - Point-to-point links form a unidirectional ring; all stations
see traffic as it passes through each computer's interface.
The conceptual flow of bits during a transmission on a token ring network:
except for the sender, computers on the network pass the bits of the frame on
to the next station; the destination makes a copy.
- Bit time - A 1 Mbps channel has a bit time of 1 bit / 1,000,000
bits/second = 1 microsecond/bit.
- Bit length - With a propagation rate of 200 meters/microsecond, a 1 Mbps
channel has a bit length of 200 meters (a bit time of 1 microsecond on a
medium propagating at 200 meters per microsecond). A 1000 meter ring can then
hold 5 bits on the wire. Each active station adds a one-bit delay because the
interface buffers one bit, so a 1000 meter ring with 10 active stations has a
15 bit delay (see the sketch below).
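A small Python sketch of the ring-capacity arithmetic above: bits on the wire
plus one buffered bit per active station.

def ring_bit_capacity(ring_m, active_stations, bit_rate_bps=1e6, prop_m_per_s=2e8):
    """Bits held by the ring: propagation delay in bit times + 1 bit per station."""
    bit_time = 1.0 / bit_rate_bps                 # seconds per bit
    bit_length_m = prop_m_per_s * bit_time        # meters of cable per bit
    on_wire = ring_m / bit_length_m
    return on_wire + active_stations

print(ring_bit_capacity(1000, 0))    # 5.0  bits on the wire alone
print(ring_bit_capacity(1000, 10))   # 15.0 bits with 10 active stations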
- Token - The token is 24 bits of a special pattern used to claim the
channel. The ring must have sufficient delay to contain the token; since the
ring delay changes as machines are turned ON and OFF, an artificial delay may
need to be added by the RING MONITOR. Note in the diagram below that the
first 2 bytes of the token and of the data frame are identical except for a
single token bit.
- Ring Monitor - Central control for ring management.
Responsible for:
- adding delay when necessary for the ring to hold the token,
- removing orphan frames (sets a bit in the frame the first time it is seen;
a frame that comes around again with the bit set is drained),
- removing frames with checksum errors,
- regenerating the token if none is seen after n * token holding time, etc.
- Occasionally transmits an ACTIVE MONITOR PRESENT frame.
- Monitor selection
- The first machine turned on normally becomes the monitor after
transmitting a CLAIM TOKEN frame and receiving it back.
- Should the monitor crash, other stations can transmit a CLAIM TOKEN frame;
if a station's claim returns before any other is seen, that station becomes
the monitor.
- Transmission - A station claims the token by setting a single bit in the
24-bit token as it passes through the station's 1-bit buffer. It can then
transmit frames until the token holding time expires.
- Acknowledge - The receiving station passes the message on, bit by bit,
marking the acknowledge bit of the frame as it passes. How would Ethernet
acknowledge?
- Draining - Since each bit passes completely around the ring back to the
transmitter, the transmitter is responsible for removing (draining) its bits
off the ring and regenerating the token. The token and frame header differ by
only 1 bit (note the first bytes of the token and of a data frame).
- Token holding time - Normally 10 milliseconds (10^-2 sec), the time limit
that one station can transmit after claiming the token. Any number of frames
can be sent in that time. On a 1 Mbps ring, 10^6 bps for 10^-2 sec gives
10^4 = 10,000 bits that could be transmitted. Most token-ring systems run at
16 Mbps, so 160,000 bits could be sent.
- Delay - To transmit, a station must claim the token. The token is
passed from station to station, each may claim the token and transmit for
token holding time.
- The minimum delay for a station that has just finished transmitting to
transmit again is the time to reclaim the token, or at least 24 bit times.
What is the minimum delay on Ethernet?
- The maximum delay for a station that has just finished transmitting to
transmit again is a function of the number of stations on the ring and the
token holding time. Assuming all N stations transmit for the full token
holding time of 10 milliseconds, the delay is approximately N*10
milliseconds.
What is the maximum delay on Ethernet for N stations trying to transmit?
Would acknowledgments affect either calculation?
- Monitor Selection - Usually the first station turned on. If the monitor
dies, any other station can send a CLAIM TOKEN control frame after no ACTIVE
MONITOR PRESENT frame has been seen. If its CLAIM TOKEN returns, that station
is the new monitor. The danger is that the monitor may not die outright but
only fail to do parts of its job.
- Breaks - A wiring center (Multistation Access Unit) closes relays when
stations connect to the ring and opens them to maintain the ring when a
station disconnects through power off, resets, or channel breaks.
- Efficiency - Can approach 100% under heavy load when every station
transmitting. How does Ethernet perform under heavy load?
Ethernet
- No priorities.
- Non-deterministic (not absolutely predictable). As speed (bandwidth)
increases, efficiency drops because the larger contention slot size forces
larger minimum packets and a greater loss on collisions. Collisions increase
with the number of transmitting stations.
- No delay when there is only one transmitter. Delay is not deterministic
under high load.
- Distributed control via CSMA/CD.
Token Ring
- Priorities possible.
- Deterministic; performance is easily calculated by knowing the number of
stations and the token holding time. No collisions occur. Efficiency is
always high, increasing with the number of transmitting stations.
- Delay is fixed with one transmitter and deterministic under any load when
no priorities are used.
- Centralized control requiring a monitor for ring management. There is the
possibility the monitor could go berserk with no way to remove it.
4.3.9 802.2 Logical Link Control - The top half of the data link layer that
interfaces to the network layer. Hides from the network layer the details of
whether 802.3 or 802.5 is used at the medium access sublayer, providing a
consistent interface to Ethernet, Token-Ring, etc.
- LLC header added by sender:
- Sequence number
- Acknowledgement number
- LLC header copied into MAC frame
- Receiver removes LLC header
- Three service options
- unreliable datagram service
- acknowledged datagram service
- connection oriented service
4.4 WIRELESS LANS
4.4.1 802.11 Protocol Stack
802.11 Protocol Stack
4.4.2 Physical Layer
802.11b
- Operates in the 2.4-GHz ISM short-range radio band, the same band used by
garage door openers and cordless telephones.
- HR-DSSS (High Rate Direct Sequence Spread Spectrum) communicates at a
maximum of 11 Mbps using CDMA-style chip sequences. Slower than 802.11a but
the range is about 7 times greater.
- CDMA uses the entire 2.4-GHz spectrum by:
- Assigning each station a unique m-bit code, called a chip sequence, that
corresponds to a binary 1.
- When multiple stations transmit at the same time, the signals add rather
than garble each other.
- To send a binary 0, a station sends the one's complement of its chip
sequence.
- Example: It is convenient to work in bipolar form, with 0 = -1 and 1 = +1.
- Chip sequences are assigned to 4 stations; C's chip sequence 01011100
is (-1 +1 -1 +1 +1 +1 -1 -1) in bipolar form.
- Six samples are received.
- Recovery of C's transmission of a binary 1 from sample S2: the receiver
computes the inner product of S2 and C's chip sequence (multiply pairwise,
add, then divide by 8, the number of chips):
  Received S2:    -2  0  0  0 +2 +2  0 -2
  * C (bipolar):  -1 +1 -1 +1 +1 +1 -1 -1
  = products:     +2  0  0  0 +2 +2  0 +2   -> sum = 8, 8/8 = 1
- The chip sequences are chosen so that all distinct chip sequences are
orthogonal (i.e. their inner product is 0) (a small decoding sketch follows).
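A short Python sketch of this recovery step, using C's chip sequence from the
example and the received sample above; the inner product divided by the
number of chips yields +1 for a transmitted 1, -1 for a 0, and 0 for a
station that did not transmit.

# C's chip sequence from the example, in bipolar form (0 -> -1, 1 -> +1).
C = [-1, +1, -1, +1, +1, +1, -1, -1]

def decode(received, chips):
    """Normalized inner product: +1 => station sent 1, -1 => sent 0, 0 => silent."""
    return sum(r * c for r, c in zip(received, chips)) / len(chips)

S2 = [-2, 0, 0, 0, +2, +2, 0, -2]        # received sample from the example
print(decode(S2, C))                      # 1.0 -> C transmitted a binary 1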
4.4.3 802.11 MAC Sublayer Protocol
4.7 Data Link Layer Switching - Bridges
Bridges connect two or more LANs by forwarding frames received.
Operate at the data link layer, so delivery is best effort: if a frame
cannot be delivered, too bad.
- Reasons for a bridge
- Connect 802.x to 802.y, the bridge would require two NICs each connected
to the respective LAN.
- Load balancing among several LANs. A heavily congested LAN can be split
into two or more LANs.
- Distance, 802.3 has 2.5 km limit
- Reliability, isolate LAN segments from each other's faults. Suppose an
Ethernet machine goes transmit wild.
- Security, can filter addresses allowing only those specified across the
bridge.
- Logical Link Control
- Provides a consistent interface to the network layer regardless of the
802.x protocol.
- Placed in the data payload of the 802.x frame; consists of a header with
sequence and acknowledgement numbers. Supports unreliable datagram,
acknowledged datagram, and connection-oriented services.
- Allows connection of 802.x to 802.y LANs. Figure 4-35.
- 802.x to 802.y Connections
- 802.3 to 802.3: the only problem is collisions on the destination LAN
preventing delivery of the frame.
- 802.11 to 802.3: different frame formats require reformatting and
recalculation of the checksum, which takes time.
- 802.3z to 802.3: 1000 Mbps to 10 Mbps speed mismatch.
- 802.3 to 802.5: must generate the priority bits.
- 802.5 to 802.3:
- Must lie about the Acknowledge and Copy bits, since the bridge must pass
the received bits along immediately, before sending the 802.3 frame.
- An arbitrarily large 802.5 frame may not fit in an 802.3 frame and would
be discarded.
- 802.5 to 802.5: again must lie about the Acknowledge and Copy bits.
Transparent Bridges
- Administration - none, just connect to LAN.
- Frame - standard with no special forwarding information.
- Forwarding
- if destination and source LAN same discard since no need to bridge.
- if destination and source LAN different, forward on to a known
destination LAN segment.
- if destination LAN unknown, flood on all but arrival LAN.
- Backward learning - when a frame arrives, its source address and arrival
LAN are recorded in the table so future frames to that address can be
forwarded directly (a small sketch follows this list). Note that the bridge
does not learn the addresses of silent hosts; frames to destinations that
have never transmitted are always flooded.
- Table updates - to accommodate topology changes, entries are refreshed as
frames with new source addresses arrive.
- Table purges - periodically table entries are purged so that machines can
be moved; otherwise the learned address of a machine that has been moved is
wrong until it transmits again.
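A minimal Python sketch of the forwarding and backward-learning rules above
(hypothetical LAN names and host addresses): learn the source, then discard,
forward, or flood based on what is known about the destination.

class TransparentBridge:
    def __init__(self, lans):
        self.lans = set(lans)
        self.table = {}                      # host address -> LAN learned from

    def handle(self, src, dst, arrival_lan):
        self.table[src] = arrival_lan        # backward learning
        out_lan = self.table.get(dst)
        if out_lan == arrival_lan:
            return []                        # same LAN: discard, no bridging needed
        if out_lan is not None:
            return [out_lan]                 # known destination: forward
        return sorted(self.lans - {arrival_lan})   # unknown: flood all but arrival LAN

b = TransparentBridge(["X", "Y", "Z"])
print(b.handle("H2", "H1", "Y"))   # ['X', 'Z']  H1 unknown, flood
print(b.handle("H1", "H2", "X"))   # ['Y']       H2 learned on Y, forward
print(b.handle("H3", "H1", "X"))   # []          H1 now known on X, same LAN, discard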
Spanning Tree Bridges
- Cycles - when multiple paths through network due to redundant bridges,
possible that flooding can cause frames to be
passed endlessly between bridges.
- Spanning tree (text p.369) - a tree where every LAN is reachable from a
root. It is desirable that this be a minimal spanning tree based upon a metric
such as channel speed, cost, etc. Recall that a tree has no cycles.
- Construction - the bridge with the lowest identifier is selected as the
tree root, and the minimum-cost path from it to every LAN is determined.
The minimum path is unique, hence redundant paths are eliminated.
- Changes - bridges continue to run spanning tree algorithm since topology
may change as bridges or channels added or removed.
The following illustrates a network with multiple transparent bridges and a
spanning tree following the minimum path to all LANs. The numbers on the
connecting LANs represent some metric, such as channel cost or bandwidth,
that could be used to determine the minimal spanning tree. The spanning tree
must still connect all network segments but without cycles, a necessary
condition to prevent flooded frames from generating redundant copies.
When multiple bridges connect the same LAN, such as LAN O connected by B24
and B11 and LAN G connected by B12 and B11, one of the bridges will not
transmit to that LAN; in this case B11 is effectively disconnected from both
LAN G and LAN O.
Bridges with cycles between hosts H1 and H2
Spanning tree with B21 as root
Backward learning - Each bridge builds a table as frames arrive, recording
the source host and arrival LAN. For example, host H2 on LAN Y sending a
frame to host H1 on LAN X would cause the frame to be flooded across the
spanning tree and an entry to be added to each of the bridge routing tables
along the spanning tree:
Bridge:    B12  B14  B21  B22  B23  B24  B31  B32  B33
H2 Entry:  F    I    Y    E    K    N    A    D    B
When H1 responds by transmitting a frame onto LAN X, bridge B24 forwards the
frame toward H2 on LAN N. Each bridge along the path from H1 to H2 records
the source host H1 and its arrival LAN in its table. The table below reflects
the routing tables after the transmissions by hosts H1 and H2.
Bridge:    B12  B14  B21  B22  B23  B24  B31  B32  B33
H2 Entry:  F    I    Y    E    K    N    A    D    B
H1 Entry:  -    N    B    I    -    X    -    E    D
What changes occur in the tables after H1 transmits to a host on LAN G?
a) Interconnected LANs b) A spanning tree covering the
LANs.
Transparent Bridges - Advantages/Disadvantages
- No administrative setup or maintenance when the topology changes due to
reorganization or failures.
- Frames are unchanged; they carry only the data payload.
- Not optimal use of redundant bridges under heavy traffic, since some are
removed to build the spanning tree.
- Bridge software is relatively complex: it must maintain the table, do
backward learning and updating, and run the spanning tree algorithm. Host
software is simple.
Source Routing Bridges
- Administration, must number each bridge uniquely for a LAN and each LAN
uniquely for network.
- Data frame from host contains complete path from source to destination
as bridge and LAN number.
- Forwarding, bridge searches frame path for its own LAN and address,
forwards to next LAN in path.
- Route learning
- The source host sends a special discovery frame, which is flooded by the
bridges on all LANs but the arrival LAN, each bridge adding its address so
the frame can be discarded if seen again (i.e. if a bridge sees a frame with
its own address already in it, the frame is in a cycle).
- The destination host sends the received frame, which now includes the
source-to-destination path, back to the source, giving it the route.
Possibly only the first frame is responded to, since arriving first it
presumably followed the best path.
The following figure shows one possible discovery frame created for host H2
to transmit to H1. The discovery frame contains the source and destination
hosts and accumulates the bridges crossed as the frame is flooded. Each
bridge broadcasts the discovery frame on all LANs but the originating one.
Presumably the discovery frame reaching host H1 first would be returned as
the best route between H1 and H2. Note that 27 discovery frames would
converge on H1.
One possible discovery frame for host H2 to transmit to H1
Source Routing Bridges - Advantages/Disadvantages
- Requires administration on setup and whenever the topology changes for any
reason. Duplicate bridge or LAN numbers are difficult to detect since the
problems are evident only on certain paths.
- Bridges are simple, hosts complex, and there are many more hosts than
bridges. Each host must be capable of source routing.
- The number of flooded frames grows exponentially. If the network contained
9 bridges in a hierarchy as below, a discovery frame from the H host is
copied by each of the three level 3 bridges, flooding nine frames to the
level 2 bridges, which flood 27 frames to the level 1 bridges.
- On a bridge failure, a host does not know whether the destination or a
bridge failed; it must time out and send a new discovery frame. Since
discovery frames are flooded, early timeouts could incur serious overhead,
and a major bridge failure forces many simultaneous discovery frames from the
affected hosts.
- Presumably allows multiple paths through multiple bridges.
- Splitting an existing LAN requires changes to all host software to enable
source routing.
Remote Bridges
Connect two or more distant LANs by placing a bridge at the junction of each
LAN and a point-to-point line. It is common for the bridge to place the
complete MAC frame in the payload field of a PPP frame and transmit it on the
point-to-point line; the receiving bridge copies the payload field onto the
destination LAN. The only problem is getting frames to the correct LAN, which
requires translating the MAC address to an IP address when using PPP.
FDDI - Fiber Distributed Data Interface
- 100 Mbps token ring LAN up to 200 km and 1000 stations.
- Commonly used as backbone to join copper LANs via bridge with 802.x and
FDDI NIC.
- Uses cheaper multimode fiber since relatively low speed. Multimode fiber
uses inexpensive and reliable LEDs rather than laser as source and less
precise fiber. Cannot transmit as far or in as high a bandwidth as a laser.
- Consists of two rings, one in each direction. If break occurs in one the
other is used as backup. If both break, each station can connect the two
rings, forming a new ring with about twice the length.
- For fault tolerance, class A stations connect to both rings and can heal
breaks, class B stations connect to one ring and cannot heal breaks.
- Does not use Manchester encoding for synchronization on every bit, which
is self-clocking but requires double the bandwidth. Instead, uses a long
synchronizing preamble and requires clock with not more than 0.005 per cent
drift, ensuring that 4500 byte frames can be sent.
- Follows 802.5 closely but does not wait for the frame to return before
regenerating the token, since 200 km and 1000 stations would introduce
considerable delay. This allows multiple frames to exist on the ring
simultaneously.
- Also supports synchronous frames for circuit-switched PCM and ISDN traffic
generated every 125 microseconds by a master station. Each frame has 16 bytes
of non-circuit switched data and 96 circuit-switched data bytes (for 4 PCM 24
channel frames). Stations reserve a time slot within a synchronous frame,
holding it till released.
Fast Ethernet - 802.3u
- Backward compatible with 802.3 10Base-T wiring but supports 100 Mbps.
Three wiring standards:
100Base-T4, 100 m, half duplex, using all 8 category 3 UTP wires
(internal phone wire) to a hub that forms one collision domain.
100Base-TX, 100 m, full duplex, using only 4 of the category 5 wires to the
hub; the hub forms one collision domain.
100Base-FX, 2000 m, full duplex, using fiber optics to a switched hub.
Since 2000 m is too long for the collision detection algorithm, each
connection to the switched hub is its own collision domain, allowing all
stations to receive and transmit simultaneously.