Q2023
1 Section 1
1.1 What are the disadvantages of connectionless service? In spite of the disadvantages, why do you like to use connectionless services? What actions you would like to take to eliminate the problems, but still use connectionless services?
Disadvantages of connectionless services:
- Unreliable data delivery: Packets may be lost, arrive out of order, or be duplicated[1][3].
- No guarantee of packet delivery or sequencing[3].
- Lack of error checking, flow control, and retransmission mechanisms[3].
- Potential for network congestion[1].
- Longer data fields required in each packet for routing information[1].
Despite these disadvantages, connectionless services are preferred for:
- Speed: Data can be sent immediately without connection setup[2][4].
- Efficiency: Less bandwidth usage due to no connection management[2].
- Simplicity: Easier to implement and maintain[2].
- Scalability: Can support large numbers of devices and users[2].
- Low overhead: No time required for circuit setup[1].
To mitigate issues while still using connectionless services (a sketch of one such mitigation follows this list):
- Implement application-layer error correction protocols[4].
- Use hybrid approaches combining connectionless and connection-oriented protocols.
- Employ quality of service (QoS) mechanisms to prioritize critical packets.
- Implement packet sequencing and reassembly at the application layer.
- Use redundant transmission for important data.
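As referenced above, here is a minimal sketch of one mitigation: an application-layer sequence number with timeout-based retransmission on top of UDP. The peer address and the 4-byte ACK format (the sequence number echoed back) are illustrative assumptions, not part of any standard.

```python
# Minimal sketch: application-layer sequencing and retransmission over UDP.
import socket
import struct

def send_reliable(sock, peer, seq, payload, retries=5, timeout=0.5):
    """Send one datagram and retransmit until the peer echoes the sequence number."""
    packet = struct.pack("!I", seq) + payload       # 4-byte sequence number header
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, peer)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack)[0] == seq:  # ACK for this sequence number
                return True
        except socket.timeout:
            continue                                # lost packet or lost ACK: retransmit
    return False

# Usage (assumes a cooperating receiver at 192.0.2.10:9999 that echoes ACKs):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_reliable(sock, ("192.0.2.10", 9999), seq=1, payload=b"hello")
```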
Citations:
[1] https://www.tutorialspoint.com/Connectionless-Services
[2] https://www.ioriver.io/terms/connectionless-protocol
[3] https://www.devx.com/terms/connectionless-service/
[4] https://www.techtarget.com/searchnetworking/definition/connectionless
[5] https://en.wikipedia.org/wiki/Connectionless_communication
1.2 Suppose all laptops in a large city are to communicate using radio transmissions from a high antenna tower. Is the data link layer or network layer more appropriate for this situation? Now suppose the city is covered by a large number of small antennas covering smaller areas. Which layer is more appropriate?
- Scenario with a Single High Antenna Tower:
- When all laptops in a large city communicate using radio transmissions from a single high antenna tower, the data link layer is more appropriate.
- This is because the communication occurs over a single shared medium (radio channel), and the data link layer is responsible for managing access to this shared medium. It handles framing, error detection, and medium access control (e.g., avoiding collisions in wireless communication), which are critical in such a setup[1].
- Scenario with Multiple Small Antennas Covering Smaller Areas:
- When the city is covered by multiple small antennas, forming smaller cells, the network layer becomes more appropriate.
- This is because the network layer handles routing and addressing, which are essential for managing communication across multiple interconnected cells. Each antenna would act as part of a subnet, and the network layer would ensure that data packets are routed correctly between these subnets or cells[1].
In summary:
- Use the data link layer for a single shared communication medium.
- Use the network layer for scenarios involving multiple interconnected networks or subnets.
Citations: [1] https://ppl-ai-file-upload.s3.amazonaws.com/web/direct-files/collection_7992c11f-3397-42c0-b0a2-90a30e8a96c5/679ca887-cd85-42a1-aad0-65ab91482909/Computer-Networks-by-Tanenbaum-5th-edition-Andrew-S-Tanenbaum-5-2011-Annas-Archive.pdf
1.3 Design and describe an application-level protocol to be used between an automatic teller machine and a bank’s centralized computer. Your protocol should allow a user’s card and password to be verified, the account balance to be queried, and an account withdrawal to be made. Your protocol entities should be able to handle the all-too-common case in which there is not enough money in the account to cover the withdrawal. Specify your protocol by listing the messages exchanged and the action taken by the automatic teller machine or the bank’s centralized computer on transmission and receipt of messages. Sketch with a diagram the operation of your protocol for the case of a simple withdrawal with no errors.
Application-Level Protocol Design for ATM-Bank Communication
Entities:
- ATM (Automatic Teller Machine)
- Bank Centralized Computer (BCC)
Protocol Messages:
AUTH_REQ(card_number, password) - Authentication Request
AUTH_RESP(status) - Authentication Response
BAL_REQ(account_number) - Balance Inquiry Request
BAL_RESP(balance) - Balance Response
WITHDRAW_REQ(account_number, amount) - Withdrawal Request
WITHDRAW_RESP(status, updated_balance) - Withdrawal Response
Message Exchange and Actions:
- Authentication Phase
- ATM → BCC: AUTH_REQ(card_number, password)
- BCC verifies the credentials:
- If valid → BCC → ATM: AUTH_RESP(SUCCESS), and the session continues.
- If invalid → BCC → ATM: AUTH_RESP(FAILURE), and the ATM terminates the session and ejects the card.
- Balance Inquiry (Optional)
- ATM → BCC: BAL_REQ(account_number)
- BCC fetches the balance → BCC → ATM: BAL_RESP(balance); the ATM displays it.
- Withdrawal Transaction
- ATM → BCC: WITHDRAW_REQ(account_number, amount)
- BCC checks the balance:
- If balance ≥ amount: deduct the amount, update the balance → WITHDRAW_RESP(SUCCESS, updated_balance); the ATM dispenses the cash.
- If balance < amount: → WITHDRAW_RESP(INSUFFICIENT_FUNDS, balance); the ATM displays an "insufficient funds" message and dispenses nothing.
- Termination
- ATM displays the outcome and ejects the card.
Diagram:
ATM Bank Centralized Computer
| |
|----AUTH_REQ---------------->|
|<---AUTH_RESP(SUCCESS)-------|
| |
|----BAL_REQ----------------->|
|<---BAL_RESP(balance)--------|
| |
|----WITHDRAW_REQ------------>|
|<---WITHDRAW_RESP(SUCCESS)---|
| |
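The no-error withdrawal shown in the diagram can be sketched in Python. This is a toy, single-process sketch: the account database, card number, PIN, and balance values are made up for illustration, and real transport, encryption, and timeouts are omitted.

```python
# Toy sketch of the ATM <-> BCC exchange in the diagram above.
def bcc_handle(accounts, kind, **fields):
    """Bank Centralized Computer: process one request, return the reply tuple."""
    if kind == "AUTH_REQ":
        acct = accounts.get(fields["card_number"])
        ok = acct is not None and acct["password"] == fields["password"]
        return ("AUTH_RESP", "SUCCESS" if ok else "FAILURE")
    if kind == "BAL_REQ":
        return ("BAL_RESP", accounts[fields["account_number"]]["balance"])
    if kind == "WITHDRAW_REQ":
        acct = accounts[fields["account_number"]]
        if acct["balance"] >= fields["amount"]:
            acct["balance"] -= fields["amount"]
            return ("WITHDRAW_RESP", "SUCCESS", acct["balance"])
        return ("WITHDRAW_RESP", "INSUFFICIENT_FUNDS", acct["balance"])
    return ("ERROR", "unknown message")

# ATM side: the simple no-error withdrawal of the diagram.
# The card number doubles as the account number in this toy database.
accounts = {"5001": {"password": "4321", "balance": 900.0}}
print(bcc_handle(accounts, "AUTH_REQ", card_number="5001", password="4321"))  # ('AUTH_RESP', 'SUCCESS')
print(bcc_handle(accounts, "BAL_REQ", account_number="5001"))                 # ('BAL_RESP', 900.0)
print(bcc_handle(accounts, "WITHDRAW_REQ", account_number="5001", amount=200.0))
# ('WITHDRAW_RESP', 'SUCCESS', 700.0)
```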
1.4 There are 10 stations in a time slotted LAN always having constant load and ready to transmit. During any particular contention slot each station transmits with a probability of 0.1. If the average frame takes 122 ms to transmit, what is the channel efficiency, if round trip time is 51.2 micro seconds?
To calculate the Channel Efficiency in a Time Slotted LAN under constant load conditions, follow these steps:
Given Data:
- Number of Stations = 10
- Probability of Transmission by each station per slot = 0.1
- Average Frame Transmission Time = 122 ms
- Round Trip Time (Propagation Delay) = 51.2 μs = 0.0512 ms
1. Find the Probability of Successful Transmission in a Slot:
A successful transmission occurs if exactly one station transmits, and the others remain silent.
Probability of a station transmitting = 0.1 Probability of a station not transmitting = 1 - 0.1 = 0.9
Probability of exactly one station transmitting: \[ P_{success} = 10 \cdot (0.1) \cdot (0.9^9) \]
\[ P_{success} = 10 \cdot 0.1 \cdot (0.3874) \]
\[ P_{success} = 0.3874 \]
2. Find the Channel Efficiency:
Each frame must first win one contention period. With success probability \(P_{success}\) per slot, the mean contention time per frame is \(T_{slot}/P_{success}\), where the slot length equals the round-trip time.
\[ Efficiency = \frac{T_{frame}}{T_{frame} + T_{slot}/P_{success}} \]
Where: - \(T_{frame}\) = 122 ms - \(T_{slot}\) = 0.0512 ms
Substitute the values:
\[ Efficiency = \frac{122}{122 + 0.0512/0.3874} = \frac{122}{122.132} \]
\[ Efficiency ≈ 0.9989 \text{ or } 99.89\% \]
Final Answer:
The channel efficiency is approximately 99.9%.
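The numbers can be checked with a short script (variable names are ours; values come from the problem statement):

```python
# Checking the slotted-LAN efficiency computed above.
N, p = 10, 0.1
t_frame = 122e-3        # average frame time: 122 ms
t_slot = 51.2e-6        # contention slot = round-trip time: 51.2 microseconds

p_success = N * p * (1 - p) ** (N - 1)                  # ~0.3874
efficiency = t_frame / (t_frame + t_slot / p_success)
print(f"P_success = {p_success:.4f}, efficiency = {efficiency:.4f}")   # ~0.9989
```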
2 Section 2
2.1 Suppose we want to transmit the message 1011001001001011 and protect it from errors using the CRC-8 polynomial x^8 + x^2 + x + 1. Determine the message that should be transmitted. Suppose the leftmost bit of the message is inverted due to noise. What is the result of the receiver’s CRC calculation?
To transmit the message using the CRC-8 polynomial \(P(x) = x^8 + x^2 + x + 1\) (binary: 100000111), follow these steps:
Message Representation: The original message is: \(M(x) = 1011001001001011\).
Append 8 Zero Bits: Append 8 zero bits to the message: \(101100100100101100000000\).
CRC Calculation (Modulo-2 Division): Divide the padded message \(101100100100101100000000\) by the generator \(100000111\) using binary long division. The remainder (CRC checksum) is 10010011.
Transmitted Message: Append the CRC checksum to the original message: 101100100100101110010011.
Error Detection:
If the leftmost bit of the transmitted frame is inverted due to noise, the received message becomes: 001100100100101110010011
Performing the CRC division of this received frame by the polynomial gives a non-zero remainder (10110110), so the receiver detects the error.
Conclusion:
- Transmitted message: 101100100100101110010011
- CRC remainder at the receiver after the noise: non-zero (error detected). A short verification script follows.
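The mod-2 division can be verified with a few lines of Python; the helper function below is a generic long-division routine written for this check, not a library API.

```python
# Generic mod-2 long division, used to verify the remainders quoted above.
def mod2_remainder(dividend: str, poly: str) -> str:
    n = len(poly) - 1                      # degree of the generator
    work = list(dividend)
    for i in range(len(work) - n):
        if work[i] == "1":                 # XOR the divisor in at each leading 1
            for j, p in enumerate(poly):
                work[i + j] = str(int(work[i + j]) ^ int(p))
    return "".join(work[-n:])

msg, poly = "1011001001001011", "100000111"        # G(x) = x^8 + x^2 + x + 1
crc = mod2_remainder(msg + "0" * 8, poly)          # '10010011'
sent = msg + crc                                   # transmitted frame
corrupted = "0" + sent[1:]                         # leftmost bit inverted by noise
print(crc, mod2_remainder(corrupted, poly))        # 10010011 10110110 (non-zero)
```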
2.2 If the maximum sequence number is 17, then obtain sender and receiver window size in case of selective repeat. What happens if sender and receiver window size are greater than that of obtained value?
In Selective Repeat (SR) ARQ protocol, the sender and receiver window sizes are calculated as follows:
Calculation:
Given: - Maximum sequence number = 17, so the sequence-number space is taken to contain N = 17 values
Sender window size (SWS) and Receiver window size (RWS) are given by:
\[ SWS = RWS = \left\lfloor \frac{N}{2} \right\rfloor \]
Substitute the value:
\[ SWS = RWS = \left\lfloor \frac{17}{2} \right\rfloor = 8 \]
What Happens if Window Size Exceeds the Obtained Value?
If SWS or RWS exceeds \(\frac{N}{2}\), the following issues arise:
- Ambiguity in Acknowledgment: The receiver might misinterpret the received frames, as the same sequence number can be reused before the old sequence is acknowledged.
- Data Corruption: Overlapping sequence numbers could cause out-of-order delivery of packets.
- Protocol Failure: Reliable delivery is compromised, violating the primary function of the protocol.
Hence, both sender and receiver window sizes must not exceed \(\frac{N}{2}\) to ensure proper data flow and avoid ambiguity.
2.3 Frames of 1000 bits are sent over a 10⁶ bps duplex link between two hosts. The round trip propagation time is 25 ms. Frames are to be transmitted into this link to maximally pack them in transit. What is the minimum number of bits required to represent sequence numbers distinctly?
Given:
- Frame size = 1000 bits
- Link bandwidth = 10⁶ bps
- Round Trip Time (RTT) = 25 ms = 0.025 s
Step 1: Calculate Bandwidth-Delay Product
Bandwidth-Delay Product represents the number of bits that can be transmitted during the round-trip propagation time.
\[ \text{Bandwidth-Delay Product} = \text{Link Bandwidth} \times RTT \]
\[ = 10^6 \times 0.025 \]
\[ = 25000 \text{ bits} \]
Step 2: Number of Frames in Transit
Number of frames in transit is given by:
\[ N = \frac{\text{Bandwidth-Delay Product}}{\text{Frame Size}} \]
\[ N = \frac{25000}{1000} = 25 \]
Therefore, 25 frames can be in transit simultaneously when the link is maximally packed.
Step 3: Sequence Number Bits
To represent N distinct sequence numbers, the minimum number of bits required is:
\[ \text{Bits} = \lceil \log_2(N+1) \rceil \]
\[ = \lceil \log_2(25+1) \rceil \]
\[ = \lceil \log_2(26) \rceil = 5 \]
Final Answer:
Minimum number of bits required = 5
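The corrected arithmetic in a few lines (assuming, as above, that the window must cover the full bandwidth-delay product):

```python
import math

bandwidth = 1e6        # 10^6 bps
rtt = 25e-3            # 25 ms round trip
frame_bits = 1000

frames_in_transit = int(bandwidth * rtt / frame_bits)    # 25 frames
seq_bits = math.ceil(math.log2(frames_in_transit + 1))   # 5 bits
print(frames_in_transit, seq_bits)
```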
2.4 A disadvantage of broadcast subnet is the capacity wasted due to collisions. Suppose the time is divided into discrete slots with each of the n hosts attempting to use the channel with probability p during each slot. What fraction of the slots is wasted due to collisions?
In a broadcast subnet with time-slotted access, a collision occurs when two or more hosts attempt to transmit simultaneously. The probability of exactly one host transmitting in a given slot without collision is:
\[ P(\text{Success}) = n p (1-p)^{n-1} \]
The probability of no host transmitting is:
\[ P(\text{Idle}) = (1-p)^n \]
Therefore, the probability of a collision occurring (i.e., two or more hosts transmitting) is:
\[ P(\text{Collision}) = 1 - P(\text{Success}) - P(\text{Idle}) \]
Substitute the expressions:
\[ P(\text{Collision}) = 1 - n p (1-p)^{n-1} - (1-p)^n \]
Hence, the fraction of slots wasted due to collisions is equal to \(P(\text{Collision})\).
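Evaluating the expression for example values (n = 10 and p = 0.1 are illustrative choices, not given in the question):

```python
# Fraction of slots wasted due to collisions, for sample n and p.
def wasted_fraction(n: int, p: float) -> float:
    success = n * p * (1 - p) ** (n - 1)   # exactly one transmitter
    idle = (1 - p) ** n                    # no transmitter
    return 1 - success - idle              # two or more transmitters

print(wasted_fraction(10, 0.1))            # ~0.264
```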
2.5 Consider a 100 kbps satellite link with 550 msec roundtrip propagation delay. A sliding window protocol with 5 bit sequence number is used on the link. The frame size is 1000 bits. Find out the percentage of time the sender is blocked.
Given:
- Link bandwidth = 100 kbps
- Roundtrip propagation delay = 550 msec
- Sliding window protocol with 5-bit sequence number → Maximum window size \(W = 2^5 - 1 = 31\)
- Frame size = 1000 bits
Step 1: Transmission Time
Transmission time per frame:
\[
T_{tx} = \frac{\text{Frame size}}{\text{Bandwidth}} = \frac{1000 \text{ bits}}{100000 \text{ bps}} = 0.01 \text{ sec} = 10 \text{ msec}
\]
Step 2: Propagation Delay
Roundtrip propagation delay:
\[
T_{prop} = 550 \text{ msec}
\]
Step 3: Window Transmission Time and Cycle Length
The sender can transmit the whole window back to back:
\[
T_{window} = W \times T_{tx} = 31 \times 10 \text{ msec} = 310 \text{ msec}
\]
The acknowledgment for the first frame returns \(T_{tx} + T_{prop} = 10 + 550 = 560\) msec after the sender starts. Since the window of 31 frames is exhausted before this acknowledgment arrives, the sender is blocked from 310 msec until 560 msec in every cycle of 560 msec.
Step 4: Percentage of Time Sender is Blocked
\[
\text{Blocked fraction} = \frac{560 - 310}{560} = \frac{250}{560} \approx 0.4464 = 44.64\%
\]
(The corresponding link utilization is \(310/560 \approx 55.36\%\).)
Final Answer: approximately 44.64%
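A quick check of the blocked-time calculation (variable names are ours):

```python
frame_bits = 1000
bandwidth = 100e3              # 100 kbps
rtt = 0.550                    # 550 msec round trip
window = 2 ** 5 - 1            # 31 frames

t_tx = frame_bits / bandwidth  # 10 msec per frame
cycle = t_tx + rtt             # the first ACK returns 560 msec after sending starts
busy = window * t_tx           # 310 msec of transmission per cycle
print(f"blocked {(1 - busy / cycle):.2%}")    # ~44.64%
```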
3 Section 3
3.1 Compare source routing with hop-by-hop routing with respect to packet header overhead, routing table size, flexibility in route selection, and QoS support for both datagram and virtual circuit networks.
Comparison of Source Routing and Hop-by-Hop Routing
| Aspect | Source Routing | Hop-by-Hop Routing |
|---|---|---|
| Packet Header Overhead | High, as the entire route information is included in each packet header. | Low, only destination address is included, with intermediate routers deciding the next hop. |
| Routing Table Size | Small, as routers require minimal routing information. | Large, as each router maintains a detailed routing table with all possible destinations. |
| Flexibility in Route Selection | High, as the sender can dictate the entire route based on network conditions or policies. | Low, as the route is determined dynamically by intermediate routers based on routing algorithms. |
| QoS Support | Better QoS support since the sender can select routes with desired characteristics. | Limited QoS support as routing decisions are made independently at each hop, potentially leading to unpredictable paths. |
| Datagram Networks | Rarely used, due to excessive packet overhead and complexity. | Commonly used due to simplicity and efficiency. |
| Virtual Circuit Networks | Occasionally used to establish fixed routes for the entire session. | Frequently used, as routers can optimize routes during the connection establishment phase. |
3.2 You are a network administrator and have been assigned the IP address of 201.222.5.0. You need to have 20 subnets with 5 hosts per subnet. What is the subnet mask, address of the first and last subnet and the broadcast address?
Given the IP address 201.222.5.0 (Class C) and the requirement for 20 subnets with 5 hosts per subnet, the subnet mask, subnet addresses, and broadcast addresses are calculated as follows:
- Subnet Mask Calculation:
- Minimum hosts required per subnet = 5
- Number of host bits required = 3 (2³ - 2 = 6 usable hosts)
- Subnet mask = 255.255.255.248 (/29)
- Number of Subnets Possible:
- Number of subnet bits = 5 (Class C default mask is /24, and /29 uses 5 bits for subnets)
- Number of subnets = 2⁵ = 32 (sufficient to cover the required 20 subnets)
- First Subnet Address:
- Address: 201.222.5.0
- Broadcast Address: 201.222.5.7
- Usable Range: 201.222.5.1 to 201.222.5.6
- Last Subnet Address (20th subnet):
- Address: 201.222.5.152
- Broadcast Address: 201.222.5.159
- Usable Range: 201.222.5.153 to 201.222.5.158
Hence, the subnet mask is 255.255.255.248, and the network accommodates the required subnets and hosts efficiently.
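The subnet layout can be reproduced with Python's standard ipaddress module, shown here only as a check of the addresses above:

```python
# Enumerating the /29 subnets of 201.222.5.0/24.
import ipaddress

network = ipaddress.ip_network("201.222.5.0/24")
subnets = list(network.subnets(new_prefix=29))      # 32 subnets of 8 addresses each

print(subnets[0], subnets[0].broadcast_address)     # 201.222.5.0/29 201.222.5.7
print(subnets[19], subnets[19].broadcast_address)   # 201.222.5.152/29 201.222.5.159
```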
3.3 Suppose a router receives an IP packet with identification number 100 containing 600 B data and has to forward the packet to a network with MTU 200 B. Assume that the IP header is 20 B long. Show the fragments that the router creates and specify the relevant values in each fragment header (i.e., total length, identification number, fragment offset and MF bit).
To fragment the IP packet with 600 B of data and a 20 B header (total 620 B) for a network with MTU = 200 B, note that each fragment can carry at most 200 - 20 = 180 B of data, but every fragment except the last must carry a multiple of 8 bytes because the fragment offset is expressed in 8-byte units. The largest usable payload per fragment is therefore 176 B.
- First Fragment:
- Payload = 176 B
- Total Length = 196 B
- Identification Number = 100
- Fragment Offset = 0 (0 ÷ 8 = 0)
- MF Bit = 1 (More Fragments)
- Second Fragment:
- Payload = 176 B
- Total Length = 196 B
- Identification Number = 100
- Fragment Offset = 176 ÷ 8 = 22
- MF Bit = 1
- Third Fragment:
- Payload = 176 B
- Total Length = 196 B
- Identification Number = 100
- Fragment Offset = 352 ÷ 8 = 44
- MF Bit = 1
- Fourth Fragment (Last Fragment):
- Remaining Payload = 600 - 528 = 72 B
- Total Length = 92 B (72 B payload + 20 B header)
- Identification Number = 100
- Fragment Offset = 528 ÷ 8 = 66
- MF Bit = 0 (No More Fragments)
Thus, the router creates 4 fragments whose offsets and lengths allow correct reassembly at the destination.
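A small helper that reproduces the fragment list above; the function name and dictionary layout are ours, while the 8-byte-multiple rule follows from the IP fragment offset field.

```python
# Reproducing the fragments above. Every fragment except the last must carry
# a multiple of 8 data bytes, because the offset field counts 8-byte units.
def fragment(data_len, mtu, header=20, ident=100):
    max_data = ((mtu - header) // 8) * 8            # 176 B for MTU 200
    frags, offset = [], 0
    while data_len > 0:
        size = min(max_data, data_len)
        data_len -= size
        frags.append({"id": ident, "total_length": size + header,
                      "offset": offset // 8, "MF": 1 if data_len > 0 else 0})
        offset += size
    return frags

for f in fragment(600, 200):
    print(f)    # lengths 196,196,196,92; offsets 0,22,44,66; MF 1,1,1,0
```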
3.4 A token bucket is used for traffic shaping. A new token is put into the bucket every 5 μsec. Each token is good for one short packet, which contains 48 bytes of data. What is the maximum sustainable data rate?
To calculate the maximum sustainable data rate for the token bucket traffic shaper:
- Token Generation Interval:
- New token every 5 μsec (5 * 10⁻⁶ sec).
- Data per Token:
- Each token allows transmission of 48 bytes.
- Data Rate Calculation:
\[ \text{Data Rate} = \frac{\text{Data per Token}}{\text{Token Interval}} \]
\[ = \frac{48 \text{ bytes}}{5 \times 10^{-6} \text{ sec}} \]
Convert bytes to bits:
\[ 48 \text{ bytes} \times 8 = 384 \text{ bits} \]
\[ \text{Data Rate} = \frac{384 \text{ bits}}{5 \times 10^{-6} \text{ sec}} = 76.8 \times 10^6 \text{ bits/sec} = 76.8 \text{ Mbps} \]
Final Answer: The maximum sustainable data rate is 76.8 Mbps.
3.5 Why connectionless service is less efficient to stop congestion setting up, but better to have when congestion setup?
Connectionless service is less efficient at preventing congestion from building up because:
- No Prior Resource Reservation: It does not reserve network resources like bandwidth or buffer space before data transmission, which means data packets are sent without checking network capacity.
- No Flow Control Coordination: Each packet is routed independently, making it difficult to regulate the data flow based on network conditions.
- Unpredictable Routing: Different packets may take different paths, causing potential bottlenecks and congestion at some routers.
However, connectionless service is better to have once congestion has set in because:
- Adaptive Routing: Packets can be dynamically routed through less congested paths, reducing the overall congestion effect.
- Stateless Nature: Routers do not maintain session information, making packet processing faster and more scalable under heavy traffic.
- Graceful Degradation: If some packets are dropped due to congestion, the rest of the transmission can still continue without terminating the session.
Hence, connectionless service sacrifices efficiency in preventing congestion but offers resilience and adaptability once congestion occurs.
4 Section 4
4.1 For a host machine that uses the token bucket algorithm for congestion control, the bucket has a capacity of 1 megabytes and the maximum output rate is 20 megabytes per second. Tokens arrive at a rate to sustain output at a rate of 10 megabytes per second. The machine needs to send 12 megabytes of data. What is the minimum time required to transmit entire data?
Given:
- Bucket capacity = 1 MB
- Maximum output rate = 20 MB/s
- Token arrival rate = 10 MB/s
- Data to be transmitted = 12 MB
Solution:
- Initial Burst: The bucket is assumed to be full at the start (1 MB of tokens). While the host drains tokens at the maximum output rate \(M = 20\) MB/s and tokens arrive at \(\rho = 10\) MB/s, the burst can last
\[ S = \frac{C}{M - \rho} = \frac{1}{20 - 10} = 0.1 \text{ s} \]
during which \(20 \times 0.1 = 2\) MB are sent.
- Sustained Transmission: The remaining \(12 - 2 = 10\) MB can be sent only at the token arrival rate of 10 MB/s, which takes \(10 / 10 = 1\) s.
Total Time Required = 0.1 s + 1 s = 1.1 seconds
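The 1.1 s result follows from the burst-length formula S = C/(M − ρ); a quick check:

```python
# Token bucket transmission-time check.
C = 1.0       # bucket capacity, MB (assumed full at t = 0)
M = 20.0      # maximum output rate, MB/s
rho = 10.0    # token arrival rate, MB/s
data = 12.0   # MB to send

burst_time = C / (M - rho)          # 0.1 s at the full 20 MB/s
remaining = data - M * burst_time   # 10 MB left after the burst
print(burst_time + remaining / rho) # 1.1 seconds
```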
4.2 Host A sends a UDP datagram containing 8880 bytes of user data to host B over an Ethernet LAN. Ethernet frames may carry data up to 1500 bytes (i.e., MTU = 1500 bytes). Size of UDP header is 8 bytes and size of IP header is 20 bytes. There is no option field in IP header. How many IP fragments will be transmitted and what will be the contents of offset field and total length field for all fragments?
Given:
- User data size = 8880 bytes
- UDP header size = 8 bytes
- IP header size = 20 bytes
- MTU (Ethernet frame) = 1500 bytes
- Maximum data in each IP fragment = 1500 - 20 = 1480 bytes
- Total data to transmit = 8880 + 8 = 8888 bytes
Step 1: Number of Fragments Calculation
Total Data = 8888 bytes
Maximum Data per Fragment = 1480 bytes
Number of Full Fragments = ⌊8888 / 1480⌋ = 6 full fragments
Remaining Data = 8888 % 1480 = 8 bytes
Total Fragments = 6 + 1 = 7 fragments
Step 2: Fragment Details
| Fragment Number | Data Size | Total Length Field | Offset Field |
|---|---|---|---|
| 1 | 1480 | 1500 | 0 |
| 2 | 1480 | 1500 | 185 (1480/8) |
| 3 | 1480 | 1500 | 370 |
| 4 | 1480 | 1500 | 555 |
| 5 | 1480 | 1500 | 740 |
| 6 | 1480 | 1500 | 925 |
| 7 (last) | 8 | 28 | 1110 |
Explanation:
- Total Length Field = Data Size + IP Header Size
- Offset Field is in units of 8 bytes.
- The More Fragment (MF) flag will be set for all fragments except the last one.
4.3 The IP network 192.168.130.0 is using the subnet mask 255.255.255.224. What subnet are the following hosts on? 192.168.130.67, 192.168.130.222, 192.168.130.250
Given:
- Network Address: 192.168.130.0
- Subnet Mask: 255.255.255.224
- Prefix Length: /27 (224 = 11100000 in binary)
- Block Size: 256 - 224 = 32
Subnet Ranges:
- 192.168.130.0 - 192.168.130.31
- 192.168.130.32 - 192.168.130.63
- 192.168.130.64 - 192.168.130.95
- 192.168.130.96 - 192.168.130.127
- 192.168.130.128 - 192.168.130.159
- 192.168.130.160 - 192.168.130.191
- 192.168.130.192 - 192.168.130.223
- 192.168.130.224 - 192.168.130.255
Host Subnet Calculation:
- 192.168.130.67 → Subnet 192.168.130.64 - 192.168.130.95
- 192.168.130.222 → Subnet 192.168.130.192 - 192.168.130.223
- 192.168.130.250 → Subnet 192.168.130.224 - 192.168.130.255
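The same mapping can be checked with Python's ipaddress module (purely a verification aid):

```python
import ipaddress

for host in ["192.168.130.67", "192.168.130.222", "192.168.130.250"]:
    net = ipaddress.ip_network(f"{host}/27", strict=False)     # containing subnet
    print(host, "->", net.network_address, "-", net.broadcast_address)
```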
4.4 Congestion control is better implemented in the network layer, but Internet is implementing it in the transport layer. Why?
Congestion control is implemented in the transport layer in the Internet instead of the network layer due to the following reasons:
End-to-End Principle: The transport layer operates on an end-to-end basis between sender and receiver, making it more suitable to detect and react to congestion as it directly monitors data flow between the two endpoints.
Network Heterogeneity: The Internet is a heterogeneous network with various underlying technologies and administrative domains. Implementing congestion control at the network layer would require uniform mechanisms across different networks, which is impractical.
Scalability: The transport layer solution (like TCP) scales better as congestion control decisions are made independently by each connection, without burdening the intermediate routers.
Application Awareness: The transport layer has knowledge of application requirements (e.g., reliability, flow rate), allowing it to perform congestion control in a way that aligns with application needs.
Modular Design: Separating congestion control from the network layer allows easier upgrades and modifications without altering the core network infrastructure.
Hence, TCP-based congestion control in the transport layer provides a more flexible, scalable, and effective solution for managing network congestion.
5 Section 5
5.1 Host A sends a TCP segment (sequence number=43, acknowledgement number=103) with payload of 14 bytes. The host B successfully received the segment and wants to send a segment with payload 14 bytes. What will be the value of sequence number and acknowledgement number field in the reply from host B? Assume that host A sends the first segment to the host B after connection setup.
The reply segment from Host B will have the following values:
Sequence Number: 103
Host A’s acknowledgment number (103) tells Host B which byte A expects from B next; since Host B has not sent any data yet, its first data segment starts at sequence number 103.
Acknowledgement Number: 57
Host B acknowledges the next byte it expects from Host A. Host A’s segment started at sequence number 43 and carried 14 bytes of payload, so the next expected byte is 43 + 14 = 57.
Hence, the reply segment will have Sequence Number = 103 and Acknowledgement Number = 57.
5.2 Consider building a CSMA/CD network running at 1 Gbps over a 1 Km cable with no repeaters. The signal speed in the cable is 200000 Km/s. What should be the minimum frame size?
To calculate the minimum frame size for a CSMA/CD network, we use the propagation time and the transmission speed.
Step 1: Calculate Propagation Time
Propagation time \(t_{prop}\) is the time taken for the signal to travel from one end of the cable to the other.
\[ t_{prop} = \frac{\text{Distance}}{\text{Signal Speed}} \]
\[ t_{prop} = \frac{1 \text{ Km}}{200000 \text{ Km/s}} = 5 \mu s \]
Step 2: Calculate Round-Trip Time
The round-trip time is:
\[ 2 \times t_{prop} = 2 \times 5 \mu s = 10 \mu s \]
Step 3: Minimum Frame Size
For CSMA/CD, the minimum frame size is the number of bits that can be transmitted during the round-trip time.
\[ \text{Minimum Frame Size} = \text{Data Rate} \times \text{Round-Trip Time} \]
\[ = (1 \times 10^9 \text{ bps}) \times (10 \times 10^{-6} s) \]
\[ = 10000 \text{ bits} \]
Converting to bytes:
\[ \frac{10000}{8} = 1250 \text{ bytes} \]
Final Answer:
The minimum frame size is 1250 bytes.
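A short check of the arithmetic (variable names are ours):

```python
bit_rate = 1e9                  # 1 Gbps
t_prop = 1 / 200_000            # 1 km at 200000 km/s = 5 microseconds one way

min_frame_bits = bit_rate * 2 * t_prop      # bits sent during one round trip
print(min_frame_bits, min_frame_bits / 8)   # 10000.0 bits, 1250.0 bytes
```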
5.3 There are 10 stations in a time slotted LAN always having constant load and ready to transmit. During any particular contention slot each station transmit with probability 0.1. If the average frame takes 122 msec to transmit, what is the channel efficiency, if round trip time is 51.2 micro second?
To calculate the channel efficiency of a time-slotted LAN, follow these steps:
Given Data:
- Number of stations, \(N = 10\)
- Probability of transmission per station, \(p = 0.1\)
- Average frame transmission time, \(T_f = 122 \text{ ms} = 122 \times 10^{-3} \text{ s}\)
- Round trip time (slot time), \(T_s = 51.2 \text{ μs} = 51.2 \times 10^{-6} \text{ s}\)
Step 1: Probability of Successful Transmission
Probability of exactly one station transmitting in a slot:
\[ P_{success} = Np(1-p)^{N-1} \]
Substitute the values:
\[ P_{success} = 10(0.1)(1-0.1)^{9} \]
\[ = 10(0.1)(0.9^9) \]
\[ = 1 \times 0.3874 \]
\[ P_{success} = 0.3874 \]
Step 2: Efficiency Formula
Efficiency is the fraction of time spent transmitting data:
\[ \text{Efficiency} = \frac{P_{success} \times T_f}{P_{success} \times T_f + T_s} \]
Substitute the values:
\[ \text{Efficiency} = \frac{0.3874 \times 122 \times 10^{-3}}{(0.3874 \times 122 \times 10^{-3}) + 51.2 \times 10^{-6}} \]
\[ = \frac{0.0473}{0.0473 + 0.0000512} \]
\[ = \frac{0.0473}{0.0473512} \]
\[ \text{Efficiency} \approx 0.999 \]
Final Answer:
The channel efficiency is approximately 99.9%.
5.4 How the following policies affect congestion in the network?
- packet queuing and service,
- packet discard, and
- timeout determination
The effect of the mentioned policies on network congestion is explained as follows:
i. Packet Queuing and Service:
- Effect on Congestion: Proper queuing and service discipline (e.g., FIFO, priority queuing, or fair queuing) help manage congestion by determining the order of packet processing.
- Impact: Priority-based queuing can reduce congestion for critical packets but may starve low-priority packets. Fair queuing ensures equal service, preventing any flow from dominating the network.
ii. Packet Discard:
- Effect on Congestion: When buffers are full, discarding packets can help alleviate congestion by freeing resources.
- Impact: Random Early Detection (RED) or tail drop methods discard packets selectively. RED helps avoid global synchronization and improves performance, while tail drop can cause TCP synchronization, worsening congestion.
iii. Timeout Determination:
- Effect on Congestion: Timeout policies determine when a sender should retransmit a packet if no acknowledgment is received.
- Impact: Short timeouts can cause frequent retransmissions, increasing congestion. Long timeouts improve congestion control but may delay data delivery. Adaptive timeout mechanisms optimize retransmission and prevent unnecessary congestion.
Conclusion: Proper design of these policies improves congestion management, enhancing overall network performance and fairness.
6 Section 6
6.1 What problem does TCP Reno have with multiple packet losses from a window? Give an example to illustrate.
TCP Reno struggles with multiple packet losses from a window due to its reliance on Fast Retransmit and Fast Recovery mechanisms, which only detect and recover a single packet loss per round-trip time (RTT). When multiple packets are lost from the same window, TCP Reno can only detect the first lost packet using triple duplicate acknowledgments (ACKs), retransmit it, and wait for its acknowledgment before proceeding to detect further losses. This leads to inefficient loss recovery and increased retransmission delay.
Example:
Consider a TCP window of 6 packets: P1, P2, P3, P4, P5, P6.
If packets P2 and P4 are lost:
- The receiver keeps acknowledging P1 (duplicate ACKs asking for P2), indicating that P2 is lost.
- TCP Reno retransmits P2 after receiving three duplicate ACKs.
- Only after P2 is acknowledged do new duplicate ACKs reveal the loss of P4, which is then retransmitted in the next RTT.
This serial retransmission process causes longer delays and low throughput, especially in networks with high packet loss rates.
6.2 You are hired to design a reliable byte-stream protocol that uses a sliding window (like TCP). This protocol will run over a 1-Gbps network. The RTT of the network is 100 msec, and the maximum segment lifetime is 30 sec. How many bits you include in the window size and sequence number fields of your protocol?
To design the reliable byte-stream protocol, the number of bits in the window size and sequence number fields is determined as follows:
1. Window Size Calculation:
The window size must accommodate the maximum amount of data in transit during one RTT.
Bandwidth-Delay Product (BDP):
\[
BDP = Bandwidth \times RTT
\]
Given:
- Bandwidth = 1 Gbps = \(10^9\) bits/sec
- RTT = 100 ms = 0.1 sec
\[ BDP = 10^9 \text{ bits/sec} \times 0.1 \text{ sec} = 10^8 \text{ bits} \]
Convert to bytes:
\[ 10^8 \text{ bits} \div 8 = 1.25 \times 10^7 \text{ bytes} \]
The window size must be at least 12.5 MB to avoid underutilization.
2. Sequence Number Field Calculation:
The sequence number must cover the range of bytes in the Maximum Segment Lifetime (MSL) to prevent ambiguity.
Total bytes that can be sent in MSL:
\[ \text{Total Bytes} = Bandwidth \times MSL \]
Given:
- MSL = 30 sec
\[ Total Bytes = 10^9 \text{ bits/sec} \times 30 \text{ sec} = 3 \times 10^{10} \text{ bits} \]
Convert to bytes:
\[ 3 \times 10^{10} \text{ bits} \div 8 = 3.75 \times 10^9 \text{ bytes} \]
The sequence number field must cover at least 3.75 GB.
The smallest number of bits required is:
\[ \text{Bits} = \lceil \log_2 (3.75 \times 10^9) \rceil = 32 \text{ bits} \]
Conclusion:
- Window Size Field: \( \lceil \log_2(1.25 \times 10^7) \rceil = 24 \) bits (enough to represent 12.5 MB)
- Sequence Number Field: 32 bits
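A quick check of both field sizes (assuming, as above, byte-granularity window and sequence spaces):

```python
import math

bandwidth = 1e9      # 1 Gbps
rtt = 0.100          # 100 msec
msl = 30             # seconds

window_bytes = bandwidth * rtt / 8   # 1.25e7 bytes in flight
seq_bytes = bandwidth * msl / 8      # 3.75e9 bytes per MSL
print(math.ceil(math.log2(window_bytes)),   # 24-bit window field
      math.ceil(math.log2(seq_bytes)))      # 32-bit sequence number field
```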
6.3 What is the maximum data rate at which a host can send 1000 byte TCP payload if the packet lifetime is 100seconds without having sequence number wrap-around? Assume TCP header of 20 bytes and IP header of 20 bytes.
To calculate the maximum data rate without sequence number wrap-around, follow these steps:
1. Total Sequence Number Space:
TCP uses a 32-bit sequence number field.
\[ 2^{32} = 4,294,967,296 \text{ bytes} \]
2. Maximum Data Transmitted in Packet Lifetime:
The maximum data that can be transmitted without sequence number wrap-around is the entire sequence number space during the Maximum Segment Lifetime (MSL).
\[ \text{Maximum Data Rate} = \frac{2^{32} \text{ bytes}}{\text{MSL}} \]
Given:
MSL = 100 seconds
\[ \text{Maximum Data Rate} = \frac{4,294,967,296 \text{ bytes}}{100 \text{ sec}} \]
\[ = 42,949,672.96 \text{ bytes/sec} \]
Convert to bits per second:
\[ 42,949,672.96 \times 8 = 343,597,383.68 \text{ bps} \]
3. Conclusion:
The maximum data rate without sequence number wrap-around is approximately 343.6 Mbps.
6.4 Why do lost TCP acknowledgements not necessarily force retransmission? TCP entity opens a connection and uses slow start. Approximately how many round-trip times are required before TCP can send N segments?
1. Lost TCP Acknowledgements and Retransmission:
Lost TCP acknowledgments do not necessarily force retransmission because TCP uses cumulative acknowledgments. Each acknowledgment confirms the receipt of all previous data up to a certain byte. If an ACK is lost, the sender will still receive future acknowledgments for higher sequence numbers, implicitly acknowledging the lost ACK without requiring retransmission.
For example, if packets P1, P2, and P3 are sent, and the ACK for P1 is lost but the ACK for P3 arrives, the sender knows that all three packets were successfully received.
2. TCP Slow Start Round-Trip Times:
During slow start, TCP starts by sending 1 segment and doubles the congestion window size every RTT until the window reaches the threshold or available capacity.
- RTT 1: 1 segment
- RTT 2: 2 segments
- RTT 3: 4 segments
- RTT 4: 8 segments
In general, the number of segments that can be sent during the \(k\)-th RTT is:
\[ 2^{k-1} \]
To find the number of RTTs required to send N segments:
\[ 2^{k-1} \geq N \]
Taking logarithms:
\[ k \geq \log_2(N) + 1 \]
Conclusion:
Lost acknowledgments do not force retransmission because later cumulative ACKs cover them, and during slow start the window first reaches N segments in RTT number
\[ \lceil \log_2(N) \rceil + 1 \]
i.e., after approximately \(\log_2 N\) round-trip times. A small loop confirming the count follows.
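This is a sketch under the assumption above that the window starts at 1 segment and doubles every RTT:

```python
import math

def rtts_until_window_reaches(n):
    """RTT number in which the slow-start window first reaches n segments."""
    rtt, window = 1, 1              # RTT 1 sends 1 segment
    while window < n:
        window *= 2                 # window doubles every RTT
        rtt += 1
    return rtt

print(rtts_until_window_reaches(8), math.ceil(math.log2(8)) + 1)   # 4 4
```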
6.5 How does a newly booted machine acquire IP address from DHCP server in distant network?
A newly booted machine acquires an IP address from a DHCP server on a distant network using the following DHCP (Dynamic Host Configuration Protocol) process:
1. DHCP Discovery (Broadcast)
The client sends a DHCPDISCOVER message as a broadcast packet (0.0.0.0 to 255.255.255.255) to locate available DHCP servers. This packet is forwarded by DHCP relay agents if the server is on a distant network.
2. DHCP Offer
DHCP servers that receive the DHCPDISCOVER message respond with a DHCPOFFER message, including an available IP address, subnet mask, lease duration, and other configuration parameters.
3. DHCP Request
The client selects one of the offered IP addresses and sends a DHCPREQUEST message to the chosen server, requesting the offered configuration.
4. DHCP Acknowledgment
The DHCP server responds with a DHCPACK message, confirming the lease and providing necessary network configuration (IP address, gateway, DNS, etc.).
If no acknowledgment is received, the client must restart the process.
Conclusion:
The DHCP relay agent helps the client communicate with a distant DHCP server by forwarding messages, enabling automatic IP configuration across different networks.