

Computer Networks: A Systems Approach
Fourth Edition
Solutions Manual
Larry Peterson and Bruce Davie

Dear Instructor: This Instructor's Manual contains solutions to most of the exercises in the fourth edition of Peterson and Davie's Computer Networks: A Systems Approach. Exercises are sorted (roughly) by section, not difficulty. While some exercises are more difficult than others, none are intended to be fiendishly tricky. A few exercises (notably, though not exclusively, the ones that involve calculating simple probabilities) require a modest amount of mathematical background; most do not. There is a sidebar summarizing much of the applicable basic probability theory in Chapter 2. An occasional exercise is awkwardly or ambiguously worded in the text. This manual sometimes suggests better versions; also see the errata at the web site.

Where appropriate, relevant supplemental files for these solutions (e.g. programs) have been placed on the textbook web site. Other useful material can also be found there, such as errata, sample programming assignments, PowerPoint lecture slides, and EPS figures. If you have any questions about these support materials, please contact your Morgan Kaufmann sales representative. If you would like to contribute your own teaching materials to this site, please contact our Associate Editor Rachel Roumeliotis.

We welcome bug reports and suggestions as to improvements for both the exercises and the solutions; these may be sent to Larry Peterson and Bruce Davie.

February 2007

Solutions for Chapter 1

3. Success here depends largely on the ability of one's search tool to separate out the chaff. The following are representative examples: Mbone, ATM, MPEG, IPv6, Ethernet.

5. We will count the transfer as completed when the last data bit arrives at its destination. An alternative interpretation would be to count until the last ACK arrives back at the sender, in which case the time would be half an RTT (50 ms) longer.

(a) 2 initial RTTs (200 ms) + 1000 KB/1.5 Mbps (transmit) + RTT/2 (propagation) = 0.25 sec + 8 Mbit/1.5 Mbps = 0.25 + 5.33 sec = 5.58 sec. If we pay more careful attention to when a mega is 10^6 versus 2^20, we get 8,192,000 bits/1,500,000 bits/sec = 5.46 sec, for a total delay of 5.71 sec.

(b) To the above we add the time for 999 RTTs (the number of RTTs between when packet 1 arrives and packet 1000 arrives), for a total of 5.58 + 99.9 = 105.48 sec.

(c) This is 49.5 RTTs, plus the initial 2, for 5.15 seconds.

(d) Right after the handshaking is done we send one packet. One RTT after the handshaking we send two packets. At n RTTs past the initial handshaking we have sent 1 + 2 + 4 + ... + 2^n = 2^(n+1) - 1 packets. At n = 9 we have thus been able to send all 1,000 packets; the last batch arrives 0.5 RTT later. Total time is 2 + 9.5 = 11.5 RTTs, or 1.15 sec.

6. The answer is in the book.

7. Propagation delay is 2 × 10^3 m/(2 × 10^8 m/sec) = 1 × 10^-5 sec = 10 µs. 100 bytes/10 µs is 10 bytes/µs, or 10 MB/sec, or 80 Mbit/sec. For 512-byte packets, this rises to 409.6 Mbit/sec.

8. The answer is in the book.

9. Postal addresses are strongly hierarchical (with a geographical hierarchy, which network addressing may or may not use). Addresses also provide embedded routing information. Unlike typical network addresses, postal addresses are long and of variable length and contain a certain amount of redundant information. This last attribute makes them more tolerant of minor errors and inconsistencies.
Telephone numbers are more similar to network addresses (although phone numbers are nowadays apparently more like network host names than addresses): they are (geographically) hierarchical, fixed-length, administratively assigned, and in more-or-less one-to-one correspondence with nodes.
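The timing arithmetic in Exercise 5 above is easy to check mechanically. The following sketch assumes the parameters implied by the computed values (1000 KB file, 1.5 Mbps bandwidth, 100 ms RTT, 1 KB packets, two RTTs of initial handshaking); the variable names are ours, not the book's.

```python
# Exercise 5 delay arithmetic (assumed parameters: 1000 KB file,
# 1.5 Mbps bandwidth, 100 ms RTT, 2 RTTs of initial handshaking).
RTT = 0.100                     # seconds
BW = 1.5e6                      # bits/sec
FILE_EXACT = 1000 * 1024 * 8    # 8,192,000 bits (KB = 2^10 bytes)
FILE_ROUND = 8e6                # treating "mega" as 10^6

# (a) continuous sending: handshake + transmit + one-way propagation
t_a = 2 * RTT + FILE_ROUND / BW + RTT / 2        # ~5.58 sec
t_a_exact = 2 * RTT + FILE_EXACT / BW + RTT / 2  # ~5.71 sec

# (b) one 1 KB packet per RTT: 999 extra RTTs of waiting
t_b = t_a + 999 * RTT                            # ~105.48 sec

# (d) exponential growth: done 9.5 RTTs after the 2-RTT handshake
t_d = (2 + 9.5) * RTT                            # 1.15 sec

print(round(t_a, 2), round(t_a_exact, 2), round(t_b, 2), round(t_d, 2))
```

The same pattern (handshake RTTs + transmit time + propagation) covers all four parts; only the waiting term changes.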

10. One might want addresses to serve as locators, providing hints as to how data should be routed. One approach for this is to make addresses hierarchical. Another property might be administratively assigned, versus, say, the factory-assigned addresses used by Ethernet. Other address attributes that might be relevant are fixed-length v. variable-length, and absolute v. relative (like file names). If you phone a toll-free number for a large retailer, any of dozens of phones may answer. Arguably, then, all these phones have the same non-unique address. A more traditional application for non-unique addresses might be for reaching any of several equivalent servers (or routers).

11. Video or audio teleconference transmissions among a reasonably large number of widely spread sites would be an excellent candidate: unicast would require a separate connection between each pair of sites, while broadcast would send far too much traffic to sites not interested in receiving it. Trying to reach any of several equivalent servers or routers might be another use for multicast, although broadcast tends to work acceptably well for things on this scale.

12. STDM and FDM both work best for channels with constant and uniform bandwidth requirements. For both mechanisms, bandwidth that goes unused by one channel is simply wasted, not available to other channels. Computer communications are bursty and have long idle periods; such usage patterns would magnify this waste. FDM and STDM also require that channels be allocated (and, for FDM, be assigned bandwidth) well in advance. Again, the connection requirements for computing tend to be too dynamic for this; at the very least, this would pretty much preclude using one channel per connection. FDM was preferred historically for TV/radio because it is very simple to build receivers; it also supports different channel sizes.
STDM was preferred for voice because it makes somewhat more efficient use of the underlying bandwidth of the medium, and because channels with different capacities were not originally an issue.

13. 1 Gbps = 10^9 bps, meaning each bit is 10^-9 sec (1 ns) wide. The length in the wire of such a bit is 1 ns × 2.3 × 10^8 m/sec = 0.23 m.

14. x KB is 8 × 1024 × x bits. y Mbps is y × 10^6 bps; the transmission time would be 8 × 1024 × x/(y × 10^6) sec = 8.192x/y ms.

15. (a) The minimum RTT is 2 × 385,000,000 m / (3 × 10^8 m/sec) = 2.57 sec.

(b) The delay × bandwidth product is 2.57 sec × 100 Mb/sec = 257 Mb = 32 MB.

(c) This represents the amount of data the sender can send before it would be possible to receive a response.

(d) We require at least one RTT before the picture could begin arriving at the ground (TCP would take two RTTs). Assuming bandwidth delay only, it would then take 25 MB/100 Mbps = 200 Mb/100 Mbps = 2.0 sec to finish sending, for a total time of 2.57 + 2.0 = 4.57 sec until the last picture bit arrives on earth.

16. The answer is in the book.

17. (a) Delay-sensitive; the messages exchanged are short. (b) Bandwidth-sensitive, particularly for large files. (Technically this does presume that the underlying protocol uses a large message size or window size; stop-and-wait transmission (as in Section 2.5 of the text) with a small message size would be delay-sensitive.) (c) Delay-sensitive; directories are typically of modest size. (d) Delay-sensitive; a file's attributes are typically much smaller than the file itself (even on NT filesystems).

18. (a) One packet consists of 5000 bits, and so is delayed due to bandwidth 500 µs along each link. The packet is also delayed 10 µs on each of the two links due to propagation delay, for a total of 1020 µs.

(b) With three switches and four links, the delay is 4 × 500 µs + 4 × 10 µs = 2.04 ms.

(c) With cut-through, the switch delays the packet by 200 bits = 20 µs. There is still one 500 µs delay waiting for the last bit, and 20 µs of propagation delay, so the total is 540 µs. To put it another way, the last bit still arrives 500 µs after the first bit; the first bit now faces two link delays and one switch delay but never has to wait for the last bit along the way. With three cut-through switches, the total delay would be 500 + 3 × 20 + 4 × 10 = 600 µs.

19. The answer is in the book.

20. (a) The effective bandwidth is 10 Mbps; the sender can send data steadily at this rate and the switches simply stream it along the pipeline. We are assuming here that no ACKs are sent, and that the switches can keep up and can buffer at least one packet.
(b) The data packet takes 2.04 ms as in 18(b) above to be delivered; the 400-bit ACKs take 40 µs/link for a total of 4 × 40 µs + 4 × 10 µs = 200 µs = 0.20 ms, for a total RTT of 2.24 ms. 5000 bits in 2.24 ms is about 2.2 Mbps, or 280 KB/sec.

(c) 6.5 × 10^10 bytes / 12 hours = 6.5 × 10^10 bytes/(4.32 × 10^4 sec) ≈ 1.5 MByte/sec = 12 Mbit/sec
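The store-and-forward arithmetic in Exercise 18 follows a fixed pattern (per-hop transmit delay plus per-link propagation delay), which a short script can reproduce. The parameter values below are the ones inferred from the solution (5000-bit packets, 10 Mbps links, 10 µs of propagation per link); they are stated as assumptions, not taken verbatim from the exercise.

```python
# Exercise 18 delay arithmetic. Inferred parameters: 5000-bit packets,
# 10 Mbps links, 10 µs propagation delay per link.
PKT = 5000           # bits
BW = 10e6            # bits/sec
PROP = 10e-6         # sec per link

transmit = PKT / BW                      # 500 µs per store-and-forward hop

# (a) one switch, two links: two full transmits + two propagation delays
t_a = 2 * transmit + 2 * PROP            # 1020 µs

# (b) three switches, four links: four transmits + four propagation delays
t_b = 4 * transmit + 4 * PROP            # 2040 µs

# (c) one cut-through switch that forwards after receiving 200 bits
t_c = transmit + 200 / BW + 2 * PROP     # 540 µs

print(round(t_a * 1e6), round(t_b * 1e6), round(t_c * 1e6))
```

Note how cut-through replaces a full 500 µs store-and-forward delay at the switch with only the 20 µs needed to receive the first 200 bits.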

21. (a) 10^7 bits/sec × 10^-5 sec = 100 bits = 12.5 bytes

(b) The first-bit delay is 520 µs through the store-and-forward switch, as in 18(a); 10^7 bits/sec × 520 × 10^-6 sec = 5200 bits. Alternatively, each link can hold 100 bits and the switch can hold 5000 bits.

(c) 1.5 × 10^6 bits/sec × 50 × 10^-3 sec = 75,000 bits = 9375 bytes

(d) This was intended to be through a satellite, i.e. between two ground stations, not to a satellite; this ground-to-ground interpretation makes the total one-way travel distance 2 × 35,900,000 meters. With a propagation speed of c = 3 × 10^8 meters/sec, the one-way propagation delay is thus 2 × 35,900,000/c = 0.24 sec. Bandwidth × delay is thus 1.5 × 10^6 bits/sec × 0.24 sec = 360,000 bits ≈ 45 KBytes.

22. (a) Per-link transmit delay is 10^4 bits / 10^7 bits/sec = 1000 µs. Total transmission time = 2 × 1000 + 2 × 20 + 35 = 2075 µs.

(b) When sending as two packets, here is a table of times for various events:

T=0 start
T=500 A finishes sending packet 1, starts packet 2
T=520 packet 1 finishes arriving at S
T=555 packet 1 departs for B
T=1000 A finishes sending packet 2
T=1055 packet 2 departs for B
T=1075 bit 1 of packet 2 arrives at B
T=1575 last bit of packet 2 arrives at B

Expressed algebraically, we still have one switch delay and two link delays, but the per-packet transmit delay is now 500 µs and three packet transmissions occur in sequence: 3 × 500 + 35 + 2 × 20 = 1575 µs. Smaller packets are faster here.

23. (a) Without compression the total time is 1 MB/bandwidth. When we compress the file, the total time is

    compression time + compressed size/bandwidth.

Equating these and rearranging, we get

    bandwidth = compression size reduction/compression time
              = 0.5 MB/1 sec = 0.5 MB/sec for the first case,
              = 0.6 MB/2 sec = 0.3 MB/sec for the second case.

(b) Latency doesn't affect the answer because it would affect the compressed and uncompressed transmission equally.

24. The number of packets needed, N, is 10^6/D, where D is the packet data size. Given that overhead = 100 × N and loss = D (we have already counted the lost packet's header in the overhead), we have overhead + loss = 100 × 10^6/D + D.

The function overhead + loss = 10^8/D + D is minimized at D = sqrt(10^8); the optimal size is thus 10,000 bytes.

25. Comparison of circuits and packets results as follows:

(a) Circuits pay an up-front penalty of 1024 bytes being sent on one round trip, for a total data count of 2048 + n, whereas packets pay an ongoing per-packet cost of 24 bytes, for a total count of 1024 × n/1000. So the question really asks how many packet headers it takes to exceed 2048 bytes, which is 86. Thus for files 86,000 bytes or longer, using packets results in more total data sent on the wire.

(b) The total transfer latency for packets is the sum of: the per-packet transmit delay t = 8192/b introduced at each of the s switches (s × t); the total propagation delay for the links ((s + 2) × 0.002); the per-packet processing delay introduced by each switch (s × 0.001); and the transmit delay for all c = n/1000 packets at the source (c × t). This gives a total latency of 8192s/b + 0.002(s + 2) + 0.001s + 8.192n/b seconds. The total latency for circuits is the transmit delay for the whole file (8n/b), the total propagation delay for the links, and the setup cost for the circuit, which is just like sending one packet each way on the path. Solving the resulting inequality for n shows that circuits achieve a lower delay for files larger than or equal to 987,000 B.

(c) Only the payload-to-overhead ratio affects the number of bits sent, and there the relationship is simple. The following table shows the latency results of varying the parameters by solving for the n where circuits become faster, as above. This table does not show how rapidly the performance diverges; for varying p it can be significant.

[Table: pivotal n for various combinations of s, b, and p; the numeric entries were lost in extraction.]

(d) Many responses are probably reasonable here. The model only considers the network implications, and does not take into account usage of processing or state storage capabilities on the switches. The model also ignores the presence of other traffic or of more complicated topologies.

26. The time to send one 2000-bit packet is 2000 bits/100 Mbps = 20 µs. The length of cable needed to exactly contain such a packet is 20 µs × 2 × 10^8 m/sec = 4,000 meters. 250 bytes in 4000 meters is 2000 bits in 4000 meters, or 50 bits per 100 m. With an extra 10 bits/100 m, we have a total of 60 bits/100 m. A 2000-bit packet now fills 2000/(0.6 bits/m) = 3333 meters.

27. For music we would need considerably more bandwidth, but we could tolerate high (but bounded) delays. We could not necessarily tolerate higher jitter, though; see the text. We might accept an audible error in voice traffic every few seconds; we might reasonably want the error rate during music transmission to be a hundredfold smaller. Audible errors would come either from outright packet loss, or from jitter (a packet's not arriving on time). Latency requirements for music, however, might be much lower; a several-second delay would be inconsequential. Voice traffic has at least a tenfold faster requirement here.

28. (a) 640 × 480 × 3 × 30 bytes/sec = 26.4 MB/sec

(b) 160 × 120 × 1 × 5 = 96,000 bytes/sec = 94 KB/sec

(c) 650 MB/75 min = 8.7 MB/min = 148 KB/sec

(d) 576 × 720 pixels = 414,720 bits = 51,840 bytes. At 14,400 bits/sec, this would take 28.8 seconds (ignoring overhead for framing and acknowledgments).

29. The answer is in the book.
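Returning to Exercise 24 above, the trade-off between header overhead and retransmitted data can be tabulated directly. The candidate packet sizes below are ours, chosen only to bracket the minimum.

```python
# Exercise 24: total bytes wasted as a function of packet data size D,
# for 10^6 bytes of data, 100-byte headers, and one lost packet.
def overhead_plus_loss(D):
    return 100 * (10**6 / D) + D   # header overhead + lost data

candidates = (1000, 5000, 10000, 20000)    # candidate sizes (ours)
for D in candidates:
    print(D, int(overhead_plus_loss(D)))

best = min(candidates, key=overhead_plus_loss)
print(best)   # 10000: the minimum of 10^8/D + D lies at D = sqrt(10^8)
```

Since x + c/x is minimized at x = sqrt(c), the analytic minimum agrees with the tabulated one at D = 10,000 bytes.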

30. (a) A file server needs lots of peak bandwidth. Latency is relevant only if it dominates bandwidth; jitter and average bandwidth are inconsequential. No lost data is acceptable, but without real-time requirements we can simply retransmit lost data.

(b) A print server needs less bandwidth than a file server (unless images are extremely large). We may be willing to accept higher latency than (a), also.

(c) A file server is a digital library of a sort, but in general the world wide web gets along reasonably well with much less peak bandwidth than most file servers provide.

(d) For instrument monitoring we don't care about latency or jitter. If data were continually generated, rather than bursty, we might be concerned mostly with average bandwidth rather than peak, and if the data really were routine we might just accept a certain fraction of loss.

(e) For voice we need guaranteed average bandwidth and bounds on latency and jitter. Some lost data might be acceptable; e.g. resulting in minor dropouts many seconds apart.

(f) For video we are primarily concerned with average bandwidth. For the simple monitoring application here, the relatively modest video of Exercise 28(b) might suffice; we could even go to monochrome (1 bit/pixel), at which point 5 frames/sec requires 12 KB/sec. We could tolerate multi-second latency delays; the primary restriction is that if the monitoring revealed a need for intervention then we still have time to act. Considerable loss, even of entire frames, would be acceptable.

(g) Full-scale television requires massive bandwidth. Latency, however, could be hours. Jitter would be limited only by our capacity to absorb the arrival-time variations by buffering. Some loss would be acceptable, but large losses would be visually annoying.

31. In STDM the offered timeslices are always the same length, and are wasted if they are unused by the assigned station.
The round-robin access mechanism would generally give each station only as much time as it needed to transmit, or none if the station had nothing to send, and so network utilization would be expected to be much higher. 32. (a) In the absence of any packet losses or duplications, when we are expecting the Nth packet we get the Nth packet, and so we can keep track of N locally at the receiver. (b) The scheme outlined here is the stop-and-wait algorithm of Section 2.5; as is indicated there, a header with at least one bit of sequence number is needed (to distinguish between receiving a new packet and a duplication of the previous packet). (c) With out-of-order delivery allowed, packets up to 1 minute apart must be distinguishable via sequence number. Otherwise a very old packet might

arrive and be accepted as current. Sequence numbers would have to count as high as

    bandwidth × 1 minute / packet size

33. In each case we assume the local clock starts at the same value.

(a) Latency: 100. Bandwidth: high enough to read the clock every 1 unit. A tiny bit of jitter perturbs the inferred latency slightly.

(b) Latency = 100; bandwidth: only enough to read the clock every 10 units. Arrival times fluctuate due to jitter, and so does the inferred latency.

(c) Latency = 5; zero jitter here.

35. Generally, with MAX_PENDING = 1, one or two connections will be accepted and queued; that is, the data won't be delivered to the server. The others will be ignored; eventually they will time out. When the first client exits, any queued connections are processed.

36. Note that UDP accepts a packet of data from any source at any time; TCP requires an advance connection. Thus, two clients can now talk simultaneously; their messages will be interleaved on the server.
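The sequence-number bound quoted in Exercise 32(c) above is a one-line calculation. The concrete numbers below (a 1 Mbps link and 1000-bit packets) are our own illustrative assumptions, not values from the exercise; only the "1 minute" window comes from the solution.

```python
# Exercise 32(c) bound: the sequence-number space must cover
# bandwidth * (maximum reordering delay) / packet size packets.
# Assumed example numbers: 1 Mbps link, 1000-bit packets, 60 sec window.
import math

bandwidth = 1e6       # bits/sec (assumed)
packet_bits = 1000    # bits per packet (assumed)
reorder_window = 60   # seconds (the "1 minute" in the solution)

distinct_needed = bandwidth * reorder_window / packet_bits
seq_bits = math.ceil(math.log2(distinct_needed))
print(int(distinct_needed), seq_bits)
```

With these assumed numbers, 60,000 packets can be in flight within the window, so 16-bit sequence numbers would be needed to keep an old packet from being mistaken for a current one.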

Solutions for Chapter 2

1. [Figure: the given bit pattern shown under the Bits, NRZ, Clock, Manchester, and NRZI encodings.]

2. See the figure below.

3. The answer is in the book.

4. One can list all 5-bit sequences and count, but here is another approach: there are 2^3 sequences that start with 00, and 2^3 that end with 00. There are two sequences, 00000 and 00100, that do both. Thus, the number that do either is 8 + 8 - 2 = 14, and finally the number that do neither is 32 - 14 = 18. Thus there would have been enough 5-bit codes meeting the stronger requirement; however, additional codes are needed for control sequences.

5. The stuffed bits (zeros) are shown in bold in the figure. The marks indicate each position where a stuffed 0 bit was removed. There were no stuffing errors detectable by the receiver; the only such error the receiver could identify would be seven 1s in a row.

6. The answer is in the book.

8. ..., DLE, DLE, DLE, ETX, ETX

9. (a) X, DLE, Y, where X can be anything besides DLE and Y can be anything except DLE or ETX. In other words, each DLE must be followed by either DLE or ETX.

(b)
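The inclusion-exclusion count in Exercise 4 above can be verified by brute force over all 32 five-bit codes:

```python
# Exercise 4 by brute force: count 5-bit codes that neither start
# nor end with two 0s.
codes = [format(i, "05b") for i in range(32)]
start00 = [c for c in codes if c.startswith("00")]
end00 = [c for c in codes if c.endswith("00")]
both = [c for c in codes if c.startswith("00") and c.endswith("00")]
neither = [c for c in codes
           if not c.startswith("00") and not c.endswith("00")]
print(len(start00), len(end00), both, len(neither))
# 8 start with 00, 8 end with 00, 2 do both, so 32 - (8+8-2) = 18 do neither
```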

10. (a) After 48 × 8 = 384 bits we can be off by no more than ±1/2 bit, which is about 1 part in 800.

(b) One frame is 810 bytes; at the STS-1 speed of 51.8 Mbps we are sending 51.8 × 10^6/(8 × 810) = about 8000 frames/sec, or about 480,000 frames/minute. Thus, if station B's clock ran faster than station A's by one part in 480,000, A would accumulate about one extra frame per minute.

11. Suppose an undetectable three-bit error occurs. The three bad bits must be spread among one, two, or three rows. If these bits occupy two or three rows, then some row must have exactly one bad bit, which would be detected by the parity bit for that row. But if the three bits are all in one row, then that row must again have a parity error (as must each of the three columns containing the bad bits).

12. If we flip the bits corresponding to the corners of a rectangle in the 2-D layout of the data, then all parity bits will still be correct. Furthermore, if four bits change and no error is detected, then the bad bits must form a rectangle: in order for the error to go undetected, each row and column must have no errors or exactly two errors.

13. If we know only one bit is bad, then 2-D parity tells us which row and column it is in, and we can then flip it. If, however, two bits are bad in the same row, then the row parity remains correct, and all we can identify is the columns in which the bad bits occur.

14. We need to show that the 1's-complement sum of two non-0x0000 numbers is non-0x0000. If no unsigned overflow occurs, then the sum is just the 2's-complement sum and can't be 0000 without overflow; in the absence of overflow, addition is monotonic. If overflow occurs, then the result is at least 0x0000 plus the addition of a carry bit, i.e. 0x0001.

15. Let's define swap([A,B]) = [B,A], where A and B are one byte each. We only need to show [A,B] + [C,D] = swap([B,A] + [D,C]).

If both (A+C) and (B+D) have no carry, the equation obviously holds.

If A+C has a carry and B+D+1 does not,
[A,B] + [C,D] = [(A+C) & 0xFF, B+D+1]
swap([B,A] + [D,C]) = swap([B+D+1, (A+C) & 0xFF]) = [(A+C) & 0xFF, B+D+1]
(The case where B+D+1 also has a carry is similar to the last case.)

If B+D has a carry, and A+C+1 does not,
[A,B] + [C,D] = [A+C+1, (B+D) & 0xFF]
swap([B,A] + [D,C]) = swap([(B+D) & 0xFF, A+C+1]) = [A+C+1, (B+D) & 0xFF]

If both (A+C) and (B+D) have a carry,
[A,B] + [C,D] = [((A+C) & 0xFF) + 1, ((B+D) & 0xFF) + 1]
swap([B,A] + [D,C]) = swap([((B+D) & 0xFF) + 1, ((A+C) & 0xFF) + 1]) = [((A+C) & 0xFF) + 1, ((B+D) & 0xFF) + 1]

16. Consider only the 1's complement sum of the 16-bit words. If we decrement a low-order byte in the data, we decrement the sum by 1, and can incrementally revise the old checksum by decrementing it by 1 as well. If we decrement a high-order byte, we must decrement the old checksum by 256.

17. Here is a rather combinatorial approach. Let a, b, c, d be 16-bit words. Let [a,b] denote the 32-bit concatenation of a and b, and let carry(a,b) denote the carry bit (1 or 0) from the 2's-complement sum a +2 b (we write +2 for 2's-complement addition and +1 for 1's-complement addition). It suffices to show that if we take the 32-bit 1's-complement sum of [a,b] and [c,d], and then add upper and lower 16 bits, we get the 16-bit 1's-complement sum of a, b, c, and d. We note a +1 b = a +2 b +2 carry(a,b).

The basic case is supposed to work something like this. First,

    [a,b] +2 [c,d] = [a +2 c +2 carry(b,d), b +2 d]

Adding in the carry bit, we get

    [a,b] +1 [c,d] = [a +2 c +2 carry(b,d), b +2 d +2 carry(a,c)]   (1)

Now we take the 1's-complement sum of the halves,

    a +2 c +2 carry(b,d) +2 b +2 d +2 carry(a,c) +2 (carry of the whole thing)

and regroup:

    = a +2 c +2 carry(a,c) +2 b +2 d +2 carry(b,d) +2 (carry of the whole thing)
    = (a +1 c) +2 (b +1 d) +2 carry(a +1 c, b +1 d)
    = (a +1 c) +1 (b +1 d)

which by associativity and commutativity is what we want.

There are a couple of annoying special cases in the preceding, where a sum is 0xFFFF and so adding in a carry bit triggers an additional overflow. Specifically, the carry(a,c) in (1) is actually carry(a,c,carry(b,d)), and secondly adding it to b +2 d may cause the lower half to overflow, and no provision has been made to carry over into the upper half. However, as long as a +2 c and b +2 d are not equal to 0xFFFF, adding 1 won't affect the overflow bit and so the above argument works. We handle the 0xFFFF cases separately.

Suppose that b +2 d = 0xFFFF, which as a 1's-complement quantity is equivalent to 0. Then a +1 b +1 c +1 d = a +1 c. On the other hand,

    [a,b] +1 [c,d] = [a +2 c, 0xFFFF] + carry(a,c).

If carry(a,c) = 0, then adding upper and lower halves together gives a +2 c = a +1 c. If

carry(a,c) = 1, we get [a,b] +1 [c,d] = [a +2 c +2 1, 0], and adding halves again leads to a +1 c.

Now suppose a +2 c = 0xFFFF. If carry(b,d) = 1, then b +2 d ≠ 0xFFFF and we have [a,b] +1 [c,d] = [0, b +2 d +2 1], and folding gives b +1 d. The carry(b,d) = 0 case is similar.

Alternatively, we may adopt a more algebraic approach. We may treat a buffer consisting of n-bit blocks as a large number written in base 2^n. The numeric value of this buffer is congruent mod (2^n - 1) to the (exact) sum of the digits, that is, to the exact sum of the blocks. If this latter sum has more than n bits, we can repeat the process. We end up with the n-bit 1's-complement sum, which is thus the remainder upon dividing the original number by 2^n - 1.

Let b be the value of the original buffer. The 32-bit checksum is thus b mod (2^32 - 1). If we fold the upper and lower halves, we get (b mod (2^32 - 1)) mod (2^16 - 1), and, because 2^32 - 1 is divisible by 2^16 - 1, this is b mod (2^16 - 1), the 16-bit checksum.

18. (a) We take the message, append 000 to it, and divide by 1001. The remainder is 011; what we transmit is the original message with this remainder appended.

(b) Inverting the first bit of the received string and dividing by 1001 (x^3 + 1) yields a nonzero remainder, so the error is detected.

19. The answer is in the book.

20. (b) [Table: columns p, q, and C × q; the entries were lost in extraction.]

(c) The bold entries 101 (in the dividend) and 110 (in the quotient), and the corresponding bold portion in the body of the long division here, correspond to the bold row of the preceding table.
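Both checksum claims proved above, the byte-swap invariance of Exercise 15 and the fold-down property of Exercise 17, can be spot-checked numerically. The `ones_sum` helper below is our own sketch of the standard end-around-carry sum, and the test words are arbitrary; none of this code is from the book.

```python
# Ones-complement sum with end-around carry, over words of a given width.
def ones_sum(words, bits=16):
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # end-around carry
    return total

def swap(w):          # exchange the two bytes of a 16-bit word
    return ((w & 0xFF) << 8) | (w >> 8)

words = [0x1234, 0xF00D, 0xBEEF, 0x00FF]   # arbitrary test data

# Exercise 15: checksum of byte-swapped words = byte-swap of checksum
assert ones_sum([swap(w) for w in words]) == swap(ones_sum(words))

# Exercise 17: the 32-bit ones-complement sum of [a,b] pairs, folded
# into 16 bits, equals the 16-bit ones-complement sum of the words
pairs = [(words[0] << 16) | words[1], (words[2] << 16) | words[3]]
s32 = ones_sum(pairs, bits=32)
folded = ones_sum([s32 >> 16, s32 & 0xFFFF])
assert folded == ones_sum(words)
print("ok")
```

Replacing `words` with other values (including the 0xFFFF edge cases discussed above) leaves both assertions true, as the proofs predict.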

21. (a) M has eight elements; there are only four values for e, so there must be m1 and m2 in M with e(m1) = e(m2). Now if m1 is transmuted into m2 by a two-bit error, then the error code e cannot detect this.

(b) For a crude estimate, let M be the set of N-bit messages with four 1s, and all the rest zeros. The size of M is (N choose 4) = N!/(4!(N-4)!). Any element of M can be transmuted into any other by an 8-bit error. If we take N large enough that the size of M is bigger than 2^32, then as in part (a) there must, for any 32-bit error code function e(m), be elements m1 and m2 of M with e(m1) = e(m2). To find a sufficiently large N, we note N!/(4!(N-4)!) > (N-3)^4/24; it thus suffices to find N so that (N-3)^4 > 24 × 2^32 ≈ 10^11. N = 600 works. Considerably smaller estimates are possible.

22. Assume a NAK is sent only when an out-of-order packet arrives. The receiver must now maintain a RESEND_NAK timer in case the NAK, or the packet it NAKed, is lost. Unfortunately, if the sender sends a packet and is then idle for a while, and this packet is lost, the receiver has no way of noticing the loss. Either the sender must maintain a timeout anyway, requiring ACKs, or else some zero-data filler packets must be sent during idle times. Both are burdensome. Finally, at the end of the transmission a strict NAK-only strategy would leave the sender unsure about whether any packets got through. A final out-of-order filler packet, however, might solve this.

23. (a) Propagation delay = 2 × 10^4 m/(2 × 10^8 m/sec) = 10^-4 sec = 100 µs.

(b) The roundtrip time would be about 200 µs. A plausible timeout time would be twice this, or 0.4 ms. Smaller values (but larger than 0.2 ms!) might be reasonable, depending on the amount of variation in actual RTTs; see the relevant discussion in the text.

(c) The propagation-delay calculation does not consider processing delays that may be introduced by the remote node; it may not be able to answer immediately.

24. Bandwidth × (roundtrip) delay is about 125 KB/sec × 2.5 sec, or 312 packets. The window size should be this large; the sequence number space must cover twice this range, or up to 624, so 10 bits are needed.

25. The answer is in the book.

26. If the receiver delays sending an ACK until buffer space is available, it risks delaying so long that the sender times out unnecessarily and retransmits the frame.

27. For Fig 2.19(b) (lost frame), there are no changes from the diagram in the text. The next two figures correspond to the text's Fig 2.19(c) and (d); (c) shows a lost ACK and (d) shows an early timeout. For (c), the receiver timeout is shown

slightly greater than (for definiteness) twice the sender timeout.

[Figure for (c), lost ACK: the sender's timeout triggers retransmission of Frame[N]; the receiver ignores the duplicate frame, still waiting for its own timeout on Frame[N+1], and retransmits ACK[N]; when ACK[N+1] arrives, the sender's timeout for Frame[N+1] is cancelled.]

Here is the version of Fig 2.19(c) (lost ACK), showing a receiver timeout of approximately half the sender timeout.

[Figure for (d), early timeout: the receiver times out and retransmits before the sender times out; the sender ignores the duplicate ACKs, its timeout being cancelled by the first ACK; yet another timeout is possible, depending on exact timeout intervals.]
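The duplicate-suppression behavior these diagrams depend on, a one-bit sequence number letting the receiver distinguish a new frame from a retransmission, can be sketched in a few lines. The function names are ours, and the model deliberately ignores timers; it only demonstrates why the retransmitted Frame[N] is delivered at most once.

```python
# Stop-and-wait receiver with a 1-bit sequence number (a sketch, not
# code from the book). Frames and ACKs are modeled as function calls;
# a lost ACK simply means the sender calls on_frame again.
def make_receiver():
    expected = 0
    delivered = []
    def on_frame(seq, data):
        nonlocal expected
        if seq == expected:          # new frame: deliver, flip expected bit
            delivered.append(data)
            expected ^= 1
        # whether new or duplicate, always (re)acknowledge this seq
        return seq                   # the ACK carries the frame's seq bit
    return on_frame, delivered

on_frame, delivered = make_receiver()
on_frame(0, "N")            # Frame[N] arrives; suppose its ACK is lost
on_frame(0, "N")            # sender times out, retransmits; ignored as dup
assert delivered == ["N"]   # delivered exactly once
on_frame(1, "N+1")          # next frame accepted normally
print(delivered)
```

This is exactly the scenario in (c): the duplicate frame is discarded, but the ACK is still regenerated so the sender can make progress.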

28. (a) The duplications below continue until the end of the transmission.

[Figure: the sender transmits Frame[1] twice (the original, and a response to a duplicate ACK); the receiver returns ACK[1] twice (the original, and a response to the duplicate frame); the same doubling repeats for Frame[2]/ACK[2], Frame[3]/ACK[3], and so on.]

(b) To trigger the sorcerer's apprentice phenomenon, a duplicate data frame must cross somewhere in the network with the previous ACK for that frame. If both sender and receiver adopt a resend-on-timeout strategy, with the same timeout interval, and an ACK is lost, then both sender and receiver will indeed retransmit at about the same time. Whether these retransmissions are synchronized enough that they cross in the network depends on other factors; it helps to have some modest latency delay or else slow hosts. With the right conditions, however, the sorcerer's apprentice phenomenon can be reliably reproduced.

29. The following is based on what TCP actually does: every ACK might (optionally or not) contain a value the sender is to use as a maximum for SWS. If this value is zero, the sender stops. A later ACK would then be sent with a nonzero SWS, when a receive buffer becomes available. Some mechanism would need to be provided to ensure that this later ACK is not lost, lest the sender wait forever. It is best if each new ACK reduces SWS by no more than 1, so that the sender's LFS never decreases.

Assuming the protocol above, we might have something like this:

T=0 Sender sends Frame1-Frame4. In short order, ACK1...ACK4 are sent setting SWS to 3, 2, 1, and 0 respectively. The Sender now waits for SWS>0.
T=1 Receiver frees first buffer; sends ACK4/SWS=1. Sender slides window forward and sends Frame5. Receiver sends ACK5/SWS=0.
T=2 Receiver frees second buffer; sends ACK5/SWS=1. Sender sends Frame6; receiver sends ACK6/SWS=0.
T=3 Receiver frees third buffer; sends ACK6/SWS=1.

Sender sends Frame7; receiver sends ACK7/SWS=0.
T=4 Receiver frees fourth buffer; sends ACK7/SWS=1. Sender sends Frame8; receiver sends ACK8/SWS=0.

30. Here is one approach; variations are possible.

If Frame[N] arrives, the receiver sends ACK[N] if NFE=N; otherwise, if N was in the receive window, the receiver sends SACK[N].

The sender keeps a bucket of values of N>LAR for which SACK[N] was received; note that whenever LAR slides forward this bucket will have to be purged of all N≤LAR.

If the bucket contains one or two values, these could be attributed to out-of-order delivery. However, the sender might reasonably assume that whenever there was an N>LAR with Frame[N] unacknowledged but with three, say, later SACKs in the bucket, then Frame[N] was lost. (The number three here is taken from TCP with fast retransmit, which uses duplicate ACKs instead of SACKs.) Retransmission of such frames might then be in order. (TCP's fast-retransmit strategy would only retransmit Frame[LAR+1].)

31. The right diagram, for part (b), shows each of frames 4-6 timing out after a 2×RTT timeout interval; a more realistic implementation (e.g. TCP) would probably revert to SWS=1 after losing packets, to address both congestion control and the lack of ACK clocking.

[Figure: two sender/receiver timelines. Left, part (a): Frames[1]-[3] are ACKed within 1 RTT; Frame[4] is lost, so Frames[5] and [6] arrive out of order (we might resend ACK[3] here); after a timeout, Frame[4] is retransmitted, the receiver responds with a cumulative ACK[6], and Frame[7] follows, finishing at about 3 RTT. Right, part (b): Frames[4]-[6] are all lost; each times out and is retransmitted individually, answered by ACK[4], ACK[5], and ACK[6] in turn, before Frame[7] is sent.]

32. The answer is in the book.

33. In the following, ACK[N] means that all packets with sequence number less than N have been received.

1. The sender sends DATA[0], DATA[1], DATA[2]. All arrive.
2. The receiver sends ACK[3] in response, but this is slow. The receive window is now DATA[3]..DATA[5].
3. The sender times out and resends DATA[0], DATA[1], DATA[2]. For convenience, assume DATA[1] and DATA[2] are lost. The receiver accepts DATA[0] as DATA[5], because they have the same transmitted sequence number.
4. The sender finally receives ACK[3], and now sends DATA[3]-DATA[5]. The receiver, however, believes DATA[5] has already been received, when DATA[0] arrived, above, and throws DATA[5] away as a duplicate. The protocol now continues to proceed normally, with one bad block in the received stream.

34. We first note that data below the sending window (that is, <LAR) is never sent again, and hence, because out-of-order arrival is disallowed, if DATA[N] arrives at the receiver then nothing at or before DATA[N-3] can arrive later. Similarly, for ACKs, if ACK[N] arrives then (because ACKs are cumulative) no ACK

20 Chapter 2 18 before ACK[N] can arrive later. As before, we let ACK[N] denote the acknowledgment of all data packets less than N. (a) If DATA[6] is in the receive window, then the earliest that window can be is DATA[4]-DATA[6]. This in turn implies ACK[4] was sent, and thus that DATA[1]-DATA[3] were received, and thus that DATA[0], by our initial remark, can no longer arrive. (b) If ACK[6] may be sent, then the lowest the sending window can be is DATA[3]..DATA[5]. This means that ACK[3] must have been received. Once an ACK is received, no smaller ACK can ever be received later. 35. (a) The smallest working value for MaxSeqNum is 8. It suffices to show that if DATA[8] is in the receive window, then DATA[0] can no longer arrive at the receiver. We have that DATA[8] in receive window the earliest possible receive window is DATA[6]..DATA[8] ACK[6] has been received DATA[5] was delivered. But because SWS=5, all DATA[0] s sent were sent before DATA[5] by the no-out-of-order arrival hypothesis, DATA[0] can no longer arrive. (b) We show that if MaxSeqNum=7, then the receiver can be expecting DATA[7] and an old DATA[0] can still arrive. Because 7 and 0 are indistinguishable mod MaxSeqNum, the receiver cannot tell which actually arrived. We follow the strategy of Exercise Sender sends DATA[0]...DATA[4]. All arrive. 2. Receiver sends ACK[5] in response, but it is slow. The receive window is now DATA[5]..DATA[7]. 3. Sender times out and retransmits DATA[0]. The receiver accepts it as DATA[7]. (c) MaxSeqNum SWS + RWS. 36. (a) Note that this is the canonical SWS = bandwidth delay case, with RTT = 4 sec. In the following we list the progress of one particular packet. At any given instant, there are four packets outstanding in various states. T=N Data[N] leaves A T=N+1 Data[N] arrives at R T=N+2 Data[N] arrives at B; ACK[N] leaves T=N+3 ACK[N] arrives at R T=N+4 ACK[N] arrives at A; DATA[N+4] leaves. 
Here is a specific timeline showing all packets in progress:

T=0 Data[0]...Data[3] ready; Data[0] sent
T=1 Data[0] arrives at R; Data[1] sent
T=2 Data[0] arrives at B; ACK[0] starts back; Data[2] sent
T=3 ACK[0] arrives at R; Data[3] sent
T=4 ACK[0] arrives at A; Data[4] sent
T=5 ACK[1] arrives at A; Data[5] sent
...
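Before moving to part (b): the steady state of part (a) can be captured numerically. With one packet transmitted per second and an RTT of 4 seconds, a window of bandwidth × delay = 4 packets keeps the sender busy, and the ACK for Data[N] returns exactly when Data[N+4] is due to leave. A trivial sketch (the helper names are ours, not the manual's):

```cpp
#include <cassert>

// Window needed to keep the pipe full: bandwidth (packets/sec) x RTT (sec).
int pipeFullWindow(int packetsPerSec, int rttSec) {
    return packetsPerSec * rttSec;
}

// Per the timeline above, ACK[N] reaches A one full RTT after Data[N] departs,
// which is exactly when Data[N + SWS] must be sent.
int ackArrivalTime(int n, int rttSec) {
    return n + rttSec;
}
```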

(b)
T=0 Data[0]...Data[3] sent
T=1 Data[0]...Data[3] arrive at R
T=2 Data arrive at B; ACK[0]...ACK[3] start back
T=3 ACKs arrive at R
T=4 ACKs arrive at A; Data[4]...Data[7] sent
T=5 Data arrive at R

37.
T=0 A sends frames 1-4. Frame[1] starts across the R-B link. Frames 2,3,4 are in R's queue.
T=1 Frame[1] arrives at B; ACK[1] starts back; Frame[2] leaves R. Frames 3,4 are in R's queue.
T=2 ACK[1] arrives at R and then A; A sends Frame[5] to R; Frame[2] arrives at B; B sends ACK[2] to R. R begins sending Frame[3]; frames 4,5 are in R's queue.
T=3 ACK[2] arrives at R and then A; A sends Frame[6] to R; Frame[3] arrives at B; B sends ACK[3] to R; R begins sending Frame[4]; frames 5,6 are in R's queue.
T=4 ACK[3] arrives at R and then A; A sends Frame[7] to R; Frame[4] arrives at B; B sends ACK[4] to R. R begins sending Frame[5]; frames 6,7 are in R's queue.

The steady-state queue size at R is two frames.

38.
T=0 A sends frames 1-4. Frame[1] starts across the R-B link. Frame[2] is in R's queue; frames 3 & 4 are lost.
T=1 Frame[1] arrives at B; ACK[1] starts back; Frame[2] leaves R.
T=2 ACK[1] arrives at R and then A; A sends Frame[5] to R. R immediately begins forwarding it to B. Frame[2] arrives at B; B sends ACK[2] to R.
T=3 ACK[2] arrives at R and then A; A sends Frame[6] to R. R immediately begins forwarding it to B. Frame[5] (not 3) arrives at B; B sends no ACK.
T=4 Frame[6] arrives at B; again, B sends no ACK.
T=5 A TIMES OUT, and retransmits frames 3 and 4. R begins forwarding Frame[3] immediately, and enqueues 4.
T=6 Frame[3] arrives at B and ACK[3] begins its way back. R begins forwarding Frame[4].
T=7 Frame[4] arrives at B and ACK[6] begins its way back. ACK[3] reaches A and A then sends Frame[7]. R begins forwarding Frame[7].

39. Ethernet has a minimum frame size (64 bytes for 10 Mbps; considerably larger for faster Ethernets); smaller packets are padded out to the minimum size.
Protocols above Ethernet must be able to distinguish such padding from actual data.
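The padding rule in the answer to 39 can be sketched as follows; the 64-byte minimum is for classic 10 Mbps Ethernet, and the helper name is ours:

```cpp
#include <cassert>

// Ethernet pads any frame shorter than the minimum out to the minimum;
// a higher-layer length field is then needed to separate data from pad.
int paddedFrameSize(int frameBytes, int minBytes) {
    return frameBytes < minBytes ? minBytes : frameBytes;
}
```

A 30-byte frame is carried as 64 bytes on the wire, while frames at or above the minimum are unchanged.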

40. Hosts sharing the same address will be considered to be the same host by all other hosts. Unless the conflicting hosts coordinate the activities of their higher-level protocols, it is likely that higher-level protocol messages with otherwise identical demux information from both hosts will be interleaved, resulting in communication breakdown.

41. One-way delays:

Coax: 1500 m, 6.49 µs
Link: 1000 m, 5.13 µs
Repeaters: two, 1.20 µs
Transceivers: six (two for each repeater, one for each station), 1.20 µs
Drop cable: 6×50 m, 1.54 µs
Total: 15.56 µs

The round-trip delay is thus about 31.1 µs, or 311 bits. The official total is 464 bits, which when extended by 48 bits of jam signal exactly accounts for the 512-bit minimum packet size. The 1982 Digital-Intel-Xerox specification presents a delay budget (page 62 of that document) that totals just under these 464 bit-times, leaving 20 nanoseconds for unforeseen contingencies.

42. A station must not only detect a remote signal, but for collision detection it must detect a remote signal while it itself is transmitting. This requires much higher remote-signal intensity.

43. (a) Assuming 48 bits of jam signal was still used, the minimum packet size would be 4640+48 bits = 586 bytes.

(b) This packet size is considerably larger than many higher-level packet sizes, resulting in considerable wasted bandwidth.

(c) The minimum packet size could be smaller if the maximum collision domain diameter were reduced, and if sundry other tolerances were tightened up.

44. (a) A can choose kA=0 or 1; B can choose kB=0,1,2,3. A wins outright if (kA, kB) is among (0,1), (0,2), (0,3), (1,2), (1,3); there is a 5/8 chance of this.

(b) Now we have kB among 0..7. If kA=0, there are 7 choices for kB that have A win; if kA=1 then there are 6 choices. All told, the probability of A's winning outright is 13/16.

(c) P(winning race 1) = 5/8 > 1/2 and P(winning race 2) = 13/16 > 3/4; generalizing, we assume the odds of A winning the ith race exceed 1 - 1/2^(i-1).
We now have that P(A wins every race given that it wins races 1-3) ≥ (1 - 1/8)(1 - 1/16)(1 - 1/32)(1 - 1/64)... > 3/4.
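The 5/8 and 13/16 figures in parts (a) and (b) can be verified by exhaustive enumeration. In this sketch (the function name is ours), A draws its backoff kA from 0..2^aBits-1 and B draws kB from 0..2^bBits-1; A wins the race outright when kA < kB:

```cpp
#include <cassert>

// Count the (kA, kB) pairs for which A's backoff is strictly smaller,
// i.e. A seizes the channel outright. Dividing by 2^aBits * 2^bBits
// gives the probability.
int winCount(int aBits, int bBits) {
    int wins = 0;
    for (int kA = 0; kA < (1 << aBits); kA++)
        for (int kB = 0; kB < (1 << bBits); kB++)
            if (kA < kB)
                wins++;
    return wins;
}
```

winCount(1, 2) gives 5 of the 8 pairs for race 1, and winCount(1, 3) gives 13 of the 16 pairs for race 2, matching (a) and (b).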

(d) B gives up on it, and starts over with its next frame.

45. (a) If A succeeds in sending a packet, B will get the next chance. If A and B are the only hosts contending for the channel, then even a wait of a fraction of a slot time would be enough to ensure alternation.

(b) Let A and B and C be contending for a chance to transmit. We suppose the following: A wins the first race, and so for the second race it defers to B and C for two slot times. B and C collide initially; we suppose B wins the channel from C one slot time later (when A is still deferring). When B now finishes its transmission we have the third race for the channel. B defers for this race; let us suppose A wins. Similarly, A defers for the fourth race, but B wins. At this point, the backoff range for C is quite high; A and B, however, are each quickly successful (typically on their second attempt), and so their backoff ranges remain bounded by one or two slot times. As each defers to the other for this amount of time after a successful transmission, there is a strong probability that if we get to this point they will continue to alternate until C finally gives up.

(c) We might increase the backoff range given a decaying average of A's recent success rate.

46. If the hosts are not perfectly synchronized, the preamble of the colliding packet will interrupt clock recovery.

47. Here is one possible solution; many, of course, are possible. The probability of four collisions appears to be quite low. Events are listed in order of occurrence.

A attempts to transmit; discovers the line is busy and waits.
B attempts to transmit; discovers the line is busy and waits.
C attempts to transmit; discovers the line is busy and waits.
D finishes; A, B, and C all detect this, attempt to transmit, and collide. A chooses kA=1, B chooses kB=1, and C chooses kC=1.
One slot time later A, B, and C all attempt to retransmit, and again collide. A chooses kA=2, B chooses kB=3, and C chooses kC=1.
One slot time later C attempts to transmit, and succeeds. While it transmits, A and B both attempt to retransmit but discover the line is busy and wait.
C finishes; A and B attempt to retransmit and a third collision occurs. A and B back off and (since we require a fourth collision) once again happen to choose the same k < 8.
A and B collide for the fourth time; this time A chooses kA=15 and B chooses a smaller kB; kB slot times later, B transmits.
While B is transmitting, A attempts to transmit but sees the line is busy, and waits for B to finish.
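The "same k < 8" coincidence required above has probability 1/8: after their third collision, A and B each draw k independently from 0..7. A quick enumeration (the helper is ours, written for this note) confirms it:

```cpp
#include <cassert>

// After the nth collision each station draws k from 0..2^bits - 1.
// Count the pairs (a, b) with a == b; dividing by 2^bits * 2^bits
// gives the probability both stations draw the same backoff.
int samePickCount(int bits) {
    int same = 0;
    for (int a = 0; a < (1 << bits); a++)
        for (int b = 0; b < (1 << bits); b++)
            if (a == b)
                same++;
    return same;
}
```

For bits = 3 this yields 8 of 64 pairs, i.e. 1/8.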

48. Many variations are, of course, possible. The scenario below attempts to demonstrate several plausible combinations.

D finishes transmitting.
First slot afterwards: all three defer (P=8/27).
Second slot afterwards: A,B attempt to transmit (and collide); C defers.
Third slot: C transmits (A and B are presumably backing off, although no relationship between p-persistence and backoff strategy was described).
C finishes.
First slot afterwards: B attempts to transmit and A defers, so B succeeds.
B finishes.
First slot afterwards: A defers.
Second slot afterwards: A defers.
Third slot afterwards: A defers.
Fourth slot afterwards: A defers a fourth time (P=16/81, about 20%).
Fifth slot afterwards: A transmits.
A finishes.

49. (a) The second address must be distinct from the first, the third from the first two, and so on; the probability that none of the address choices from the second to the one thousand and twenty-fourth collides with an earlier choice is

(1 - 1/2^48)(1 - 2/2^48) ... (1 - 1023/2^48) ≈ 1 - (1 + 2 + ... + 1023)/2^48 = 1 - 1,047,552/(2 × 2^48).

The probability of a collision is thus 1,047,552/(2 × 2^48) ≈ 1.9 × 10^-9. The denominator should probably be 2^46 rather than 2^48, since two bits in an Ethernet address are fixed.

(b) The probability of the above over a million tries is larger by a factor of about 10^6, i.e. on the order of 10^-3.

(c) Using the method of (a) yields (2^30)^2/(2 × 2^48) = 2^11; we are clearly beyond the valid range of the approximation. A better approximation, using logs, is presented in a later exercise. Suffice it to say that a collision is essentially certain.

50. (a) Here is a sample run. The backoff-time binary digits were chosen by coin toss, with heads=1 and tails=0. Backoff times are then converted to decimal.

T=0: hosts A,B,C,D,E all transmit and collide. Backoff times are chosen by a single coin flip; we happened to get kA=1, kB=0, kC=0, kD=1, kE=1. At the end of this first collision, T is now 1. B and C retransmit at T=1; the others wait until T=2.
T=1: hosts B and C transmit, immediately after the end of the first collision, and collide again. This time two coin flips are needed for each backoff; we happened to get kB = 00 = 0, kC = 11 = 3. At this point T is now 2; B will thus attempt again at T=2+0=2; C will attempt again at T=2+3=5.

T=2: hosts A,B,D,E attempt. B chooses a three-bit backoff time as it is on its third collision, while the others choose two-bit times. We got kA = 10 = 2, kB = 010 = 2, kD = 01 = 1, kE = 11 = 3. We add each k to T=3 to get the respective retransmission-attempt times: T=5,5,4,6.

T=3: Nothing happens.

T=4: Station D is the only one to attempt transmission; it successfully seizes the channel.

T=5: Stations A, B, and C sense the channel before transmission, but find it busy. E joins them at T=6.

(b) Perhaps the most significant difference on a real Ethernet is that stations close to each other will detect collisions almost immediately; only stations at extreme opposite points will need a full slot time to detect a collision. Suppose stations A and B are close, and C is far away. All transmit at the same time T=0. Then A and B will effectively start their backoff at T≈0; C will on the other hand wait for T=1. If A, B, and C choose the same backoff time, A and B will be nearly a full slot ahead. Interframe spacing is only one-fifth of a slot time and applies to all participants equally; it is not likely to matter here.

51. Here is a simple program (also available on the web site):

#define USAGE "ether N"
// Simulates N ethernet stations all trying to transmit at once;
// returns average # of slot times until one station succeeds.
#include <iostream>
#include <cstdlib>
#include <cassert>
using namespace std;

#define MAX 1000 /* max # of stations */

class station {
public:
    void reset() { NextAttempt = CollisionCount = 0; }
    bool transmits(int T) { return NextAttempt == T; }
    void collide() {                   // updates station after a collision
        CollisionCount++;
        NextAttempt += 1 + backoff(CollisionCount);
        // the 1 above is for the current slot
    }
private:
    int NextAttempt;
    int CollisionCount;
    static int backoff(int k) {
        // choose random number 0..2^k-1; i.e. choose k random bits
        unsigned short r = rand();
        unsigned short mask = 0xFFFF >> (16 - k);   // mask = 2^k-1
        return int(r & mask);
    }
};

station S[MAX];

// run does a single simulation;
// it returns the time at which some entrant transmits
int run(int N) {
    int time = 0;
    int i;
    for (i = 0; i < N; i++) {
        S[i].reset();
    }
    while (1) {
        int count = 0;      // # of attempts at this time
        int j = -1;         // save j as index of one of the attempts
        for (i = 0; i < N; i++) {
            if (S[i].transmits(time)) { j = i; ++count; }
        }
        if (count == 1)     // we are done
            return time;
        else if (count > 1) {           // collisions occurred
            for (i = 0; i < N; i++) {
                if (S[i].transmits(time)) S[i].collide();
            }
        }
        ++time;
    }
}

int RUNCOUNT = 10000;

int main(int argc, char *argv[]) {
    int N, i, runsum = 0;
    assert(argc == 2);
    N = atoi(argv[1]);
    assert(N < MAX);
    for (i = 0; i < RUNCOUNT; i++)
        runsum += run(N);
    cout << "runsum = " << runsum
         << " RUNCOUNT= " << RUNCOUNT
         << " average: " << ((double)runsum)/RUNCOUNT << endl;
    return 0;
}

Here is some data obtained from it:

[Table: # stations vs. average # of slot times; the values did not survive transcription.]

52. We alternate N/2 slots of wasted bandwidth with 5 slots of useful bandwidth. The useful fraction is: 5/(N/2 + 5) = 10/(N+10).

53. (a) The program is below (and on the web site). It produced the following output:

[Table: λ vs. average # of slot times; the values did not survive transcription.]

The minimum occurs at about λ=2; the theoretical value of the minimum is 2e - 1 ≈ 4.44.

(b) If the contention period has length C, then the useful fraction is 8/(C + 8), which is about 64% for C = 2e - 1.

#include <iostream>
#include <cstdlib>
#include <cmath>
using namespace std;

const int RUNCOUNT = 100000;   // value lost in transcription; 100000 assumed

// X = X(lambda) is our random variable
double X(double lambda) {
    double u;
    do {
        u = double(rand())/RAND_MAX;
    } while (u == 0);
    double val = -log(u)*lambda;
    return val;
}

double run(double lambda) {
    double time = 0;
    double prevtime = -1;
    double nexttime = 0;
    time = X(lambda);
    nexttime = time + X(lambda);
    // while collision: adjacent times within +/- 1 slot
    while (time - prevtime < 1 || nexttime - time < 1) {
        prevtime = time;
        time = nexttime;
        nexttime += X(lambda);
    }
    return time;
}

int main(int argc, char *argv[]) {
    int i;
    double sum, lambda;
    for (lambda = 1.0; lambda <= 3.01; lambda += 0.1) {
        sum = 0;
        for (i = 0; i < RUNCOUNT; i++)
            sum += run(lambda);
        cout << lambda << " " << sum/RUNCOUNT << endl;
    }
    return 0;
}

54. The sender of a frame normally removes it as the frame comes around again. The sender might either have failed (an orphaned frame), or the frame's source address might be corrupted so the sender doesn't recognize it. A monitor station fixes this by setting the monitor bit on the first pass; frames with the bit set (i.e. the corrupted frame, now on its second pass) are removed. The source address doesn't matter at this point.

55. 200 m/(2 × 10^8 m/sec) = 1 µs; at 16 Mbps this is 16 bits. If we assume that each station introduces a minimum of 1 bit of delay, then the five stations add another five bits. So the monitor must add 24 - (16 + 5) = 3 additional bits of delay. At 4 Mbps the monitor needs to add 24 - (4 + 5) = 15 more bits.

56. (a) THT/(THT + RingLatency)
(b) Infinity; we let the station transmit as long as it likes.
(c) TRT ≤ N × THT + RingLatency

57. At 4 Mbps it takes 2 ms to send a packet. A single active host would transmit for 2000 µs and then be idle for 200 µs as the token went around; this yields
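The arithmetic in 55 can be packaged as a small helper (our naming, not the manual's): ring latency in bits is the propagation delay (1 µs, i.e. one bit per Mbps of line rate) plus one bit per station, and the monitor supplies whatever is left of the 24-bit token.

```cpp
#include <cassert>

// Extra delay bits the monitor must insert so the ring can hold a
// complete token (Exercise 55's setup: 1 us of propagation, hence
// `mbps` bits at `mbps` Mbps, plus 1 bit of delay per station).
int monitorDelayBits(int mbps, int stations, int tokenBits) {
    int latencyBits = mbps + stations;
    return tokenBits - latencyBits;
}
```

This reproduces the two cases above: 3 bits at 16 Mbps and 15 bits at 4 Mbps.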


Source: https://docplayer.net/13738329-Computer-networks-a-systems-approach-fourth-edition-solutions-manual-larry-peterson-and-bruce-davie.html