CCNA Routing and Switching Complete Study Guide – Todd Lammle

At the Transport layer, TCP is known as a reliable service but UDP is not. This distinction gives application developers more options because they have a choice between the two protocols when they are designing products for this layer.

      The Transport layer is responsible for providing mechanisms for multiplexing upper-layer applications, establishing sessions, and tearing down virtual circuits. It can also hide the details of network-dependent information from the higher layers as well as provide transparent data transfer.

       The term reliable networking can be used at the Transport layer. Reliable networking requires that acknowledgments, sequencing, and flow control all be used.

      The Transport layer can be either connectionless or connection-oriented, but because Cisco really wants you to understand the connection-oriented function of the Transport layer, I’m going to go into that in more detail here.

      Connection-Oriented Communication

      For reliable transport to occur, a device that wants to transmit must first establish a connection-oriented communication session with a remote device – its peer system – in a process known as a call setup or a three-way handshake. Once this process is complete, the data transfer occurs, and when it’s finished, a call termination takes place to tear down the virtual circuit.

Figure 1.10 depicts a typical reliable session taking place between sending and receiving systems. In it, you can see that both hosts’ application programs begin by notifying their individual operating systems that a connection is about to be initiated. The two operating systems communicate by sending messages over the network confirming that the transfer is approved and that both sides are ready for it to take place. After all of this required synchronization takes place, a connection is fully established and the data transfer begins. And by the way, it’s really helpful to understand that this virtual circuit setup is often referred to as overhead!

[Diagram: the sending and receiving systems exchange SYN, SYN/ACK, and ACK segments, then establish the connection and begin the data transfer.]

FIGURE 1.10 Establishing a connection-oriented session

      Okay, now while the information is being transferred between hosts, the two machines periodically check in with each other, communicating through their protocol software to ensure that all is going well and that the data is being received properly.

      Here’s a summary of the steps in the connection-oriented session – that three-way handshake – pictured in Figure 1.10 (a quick socket-level sketch follows the list):

      ■ The first “connection agreement” segment is a request for synchronization (SYN).

      ■ The next segments acknowledge (ACK) the request and establish connection parameters – the rules – between hosts. These segments request that the receiver’s sequencing be synchronized here as well so that a bidirectional connection can be formed.

      ■ The final segment is also an acknowledgment, which notifies the destination host that the connection agreement has been accepted and that the actual connection has been established. Data transfer can now begin.
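
      Just to make the handshake concrete, here’s a minimal Python sketch. It’s not exam material, and the loopback address, port number, and message are made up for the demo; the point is that the client’s connect() call kicks off the SYN, the operating system’s TCP stack handles the SYN/ACK and ACK for us, and accept() simply hands the server the established connection:

```python
# Minimal sketch (assumed address, port, and message): the OS TCP stack performs
# the SYN, SYN/ACK, ACK exchange; no application data moves until setup is done.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050             # loopback address and an arbitrary demo port

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)                      # passive open: sit and wait for a SYN
        conn, addr = srv.accept()          # returns once a handshake has completed
        with conn:
            data = conn.recv(1024)         # data transfer happens only after call setup
            print("Server received:", data.decode())

t = threading.Thread(target=server, daemon=True)
t.start()
time.sleep(0.2)                            # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))              # active open: SYN -> SYN/ACK -> ACK
    cli.sendall(b"hello after call setup") # closing the socket later tears down the circuit

t.join(timeout=2)                          # wait for the server side to finish
```

      Notice that nothing useful is transmitted until the setup completes – that’s exactly the overhead the figure is showing.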

      Sounds pretty simple, but things don’t always flow so smoothly. Sometimes during a transfer, congestion can occur because a high-speed computer is generating data traffic a lot faster than the network itself can process it! And a whole bunch of computers simultaneously sending datagrams through a single gateway or destination can also jam things up pretty badly. In the latter case, a gateway or destination can become congested even though no single source caused the problem. Either way, the problem is basically akin to a freeway bottleneck – too much traffic for too small a capacity. It’s not usually one car that’s the problem; it’s just that there are way too many cars on that freeway at once!

      But what actually happens when a machine receives a flood of datagrams too quickly for it to process? It stores them in a memory section called a buffer. Sounds great; it’s just that this buffering action can solve the problem only if the datagrams are part of a small burst. If the datagram deluge continues, eventually exhausting the device’s memory, its flood capacity will be exceeded and it will dump any and all additional datagrams it receives, just like an overflowing bucket!
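
      If you want to picture that overflowing bucket, here’s a tiny sketch; the buffer size and burst length are just made-up numbers, and real receive buffers are of course far larger:

```python
from collections import deque

BUFFER_SIZE = 4          # assumed capacity for the demo; real buffers hold far more
buffer = deque()         # the memory section where incoming datagrams wait to be processed
dropped = 0

# A burst of 10 datagrams arrives faster than the host can process them.
for seq in range(10):
    if len(buffer) < BUFFER_SIZE:
        buffer.append(seq)   # a small burst fits in the buffer and is saved for later
    else:
        dropped += 1         # memory exhausted: any additional datagrams are simply dumped

print("buffered:", list(buffer))   # buffered: [0, 1, 2, 3]
print("dropped:", dropped)         # dropped: 6
```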

      Flow Control

      Since floods and losing data can both be tragic, we have a fail-safe solution in place known as flow control. Its job is to ensure data integrity at the Transport layer by allowing applications to request reliable data transport between systems. Flow control prevents a sending host on one side of the connection from overflowing the buffers in the receiving host. Reliable data transport employs a connection-oriented communications session between systems, and the protocols involved ensure that the following will be achieved (there’s a toy sketch of these mechanics just below):

      ■ The segments delivered are acknowledged back to the sender upon their reception.

      ■ Any segments not acknowledged are retransmitted.

      ■ Segments are sequenced back into their proper order upon arrival at their destination.

      ■ A manageable data flow is maintained in order to avoid congestion, overloading, or worse, data loss.

       The purpose of flow control is to provide a way for the receiving device to control the amount of data sent by the sender.
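
      Here’s a toy simulation of the first three bullets. It isn’t real TCP – the loss rate, segment names, and send() helper are invented purely for illustration – but it shows the pattern: the pretend network drops some segments, the sender retransmits whatever was never acknowledged, and the receiver sequences everything back into proper order:

```python
import random

random.seed(7)                          # fixed seed so the demo is repeatable

data = ["seg-0", "seg-1", "seg-2", "seg-3", "seg-4"]
received = {}                           # receiver's buffer, keyed by sequence number
acked = set()                           # sequence numbers the sender has seen ACKs for

def send(seq):
    """Pretend network: roughly one segment in three is lost in transit."""
    if random.random() < 0.33:
        return                          # segment lost, so no ACK ever comes back
    received[seq] = data[seq]           # the receiver stores the segment...
    acked.add(seq)                      # ...and its ACK makes it back to the sender

for seq in range(len(data)):            # first pass: send everything once
    send(seq)

while len(acked) < len(data):           # keep going until every segment is acknowledged
    for seq in range(len(data)):
        if seq not in acked:
            send(seq)                   # segments not acknowledged are retransmitted

# Segments are sequenced back into their proper order upon arrival.
print([received[seq] for seq in sorted(received)])
# ['seg-0', 'seg-1', 'seg-2', 'seg-3', 'seg-4']
```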

Because of the transport function, network flood control systems really work well. Instead of dumping and losing data, the Transport layer can issue a “not ready” indicator to the sender, or potential source of the flood. This mechanism works kind of like a stoplight, signaling the sending device to stop transmitting segment traffic to its overwhelmed peer. After the peer receiver processes the segments already in its memory reservoir – its buffer – it sends out a “ready” transport indicator. When the machine waiting to transmit the rest of its datagrams receives this “go” indicator, it resumes its transmission. The process is pictured in Figure 1.11.

[Diagram: the sender transmits segments to the receiver; the receiver signals “Buffer full – not ready – STOP,” processes the buffered segments, then signals “Segments processed – GO,” and the sender resumes transmitting.]

FIGURE 1.11 Transmitting segments with flow control
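
      That stop-and-go exchange is really just backpressure, and you can mimic it with a bounded queue standing in for the receiver’s buffer. This is only a rough analogy – the buffer size and sleep times below are arbitrary, and it’s not how TCP actually signals its peer – but put() blocks while the buffer is full (the “not ready” stoplight) and unblocks as soon as the receiver frees up space (the “go” indicator):

```python
import queue
import threading
import time

buffer = queue.Queue(maxsize=3)        # stand-in for the receiver's segment buffer

def receiver():
    for _ in range(10):
        segment = buffer.get()         # taking a segment out frees buffer space ("GO")
        time.sleep(0.05)               # the receiver processes more slowly than the sender sends
        print("processed segment", segment)
        buffer.task_done()

threading.Thread(target=receiver, daemon=True).start()

for seq in range(10):
    buffer.put(seq)                    # blocks while the buffer is full ("not ready - STOP")
    print("sent segment", seq)

buffer.join()                          # wait until every buffered segment has been processed
```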

      In a reliable, connection-oriented data transfer, datagrams are delivered to the receiving host, ideally in the same sequence in which they were transmitted. A failure will occur if any data segments are lost, duplicated, or damaged along the way – a problem solved by having the receiving host acknowledge that it has received each and every data segment.

      A service is considered connection-oriented if it has the following characteristics:

      ■ A virtual circuit, or “three-way handshake,” is set up.

      ■ It uses sequencing.

      ■ It uses acknowledgments.

      ■ It uses flow control.

       The types of flow control are buffering, windowing, and congestion avoidance.

      Windowing

      Ideally, data throughput happens quickly and efficiently. And as you can imagine, it would be painfully slow if the transmitting machine had to stop and wait for an acknowledgment after sending each and every segment.
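
      Here’s a quick back-of-the-envelope way to see why that matters. The window is simply how many segments the sender is allowed to push out before it must stop for an acknowledgment; the segment count and window sizes below are just example numbers, not anything TCP defines:

```python
import math

def stops_for_ack(total_segments, window_size):
    """How many times the sender must stop for an acknowledgment if it may
    transmit window_size segments before each stop (illustrative only)."""
    return math.ceil(total_segments / window_size)

segments = 9                           # example transfer size
for window in (1, 3):
    print(f"window of {window}: stops {stops_for_ack(segments, window)} times for an ACK")
# window of 1: stops 9 times for an ACK
# window of 3: stops 3 times for an ACK
```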
