Many network applications provide services for communication over enterprise networks, but for present and future internetworking, the need to reach beyond the limits of current physical networking is developing fast.
The Presentation Layer
The Presentation layer gets its name from its purpose: It presents data to the Application layer and is responsible for data translation and code formatting. Think of it as the OSI model’s translator, providing coding and conversion services. One very effective way of ensuring a successful data transfer is to convert the data into a standard format before transmission. Computers are configured to receive this generically formatted data and then reformat it back into its native state to read it. An example of this type of translation service occurs when translating old Extended Binary Coded Decimal Interchange Code (EBCDIC) data to ASCII, the American Standard Code for Information Interchange (often pronounced “askee”). So just remember that by providing translation services, the Presentation layer ensures that data transferred from the Application layer of one system can be read by the Application layer of another one.
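Just to make that translation idea concrete, here's a minimal sketch – my own illustration, not anything from Cisco or this book – that converts the same text between an EBCDIC code page and ASCII using Python's standard codecs. The "cp500" codec is one common EBCDIC variant; the exact code page an older system uses may differ.

```python
# A small illustration of Presentation layer-style character translation:
# the same text represented in EBCDIC (code page "cp500") and in ASCII.
text = "CCENT"

ebcdic_bytes = text.encode("cp500")   # EBCDIC representation
ascii_bytes = text.encode("ascii")    # ASCII representation

print(ebcdic_bytes.hex())             # c3c3c5d5e3 - same letters, different codes
print(ascii_bytes.hex())              # 4343454e54

# The translation service: decode from EBCDIC, re-encode as ASCII.
translated = ebcdic_bytes.decode("cp500").encode("ascii")
assert translated == ascii_bytes
```

The letters are identical on both ends; only the byte values that represent them differ, which is exactly the mismatch a translation service smooths over.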
With this in mind, it follows that the OSI would include protocols that define how standard data should be formatted, so key functions like data compression, decompression, encryption, and decryption are also associated with this layer. Some Presentation layer standards are involved in multimedia operations as well.
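Here's an equally small sketch of one of those services – lossless compression and decompression – using Python's built-in zlib module. Again, this is only an illustration of the concept, not a specific Presentation layer protocol.

```python
import zlib

# Compress repetitive application data, then restore it exactly.
payload = b"the same data over and over " * 100

compressed = zlib.compress(payload)
restored = zlib.decompress(compressed)

print(len(payload), "->", len(compressed))   # far fewer bytes on the wire
assert restored == payload                   # decompression is lossless
```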
The Session Layer
The Session layer is responsible for setting up, managing, and dismantling sessions between Presentation layer entities and keeping user data separate. Dialog control between devices also occurs at this layer.
Communication between hosts’ various applications at the Session layer, as from a client to a server, is coordinated and organized via three different modes: simplex, half-duplex, and full-duplex. Simplex is simple one-way communication, kind of like saying something and not getting a reply. Half-duplex is actual two-way communication, but it can take place in only one direction at a time, preventing the interruption of the transmitting device. It’s like when pilots and ship captains communicate over their radios, or even a walkie-talkie. But full-duplex is exactly like a real conversation where devices can transmit and receive at the same time, much like two people arguing or interrupting each other during a telephone conversation.
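If you want to see full-duplex in action, here's a tiny sketch – purely illustrative – using a local Python socket pair where both ends transmit and receive at the same time. In half-duplex the two ends would have to take turns, and in simplex only one end would ever send.

```python
import socket
import threading

left, right = socket.socketpair()   # two connected endpoints on this host

def talk(sock, name, message):
    sock.sendall(message)           # transmit...
    reply = sock.recv(1024)         # ...and receive, with no turn-taking
    print(name, "got:", reply.decode())

t1 = threading.Thread(target=talk, args=(left, "left", b"hello from left"))
t2 = threading.Thread(target=talk, args=(right, "right", b"hello from right"))
t1.start(); t2.start()
t1.join(); t2.join()

left.close(); right.close()
```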
The Transport Layer
The Transport layer segments and reassembles data into a single data stream. Services located at this layer take all the various data received from upper-layer applications, then combine it into the same, concise data stream. These protocols provide end-to-end data transport services and can establish a logical connection between the sending host and destination host on an internetwork.
A pair of well-known protocols called TCP and UDP are integral to this layer, but no worries if you’re not already familiar with them because I’ll bring you up to speed later, in Chapter 3. For now, understand that although both work at the Transport layer, TCP is known as a reliable service but UDP is not. This distinction gives application developers more options because they have a choice between the two protocols when they are designing products for this layer.
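As a quick preview of that choice, here's a minimal Python sketch – my illustration, not the book's – showing how an application picks between the two Transport layer services when it opens a socket. The commented-out destination address is purely hypothetical.

```python
import socket

tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP: reliable, connection-oriented
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP: connectionless, best effort

# TCP needs a connection set up (the three-way handshake) before any data moves.
# UDP just fires off a datagram with no setup and no delivery guarantee:
# udp_sock.sendto(b"hello", ("192.0.2.10", 5000))   # hypothetical destination

tcp_sock.close()
udp_sock.close()
```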
The Transport layer is responsible for providing mechanisms for multiplexing upper-layer applications, establishing sessions, and tearing down virtual circuits. It can also hide the details of network-dependent information from the higher layers as well as provide transparent data transfer.
The Transport layer can be either connectionless or connection-oriented, but because Cisco really wants you to understand the connection-oriented function of the Transport layer, I’m going to go into that in more detail here.
Connection-Oriented Communication
For reliable transport to occur, a device that wants to transmit must first establish a connection-oriented communication session with a remote device – its peer system – in a process known as a call setup or a three-way handshake. Once this process is complete, the data transfer occurs, and when it's finished, a call termination takes place to tear down the virtual circuit.
Figure 1.10 depicts a typical reliable session taking place between sending and receiving systems. In it, you can see that both hosts’ application programs begin by notifying their individual operating systems that a connection is about to be initiated. The two operating systems communicate by sending messages over the network confirming that the transfer is approved and that both sides are ready for it to take place. After all of this required synchronization takes place, a connection is fully established and the data transfer begins. And by the way, it’s really helpful to understand that this virtual circuit setup is often referred to as overhead!
Figure 1.10 Establishing a connection-oriented session
Okay, now while the information is being transferred between hosts, the two machines periodically check in with each other, communicating through their protocol software to ensure that all is going well and that the data is being received properly.
Here's a summary of the steps in the connection-oriented session – that three-way handshake – pictured in Figure 1.10 (a simple simulation of these segments follows the list):
■ The first “connection agreement” segment is a request for synchronization (SYN).
■ The next segments acknowledge (ACK) the request and establish connection parameters – the rules – between hosts. These segments request that the receiver's sequencing be synchronized here as well so that a bidirectional connection can be formed.
■ The final segment is also an acknowledgment, which notifies the destination host that the connection agreement has been accepted and that the actual connection has been established. Data transfer can now begin.
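Here's the simple simulation promised above – not real TCP code, just my own sketch that models the three segments as small Python dictionaries so you can see how the sequence and acknowledgment numbers line up.

```python
def three_way_handshake():
    # Segment 1: the sender requests synchronization.
    syn = {"flags": {"SYN"}, "seq": 100}

    # Segment 2: the receiver acknowledges and asks that its own sequencing
    # be synchronized too, so the connection works in both directions.
    syn_ack = {"flags": {"SYN", "ACK"}, "seq": 300, "ack": syn["seq"] + 1}

    # Segment 3: the sender's final acknowledgment; the connection is established.
    ack = {"flags": {"ACK"}, "seq": syn["seq"] + 1, "ack": syn_ack["seq"] + 1}

    return [syn, syn_ack, ack]

for segment in three_way_handshake():
    print(segment)
# After the third segment, data transfer can begin.
```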
Sounds pretty simple, but things don’t always flow so smoothly. Sometimes during a transfer, congestion can occur because a high-speed computer is generating data traffic a lot faster than the network itself can process it! And a whole bunch of computers simultaneously sending datagrams through a single gateway or destination can also jam things up pretty badly. In the latter case, a gateway or destination can become congested even though no single source caused the problem. Either way, the problem is basically akin to a freeway bottleneck – too much traffic for too small a capacity. It’s not usually one car that’s the problem; it’s just that there are way too many cars on that freeway at once!
But what actually happens when a machine receives a flood of datagrams too quickly for it to process? It stores them in a memory section called a buffer. Sounds great; it's just that this buffering action can solve the problem only if the datagrams are part of a small burst.
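To picture that buffering action, here's one last sketch – an illustration only – where the receiver's buffer is modeled as a small bounded queue. A short burst fits; a sustained flood does not.

```python
import queue

receive_buffer = queue.Queue(maxsize=5)   # the receiver can hold five datagrams

dropped = 0
for n in range(20):                       # a burst of 20 incoming datagrams
    try:
        receive_buffer.put_nowait("datagram-%d" % n)
    except queue.Full:
        dropped += 1                      # buffer exhausted: nowhere to put it

print("buffered:", receive_buffer.qsize(), "dropped:", dropped)
```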