
User Datagram Protocol (UDP)

UDP (User Datagram Protocol) is an alternative to the Transmission Control Protocol (TCP) and, together with IP, is sometimes referred to as UDP/IP. UDP is a communications protocol that offers a limited amount of service when messages are exchanged between computers in a network that uses the Internet Protocol (IP).

The User Datagram Protocol (UDP) is a connectionless transport-layer protocol that belongs to the Internet protocol family. UDP is defined to make available a datagram mode of packet-switched computer communication in the environment of an interconnected set of computer networks. This protocol assumes that the Internet Protocol (IP) is used as the underlying protocol.

This protocol provides a procedure for application programs to send messages to other programs with a minimum of protocol mechanism. The protocol is transaction oriented, and delivery and duplicate protection are not guaranteed. Applications requiring ordered, reliable delivery of streams of data should use the Transmission Control Protocol (TCP).

UDP is basically an interface between IP and upper-layer processes. UDP protocol ports distinguish multiple applications running on a single device from one another. Unlike TCP, UDP adds no reliability, flow-control or error-recovery functions to IP. Because of UDP's simplicity, UDP headers contain fewer bytes and consume less network overhead than TCP. UDP is useful in situations where the reliability mechanisms of TCP are not necessary, such as in cases where a higher-layer protocol might provide error and flow control.

UDP is the transport protocol for several well-known application-layer protocols, including the Network File System (NFS), the Simple Network Management Protocol (SNMP), the Domain Name System (DNS) and the Trivial File Transfer Protocol (TFTP).

The UDP packet format contains four fields: source port, destination port, length, and checksum.

The source and destination port fields contain the 16-bit UDP protocol port numbers used to demultiplex datagrams for receiving application-layer processes. The length field specifies the length of the UDP header and data. The checksum field provides an optional integrity check on the UDP header and data.
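
To make the format concrete, the sketch below builds and decodes the 8-byte UDP header in Python. It is only an illustration under assumed values: the port numbers and payload are invented, and the checksum is left at zero, which for UDP over IPv4 conventionally means "no checksum computed".

    import struct

    def build_udp_datagram(src_port, dst_port, payload, checksum=0):
        # All four header fields are 16-bit values in network (big-endian) order.
        # The length field covers the 8-byte header plus the payload.
        length = 8 + len(payload)
        return struct.pack("!HHHH", src_port, dst_port, length, checksum) + payload

    datagram = build_udp_datagram(5000, 53, b"query")
    src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
    print(src, dst, length, checksum)   # 5000 53 13 0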

It should be noted that UDP, like TCP, uses the Internet Protocol to actually get a data unit (called a datagram) from one computer to another. Unlike TCP, however, UDP does not provide the service of dividing a message into packets (datagrams) and reassembling it at the other end. Specifically, UDP does not sequence the packets in which the data arrives. This means that an application program using UDP must be able to make sure that the entire message has arrived and is in the right order. Network applications that want to save processing time because they have very small data units to exchange (and therefore very little message reassembling to do) may prefer UDP to TCP. The Trivial File Transfer Protocol (TFTP) uses UDP instead of TCP.

UDP provides two services not provided by the IP layer. It provides port numbers to help distinguish different user requests and, optionally, a checksum capability to verify that the data arrived intact.
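
Both services are visible directly in the sockets API. The following minimal Python sketch (the loopback address and port number are arbitrary choices for the example) shows one process sending a datagram with no connection setup; the receiving socket is selected purely by its port number.

    import socket

    # Receiver: bind to a local port; the port number is what demultiplexes
    # incoming datagrams to this socket.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9999))

    # Sender: no connection establishment; each datagram carries its own address.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello", ("127.0.0.1", 9999))

    data, addr = receiver.recvfrom(1024)
    print(data, addr)   # b'hello' and the sender's (address, port)

    sender.close()
    receiver.close()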

TCP Sliding Window

A TCP sliding window provides more efficient use of network bandwidth than PAR because it enables hosts to send multiple bytes or packets before waiting for an acknowledgment.
Each TCP packet carries the starting sequence number of the data in that packet and an acknowledgment number identifying the next byte expected from the remote peer. With this information, a sliding-window protocol is implemented. Forward and reverse sequence numbers are completely independent, and each TCP peer must track both its own sequence numbering and the numbering being used by the remote peer.

Each endpoint of a TCP connection will have a buffer for storing data that is transmitted over the network before the application is ready to read the data. This lets network transfers take place while applications are busy with other processing, improving overall performance.

To avoid overflowing the buffer, TCP sets a Window Size field in each packet it transmits. This field contains the amount of data that may be transmitted into the buffer. If this number falls to zero, the remote TCP can send no more data. It must wait until buffer space becomes available and it receives a packet announcing a non-zero window size.
TCP uses a number of control flags to manage the connection. Some of these flags pertain to a single packet, such as the URG flag indicating valid data in the Urgent Pointer field, but two flags (SYN and FIN) require reliable delivery because they mark the beginning and end of the data stream. To ensure reliable delivery of these two flags, they are assigned spots in the sequence number space: each occupies a single position, just as a data byte would.


TCP Packet Format:
  • Source port and Destination port: Identifies points at which upper layer source and destination processes receive TCP services.
  • Sequence Number: Usually specifies the number assigned to the first byte of data in the current message. In the connection-establishment phase, this field can also be used to identify an initial sequence number to be used in an upcoming transmission.
  • Acknowledgment Number: Contains the sequence number of the next byte of data the sender of the packet expects to receive.
  • Data Offset: Indicates the number of 32-bit words in the TCP header.
  • Reserved: Remains reserved for future use.
  • Flags: Carries a variety of control information, including the SYN and ACK bits used for connection establishment, and the FIN bit used for connection termination.
  • Window: Specifies the size of the sender's receive window (that is, the buffer space available for incoming data).
  • Checksum: Indicates whether the header and data were damaged in transit.
  • Urgent Pointer: Points to the first urgent data byte in the packet.
  • Options: Specifies various TCP options.
  • Data: Contains upper-layer information.
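
To relate these fields to actual bytes, here is a hedged Python sketch that unpacks the 20-byte fixed portion of a TCP header; the field widths follow the standard layout (two 16-bit ports, 32-bit sequence and acknowledgment numbers, data offset and flags, window, checksum and urgent pointer).

    import struct

    def parse_tcp_header(segment: bytes) -> dict:
        (src, dst, seq, ack, off_reserved, flags,
         window, checksum, urgent) = struct.unpack("!HHIIBBHHH", segment[:20])
        return {
            "src_port": src,
            "dst_port": dst,
            "seq": seq,
            "ack": ack,
            "data_offset": off_reserved >> 4,   # number of 32-bit words in header
            "URG": bool(flags & 0x20),
            "ACK": bool(flags & 0x10),
            "PSH": bool(flags & 0x08),
            "RST": bool(flags & 0x04),
            "SYN": bool(flags & 0x02),
            "FIN": bool(flags & 0x01),
            "window": window,
            "checksum": checksum,
            "urgent_ptr": urgent,
        }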

In TCP, the receiver specifies the current window size in every packet. Because TCP provides a byte-stream connection, window sizes are expressed in bytes. This means that a window is the number of data bytes that the sender is allowed to send before waiting for an acknowledgment. Initial window sizes are indicated at connection setup, but might vary throughout the data transfer to provide flow control. A window size of zero, for instance, means "Send no data".

In a TCP sliding-window operation, for example, the sender might have a sequence of bytes to send (numbered 1 to 10) to a receiver who has a window size of five. The sender then would place a window around the first five bytes and transmit them together. It would then wait for an acknowledgment.

The receiver would respond with an ACK=6, indicating that it has received bytes 1 to 5 and is expecting byte 6 next. In the same packet, the receiver would indicate that its window size is 5. The sender then would move the sliding window five bytes to the right and transmit bytes 6 to 10. The receiver would respond with an ACK=11, indicating that it is expecting sequenced byte 11 next. In this packet, the receiver might indicate that its window size is 0 (because, for example, its internal buffers are full). At this point, the sender cannot send any more bytes until the receiver sends another packet with a window size greater than 0.
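
The exchange above can be replayed in a few lines. This toy sender (a pure simulation, no real networking; the byte numbering and window sizes are taken from the example) sends as much as the advertised window allows, then reacts to each (ACK, window) pair from the receiver.

    def sliding_window_sender(total_bytes, acks):
        next_seq, window = 1, 5            # bytes numbered from 1; initial window 5
        for ack, new_window in acks:
            if window > 0:
                last = min(next_seq + window - 1, total_bytes)
                print(f"send bytes {next_seq}..{last}")
                next_seq = last + 1
            else:
                print("window closed; waiting for a non-zero window")
            print(f"got ACK={ack}, window={new_window}")
            window = new_window

    # Replay of the scenario in the text: ACK=6 keeps window 5, ACK=11 closes it.
    sliding_window_sender(10, [(6, 5), (11, 0)])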

Positive Acknowledgment and Retransmission (PAR)

A simple transport protocol might implement a reliability and flow-control technique in which a host transmits a TCP packet to its peer, starts a timer, and waits for a period of time for an acknowledgment before sending a new packet. If the acknowledgment is not received before the timer expires, the packet is assumed to have been lost and the data is retransmitted. Such a technique is called positive acknowledgment and retransmission (PAR).
However, the time duration for which the source should wait depends on a number of factors. Over an Ethernet, no more than a few microseconds should be needed for an acknowledgment; over a congested path, the wait must be much longer. All modern TCP implementations estimate this time by monitoring the normal exchange of data packets and developing an estimate of how long is "too long". This process is called Round-Trip Time (RTT) estimation. RTT estimates are one of the most important performance parameters in a TCP exchange, especially considering that on an indefinitely large transfer, all TCP implementations eventually drop packets and retransmit them, no matter how good the quality of the link. If the RTT estimate is too low, packets are retransmitted unnecessarily; if too high, the connection can sit idle while the host waits for a timeout.
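
As a sketch of how such an estimator works, the standard smoothed-RTT computation (the exponentially weighted moving average specified in RFC 6298, using its suggested gains) can be written as follows; the sample values are invented.

    ALPHA, BETA = 1 / 8, 1 / 4     # smoothing gains suggested by RFC 6298

    def update_rto(srtt, rttvar, sample):
        """Fold one measured round-trip time into the running estimate."""
        if srtt is None:                    # first measurement
            srtt, rttvar = sample, sample / 2
        else:
            rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
            srtt = (1 - ALPHA) * srtt + ALPHA * sample
        rto = srtt + 4 * rttvar             # retransmission timeout
        return srtt, rttvar, max(rto, 1.0)  # RFC 6298 clamps the RTO at >= 1 s

    srtt = rttvar = None
    for sample in (0.120, 0.180, 0.150):    # measured RTTs in seconds
        srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
        print(f"srtt={srtt:.3f}s rto={rto:.3f}s")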

By assigning each packet a 32-bit sequence number, PAR enables hosts to track lost or duplicate packets caused by network delays that result in premature retransmission. The sequence numbers are sent back in the acknowledgments so that the acknowledgments can be tracked.

PAR is an inefficient use of bandwidth, however, because a host must wait for an acknowledgment before sending a new packet and only one packet can be sent at a time.

TCP Connection Establishment

To use reliable transport services, TCP hosts must establish a connection-oriented session with one another. Connection establishment is performed using a "three-way handshake" mechanism. Because TCP is layered on the unreliable datagram service provided by IP, these control segments can be lost, duplicated or delivered out of order, leading to trouble if original or retransmitted segments arrive while the connection is being established.

A three-way handshake synchronizes both ends of a connection by allowing both sides to agree upon initial sequence numbers. This mechanism also guarantees that both sides are ready to transmit data and know that the other side is ready to transmit as well. This is necessary so that packets are not transmitted during session establishment or after session termination.

Each host randomly chooses a sequence number used to track bytes within the stream it is sending and receiving. The client initiates a connection by sending a packet with its initial sequence number (SEQ=X) and the SYN bit set to indicate a connection request. The server receives the SYN, records the sequence number X, and acknowledges the SYN with ACK=X+1. The server includes its own initial sequence number (SEQ=Y). An ACK=20 means the host has received bytes 0 through 19 and expects byte 20 next; this technique is called forward acknowledgment. The client then acknowledges all bytes the server has sent with a forward acknowledgment indicating the next byte it expects to receive (ACK=Y+1).
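
The sequence-number bookkeeping can be traced with a small pure-Python walkthrough (no real packets are sent; the dictionaries merely stand in for TCP segments):

    import random

    client_isn = random.randrange(2**32)    # X, chosen randomly by the client
    server_isn = random.randrange(2**32)    # Y, chosen randomly by the server

    syn = {"flags": "SYN", "seq": client_isn}
    print("client -> server:", syn)

    # The server acknowledges X+1: the SYN itself consumes one sequence number.
    syn_ack = {"flags": "SYN+ACK", "seq": server_isn, "ack": (client_isn + 1) % 2**32}
    print("server -> client:", syn_ack)

    # The client acknowledges Y+1, completing the three-way handshake.
    ack = {"flags": "ACK", "seq": (client_isn + 1) % 2**32, "ack": (server_isn + 1) % 2**32}
    print("client -> server:", ack)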

Transmission Control Protocol (TCP)

TCP provides reliable transmission of data in an IP environment. TCP corresponds to the transport layer of the OSI reference model. TCP provides services such as full-duplex operation, stream data transfer, reliability, efficient flow control and network adaptation.

TCP operates in full-duplex mode by sending and receiving data at the same time. In stream data transfer, TCP groups bytes into segments and passes them to IP for delivery. TCP offers reliability by providing connection-oriented, end-to-end reliable packet delivery through an internetwork. It does this by sequencing bytes with a forward acknowledgment number that indicates to the destination the next byte the source expects to receive. Bytes not acknowledged within a specified time period are retransmitted.

TCP also provides efficient flow control: when acknowledging data, the receiving TCP process indicates the highest sequence number it can receive without overflowing its internal buffers. TCP adapts to the network by dynamically studying the delay characteristics of the network and adjusting its operation to maximize throughput without overloading the network.

TCP provides an inter-process delivery system, so it needs to identify processes in the two "end systems" that it connects. Two processes can communicate by agreeing on port numbers, an abstraction that identifies the endpoints used for communication. Each segment contains port numbers for the sending and receiving processes.

To set up a TCP connection, a process called the server notifies the TCP software that it is waiting for connections "at" a certain port number. A process called the client, which wants a request to be processed by the server, asks its local TCP software to allocate an unused port number and establish a connection to the server. Once the connection is established, the two processes can communicate.
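
In the sockets API this pairing looks like the sketch below (a single-process demonstration on the loopback interface; the port number 8888 is an arbitrary choice). The server binds and listens "at" its port, while the client is given an ephemeral port by its local TCP.

    import socket
    import threading

    # Server: tell TCP we are waiting for connections "at" port 8888.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 8888))
    srv.listen(1)

    def serve_one():
        conn, addr = srv.accept()          # returns once a handshake completes
        conn.sendall(b"hello from server")
        conn.close()

    threading.Thread(target=serve_one, daemon=True).start()

    # Client: the local TCP allocates an unused (ephemeral) port automatically.
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", 8888))       # three-way handshake happens here
    print(cli.recv(1024))
    cli.close()
    srv.close()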

Internet Transport Protocols (TCP & UDP)

A process can be defined as a program currently being executed. Sometimes, processes running on two different machines need to communicate with each other. This is called Inter-Process Communication. Inter-Process Communication can be implemented using protocols like TCP, UDP, etc.

Reliable Transport Service

Reliable transport service has three aspects: user multiplexing, connection management and data transfer. Data transfer provides for the reliable exchange of data between connected users. Connection management provides for the establishment and termination of connections between users. Users can open and close connections to other users, and can accept or reject incoming connection requests. Resources are acquired when a user enters a connection and released when the user leaves the connection. An incoming connection request is rejected if the user has failed or if its transport entity does not have adequate resources for new connections.

A key concern of transport protocols is to ensure that a connection is not infiltrated by old messages that may remain in the network from previous terminated connections. The standard techniques are to use the 3-way handshake mechanism for connection management and the sliding window mechanism for data transfer within a connection. These mechanisms use cyclic sequence numbers to identify the connection attempts of a user and the data blocks within a connection. The protocol must ensure that received cyclic sequence numbers are correctly interpreted and this invariably requires the network to enforce a maximum message lifetime.

Each user goes through a succession of incarnations. An incarnation of a client is started whenever the client requests a connection to any server. An incarnation of a server is started whenever the server accepts a (potentially new) connection request from any client. Every incarnation is assigned an incarnation number when it starts; the incarnation is uniquely distinguished by its incarnation number and user id.

Once an incarnation 'x' of a user 'c' is started in an attempt to connect to a user 's', it has one of two possible futures. The first possibility is that at some point 'x' becomes open and acquires the incarnation number 'y' of some incarnation of 's'; at some later point 'x' becomes closed. The second possibility is that 'x' becomes closed without ever becoming open. This can happen to a client incarnation either because its connection request was rejected by the server or because of failure (in the server, the client, the relevant transport entities, or the channels). It can happen to a server incarnation either because of failure or because it was started in response to a connection request that later turns out to be a duplicate request from some old, now closed, incarnation.

Because of failures, it is also possible that an incarnation 'x' of 'c' becomes open to incarnation 'y' of 's' but 'y' becomes closed without becoming open. This is referred to as a half-open connection. A connection is an association between two open incarnations. Formally, a connection exists between incarnation 'x' of user 'c' and incarnation 'y' of user 's' if 'y' has become open to 'x' and 'x' has become open to 'y'. The following properties are desired of connection management:

Consistent connections - If an incarnation 'x' of user 'c' becomes open to an incarnation 'y' of user 's', then incarnation 'y' is either open to 'x' or will become open to 'x' unless there are failures.

Consistent data-transfer - If an incarnation 'x' of user 'c' becomes open to an incarnation 'y' of user 's', then 'x' accepts received data only if sent by 'y'.

Progress - If an incarnation 'x' of a client requests a connection to a server, then a connection is established between 'x' and an incarnation of the server within some specified time, provided the server does not reject x's request and neither client, server nor channels fail within that time.

Terminating handshakes - The transport entity (of either user) cannot stay indefinitely in a state (or set of states) where it is repeatedly sending messages expecting a response that never arrives.

Unreliable Transport Service

Unreliable transport service involves two aspects: user multiplexing and unreliable data transfer between users. A transport protocol can achieve this simply by adding user multiplexing to the message transfer service provided by the network layer. When a user generates a data segment destined to a remote user, the transport protocol gives the network layer a packet containing the user's local port number, the user's remote port number, the user's remote host IP address, the user's transport protocol number, and the data segment.

When the network entity at a host receives a packet, it first looks for a local user with local port number equal to the packet's destination port number, remote port number equal to the packet's sender port number, remote host address equal to the packet's sender IP address and transport protocol number equal to the packet's transport protocol number. If it finds such a user, it passes the packet's data segment to the user. Otherwise, it looks for a local user (presumably a server) with local port number equal to the packet's destination port number, remote port number and host address equal to nil, and transport protocol number equal to the packet's transport protocol number. If it finds such a user, it passes the packet's data segment to the user. Otherwise, it discards the packet.
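
A hypothetical table-driven sketch of this lookup order (the addresses, ports and user names are invented):

    # Fully specified (connected) users, keyed the way the text describes.
    connected = {
        (8888, 5555, "192.0.2.7", "TCP"): "user A",
    }
    # Listeners with nil remote port and address.
    listening = {
        (8888, "TCP"): "server S",
    }

    def demultiplex(dst_port, sender_port, sender_ip, proto):
        user = connected.get((dst_port, sender_port, sender_ip, proto))
        if user is not None:
            return user                       # matched a connected user
        user = listening.get((dst_port, proto))
        if user is not None:
            return user                       # fell back to a listening server
        return None                           # no match: discard the packet

    print(demultiplex(8888, 5555, "192.0.2.7", "TCP"))       # user A
    print(demultiplex(8888, 7777, "198.51.100.2", "TCP"))    # server S
    print(demultiplex(9999, 7777, "198.51.100.2", "TCP"))    # None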

User Multiplexing

User-to-user communication requires that network packets contain header information identifying the source and destination users, in addition to the source and destination hosts. In the TCP/IP architecture, hosts are identified by IP addresses and users are identified by port numbers. The obvious way to identify users on a host is to assign a distinct port to every user, so that any user is identified network-wide by its port number and its host's IP address. But this is not what is done. Instead, each user is identified network-wide by the following attributes: local port number, remote port number, local host IP address, remote host IP address, and transport protocol number. The remote port number and IP address are the local port number and IP address of the remote peer user; if the remote user is not known, the remote port number and IP address are nil.

The transport protocol number identifies the particular transport protocol (e.g., UDP or TCP) being accessed by the user. A user's local port number is assigned as soon as the user starts to use the transport service. The user's remote port number and IP address are assigned as soon as it learns of the local port number and host IP address of its intended peer user. Every IP network packet has the local port number of the originating user, called the sender port number; the local port number of the intended destination user, called the destination port number; the sender and destination IP addresses; and the transport protocol number. This approach of using local and remote port numbers and IP addresses to identify a user supports the client-server paradigm: it enables a server to handle many clients of the same service simultaneously.

Consider a host H providing a service over a certain transport protocol (e.g., FTP over TCP). H dedicates a specific local port number, say p1, to the service. H creates a server user, say S, with local port number set to p1, transport protocol number set appropriately, and remote port number and IP address set to nil. When a client user, say C on another host G, wants to use this service, C gets a local port number set to an arbitrary value, say p2, remote port number set to p1, remote IP address set to H's IP address, and transport protocol number set appropriately. When the request packet arrives at the transport layer in H, it gives the packet to S (assuming that there is no user at H with local port number p1, remote port number p2, and remote IP address equal to G's IP address). The server S can then create another server specifically for servicing client C; this new server NS would have remote port number set to p2 and remote IP address set to G's IP address, and hence it can use local port number p1, the same as S.

The following table illustrates the above example:

Host   User   Acts as   Local Port No.   Remote Port No.
H      S      Server    p1               nil
G      C      Client    p2               p1



On receiving the request packet from client C, the server S creates another server, NS, with the following specification:




User   Acts as      Local Port No.   Remote Port No.
NS     New Server   p1               p2

Network Transport Service

Transport services are provided by transport protocols, which are distributed algorithms running on the hosts. The channels provided by the network layer between any two hosts can lose, duplicate and reorder messages in transit. Transport protocols are therefore designed to operate correctly in spite of unreliable network service and failure-prone networks and hosts.

The transport layer of a TCP/IP computer network, situated above the network layer and below the applications layer, provides transport service. The network layer provides unreliable packet transfer service between any two hosts. The transport layer uses this network service and provides transport services between any two applications in the network. Applications include email (SMTP), remote login (TELNET, SSH), file transfer (FTP), web browsers (HTTP), etc.

The ideal transport service is one that can transfer data packets between any two users and can do so reliably and with low delay and low jitter. Providing user-to-user service implies that the transport layer has to do user multiplexing at each host. Reliable data transfer means that data is delivered in the same sequence it was sent and without loss. Low delay means that data sent is delivered within a specified (usually small) time bound. Low jitter means that the time intervals between sending data are preserved at delivery within specified (usually small) time bounds. Achieving such ideal service requires the network to be capable of handling the worst-case load at any time, which, if the network is not to be incredibly expensive, means imposing severe restrictions on network access and on the data rates available to users (as in telephony networks).

Fortunately, ideal transport service is not required for most applications, so the transport layer in TCP/IP networks does not strive for it. Instead it provides two separate services: a reliable service, which can suffer high delays and jitter, and an unreliable service, which does no better than the network service. The reliable service, implemented by a transport protocol known as TCP, is used by applications where data integrity is essential, such as file transfer, email and remote login. The unreliable service, implemented by a transport protocol known as UDP, is used by applications where data loss can be tolerated but low delay or low jitter is desired, such as Internet telephony and voice/video streaming.

Thus reliable transport service is nothing but reliable data transfer between any two users. But reliable data transfer requires resources at the entities, such as buffers and processes for retransmitting data, reassembling data, etc. These resources typically cannot be maintained across failures. Furthermore, maintaining the resources continuously for every pair of users would be prohibitively inefficient, because only a very small fraction of user pairs in a network exchange data with any regularity, especially in a large network such as the Internet. Therefore a reliable transport service involves connection management and data transfer. Data transfer provides for the reliable exchange of data between connected users. Connection management provides for the establishment and termination of connections between users.

In general, reliable transport service (e.g., TCP service) involves three aspects: user multiplexing, reliable connection management between users, and reliable data transfer between connected users. Unreliable transport service (e.g., UDP service), on the other hand, involves two aspects: user multiplexing and unreliable data transfer between users.

Standards Organizations

A wide variety of organizations contribute to internetworking standards by providing forums for discussion, turning informal discussion into formal specifications and proliferating specifications after they are standardized.

Most standards organizations create formal standards by using specific processes, such as organizing ideas, discussing the approach, developing draft standards, voting on all or certain aspects of the standards, and then formally releasing the completed standard to the public.

Some of the best-known standards organizations that contribute to internetworking standards include these:
  • International Organization for Standardization (ISO) - ISO is an international standards organization responsible for a wide range of standards, including many that are relevant to networking. Its best-known contribution is the development of the OSI reference model and the OSI protocol suite.
  • American National Standards Institute (ANSI) - ANSI, which is also a member of the ISO, is the coordinating body for voluntary standards groups within the United States. ANSI developed the Fiber Distributed Data Interface (FDDI) and other communications standards.
  • Electronic Industries Association (EIA) - EIA specifies electrical transmission standards, including those used in networking. The EIA developed the widely used EIA/TIA-232 standard (formerly known as RS-232).
  • Institute of Electrical and Electronic Engineers (IEEE) - IEEE is a professional organization that defines networking and other standards. The IEEE developed the widely used LAN standards IEEE 802.3 and IEEE 802.5.
  • International Telecommunication Union Telecommunication Standardization Sector (ITU-T) - Formerly called the International Telegraph and Telephone Consultative Committee (CCITT), the ITU-T is an international organization that develops communication standards. The ITU-T developed X.25 and other communications standards.
  • Internet Activities Board (IAB) - The IAB is a group of internetwork researchers who discuss issues pertinent to the Internet and set Internet policies through decisions and task forces. The IAB designates some Request For Comments (RFC) documents as Internet standards, including Transmission Control Protocol/Internet Protocol (TCP/IP) and the Simple Network Management Protocol (SNMP).

Multiplexing Basics

Multiplexing is a process in which multiple data channels are combined into a single data or physical channel at the source. Multiplexing can be implemented at any of the OSI layers. Conversely, demultiplexing is the process of separating multiplexed data channels at the destination. One example of multiplexing is when data from multiple applications is multiplexed into a single lower-layer data packet. Another example of multiplexing is when data from multiple devices is combined into a single physical channel (using a device called a multiplexer).

A multiplexer is a physical layer device that combines multiple data streams into one or more output channels at the source. Multiplexers demultiplex the channels into multiple data streams at the remote end and thus maximize the use of the bandwidth of the physical medium by enabling it to be shared by multiple traffic sources.

Some methods used for multiplexing data are Time-Division Multiplexing (TDM), Asynchronous Time-Division Multiplexing (ATDM), Frequency-Division Multiplexing (FDM) and statistical multiplexing.

In TDM, information from each data channel is allocated bandwidth based on preassigned time slots, regardless of whether there is data to transmit. In ATDM, information from data channels is allocated bandwidth as needed by using dynamically assigned time slots. In FDM, information from each data channel is allocated bandwidth based on the signal frequency of the traffic. In statistical multiplexing, bandwidth is dynamically allocated to any data channels that have information to transmit.
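
The difference between fixed and demand-driven slot assignment can be seen in a toy scheduler (an assumed model, not any real multiplexer): TDM gives every channel its slot even when idle, while statistical multiplexing assigns slots only to channels with queued data.

    def tdm_round(queues):
        # Fixed, preassigned slots: an empty channel wastes its slot.
        return [(name, q.pop(0) if q else "idle") for name, q in queues.items()]

    def statistical_round(queues):
        # Slots are allocated dynamically, only to channels with data.
        return [(name, q.pop(0)) for name, q in queues.items() if q]

    print(tdm_round({"A": ["a1"], "B": [], "C": ["c1"]}))
    # [('A', 'a1'), ('B', 'idle'), ('C', 'c1')]
    print(statistical_round({"A": ["a2"], "B": [], "C": []}))
    # [('A', 'a2')]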

Error-Checking Basics

Error-checking schemes determine whether transmitted data has become corrupt or otherwise damaged while traveling from the source to the destination. Error checking is implemented at several of the OSI layers.

One common error-checking scheme is the Cyclic Redundancy Check (CRC), which detects and discards corrupted data. Error-correction functions (such as data retransmission) are left to higher-layer protocols. A CRC value is generated by a calculation that is performed at the source device. The destination device compares this value to its own calculation to determine whether errors occurred during transmission. First, the source device performs a predetermined set of calculations over the contents of the packet to be sent. Then, the source places the calculated value in the packet and sends the packet to the destination. The destination performs the same predetermined set of calculations over the contents of the packet and then compares its computed value with the one contained in the packet. If the values are equal, the packet is considered valid. If the values are unequal, the packet contains errors and is discarded.
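
The same generate-and-compare procedure can be demonstrated with Python's built-in CRC-32 (a sketch only; real link layers compute their CRC in hardware over frame contents):

    import zlib

    def attach_crc(payload: bytes) -> bytes:
        # Source: compute a CRC-32 over the contents and append it.
        return payload + zlib.crc32(payload).to_bytes(4, "big")

    def check_crc(packet: bytes):
        payload, received = packet[:-4], int.from_bytes(packet[-4:], "big")
        # Destination: repeat the calculation and compare.
        if zlib.crc32(payload) != received:
            return None            # values unequal: packet is discarded
        return payload

    pkt = attach_crc(b"some data")
    print(check_crc(pkt))                             # b'some data'
    corrupted = bytes([pkt[0] ^ 0xFF]) + pkt[1:]
    print(check_crc(corrupted))                       # None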

Flow Control Basics

Flow control is a function that prevents network congestion by ensuring that transmitting devices do not overwhelm receiving devices with data. A high-speed computer, for example, may generate traffic faster than the network can transfer it or faster than the destination device can receive and process it. The three commonly used methods for handling network congestion are buffering, transmitting source quench messages, and windowing.

Buffering is used by network devices to temporarily store bursts of excess data in memory until they can be processed. Occasional data bursts are easily handled by buffering. Excessive data bursts can exhaust memory, however, forcing the device to discard any additional datagrams that arrive.

Source-quench messages are used by receiving devices to help prevent their buffers from overflowing. The receiving device sends source-quench messages to request that the source reduce its current rate of data transmission. First, the receiving device begins discarding received data due to overflowing buffers. Second, the receiving device begins sending source-quench messages to the transmitting device, at the rate of one message for each packet dropped. The source device receives the source-quench messages and lowers the data rate until it stops receiving the messages. Finally, the source device gradually increases the data rate as long as no further source-quench requests are received.

Windowing is a flow-control scheme in which the source device requires an acknowledgment from the destination after a certain number of packets have been transmitted. With a window size of 3, the source requires an acknowledgment after sending three packets, as follows. First, the source device sends three packets to the destination device. Then, after receiving the three packets, the destination device sends an acknowledgment to the source. The source receives the acknowledgment and sends three more packets. If the destination does not receive one or more of the packets for some reason, such as overflowing buffers, it does not receive enough packets to send an acknowledgment. The source then retransmits the packets at a reduced transmission rate.

Addresses Versus Names

Internetwork devices usually have both a name and an address associated with them. Internetwork names typically are location-independent and remain associated with a device wherever that device moves (for example, from one building to another). Internetwork addresses usually are location-dependent and change when a device is moved (although MAC addresses are an exception to this rule). As with network addresses being mapped to MAC addresses, names are usually mapped to network addresses through some protocol. The Internet uses the Domain Name System (DNS) to map the name of a device to its IP address. For example, it is easier to remember www.cisco.com than some IP address: the computer performs a DNS lookup of the IP address for Cisco's web server and then communicates with it using the network address.
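
That lookup is one library call in Python (this requires network access; the hostname is the one from the example):

    import socket

    # Resolve the name to an address; applications then use the address.
    address = socket.gethostbyname("www.cisco.com")
    print(address)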

Address Assignments

Addresses are assigned to devices as one of two types: static and dynamic. Static addresses are assigned by a network administrator according to a preconceived internetwork addressing plan. A static address does not change until the network administrator manually changes it. Dynamic addresses are obtained by devices when they attach to a network, by means of some protocol-specific process. A device using a dynamic address often has a different address each time that it connects to the network. Some networks use a server to assign addresses. Server-assigned addresses are recycled for reuse as devices disconnect. A device is therefore likely to have a different address each time that it connects to the network.

Hierarchical Versus Flat Address Space

Internetwork address space typically takes one of two forms: hierarchical address space or flat address space. A hierarchical address space is organized into numerous subgroups, each successively narrowing an address until it points to a single device (in a manner similar to street addresses). A flat address space is organized into a single group (in a manner similar to U.S. Social Security numbers).
Hierarchical addressing offers certain advantages over flat addressing schemes. Address sorting and recall are simplified using comparison operations. For example, "Ireland" in a street address eliminates any other country as a possible location.

Network Layer Addresses

A network layer address identifies an entity at the network layer of the OSI model. Network addresses usually exist within a hierarchical address space and sometimes are called virtual or logical addresses.

The relationship between a network address and a device is logical and unfixed; it typically is based either on physical network characteristics (the device is on a particular network segment) or on groupings that have no physical basis (the device is part of an AppleTalk zone). End systems require one network layer address for each network layer protocol that they support (assuming that the device has only one physical network connection). Routers and other internetworking devices require one network layer address per physical network connection for each network layer protocol supported. For example, a router with three interfaces, each running AppleTalk, TCP/IP and OSI, must have three network layer addresses for each interface. The router therefore has nine network layer addresses.

Media Access Control (MAC) Addresses

Media Access Control (MAC) addresses consist of a subset of data link layer addresses. MAC addresses identify network entities in LANs that implement the IEEE MAC addresses of the data link layer. As with most data-link addresses, MAC addresses are unique for each LAN interface.

MAC addresses are 48 bits in length and are expressed as 12 hexadecimal digits. The first 6 hexadecimal digits, which are administered by the IEEE, identify the manufacturer or vendor and thus comprise the Organizationally Unique Identifier (OUI). The last 6 hexadecimal digits comprise the interface serial number or another value administered by the specific vendor. MAC addresses sometimes are called Burned-In Addresses (BIAs) because they are burned into read-only memory (ROM) and are copied into random-access memory (RAM) when the interface card initializes.

Because internetworks generally use network addresses to route traffic around the network, there is a need to map network addresses to MAC addresses. When the network layer has determined the destination station's network address, it must forward the information over a physical network using a MAC address. Different protocol suites use different methods to perform this mapping, but the most popular is the Address Resolution Protocol (ARP).

Different protocol suites use different methods for determining the MAC address of a device. The following three methods are used most often. Address Resolution Protocol (ARP) maps network addresses to MAC addresses. The Hello protocol enables network devices to learn the MAC addresses of other network devices. MAC addresses either are embedded in the network layer address or are generated by an algorithm.

The Address Resolution Protocol (ARP) is the method used in the TCP/IP suite. When a network device needs to send data to another device on the same network, it knows the source and destination network addresses for the data transfer. It must somehow map the destination network address to a MAC address before forwarding the data. First, the sending station checks its ARP table to see if it has already discovered the destination station's MAC address. If not, it broadcasts an ARP request containing the destination station's IP address. Every station on the network receives the broadcast and compares the embedded IP address to its own. Only the station with the matching IP address replies to the sending station with a packet containing its MAC address. The sending station adds this information to its ARP table for future reference and proceeds to transfer the data.
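
A hypothetical cache-then-broadcast sketch of the sender-side logic (the broadcast itself is abstracted into a callback, since real ARP frames live at the link layer, and the addresses are invented):

    arp_table = {}   # IP address -> MAC address, learned from earlier replies

    def resolve_mac(dst_ip, broadcast_arp_request):
        mac = arp_table.get(dst_ip)
        if mac is None:
            # Cache miss: broadcast a request; only the station whose IP
            # matches replies with its MAC address.
            mac = broadcast_arp_request(dst_ip)
            arp_table[dst_ip] = mac      # remember for future transfers
        return mac

    # Example with a stubbed-out network:
    print(resolve_mac("192.0.2.7", lambda ip: "00:1b:63:84:45:e6"))
    print(arp_table)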

When the destination device lies on a remote network, one beyond a router, the process is the same except that the sending station sends the ARP request for the MAC address of its default gateway. It then forwards the information to that device. The default gateway then forwards the information over whatever networks are necessary to deliver the packet to the network on which the destination device resides. The router on the destination device's network then uses ARP to obtain the MAC address of the actual destination device and delivers the packet.

The Hello protocol is a network layer protocol that enables network devices to identify one another and indicate that they are still functional. When a new end system powers up, for example, it broadcasts hello messages onto the network. Devices on the network return hello replies, and hello messages are also sent at specific intervals to indicate that the devices are still functional. Network devices can learn the MAC addresses of other devices by examining Hello protocol packets.

Three protocols use predictable MAC addresses. In these protocol suites, MAC addresses are predictable because the network layer either embeds the MAC address in the network layer address or uses an algorithm to determine the MAC address. The three protocols are Xerox Network Systems (XNS), Novell Internetwork Packet Exchange (IPX), and DECnet Phase IV.

Data Link Layer Addresses

A data link layer address uniquely identifies each physical network connection of a network device. Data-link addresses sometimes are referred to as physical or hardware addresses. Data-link addresses usually exist within a flat address space and have a pre-established and typically fixed relationship to a specific device.

End systems generally have only one physical network connection and thus have only one data link address. Routers and other internetworking devices typically have multiple physical network connections and therefore have multiple data-link addresses.

Internetwork Addressing

Internetwork addresses identify devices separately or as members of a group. Addressing schemes vary depending on the protocol family and the OSI layer. Three types of internetwork addresses are commonly used:



  • Data link layer addresses

  • Media Access Control (MAC) addresses

  • Network layer addresses

Connection-Oriented and Connectionless Internetwork Services

In general, transport protocols can be characterized as being either connection-oriented or connectionless. Connection-oriented services must first establish a connection with the desired service before passing any data. A connectionless service can send the data without any need to establish a connection first. In general, connection-oriented services provide some level of delivery guarantee, whereas connectionless services do not.

Connection-oriented service involves three phases: connection establishment, data transfer and connection termination.

During connection establishment, the end nodes may reserve resources for the connection. The end nodes also may negotiate and establish certain criteria for the transfer, such as the window size used in TCP connections. This resource reservation is one of the things exploited in some denial-of-service (DoS) attacks. An attacking system sends many requests for establishing a connection but never completes them. The attacked computer is then left with resources allocated for many never-completed connections. Then, when an end node tries to complete an actual connection, there are not enough resources for the valid connection.

The data transfer phase occurs when the actual data is transmitted over the connection. During data transfer, most connection-oriented services will monitor for lost packets and handle resending them. The protocol is generally also responsible for putting the packets in the right sequence before passing the data up the protocol stack. When the transfer of data is complete, the end nodes terminate the connection and release resources reserved for the connection.

Connection-oriented network services have more overhead than connectionless ones. Connection-oriented services must negotiate a connection, transfer data and tear down the connection, whereas a connectionless transfer can simply send the data without the added overhead of creating and tearing down a connection. Each has its place in internetworks.

Internetworking Challenges

Implementing a functional internetwork is no simple task. Many challenges must be faced, especially in the areas of connectivity, reliability, network management and flexibility. Each area is key in establishing an efficient and effective internetwork.

The challenge when connecting various systems is to support communication among disparate technologies. Different sites, for example, may use different types of media operating at varying speeds or may even include different types of systems that need to communicate.

Because companies rely heavily on data communication, internetworks must provide a certain level of reliability. This is an unpredictable world, so many large internetworks include redundancy to allow for communication even when problems occur.

Furthermore, network management must provide centralized support and troubleshooting capabilities in an internetwork. Configuration, security, performance and other issues must be adequately addressed for the internetwork to function smoothly. Security within an internetwork is essential. Many people think of network security from the perspective of protecting the private network from outside attacks. However, it is just as important to protect the network from internal attacks, especially because most security breaches come from inside. Networks must also be secured so that the internal network cannot be used as a tool to attack other external sites.

Early in the year 2000, many major web sites were the victims of distributed denial-of-service (DDoS) attacks. These attacks were possible because a great number of private networks connected to the Internet were not properly secured. These private networks were used as tools for the attackers.

Because nothing in this world is stagnant, internetworks must be flexible enough to change with new demands.

Introduction and History of Internetworking

Introduction to Internetworking


An internetwork is a collection of individual networks, connected by intermediate networking devices, that functions as a single large network. Internetworking refers to the industry, products and procedures that meet the challenge of creating and administering internetworks. Many different kinds of network technologies can be interconnected by routers and other networking devices to create an internetwork.

History of Internetworking


The first networks were time-sharing networks that used mainframes and attached terminals. Such environments were implemented by both IBM's Systems Network Architecture (SNA) and Digital's Network Architecture.

Local-area networks (LANs) evolved around the PC revolution. LANs enabled multiple users in a relatively small geographical area to exchange files and messages, as well as access shared resources such as file servers and printers.

Wide-area networks (WANs) interconnect LANs with geographically dispersed users to create connectivity. Some of the technologies used for connecting LANs include T1, T3, ATM, ISDN, ADSL, Frame Relay, radio links and others. New methods of connecting dispersed LANs are appearing every day.


Today, high-speed LANs and switched internetworks are becoming widely used, largely because they operate at very high speeds and support such high-bandwidth applications as multimedia and videoconferencing.

Internetworking evolved as a solution to three key problems: isolated LANs, duplication of resources, and a lack of network management. Isolated LANs made electronic communication between different offices or departments impossible. Duplication of resources meant that the same hardware and software had to be supplied to each office or department, as did separate support staff. The lack of network management meant that no centralized method of managing and troubleshooting networks existed.

Dead Locks

The ultimate form of congestion is called deadlock (also called lockup). One IMP cannot proceed until a second IMP does something, and the second IMP cannot proceed because it is waiting for the first IMP to do something. Both IMPs have ground to a complete halt and will stay that way forever.

The simplest lockup can happen with two IMPs. Suppose that IMP A has five buffers, all of which are queued for output to IMP B. Similarly, IMP B has five buffers, all of which are occupied by packets needing to go to IMP A. Neither IMP can accept any incoming packets from the other. They are both stuck. This situation is called direct store-and-forward lockup. The same thing can happen on a larger scale. Each IMP is trying to send to a neighbor, but nobody has any buffers available to receive incoming packets. This situation is called indirect store-and-forward lockup. When an IMP is locked up, all its lines are effectively blocked, including those not involved in the lockup.

Solution for Dead Lock:
A directed graph is constructed with the buffers as the nodes of the graph. Arcs connect pairs of buffers (in the same IMP or in adjacent IMPs). The graph is designed in such a way that if packets move from buffer to buffer only along the arcs, no deadlock can occur.

Choke Packets

Each IMP monitors the percentage utilization of its lines. Associated with each line is a real variable u, whose value lies between 0.0 and 1.0; u is periodically updated using:

u_new = a * u_old + (1 - a) * f

where f is the instantaneous line utilization (0 or 1), and a is a constant that determines how fast the IMP "forgets" recent history.

When u crosses a threshold, the output line enters a "warning" state, and a choke packet is sent to the source host. When the source host receives the choke packet, it is required to reduce the traffic to that destination by X percent. Since some packets will already be in flight, choke packets arriving during the next interval are ignored. After that interval, if choke packets are still arriving, the traffic is reduced further.

Two threshold levels can be used: above the first level choke packets are sent, and above the second level arriving packets are discarded. Queue length can also be monitored instead of line utilization.
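
A toy monitor for one output line, using the update rule above (the threshold and the constant a are arbitrary example values):

    WARNING_THRESHOLD = 0.8

    def update_utilization(u_old, line_busy, a=0.75):
        # u_new = a * u_old + (1 - a) * f, with f the instantaneous utilization.
        f = 1.0 if line_busy else 0.0
        return a * u_old + (1 - a) * f

    u = 0.0
    for tick in range(8):                     # eight samples of a busy line
        u = update_utilization(u, line_busy=True)
        if u > WARNING_THRESHOLD:
            print(f"tick {tick}: u={u:.2f} -> warning state, send choke packet")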

Flow Control

This is used by the transport layer to prevent one IMP from flooding another IMP with packets. Flow control can be applied between pairs of:
  • User processes (e.g., one outstanding message per virtual circuit).

  • Hosts, irrespective of the number of virtual circuits open.

  • Source and destination IMPs, without regard to hosts.

Isarithmic Congestion Control

The algorithm is called isarithmic because the total number of packets in the network is kept constant by issuing "permits" that circulate in the subnet. To transfer data, an IMP must capture a permit, destroy it and transfer the data to the destination IMP; the destination IMP regenerates the permit on receiving the data. With this method, congestion can never arise in the subnet as a whole.

However, it has a few drawbacks:


  • It does not guarantee that a given IMP will never be flooded with packets.
  • Permits must be uniformly distributed to prevent long delays at some IMPs; this is preferable to keeping them centralized.
  • If a permit is destroyed for some reason, it is lost forever and the network capacity is reduced.

Packet Discarding to control Congestion

The packets are discarded by IMPs to control congestion. A source IMP must keep resending a packet until it is accepted, or time out and start everything again. One buffer is always reserved in each IMP to receive acknowledgement packets. If an IMP has S output lines and K buffers, then for good performance the maximum queue length for any one output line should be limited to roughly

m = K / sqrt(S)

That is, if there are 7 free buffers and three output lines (7/√3 ≈ 4, so a limit of about 4 buffers per line), it is not desirable to use all the buffers for a single output line, because if they are all used up waiting in one queue, the packets for the other output lines must be discarded.

So a maximum limit on the number of buffers for an output line is set using the formula above, and the remaining buffers are kept free.
A drawback of this scheme is that the retransmitted duplicates consume extra bandwidth.

Preallocation of Buffers to control Congestion

By permanently allocating buffers to each virtual circuit in each IMP, there will always be a place to store any incoming packet until it can be forwarded. First consider the case of a stop-and-wait IMP-IMP protocol. One buffer per virtual circuit per IMP is sufficient for simplex circuits, and one for each direction is sufficient for full duplex circuits. When a packet arrives, the acknowledgement is not sent back to the sending IMP until the packet has been forwarded. Thus an acknowledgment means that the receiver not only received the packet correctly, but also has a free buffer and is willing to accept another one. If the IMP-IMP protocol allows multiple outstanding packets, each IMP will have to dedicate a full window's worth of buffers to each virtual circuit to completely eliminate the possibility of congestion. Because dedicating a complete set of buffers to an idle virtual circuit is expensive, some subnets may use it only where low delay and high bandwidth are essential, for example, on virtual circuits carrying digitized speech.

Congestion Control Algorithms

There are five strategies for congestion control. These strategies involve allocating resources in advance, allowing packets to be discarded when they cannot be processed, restricting the number of packets in the subnet, using flow control to avoid congestion, and choking off input when the subnet is overloaded.
  • Preallocating resources
  • Packet discarding
  • Isarithmic Congestion Control
  • Flow control
  • Choke Packets
  • Dead Locks

Congestion

When there are too many packets in a network beyond the network capacity, the performance of the network degrades. This is called congestion.

Consider a sender that sends a packet to a receiver with no free buffers. The sender repeatedly resends the packet until an acknowledgement is returned. Thus the sender cannot free its own buffers, and its queue in time becomes saturated; in this way congestion arises.

Routing Network Protocols

Routed protocols are transported by routing protocols across an internetwork. In general, routed protocols in this context also are referred to as network protocols. These network protocols perform a variety of functions required for communication between user applications in source and destination devices and these functions can differ widely among protocol suites. Network protocols occur at the upper five layers of the OSI reference model: the network layer, the transport layer, the session layer, the presentation layer and the application layer.

Routed protocols are protocols that are routed over an internetwork. Examples of such protocols are the Internet Protocols (IP), DECnet, AppleTalk, Novell NetWare, OSI, Banyan VINES and Xerox Network System (XNS). Routing protocols, on the other hand, are protocols that implement routing algorithms. Routing protocols are used by intermediate systems to build tables used in determining path selection of routed protocols. Examples of these protocols include Interior Gateway Routing Protocol (IGRP), Enhanced Interior Gateway Routing Protocol (Enhanced IGRP), Open Shortest Path First (OSPF), Exterior Gateway Protocol (EGP), Border Gateway Protocol (BGP), Intermediate System-Intermediate System (IS-IS) and Routing Information Protocol (RIP).

Routing Metrics

Routing tables contain information used by switching software to select the best route. But how, specifically, are routing tables built? What is the specific nature of the information that they contain? How do routing algorithms determine that one route is preferable to others?

Routing algorithms have used many different metrics to determine the best route. Sophisticated routing algorithms can base route selection on multiple metrics, combining them in a single (hybrid) metric. The following metrics have been used:
  • Path length
  • Reliability
  • Delay
  • Bandwidth
  • Load
  • Communication cost.

Path length is the most common routing metric. Some routing protocols allow network administrators to assign costs to each network link. In this case, path length is the sum of the costs associated with each link traversed. Other routing protocols define hop count, a metric that specifies the number of passes through internetworking products, such as routers, that a packet must take en route from a source to a destination.

Reliability, in the context of routing algorithms, refers to the dependability (usually described in terms of the bit-error rate) of each network link. Some network links might go down more often than others. After a network fails, certain network links might be repaired more easily or more quickly than other links. Any reliability factors can be taken into account in the assignment of reliability ratings, which are arbitrary numeric values usually assigned to network links by network administrators.

Routing delay refers to the length of time required to move a packet from source to destination through the internetwork. Delay depends on many factors, including the bandwidth of intermediate network links, the port queues at each router along the way, network congestion on all intermediate network links and the physical distance to be traveled. Because delay is a conglomeration of several important variables, it is a common and useful metric.

Bandwidth refers to the available traffic capacity of a link. All other things being equal, a 10-Mbps Ethernet link would be preferable to a 64-kbps leased line. Although bandwidth is a rating of the maximum attainable throughput on a link, routes through links with greater bandwidth do not necessarily provide better routes than routes through slower links. For example, if a faster link is busier, the actual time required to send a packet to the destination could be greater.

Load refers to the degree to which a network resource, such as a router, is busy. Load can be calculated in a variety of ways, including CPU utilization and packets processed per second. Monitoring these parameters on a continual basis can be resource-intensive itself.

Communication cost is another important metric, especially because some companies may not care about performance as much as they care about operating expenditures. Even though line delay may be longer, such companies will send packets over their own lines rather than over public lines that charge for usage time.

Link-State Versus Distance Vector Routing Algorithms

Link-state algorithms (also known as shortest path first algorithms) flood routing information to all nodes in the internetwork. Each router, however, sends only the portion of the routing table that describes the state of its own links. In link-state algorithms, each router builds a picture of the entire network in its routing tables. Distance vector algorithms (also known as Bellman-Ford algorithms) call for each router to send all or some portion of its routing table, but only to its neighbors. In essence, link-state algorithms send small updates everywhere, while distance vector algorithms send larger updates only to neighboring routers. Distance vector algorithms know only about their neighbors.
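The distance vector update rule can be sketched in a few lines of Python. This is a simplified illustration in the spirit of Bellman-Ford, not any particular protocol; router names and costs are hypothetical.

    def merge_update(table, neighbor, link_cost, neighbor_table):
        # Merge one neighbor's advertised distances into this router's table.
        changed = False
        for dest, dist in neighbor_table.items():
            candidate = link_cost + dist
            if dest not in table or candidate < table[dest][0]:
                table[dest] = (candidate, neighbor)  # (distance, next hop)
                changed = True
        return changed

    # Router A hears from neighbor B (link cost 1) about destinations C and D.
    table_a = {"B": (1, "B")}
    merge_update(table_a, "B", 1, {"C": 2, "D": 5})
    print(table_a)  # {'B': (1, 'B'), 'C': (3, 'B'), 'D': (6, 'B')}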

Because they converge more quickly, link-state algorithms are somewhat less prone to routing loops than distance vector algorithms. On the other hand, link-state algorithms require more CPU power and memory than distance vector algorithms. Link-state algorithms, therefore, can be more expensive to implement and support. Link-state protocols are generally more scalable than distance vector protocols.
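By contrast, a link-state router runs a shortest path first computation over its complete map of the network. The following is a minimal Dijkstra sketch over a hypothetical topology, using only the Python standard library.

    import heapq

    def shortest_paths(graph, source):
        # graph: {node: {neighbor: link_cost}}; returns {node: distance}.
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue  # stale heap entry
            for nbr, cost in graph[node].items():
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(heap, (nd, nbr))
        return dist

    graph = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2}, "C": {"A": 4, "B": 2}}
    print(shortest_paths(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3}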

Intradomain Versus Interdomain Routing Algorithms

Some routing algorithms work only within domains while others work within and between domains. The nature of these two algorithm types is different. It stands to reason, therefore, that an optimal intradomain-routing algorithm would not necessarily be an optimal interdomain-routing algorithm.

Host-Intelligent Versus Router-Intelligent Routing Algorithms

Some routing algorithms assume that the source end node will determine the entire route. This is usually referred to as source routing. In source-routing systems, routers merely act as store-and-forward devices, mindlessly sending the packet to the next stop.

Other algorithms assume that hosts know nothing about routes. In these algorithms, routers determine the path through the internetwork based on their own calculations. In the first system, the hosts have the routing intelligence; in the latter, the routers do.

Flat Versus Hierarchical Routing Algorithms

Some routing algorithms operate in a flat space, while others use routing hierarchies. In a flat routing system, the routers are peers of all others. In a hierarchical routing system, some routers form what amounts to a routing backbone. Packets from nonbackbone routers travel to the backbone routers, where they are sent through the backbone until they reach the general area of the destination. At this point, they travel from the last backbone router through one or more nonbackbone routers to the final destination.

Routing systems often designate logical groups of nodes, called domains, autonomous systems or areas. In hierarchical systems, some routers in a domain can communicate with routers in other domains, while others can communicate only with routers within their domain. In very large networks, additional hierarchical levels may exist, with routers at the highest hierarchical level forming the routing backbone.

The primary advantage of hierarchical routing is that it mimics the organization of most companies and therefore supports their traffic patterns well. Most network communication occurs within small company groups (domains). Because intradomain routers need to know only about other routers within their domain, their routing algorithms can be simplified and, depending on the routing algorithm being used, routing update traffic can be reduced accordingly.

Single-Path Versus Multipath Routing Algorithms

Some sophisticated routing protocols support multiple paths to the same destination. Unlike single-path algorithms, these multipath algorithms permit traffic multiplexing over multiple lines. The advantages of multipath algorithms are obvious: they can provide substantially better throughput and reliability. This is generally called load sharing.
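One common way to realize load sharing is per-flow selection among equal-cost next hops: hashing a flow identifier keeps each flow on a single path while spreading different flows across all paths. A minimal sketch, with hypothetical addresses and next hops:

    import zlib

    next_hops = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # equal-cost paths

    def pick_next_hop(src, dst):
        # Hash the flow identifier so one flow always uses one path.
        flow_id = f"{src}->{dst}".encode()
        return next_hops[zlib.crc32(flow_id) % len(next_hops)]

    print(pick_next_hop("192.168.1.5", "172.16.0.9"))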

Static Versus Dynamic Routing Algorithms

Static routing algorithms are hardly algorithms at all, but are table mappings established by the network administrator before the beginning of routing. These mappings do not change unless the network administrator alters them. Algorithms that use static routes are simple to design and work well in environments where network traffic is relatively predictable and where network design is relatively simple.
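A static routing table can be pictured as nothing more than a fixed mapping installed by the administrator. The sketch below uses hypothetical prefixes and next hops; a real router would also perform longest-prefix matching, illustrated later in the path determination section.

    # Destination network -> next hop, installed by hand and never
    # changed at runtime.
    STATIC_ROUTES = {
        "10.1.0.0/16": "192.168.0.1",
        "10.2.0.0/16": "192.168.0.2",
        "0.0.0.0/0":   "192.168.0.254",  # default route (router of last resort)
    }

    def lookup(prefix):
        return STATIC_ROUTES.get(prefix, STATIC_ROUTES["0.0.0.0/0"])

    print(lookup("10.1.0.0/16"))    # 192.168.0.1
    print(lookup("172.16.0.0/12"))  # falls through to the default route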

Because static routing systems cannot react to network changes, they generally are considered unsuitable for today's large, constantly changing networks. Most of the dominant routing algorithms today are dynamic routing algorithms, which adjust to changing network circumstances by analyzing incoming routing update messages. If the message indicates that a network change has occurred, the routing software recalculates routes and sends out new routing update messages. These messages permeate the network, stimulating routers to rerun their algorithms and change their routing tables accordingly.

Dynamic routing algorithms can be supplemented with static routes where appropriate. A router of last resort (a router to which all unroutable packets are sent), for example, can be designated to act as a repository for all unroutable packets, ensuring that all messages are at least handled in some way.

Routing Algorithm Types

Routing algorithms can be classified by type. Key differentiators include the following:

  • Static versus dynamic routing algorithms
  • Single-path versus multipath routing algorithms
  • Flat versus hierarchical routing algorithms
  • Host-intelligent versus router-intelligent routing algorithms
  • Intradomain versus interdomain routing algorithms
  • Link-state versus distance vector routing algorithms

Routing Algorithms Design Goals

Routing algorithms often have one or more of the following design goals:

  • Optimality
  • Simplicity and low overhead
  • Robustness and stability
  • Rapid convergence
  • Flexibility

Optimality refers to the capability of the routing algorithm to select the best route, which depends on the metrics and metric weightings used to make the calculation. For example, one routing algorithm may use both hop count and delay, but it may weigh delay more heavily in the calculation. Naturally, routing protocols must define their metric calculation algorithms strictly.
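As an illustration, a weighted composite of hop count and delay might look like the following sketch; the weights are hypothetical, and a real protocol defines its calculation strictly.

    W_HOPS, W_DELAY = 1.0, 3.0  # hypothetical weights; delay counts more

    def composite_metric(hops, delay_ms):
        return W_HOPS * hops + W_DELAY * delay_ms

    route_a = composite_metric(hops=2, delay_ms=40)  # few hops, slow links
    route_b = composite_metric(hops=5, delay_ms=10)  # more hops, fast links
    print(min(("A", route_a), ("B", route_b), key=lambda r: r[1]))  # ('B', 35.0)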

Routing algorithms also are designed to be as simple as possible. In other words, the routing algorithm must offer its functionality efficiently, with a minimum of software and utilization overhead. Efficiency is particularly important when the software implementing the routing algorithm must run on a computer with limited physical resources.

Routing algorithms must be robust, which means that they should perform correctly in the face of unusual or unforeseen circumstances, such as hardware failures, high load conditions and incorrect implementations. Because routers are located at network junction points, they can cause considerable problems when they fail. The best routing algorithms are often those that have withstood the test of time and have proven stable under a variety of network conditions.

In addition, routing algorithms must converge rapidly. Convergence is the process of agreement, by all routers, on optimal routes. When a network event causes routes either to go down or to become available, routers distribute routing update messages that permeate networks, stimulating recalculation of optimal routes and eventually causing all routers to agree on these routes. Routing algorithms that converge slowly can cause routing loops or network outages.

In a routing loop, suppose a packet arrives at Router 1 at time t1. Router 1 has already been updated and thus knows that the optimal route to the destination calls for Router 2 to be the next stop. Router 1 therefore forwards the packet to Router 2, but because this router has not yet been updated, it believes that the optimal next hop is Router 1. Router 2 therefore forwards the packet back to Router 1, and the packet continues to bounce back and forth between the two routers until Router 2 receives its routing update or until the packet has been switched the maximum number of times allowed.
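The "maximum number of times" safeguard can be sketched with a hop counter, much like the IP time-to-live field: each forwarding step decrements the counter, so a looping packet is eventually discarded. The two-router tables below are hypothetical and deliberately inconsistent to force the loop.

    tables = {"R1": "R2", "R2": "R1"}  # each router points at the other

    def forward(ttl, at="R1"):
        while ttl > 0:
            nxt = tables[at]
            print(f"{at} -> {nxt} (ttl={ttl})")
            at, ttl = nxt, ttl - 1
        print("hop limit exceeded: packet dropped")

    forward(ttl=4)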

Routing algorithms should also be flexible, which means that they should quickly and accurately adapt to a variety of network circumstances. Assume, for example, that a network segment has gone down. Many routing algorithms, upon becoming aware of the problem, will quickly select the next-best path for all routes that normally use that segment. Routing algorithms can be programmed to adapt to changes in network bandwidth, router queue size and network delay, among other variables.

Routing Algorithms

Routing algorithms can be differentiated based on several key characteristics. First, the particular goals of the algorithm designer affect the operation of the resulting routing protocol. Second, various types of routing algorithms exist, and each algorithm has a different impact on network and router resources. Finally, routing algorithms use a variety of metrics that affect calculation of optimal routes. The following sections analyze these routing algorithm attributes:


  1. Design Goals
  2. Algorithm Types

Switching as a Routing Component

Switching

Switching algorithms are relatively simple and are the same for most routing protocols. In most cases, a host determines that it must send a packet to another host. Having acquired a router's address by some means, the source host sends a packet addressed specifically to the router's physical (Media Access Control [MAC]-layer) address, but with the protocol (network-layer) address of the destination host.

The router examines the packet's destination protocol address and determines whether or not it knows how to forward the packet to the next hop. If the router does not know how to forward the packet, it typically drops the packet. If the router does know how to forward the packet, it changes the destination physical address to that of the next hop and transmits the packet.

The next hop may be the ultimate destination host. If not, the next hop is usually another router, which executes the same switching decision process. As the packet moves through the internetwork, its physical address changes, but its protocol address remains constant.
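That behavior can be sketched as follows: at each hop the link-layer (MAC) destination is rewritten to the next hop, while the network-layer (protocol) destination never changes. Addresses are hypothetical.

    packet = {"dst_ip": "10.9.9.9", "dst_mac": None}

    def switch(packet, next_hop_mac):
        packet["dst_mac"] = next_hop_mac  # physical address changes per hop
        return packet                     # protocol address stays constant

    for hop_mac in ["aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"]:
        switch(packet, hop_mac)
        print(packet["dst_ip"], "via", packet["dst_mac"])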

The International Organization for Standardization (ISO) has developed a hierarchical terminology that is useful in describing the switching process. In this terminology, network devices without the capability to forward packets between subnetworks are called end systems (ESs), whereas network devices with this capability are called intermediate systems (ISs). ISs are further divided into those that can communicate within routing domains (intradomain ISs) and those that can communicate both within and between routing domains (interdomain ISs). A routing domain generally is considered a portion of an internetwork under common administrative authority that is regulated by a particular set of administrative guidelines. Routing domains are also called autonomous systems. With certain protocols, routing domains can be divided into routing areas, but intradomain routing protocols are still used for switching both within and between areas.


Other Routing Components

Path Determination as a Routing Component

Path Determination

Routing protocols use metrics to evaluate which path will be the best for a packet to travel. A metric is a standard of measurement, such as path bandwidth, that is used by routing algorithms to determine the optimal path to a destination. To aid the process of path determination, routing algorithms initialize and maintain routing tables, which contain route information. Route information varies depending on the routing algorithm used.

Routing algorithms fill routing tables with a variety of information. Destination/next hop associations tell a router that a particular destination can be reached optimally by sending the packet to a particular router representing the "next hop" on the way to the final destination. When a router receives an incoming packet, it checks the destination address and attempts to associate this address with a next hop.
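In IP networks this destination/next hop association is found by longest-prefix matching: of all table entries that contain the destination address, the most specific one wins. A minimal sketch with the standard ipaddress module and hypothetical routes:

    import ipaddress

    routes = {
        ipaddress.ip_network("10.0.0.0/8"):  "R2",
        ipaddress.ip_network("10.1.0.0/16"): "R3",  # more specific
        ipaddress.ip_network("0.0.0.0/0"):   "R9",  # default route
    }

    def next_hop(dst):
        addr = ipaddress.ip_address(dst)
        matches = [net for net in routes if addr in net]
        return routes[max(matches, key=lambda net: net.prefixlen)]

    print(next_hop("10.1.2.3"))   # R3 (longest prefix wins)
    print(next_hop("192.0.2.1"))  # R9 (only the default matches)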

Routing tables also can contain other information, such as data about the desirability of a path. Routers compare metrics to determine optimal routes and these metrics differ depending on the design of the routing algorithm used.

Routers communicate with one another and maintain their routing tables through the transmission of a variety of messages. The routing update message is one such message that generally consists of all or a portion of a routing table. By analyzing routing updates from all other routers, a router can build a detailed picture of network topology. A link-state advertisement, another example of a message sent between routers, informs other routers of the state of the sender's links. Link information also can be used to build a complete picture of network topology to enable routers to determine optimal routes to network destinations.

Other Components of Routing:

Routing Components

Routing involves two basic activities: determining optimal paths and transporting information groups (typically called packets) through an internetwork. In the context of the routing process, the latter of these is referred to as packet switching. Although packet switching is relatively straightforward, path determination can be very complex.


Routing

Routing is the act of moving information across an internetwork from a source to a destination. Along the way, at least one intermediate node typically is encountered. Routing is often contrasted with bridging, which might seem to accomplish precisely the same thing to the casual observer. The primary difference between the two is that bridging occurs at Layer 2 (the link layer) of the OSI reference model, whereas routing occurs at Layer 3 (the network layer). This distinction provides routing and bridging with different information to use in the process of moving information from source to destination, so the two functions accomplish their tasks in different ways.

Related Topics:


  • Routing Components
  • Routing Algorithms
    • Design Goals
    • Algorithm Types
      • Static Versus Dynamic
      • Single-Path Versus Multipath
      • Flat Versus Hierarchical
      • Host-Intelligent Versus Router-Intelligent
      • Intradomain Versus Interdomain
      • Link-State Versus Distance Vector
      • Routing Metrics
  • Network Routing Protocols

Wireless LANs

Not all networks are connected with cabling; some networks are wireless. Wireless LANs use high-frequency radio signals, infrared light beams or lasers to communicate between the workstations and the file server or hubs. Each workstation and file server on a wireless network has some sort of transceiver/antenna to send and receive the data. Information is relayed between transceivers as if they were physically connected. For longer distances, wireless communications can also take place through cellular telephone technology, microwave transmission or by satellite.

Wireless networks are great for allowing laptop computers or remote computers to connect to the LAN. Wireless networks are also beneficial in older buildings where it may be difficult or impossible to install cables.

The two most common types of infrared communications used in schools are line-of-sight and scattered broadcast.

Line-of-sight communication means that there must be an unblocked direct line between the workstation and the transceiver. If a person walks within the line of sight during a transmission, the information must be sent again. This kind of obstruction can slow down the wireless network.

Scattered infrared communication is a broadcast of infrared transmissions sent out in multiple directions that bounce off walls and ceilings until they eventually hit the receiver. Networking communications with lasers are virtually the same as line-of-sight infrared networks.

Wireless LANs have several disadvantages. They are very expensive, provide poor security and are susceptible to interference from lights and electronic devices. They are also slower than LANs using cabling.

LAN Transmission Methods

LAN data transmissions fall into three classifications: unicast, multicast and broadcast.

In each type of transmission, a single packet is sent to one or more nodes.

In a unicast transmission, a single packet is sent from the source to a destination on a network. First, the source node addresses the packet by using the address of the destination node. The packet is then sent onto the network and finally, the network passes the packet to its destination.

A multicast transmission consists of a single data packet that is copied and sent to a specific subset of nodes on the network. First, the source node addresses the packet by using a multicast address. The packet is then sent into the network, which makes copies of the packet and sends a copy to each node that is part of the multicast group.

A broadcast transmission consists of a single data packet that is copied and sent to all nodes on the network. In this type of transmission, the source node addresses the packet by using the broadcast address. The packet is then sent onto the network, which makes copies of the packet and sends a copy to every node on the network.
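At the socket level, the sender's side of these transmission methods differs mainly in the destination address used. A minimal UDP sketch in Python with hypothetical addresses (sending to the broadcast address requires the SO_BROADCAST option):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

    sock.sendto(b"hello", ("192.168.1.42", 5000))   # unicast: one node
    sock.sendto(b"hello", ("224.1.1.1", 5000))      # multicast: a group address
    sock.sendto(b"hello", ("192.168.1.255", 5000))  # broadcast: all nodes
    sock.close()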

LAN Media-Access Methods

Media contention occurs when two or more network devices have data to send at the same time. Because multiple devices cannot talk on the network simultaneously, some type of method must be used to allow one device access to the network media at a time. This is done in two main ways:


  1. Carrier Sense Multiple Access Collision Detect (CSMA/CD) - In networks using CSMA/CD technology, such as Ethernet, network devices contend for the network media. When a device has data to send, it first listens to see if any other device is currently using the network. If not, it starts sending its data. After finishing its transmission, it listens again to see if a collision occurred. A collision occurs when two devices send data simultaneously. When a collision happens, each device waits a random length of time before resending its data. In most cases, a collision will not occur again between the two devices. Because of this type of network contention, the busier a network becomes, the more collisions occur. This is why performance of Ethernet degrades rapidly as the number of devices on a single network increases. (A sketch of the random-backoff rule appears after this list.)

  2. Token Passing - In token-passing networks such as Token Ring and FDDI, a special network packet called a token is passed around the network from device to device. When a device has data to send, it must wait until it has the token, and then it sends its data. When the data transmission is complete, the token is released so that other devices may use the network media. The main advantage of token-passing networks is that they are deterministic. In other words, it is easy to calculate the maximum time that will pass before a device has the opportunity to transmit. This explains the popularity of token-passing networks in some real-time environments, such as factories, where machinery must be capable of communicating at a determinable interval.
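The random-wait rule from item 1 is sketched below. Real Ethernet uses truncated binary exponential backoff, where the random range grows with each successive collision; the slot counts here are purely illustrative.

    import random

    def backoff_slots(attempt, cap=10):
        # Wait a random number of slot times; the range doubles per collision.
        return random.randint(0, 2 ** min(attempt, cap) - 1)

    for attempt in range(1, 4):
        a, b = backoff_slots(attempt), backoff_slots(attempt)
        print(f"collision {attempt}: device A waits {a} slots, B waits {b}")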

LAN Topologies

LAN topologies define the manner in which network devices are organized. Four common LAN topologies exist: bus, ring, star and tree.

  1. Bus topology is a linear LAN architecture in which transmissions from network stations propagate the length of the medium and are received by all other stations.

  2. Ring topology is a LAN architecture that consists of a series of devices connected to one another by unidirectional transmission links to form a single closed loop. Both Token Ring/IEEE 802.5 and FDDI networks implement a ring topology.

  3. Star topology is a LAN architecture in which the endpoints on a network are connected to a common central hub or switch by dedicated links. Logical bus and ring topologies are often implemented physically in a star topology.

  4. Tree topology is a LAN architecture that is identical to the bus topology, except that branches with multiple nodes are possible in this case.

LAN as a Management Tool

Management

As a management tool, a LAN can:

  • Increase system performance through the distribution of tasks and equipment, and improve the availability of computer resources, since tasks can be assigned to several machines.
  • Increase system reliability: crucial processes can be duplicated and/or divided so that, on the failure of one machine, another machine can quickly take up the load, minimizing the adverse effects of the loss of any one system.
  • Help regain administrative control of equipment.
  • Improve efficiency, with more information accessible at the workstation for making better and more timely decisions; a LAN can have a dramatic impact on efficiency where the data is dynamic.
  • Centralize information efficiently: the LAN server concept allows control over who uses the network and for what purpose.
  • Provide an extensive security system.
  • Offer configuration flexibility: PCs and other resources can be added as and when needed.

The advantages of local area networks can be enumerated as follows:

  • Local area networks are the best means to provide a cost-effective multiuser computer environment.
  • A LAN can fit any site requirements.
  • It can be tailored to suit any type of application.
  • Any number of users can be accommodated.
  • It is flexible and growth-oriented.
  • It offers existing single users a familiar Disk Operating System (DOS) environment.
  • It can use existing software if the original language supports a multiuser environment.
  • It offers electronic mail as an in-built facility.
  • It allows sharing of mass central storage and printers.
  • Data transfer rates are above 10 Mbps.
  • It allows file/record locking.
  • It provides a foolproof security system against illegal access to data.
  • It provides data integrity.

More characteristics of LAN

LAN as a Communication Resource

Communication

A LAN facilitates communication through its powerful electronic mail system (EMS) among authorized network users across time boundaries and distances. The network provides fast responses and transmits urgent notes, messages and circulars.

As a communication resource, a LAN can:
  • Facilitate communication within the organization by supplying an additional channel for coordinating the work of various groups, exchanging data, sending messages and sharing information.
  • Provide in-house, computer-to-computer communication at high speed.
  • Provide a method of accessing remote resources, thereby facilitating communication with the world outside the immediate organization.

More characteristics of LAN