User Datagram Protocol (UDP)
TCP Sliding Window
- Source port and Destination port: Identifies points at which upper layer source and destination processes receive TCP services.
- Sequence Number: Usually specifies the number assigned to the first byte of data in the current message. In the connection-establishment phase, this field can also be used to identify an initial sequence number to be used in an upcoming transmission.
- Acknowledgment Number: Contains the sequence number of the next byte of data the sender of the packet expects to receive.
- Data Offset: Indicates the number of 32-bit words in the TCP header.
- Reserved: Remains reserved for future use.
- Flags: Carries a variety of control information, including the SYN and ACK bits used for connection establishment, and the FIN bit used for connection termination.
- Window: Specifies the size of the sender's receive window (that is, the buffer space available for incoming data).
- Checksum: Allows the receiver to verify that the TCP header and data were not damaged in transit.
- Urgent Pointer: Points to the first urgent data byte in the packet.
- Options: Specifies various TCP options.
- Data: Contains upper-layer information.
In TCP, the receiver specifies the current window size in every packet. Because TCP provides a byte-stream connection, window sizes are expressed in bytes. This means that a window is the number of data bytes that the sender is allowed to send before waiting for an acknowledgment. Initial window sizes are indicated at connection setup, but might vary throughout the data transfer to provide flow control. A window size of zero, for instance, means "Send no data".
In a TCP sliding-window operation, for example, the sender might have a sequence of bytes to send (numbered 1 to 10) to a receiver who has a window size of five. The sender then would place a window around the first five bytes and transmit them together. It would then wait for an acknowledgment.
The receiver would respond with an ACK=6, indicating that it has received bytes 1 to 5 and is expecting byte 6 next. In the same packet, the receiver would indicate that its window size is 5. The sender then would move the sliding window five bytes to the right and transmit bytes 6 to 10. The receiver would respond with an ACK=11, indicating that it is expecting sequenced byte 11 next. In this packet, the receiver might indicate that its window size is 0 (because, for example, its internal buffers are full). At this point, the sender cannot send any more bytes until the receiver sends another packet with a window size greater than 0.
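The exchange just described can be sketched in a few lines of Python (a simplified model: the window stays at five bytes and no zero-window advertisement occurs):

```python
def sliding_window_demo(total_bytes, window_size):
    """Sender side of the sliding-window exchange described above.

    Returns the sequence of events, assuming the receiver ACKs each
    full window and keeps the window size fixed.
    """
    events = []
    next_byte = 1                      # bytes are numbered from 1
    while next_byte <= total_bytes:
        last = min(next_byte + window_size - 1, total_bytes)
        events.append(("send bytes", next_byte, last))
        ack = last + 1                 # receiver expects this byte next
        events.append(("ACK", ack))
        next_byte = ack                # slide the window right
    return events

print(sliding_window_demo(10, 5))
# [('send bytes', 1, 5), ('ACK', 6), ('send bytes', 6, 10), ('ACK', 11)]
```

The ACK=6 and ACK=11 values match the two acknowledgments in the worked example above.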
Positive Acknowledgment and Retransmission (PAR)
TCP Connection Establishment
Transmission Control Protocol (TCP)
Internet Transport Protocols (TCP & UDP)
Reliable Transport Service
Unreliable Transport Service
User Multiplexing
The transport protocol number identifies the particular transport protocol (e.g., UDP or TCP) being accessed by the user. A user's local port number is assigned as soon as the user starts to use the transport service. The user's remote port number and IP address are assigned as soon as it learns of the local port number and host IP address of its intended peer user. Every IP network packet carries the local port number of the originating user (the sender port number), the local port number of the intended destination user (the destination port number), the sender and destination IP addresses, and the transport protocol number. This approach of using local and remote port numbers and IP addresses to identify a user supports the client-server paradigm: it enables multiple clients to use the same service simultaneously.
Consider a host H providing a service over a certain transport protocol (e.g., FTP over TCP). H dedicates a specific local port number, say p1, to the service. H creates a server user, say S, with its local port number set to p1, its transport protocol number set appropriately, and its remote port number and IP address set to nil. When a client user, say C on another host G, wants to use this service, C gets its local port number set to an arbitrary value, say p2, its remote port number set to p1, its remote IP address set to H's IP address, and its transport protocol number set appropriately. When the request packet arrives at the transport layer in H, it gives the packet to S (assuming that there is no more specific user at H with local port number p1, remote port number p2, and remote IP address equal to G's IP address). The server S can then create another server specifically for servicing client C; this new server NS would have its remote port number set to p2 and its remote IP address set to G's IP address, and hence it can use local port number p1, the same as S.
The following table illustrates the above example:
Host | User | Acts as | Local Port No. | Remote Port No.
-----|------|---------|----------------|----------------
H | S | Server | p1 | Nil
G | C | Client | p2 | p1
Upon receiving the request packet from client C, server S creates another server with the following specification:
User | Acts as | Local Port No. | Remote Port No.
-----|---------|----------------|----------------
NS | New Server | p1 | p2
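A hypothetical demultiplexer illustrating these matching rules (the port numbers and IP addresses are made up; a fully-qualified match such as NS takes precedence over a listening server such as S, whose remote fields are nil):

```python
def demultiplex(users, packet):
    """Pick the user whose (local port, remote port, remote IP) most
    specifically matches an arriving packet; fall back to a listening
    server whose remote fields are nil (None)."""
    best = None
    for u in users:
        if u["local_port"] != packet["dst_port"]:
            continue
        if u["remote_port"] is None:           # listening server (like S)
            best = best or u
        elif (u["remote_port"] == packet["src_port"]
              and u["remote_ip"] == packet["src_ip"]):
            return u                           # exact match (like NS) wins
    return best

S  = {"name": "S",  "local_port": 21, "remote_port": None, "remote_ip": None}
NS = {"name": "NS", "local_port": 21, "remote_port": 5050, "remote_ip": "10.0.0.2"}

# A packet from C (port 5050 on host 10.0.0.2) goes to NS, not S.
pkt = {"dst_port": 21, "src_port": 5050, "src_ip": "10.0.0.2"}
print(demultiplex([S, NS], pkt)["name"])   # NS
```

A packet from any other client would miss NS's exact match and land on the listening server S, which could then spawn yet another per-client server.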
Network Transport Service
The transport layer of a TCP/IP computer network, situated above the network layer and below the applications layer, provides transport service. The network layer provides unreliable packet transfer service between any two hosts. The transport layer uses this network service and provides transport services between any two applications in the network. Applications include email (SMTP), remote login (TELNET, SSH), file transfer (FTP), web browsers (HTTP), etc.
The ideal transport service is one that can transfer data packets between any two users and can do so reliably and with low delay and low jitter. Providing user-to-user service implies that the transport layer has to do user multiplexing at each host. Reliable data transfer means that data is delivered in the same sequence it was sent and without loss. Low delay means that data sent is delivered within a specified (usually small) time bound. Low jitter means that the time intervals between sending data are preserved at delivery within specified (usually small) bounds. Achieving such ideal service requires the network to be capable of handling the worst-case load at any time, which, if the network is not to be incredibly expensive, means imposing severe restrictions on network access and the data rates available to users (as in telephony networks).
Fortunately, ideal transport service is not required for most applications. Thus the transport layer in TCP/IP networks does not strive for it. Instead it provides two separate services: a reliable service, which can suffer high delays and jitter and an unreliable service, which does no better than the network service. The reliable service, implemented by a transport protocol known as TCP, is used by applications where data integrity is essential, such as file transfer, email, remote login etc. The unreliable service, implemented by a transport protocol known as UDP, is used by applications where data loss can be tolerated but low-delay or low-jitter is desired, such as Internet telephony and voice/video streaming.
Thus reliable transport service is nothing but reliable data transfer between any two users. But reliable data transfer requires resources at the entities, such as buffers and processes for retransmitting data, reassembling data, etc. These resources typically cannot be maintained across failures. Furthermore, maintaining the resources continuously for every pair of users would be prohibitively inefficient, because only a very small fraction of user pairs in a network exchange data with any regularity, especially in a large network such as the Internet. Therefore a reliable transport service involves connection management and data transfer. Data transfer provides for the reliable exchange of data between connected users. Connection management provides for the establishment and termination of connections between users.
In general, reliable transport service (e.g., TCP service) involves three aspects: user multiplexing, reliable connection management between users, and reliable data transfer between connected users. Unreliable transport service (e.g., UDP service), on the other hand, involves two aspects: user multiplexing and unreliable data transfer between users.
Standards Organizations
- International Organization for Standardization (ISO) - ISO is an international standards organization responsible for a wide range of standards, including many that are relevant to networking. Its best-known contribution is the development of the OSI reference model and the OSI protocol suite.
- American National Standards Institute (ANSI) - ANSI, which is also a member of the ISO, is the coordinating body for voluntary standards groups within the United States. ANSI developed the Fiber Distributed Data Interface (FDDI) and other communications standards.
- Electronic Industries Association (EIA) - EIA specifies electrical transmission standards, including those used in networking. The EIA developed the widely used EIA/TIA-232 standard (formerly known as RS-232).
- Institute of Electrical and Electronic Engineers (IEEE) - IEEE is a professional organization that defines networking and other standards. The IEEE developed the widely used LAN standards IEEE 802.3 and IEEE 802.5.
- International Telecommunication Union Telecommunication Standardization Sector (ITU-T) - Formerly called the Committee for International Telegraph and Telephone (CCITT), ITU-T is now an international organization that develops communication standards. The ITU-T developed X.25 and other communications standards.
- Internet Activities Board (IAB) - IAB is a group of internetwork researchers who discuss issues pertinent to the Internet and set Internet policies through decisions and task forces. The IAB designates some Request for Comments (RFC) documents as Internet standards, including Transmission Control Protocol/Internet Protocol (TCP/IP) and the Simple Network Management Protocol (SNMP).
Multiplexing Basics
A multiplexer is a physical layer device that combines multiple data streams into one or more output channels at the source. Multiplexers demultiplex the channels into multiple data streams at the remote end and thus maximize the use of the bandwidth of the physical medium by enabling it to be shared by multiple traffic sources.
Some methods used for multiplexing data are Time-Division Multiplexing (TDM), Asynchronous Time-Division Multiplexing (ATDM), Frequency-Division Multiplexing (FDM) and statistical multiplexing.
In TDM, information from each data channel is allocated bandwidth based on preassigned time slots, regardless of whether there is data to transmit. In ATDM, information from data channels is allocated bandwidth as needed by using dynamically assigned time slots. In FDM, information from each data channel is allocated bandwidth based on the signal frequency of the traffic. In statistical multiplexing, bandwidth is dynamically allocated to any data channels that have information to transmit.
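As a rough sketch of the difference, the following Python model shows fixed TDM wasting slots on an idle channel, slots that ATDM or statistical multiplexing would reassign to channels with queued data (channel names and queue contents are illustrative):

```python
def tdm_slots(channels, frames):
    """Fixed TDM: every channel gets one slot per frame whether or not
    it has data to send. Empty slots (None) are wasted bandwidth."""
    out = []
    for _ in range(frames):
        for name, queue in channels.items():
            out.append((name, queue.pop(0) if queue else None))
    return out

channels = {"A": ["a1", "a2", "a3"], "B": []}   # channel B is idle
slots = tdm_slots(channels, 3)
wasted = sum(1 for _, unit in slots if unit is None)
print(slots)
print("wasted slots:", wasted)   # 3 of 6 slots carry nothing
```

A statistical multiplexer would instead fill all six slots from channel A's queue (or leave the line free), which is exactly the efficiency gain dynamic slot assignment provides.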
Error-Checking Basics
Flow Control Basics
Buffering is used by network devices to temporarily store bursts of excess data in memory until they can be processed. Occasional data bursts are easily handled by buffering. Excess data bursts can exhaust memory, however, forcing the device to discard any additional datagrams that arrive.
Source-quench messages are used by receiving devices to help prevent their buffers from overflowing. The receiving device sends source-quench messages to request that the source reduce its current rate of data transmission. First, the receiving device begins discarding received data due to overflowing buffers. Second, it begins sending source-quench messages to the transmitting device, at the rate of one message for each packet dropped. The source device receives the source-quench messages and lowers its data rate until it stops receiving them. Finally, the source device gradually increases the data rate as long as no further source-quench requests are received.
Windowing is a flow-control scheme in which the source device requires an acknowledgment from the destination after a certain number of packets have been transmitted. With a window size of 3, the source requires an acknowledgment after sending three packets, as follows. First, the source device sends three packets to the destination device. After receiving the three packets, the destination device sends an acknowledgment to the source. The source receives the acknowledgment and sends three more packets. If the destination does not receive one or more of the packets for some reason, such as overflowing buffers, it does not receive enough packets to send an acknowledgment. The source then retransmits the packets at a reduced transmission rate.
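The windowing behaviour described above can be modelled in a few lines (a simplification: any loss stalls the whole window, and each retransmission is assumed to succeed):

```python
def window_transfer(num_packets, window=3, lost=frozenset()):
    """Sketch of the windowing scheme: send `window` packets, wait for
    an ACK, and retransmit the window when no ACK arrives because a
    packet was lost (the retransmission is assumed to succeed)."""
    events = []
    base = 0
    lost = set(lost)
    while base < num_packets:
        burst = list(range(base, min(base + window, num_packets)))
        events.append(("send", burst))
        if lost & set(burst):
            events.append(("no ACK, retransmit", burst))
            lost -= set(burst)            # next send of this burst succeeds
        else:
            events.append(("ACK", burst[-1] + 1))
            base = burst[-1] + 1          # slide the window forward
    return events

# Packet 1 is lost, so the first window of three is sent twice.
print(window_transfer(6, window=3, lost={1}))
```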
Addresses Versus Names
Address Assignments
Hierarchical Versus Flat Address Space
Network Layer Addresses
The relationship between a network address and a device is logical and unfixed; it typically is based either on physical network characteristics (the device is on a particular network segment) or on groupings that have no physical basis (the device is part of an AppleTalk zone). End systems require one network layer address for each network layer protocol that they support. (This assumes that the device has only one physical network connection.) Routers and other internetworking devices require one network layer address per physical network connection for each network layer protocol supported. For example, a router with three interfaces, each running AppleTalk, TCP/IP, and OSI, must have three network layer addresses for each interface. The router therefore has nine network layer addresses.
Media Access Control (MAC) Addresses
MAC addresses are 48 bits in length and are expressed as 12 hexadecimal digits. The first 6 hexadecimal digits, which are administered by the IEEE, identify the manufacturer or vendor and thus comprise the Organizationally Unique Identifier (OUI). The last 6 hexadecimal digits comprise the interface serial number or another value administered by the specific vendor. MAC addresses sometimes are called Burned-In Addresses (BIAs) because they are burned into read-only memory (ROM) and are copied into random-access memory (RAM) when the interface card initializes.
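Splitting a MAC address into its two halves is straightforward; this small helper accepts the common colon, dash, and dot notations (the sample address is invented):

```python
def split_mac(mac):
    """Split a MAC address into its IEEE-administered OUI (first
    6 hex digits) and the vendor-administered serial portion (last 6)."""
    digits = mac.replace(":", "").replace("-", "").replace(".", "").upper()
    if len(digits) != 12:
        raise ValueError("a MAC address is 12 hex digits (48 bits)")
    return digits[:6], digits[6:]

oui, serial = split_mac("00:1A:2B:3C:4D:5E")
print(oui, serial)   # 001A2B 3C4D5E
```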
Because internetworks generally use network addresses to route traffic around the network, there is a need to map network addresses to MAC addresses. When the network layer has determined the destination station's network address, it must forward the information over a physical network using a MAC address. Different protocol suites use different methods to perform this mapping, but the most popular is the Address Resolution Protocol (ARP).
Different protocol suites use different methods for determining the MAC address of a device. The following three methods are used most often. Address Resolution Protocol (ARP) maps network addresses to MAC addresses. The Hello protocol enables network devices to learn the MAC addresses of other network devices. MAC addresses either are embedded in the network layer address or are generated by an algorithm.
Address Resolution Protocol (ARP) is the method used in the TCP/IP suite. When a network device needs to send data to another device on the same network, it knows the source and destination network addresses for the data transfer. It must somehow map the destination address to a MAC address before forwarding the data. First, the sending station checks its ARP table to see if it has already resolved the destination station's MAC address. If not, it broadcasts an ARP request containing the destination's IP address. Every station on the network receives the broadcast and compares the embedded IP address to its own. Only the station with the matching IP address replies to the sending station with a packet containing its MAC address. The sending station then adds this information to its ARP table for future reference and proceeds to transfer the data.
When the destination device lies on a remote network, one beyond a router, the process is the same except that the sending station sends the ARP request for the MAC address of its default gateway. It then forwards the information to that device. The default gateway forwards the information over whatever networks are necessary to deliver the packet to the network on which the destination device resides. The router on the destination device's network then uses ARP to obtain the MAC address of the actual destination device and delivers the packet.
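A sketch of the sender-side decision just described, with the ARP broadcast replaced by a canned lookup table (all addresses are invented for the demo; Python's ipaddress module performs the on-link test):

```python
import ipaddress

def broadcast_arp_request(ip):
    # Stand-in for a real ARP broadcast; replies are canned for the demo.
    replies = {"192.168.1.7": "AA:BB:CC:00:00:07",
               "192.168.1.1": "AA:BB:CC:00:00:01"}
    return replies[ip]

def resolve(arp_table, dst_ip, local_net, gateway_ip):
    """ARP for the destination itself if it is on the local network,
    otherwise ARP for the default gateway, caching the answer."""
    on_link = ipaddress.ip_address(dst_ip) in ipaddress.ip_network(local_net)
    target = dst_ip if on_link else gateway_ip
    if target not in arp_table:              # cache miss: broadcast
        arp_table[target] = broadcast_arp_request(target)
    return arp_table[target]

table = {}
# Local destination: ARP resolves the destination's own MAC.
print(resolve(table, "192.168.1.7", "192.168.1.0/24", "192.168.1.1"))
# Remote destination: ARP resolves the default gateway's MAC instead.
print(resolve(table, "8.8.8.8", "192.168.1.0/24", "192.168.1.1"))
```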
The Hello protocol is a network layer protocol that enables network devices to identify one another and indicate that they are still functional. When a new end system powers up, for example, it broadcasts hello messages onto the network. Devices on the network return hello replies, and hello messages are also sent at specific intervals to indicate that devices are still functional. Network devices can learn the MAC addresses of other devices by examining Hello protocol packets.
Three protocols use predictable MAC addresses. In these protocol suites, MAC addresses are predictable because the network layer either embeds the MAC address in the network layer address or uses an algorithm to determine the MAC address. The three protocols are Xerox Network Systems (XNS), Novell Internetwork Packet Exchange (IPX), and DECnet Phase IV.
Data Link Layer Addresses
End systems generally have only one physical network connection and thus have only one data link address. Routers and other internetworking devices typically have multiple physical network connections and therefore have multiple data-link addresses.
Internetwork Addressing
- Data link layer addresses
- Media Access Control (MAC) addresses
- Network layer addresses
Connection-Oriented and Connectionless Internetwork Services
In general, transport protocols can be characterized as being either connection-oriented or connectionless. Connection-oriented services must first establish a connection with the desired service before passing any data. A connectionless service can send the data without any need to establish a connection first. In general, connection-oriented services provide some level of delivery guarantee, whereas connectionless services do not.
Connection-oriented service involves three phases: connection establishment, data transfer and connection termination.
During connection establishment, the end nodes may reserve resources for the connection. The end nodes also may negotiate and establish certain criteria for the transfer, such as a window size used in TCP connections. This resource reservation is one of the things exploited in some denial of service (DOS) attacks. An attacking system will send many requests for establishing a connection but then will never complete the connection. The attacked computer is then left with resources allocated for many never-completed connections. Then, when an end node tries to complete an actual connection, there are not enough resources for the valid connection.
The data transfer phase occurs when the actual data is transmitted over the connection. During data transfer, most connection-oriented services will monitor for lost packets and handle resending them. The protocol is generally also responsible for putting the packets in the right sequence before passing the data up the protocol stack. When the transfer of data is complete, the end nodes terminate the connection and release resources reserved for the connection.
Connection-oriented network services have more overhead than connectionless ones. Connection-oriented services must negotiate a connection, transfer data, and tear down the connection, whereas a connectionless transfer can simply send the data without the added overhead of creating and tearing down a connection. Each has its place in internetworks.
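The contrast shows up directly in the socket API: a UDP exchange needs no connection phase at all. The sketch below echoes one datagram over the loopback interface (the TCP equivalent would require listen/accept on the server and connect on the client before any data could flow):

```python
import socket
import threading

def udp_echo_once(sock):
    # Connectionless service: a single recvfrom/sendto, no handshake,
    # no teardown, and no delivery guarantee.
    data, addr = sock.recvfrom(1024)
    sock.sendto(data, addr)
    sock.close()

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                  # let the OS pick a free port
port = server.getsockname()[1]
threading.Thread(target=udp_echo_once, args=(server,)).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", port))   # no connect() phase
reply, _ = client.recvfrom(1024)
client.close()
print(reply)                                    # b'hello'
```

On the loopback interface the datagram is effectively never lost, which hides UDP's unreliability; over a real network the application would have to cope with loss itself.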
Internetworking Challenges
The challenge when connecting various systems is to support communication among disparate technologies. Different sites, for example, may use different types of media operating at varying speeds or may even include different types of systems that need to communicate.
Because companies rely heavily on data communication, internetworks must provide a certain level of reliability. This is an unpredictable world, so many large internetworks include redundancy to allow for communication even when problems occur.
Furthermore, network management must provide centralized support and troubleshooting capabilities in an internetwork. Configuration, security, performance, and other issues must be adequately addressed for the internetwork to function smoothly. Security within an internetwork is essential. Many people think of network security from the perspective of protecting the private network from outside attacks. However, it is just as important to protect the network from internal attacks, especially because most security breaches come from inside. Networks must also be secured so that the internal network cannot be used as a tool to attack other external sites.
Early in the year 2000, many major web sites were the victims of distributed denial of service (DDOS) attacks. These attacks were possible because a great number of private networks currently connected with the Internet were not properly secured. These private networks were used as tools for the attackers.
Because nothing in this world is stagnant, internetworks must be flexible enough to change with new demands.
Introduction and History of Internetworking
An internetwork is a collection of individual networks, connected by intermediate networking devices, that functions as a single large network. Internetworking refers to the industry, products, and procedures that meet the challenge of creating and administering internetworks. Many different kinds of network technologies can be interconnected by routers and other networking devices to create an internetwork.
History of Internetworking
Dead Locks
Solution for Dead Lock:
Choke Packets
u_new = a × u_old + (1 − a) × f

- f → instantaneous line utilization (0 or 1)
- a → a constant that determines how fast the IMP "forgets" recent history
As 'u' crosses a threshold, the output line enters a 'warning' state, and a 'choke packet' is sent to the source host. When the source host receives the choke packet, it is required to reduce the traffic to the destination by X percent. Because some packets are already in transit, further choke packets are ignored for some time. If choke packets still arrive after that interval, the traffic is reduced further.
Two threshold levels can be used. Above the first level the packets are sent and above the second level the packets are discarded. Queue length can also be monitored instead of line utilization.
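A minimal sketch of the running-average computation, with an illustrative warning threshold (the threshold value and smoothing constant are chosen for the demo, not taken from any standard):

```python
WARNING_THRESHOLD = 0.5    # illustrative value

def utilization(samples, a=0.5):
    """Apply u_new = a*u_old + (1 - a)*f for each sample, where f is
    the instantaneous line utilization (0 = idle, 1 = busy)."""
    u = 0.0
    history = []
    for f in samples:
        u = a * u + (1 - a) * f
        history.append(round(u, 4))
    return history

# A busy burst drives u up; idle ticks let the IMP "forget" it again.
history = utilization([1, 1, 1, 0, 0])
warnings = [u > WARNING_THRESHOLD for u in history]
print(history)    # [0.5, 0.75, 0.875, 0.4375, 0.2188]
print(warnings)   # [False, True, True, False, False]
```

A smaller 'a' makes the line enter and leave the warning state more quickly; a larger 'a' smooths out short bursts.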
Flow Control
- User processes (e.g., one outstanding message per virtual circuit).
- Hosts, irrespective of the number of virtual circuits open.
- Source and destination IMPs, without regard to hosts.
Isarithmic Congestion Control
However it has a few drawbacks,
- It does not guarantee that a given IMP will never be flooded with packets.
- Permits must be distributed uniformly to prevent long delays at some IMPs, which is difficult to guarantee in practice.
- If a permit is destroyed for some reason, it is lost forever and the network capacity is reduced.
Packet Discarding to control Congestion
Preallocation of Buffers to control Congestion
Congestion Control Algorithms
- Preallocating resources
- Packet discarding
- Isarithmic Congestion Control
- Flow control
- Choke Packets
- Dead Locks
Congestion
Consider a sender that sends a packet to a receiver with no free buffers. The sender retransmits the packet repeatedly until an acknowledgement arrives. Meanwhile, the sender cannot free its own buffer (queue), which eventually becomes saturated, and thus congestion arises.
Routing Network Protocols
Routed protocols are protocols that are routed over an internetwork. Examples of such protocols are the Internet Protocol (IP), DECnet, AppleTalk, Novell NetWare, OSI, Banyan VINES, and Xerox Network Systems (XNS). Routing protocols, on the other hand, are protocols that implement routing algorithms. Routing protocols are used by intermediate systems to build tables used in determining path selection of routed protocols. Examples of these protocols include Interior Gateway Routing Protocol (IGRP), Enhanced Interior Gateway Routing Protocol (Enhanced IGRP), Open Shortest Path First (OSPF), Exterior Gateway Protocol (EGP), Border Gateway Protocol (BGP), Intermediate System-to-Intermediate System (IS-IS), and Routing Information Protocol (RIP).
Routing Metrics
Routing algorithms have used many different metrics to determine the best route. Sophisticated routing algorithms can base route selection on multiple metrics, combining them in a single (hybrid) metric. The following metrics have been used:
- Path length
- Reliability
- Delay
- Bandwidth
- Load
- Communication cost.
Path length is the most common routing metric. Some routing protocols allow network administrators to assign costs to each network. In this case, path length is the sum of the costs associated with each link traversed. Other routing protocols define hop count, a metric that specifies the number of passes through internetworking products, such as routers, that a packet must take en route from a source to a destination.
Reliability, in the context of routing algorithms, refers to the dependability (usually described in terms of the bit-error rate) of each network link. Some network links might go down more often than others. After a network fails, certain network links might be repaired more easily or more quickly than other links. Any reliability factors can be taken into account in the assignment of the reliability ratings, which are arbitrary numeric values usually assigned to network links by network administrators.
Routing delay refers to the length of time required to move a packet from source to destination through the internetwork. Delay depends on many factors, including the bandwidth of intermediate network links, the port queues at each router along the way, network congestion on all intermediate network links, and the physical distance to be traveled. Because delay is a conglomeration of several important variables, it is a common and useful metric.
Bandwidth refers to the available traffic capacity of a link. All other things being equal, a 10-Mbps Ethernet link would be preferable to a 64-kbps leased line. Although bandwidth is a rating of the maximum attainable throughput on a link, routes through links with greater bandwidth do not necessarily provide better routes than routes through slower links. For example, if a faster link is busier, the actual time required to send a packet to the destination could be greater.
Load refers to the degree to which a network resource, such as a router, is busy. Load can be calculated in a variety of ways, including CPU utilization and packets processed per second. Monitoring these parameters on a continual basis can be resource-intensive itself.
Communication cost is another important metric, especially because some companies may not care about performance as much as they care about operating expenditures. Although the line delay may be longer, such companies will send packets over their own lines rather than through public lines that charge for usage time.
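A toy illustration of a hybrid metric: combine hop count and delay into one weighted cost (the weights and routes are invented; no real protocol uses exactly this formula). Weighting delay heavily makes the longer but faster route win:

```python
def best_route(routes, weights):
    """Pick the route with the lowest weighted (hybrid) cost."""
    def cost(route):
        return sum(weights[metric] * route[metric] for metric in weights)
    return min(routes, key=cost)

routes = [
    {"name": "A", "hops": 2, "delay_ms": 40},   # short but slow
    {"name": "B", "hops": 4, "delay_ms": 10},   # long but fast
]

# With delay weighted in: cost(A) = 2 + 20 = 22, cost(B) = 4 + 5 = 9.
print(best_route(routes, {"hops": 1, "delay_ms": 0.5})["name"])   # B
# With hop count alone, the shorter path wins instead.
print(best_route(routes, {"hops": 1, "delay_ms": 0})["name"])     # A
```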
Link-State Versus Distance Vector Routing Algorithm
Because they converge more quickly, link-state algorithms are somewhat less prone to routing loops than distance vector algorithms. On the other hand, link-state algorithms require more CPU power and memory than distance vector algorithms. Link-state algorithms, therefore, can be more expensive to implement and support. Link-state protocols are generally more scalable than distance vector protocols.
Intradomain Versus Interdomain Routing Algorithm
Host Intelligent Versus Router-Intelligent Routing Algorithm
Some routing algorithms assume that the source end node determines the entire route (usually called source routing). Other algorithms assume that hosts know nothing about routes; in these algorithms, routers determine the path through the internetwork based on their own calculations. In the first system, the hosts have the routing intelligence. In the latter system, routers have the routing intelligence.
Flat Versus Hierarchical Routing Algorithm
Routing systems often designate logical groups of nodes, called domains, autonomous systems or areas. In hierarchical systems, some routers in a domain can communicate with routers in other domains, while others can communicate only with routers within their domain. In very large networks, additional hierarchical levels may exist, with routers at the highest hierarchical level forming the routing backbone.
The primary advantage of hierarchical routing is that it mimics the organization of most companies and therefore supports their traffic patterns well. Most network communication occurs within small company groups (domains). Because intradomain routers need to know only about other routers within their domain, their routing algorithms can be simplified and, depending on the routing algorithm being used, routing update traffic can be reduced accordingly.
Single-Path Versus Multipath Routing Algorithm
Static Versus Dynamic Routing Algorithms
Because static routing systems cannot react to network changes, they generally are considered unsuitable for today's large, constantly changing networks. Most of the dominant routing algorithms today are dynamic routing algorithms, which adjust to changing network circumstances by analyzing incoming routing update messages. If the message indicates that a network change has occurred, the routing software recalculates routes and sends out new routing update messages. These messages permeate the network, stimulating routers to rerun their algorithms and change their routing tables accordingly.
A router of last resort (a router to which all unroutable packets are sent), for example, can be designated to act as a repository for all unroutable packets, ensuring that all messages are at least handled in some way.
Routing Algorithm Types
- Static versus dynamic routing algorithms
- Flat versus hierarchical routing algorithms
- Host-intelligent versus router-intelligent routing algorithms
- Intradomain versus interdomain routing algorithms
- Link-state versus distance vector routing algorithms
Routing Algorithms Design Goals
- Optimality
- Simplicity and low overhead
- Robustness and stability
- Rapid convergence
- Flexibility
Optimality refers to the capability of the routing algorithm to select the best route, which depends on the metrics weightings used to make the calculation. For example, one routing algorithm may use a number of hops and delays, but it may weigh delay more heavily in the calculation. Naturally, routing protocols must define their metric calculation algorithms strictly.
Routing algorithms also are designed to be as simple as possible. In other words, the routing algorithm must offer its functionality efficiently, with a minimum of software and utilization overhead. Efficiency is particularly important when the software implementing the routing algorithm must run on a computer with limited physical resources.
Routing algorithms must be robust which means that they should perform correctly in the face of unusual or unforeseen circumstances, such as hardware failures, high load conditions, and incorrect implementations. Because routers are located at network junctions points, they can cause considerable problems when they fail. The best routing algorithms are often those that have withstood the test of time and that have proven stable under a variety of network conditions.
In addition, routing algorithms must converge rapidly. Convergence is the process of agreement, by all routers, on optimal routes. When a network event causes routes either to go down or to become available, routers distribute routing update messages that permeate networks, stimulating recalculation of optimal routes and eventually causing all routers to agree on these routes. Routing algorithms that converge slowly can cause routing loops or network outages.
In a routing loop, if a packet arrives at Router 1 at time t1, Router 1 has already been updated and thus knows that the optimal route to the destination calls for Router 2 to be the next stop. Router 1 therefore forwards the packet to Router 2, but because this router has not yet been updated, it believes that the optimal next hop is Router 1. Router 2 therefore forwards the packet back to Router 1, and the packet continues to bounce back and forth between the two routers until Router 2 receives its routing update or until the packet has been switched the maximum number of times allowed.
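The bouncing behaviour and the maximum-switch safeguard can be shown with a toy simulation. The router names and the hop limit below are illustrative assumptions:

```python
# Toy simulation of the routing loop described above: Router 1 has been
# updated, Router 2 has not, so the packet bounces between them until a
# maximum switch (hop) count is reached. The limit of 8 is arbitrary.

MAX_HOPS = 8

next_hop = {
    "Router1": "Router2",  # updated: thinks Router2 is the next stop
    "Router2": "Router1",  # stale: still points back at Router1
}

def forward(start):
    """Return how many times the packet is switched before being dropped."""
    node, hops = start, 0
    while hops < MAX_HOPS:
        node = next_hop[node]  # bounce to the other router
        hops += 1
    return hops  # packet dropped after MAX_HOPS switches
```

Because each router's table points at the other, the packet is always switched the maximum number of times allowed and then discarded.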
Routing algorithms should also be flexible, which means that they should quickly and accurately adapt to a variety of network circumstances. Assume, for example, that a network segment has gone down. Many routing algorithms, upon becoming aware of the problem, will quickly select the next-best path for all routes that normally use that segment. Routing algorithms can be programmed to adapt to changes in network bandwidth, router queue size and network delay, among other variables.
Routing Algorithms
- Design Goals
- Algorithm Types
Switching as a Routing Component
Switching algorithms are relatively simple and are the same for most routing protocols. In most cases, a host determines that it must send a packet to another host. Having acquired a router's address by some means, the source host sends a packet addressed specifically to the router's physical (Media Access Control [MAC]-layer) address, but with the protocol (network-layer) address of the destination host.
The router examines the packet's destination protocol address and determines whether or not it knows how to forward the packet to the next hop. If the router does not know how to forward the packet, it typically drops the packet. If the router does know how to forward the packet, it changes the destination physical address to that of the next hop and transmits the packet.
The next hop may be the ultimate destination host. If not, the next hop is usually another router, which executes the same switching decision process. As the packet moves through the internetwork, its physical address changes, but its protocol address remains constant.
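This per-hop rewriting can be sketched as follows; the addresses and the dictionary packet representation are made-up illustrations of the idea, not a real packet format:

```python
# Sketch of the per-hop switching step: the physical (MAC) destination is
# rewritten at each hop, while the protocol (network-layer) address stays
# constant end to end. All addresses here are hypothetical.

def switch(packet, next_hop_mac):
    """Rewrite the destination MAC for the next hop; keep the IP intact."""
    return {**packet, "dst_mac": next_hop_mac}

pkt = {"dst_ip": "10.0.9.7", "dst_mac": "aa:aa:aa:aa:aa:aa"}
pkt = switch(pkt, "bb:bb:bb:bb:bb:bb")  # forwarded by the first router
pkt = switch(pkt, "cc:cc:cc:cc:cc:cc")  # forwarded by the second router
# dst_mac has changed at every hop; dst_ip has never changed.
```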
The International Organization for Standardization (ISO) has developed a hierarchical terminology that is useful in describing the switching process. Using this terminology, network devices without the capability to forward packets between subnetworks are called End Systems (ESs), whereas network devices with this capability are called Intermediate Systems (ISs). ISs are further divided into those that can communicate within routing domains (intradomain ISs) and those that can communicate both within and between routing domains (interdomain ISs). A routing domain generally is considered a portion of an internetwork under common administrative authority that is regulated by a particular set of administrative guidelines. Routing domains are also called autonomous systems. With certain protocols, routing domains can be divided into routing areas, but intradomain routing protocols are still used for switching both within and between areas.
Path Determination as a Routing Component
Routing protocols use metrics to evaluate which path will be the best for a packet to travel. A metric is a standard of measurement, such as path bandwidth, that is used by routing algorithms to determine the optimal path to a destination. To aid the process of path determination, routing algorithms initialize and maintain routing tables, which contain route information. Route information varies depending on the routing algorithm used.
Routing algorithms fill routing tables with a variety of information. Destination/next hop associations tell a router that a particular destination can be reached optimally by sending the packet to a particular router representing the "next hop" on the way to the final destination. When a router receives an incoming packet, it checks the destination address and attempts to associate this address with a next hop.
Routing tables also can contain other information, such as data about the desirability of a path. Routers compare metrics to determine optimal routes, and these metrics differ depending on the design of the routing algorithm used.
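The destination/next-hop association plus a desirability metric can be sketched together. The network, router names, and metric values below are hypothetical:

```python
# Sketch of a routing table holding destination/next-hop associations
# together with a metric; the route with the lowest metric is considered
# optimal. All entries here are made up for illustration.

routes = {
    # destination: list of (next hop, metric) candidates
    "10.0.5.0/24": [("RouterA", 10), ("RouterB", 4)],
}

def best_next_hop(destination):
    """Compare metrics for all known routes to a destination and return
    the next hop on the lowest-cost (optimal) route."""
    candidates = routes[destination]
    return min(candidates, key=lambda entry: entry[1])[0]
```

Here RouterB is chosen because its route's metric (4) is lower than RouterA's (10).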
Routers communicate with one another and maintain their routing tables through the transmission of a variety of messages. The routing update message is one such message that generally consists of all or a portion of a routing table. By analyzing routing updates from all other routers, a router can build a detailed picture of network topology. A link-state advertisement, another example of a message sent between routers, informs other routers of the state of the sender's links. Link information also can be used to build a complete picture of network topology to enable routers to determine optimal routes to network destinations.
Routing Components
Routing involves two basic activities: determining optimal paths and transporting information groups (typically called packets) through an internetwork. In the context of the routing process, the latter of these is referred to as packet switching. Although packet switching is relatively straightforward, path determination can be very complex.
Routing
Related Topics:
- Routing Components
- Routing Algorithms
- Design Goals
- Algorithm Types
- Static Versus Dynamic
- Single-Path Versus Multipath
- Flat Versus Hierarchical
- Host-Intelligent Versus Router-Intelligent
- Intradomain Versus Interdomain
- Link-State Versus Distance Vector
- Routing Metrics
- Network Routing Protocols
Wireless LANs
Wireless networks are great for allowing laptop computers or remote computers to connect to the LAN. Wireless networks are also beneficial in older buildings where it may be difficult or impossible to install cables.
The two most common types of infrared communications used in schools are line-of-sight and scattered broadcast.
Line-of-sight communication means that there must be an unblocked direct line between the workstation and the transceiver. If a person walks within the line-of-sight while there is a transmission, the information would need to be sent again. This kind of obstruction can slow down the wireless network.
Scattered infrared communication is a broadcast of infrared transmissions sent out in multiple directions that bounces off walls and ceilings until it eventually hits the receiver. Networking communications with lasers are virtually the same as line-of-sight infrared networks.
Wireless LANs have several disadvantages. They are very expensive, provide poor security and are susceptible to interference from lights and electronic devices. They are also slower than LANs using cabling.
LAN Transmission Methods
In each type of transmission, a single packet is sent to one or more nodes.
In a Unicast transmission, a single packet is sent from the source to a destination on a network. First, the source node addresses the packet by using the address of the destination node. The packet is then sent onto the network and, finally, the network passes the packet to its destination.
A Multicast transmission consists of a single data packet that is copied and sent to a specific subset of nodes on the network. First, the source node addresses the packet by using a multicast address. The packet is then sent into the network, which makes copies of the packet and sends a copy to each node that is a part of the multicast address.
A Broadcast transmission consists of a single data packet that is copied and sent to all nodes on the network. In this type of transmission, the source node addresses the packet by using the broadcast address. The packet is then sent onto the network, which makes copies of the packet and sends a copy to every node on the network.
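The three transmission methods differ only in which set of nodes receives a copy, which can be sketched directly. The node names and multicast group below are illustrative assumptions:

```python
# Sketch of the three LAN transmission methods as delivery sets: unicast
# reaches one node, multicast reaches a subscribed subset, and broadcast
# reaches every node. Node names and group membership are hypothetical.

nodes = {"A", "B", "C", "D"}
multicast_groups = {"group1": {"B", "D"}}  # nodes joined to a multicast address

def deliver(method, target=None):
    """Return the set of nodes that receive a copy of the packet."""
    if method == "unicast":
        return {target}                  # exactly one destination node
    if method == "multicast":
        return multicast_groups[target]  # the subscribed subset of nodes
    if method == "broadcast":
        return set(nodes)                # every node on the network
    raise ValueError("unknown transmission method: " + str(method))
```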
LAN Media-Access Methods
Media contention occurs when two or more network devices have data to send at the same time. Because multiple devices cannot talk on the network simultaneously, some type of method must be used to allow one device access to the network media at a time. This is done in two main ways:
- Carrier Sense Multiple Access Collision Detect (CSMA/CD) - In networks using CSMA/CD technology such as Ethernet, network devices contend for the network media. When a device has data to send, it first listens to see if any other device is currently using the network. If not, it starts sending its data. After finishing its transmission, it listens again to see if a collision occurred. A collision occurs when two devices send data simultaneously. When a collision happens, each device waits a random length of time before resending its data. In most cases, a collision will not occur again between the two devices. Because of this type of network contention, the busier a network becomes, the more collisions occur. This is why the performance of Ethernet degrades rapidly as the number of devices on a single network increases.
- Token Passing - In token-passing networks such as Token Ring and FDDI, a special network packet called a token is passed around the network from device to device. When a device has data to send, it must wait until it has the token, and only then can it send its data. When the data transmission is complete, the token is released so that other devices may use the network media. The main advantage of token-passing networks is that they are deterministic. In other words, it is easy to calculate the maximum time that will pass before any device can transmit. This makes token-passing networks ideal for some real-time environments, such as factories, where machinery must be capable of communicating at a determinable interval.
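The random wait after a collision in CSMA/CD can be sketched as follows. This is a simplified illustration in the spirit of Ethernet's binary exponential backoff; the slot-time framing and the cap of 10 follow the classic Ethernet scheme, but the function itself is a toy, not a real MAC implementation:

```python
import random

# Toy sketch of CSMA/CD backoff: after a collision, each device waits a
# random number of slot times before retransmitting, so a repeat collision
# between the same two devices is unlikely. The doubling range follows
# binary exponential backoff; everything else is illustrative.

def backoff_slots(attempt):
    """Pick a random wait (in slot times) for the given retransmission
    attempt; the range doubles with each failed attempt, capped at 2**10."""
    return random.randint(0, 2 ** min(attempt, 10) - 1)

# Two devices that just collided choose independent waits; they collide
# again only if both happen to draw the same slot.
a, b = backoff_slots(1), backoff_slots(1)
```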
LAN Topologies
- Bus topology is a linear LAN architecture in which transmissions from network stations propagate the length of the medium and are received by all other stations.
- Ring topology is a LAN architecture that consists of a series of devices connected to one another by unidirectional transmission links to form a single closed loop. Both Token Ring/IEEE 802.5 and FDDI networks implement a ring topology.
- Star topology is a LAN architecture in which the endpoints on a network are connected to a common central hub or switch, by dedicated links. Logical bus and ring topologies are often implemented physically in a star topology.
- Tree topology is a LAN architecture that is identical to the bus topology, except that branches with multiple nodes are possible in this case.
LAN as a Management tool
As a management tool, a LAN can:
- Increase system performance through the distribution of tasks and equipment, and improve the availability of computer resources, since tasks can be assigned to several machines.
- Increase system reliability. Crucial processes can be duplicated and/or divided so that, on the failure of one machine, another machine can quickly take up the load, minimizing the adverse effects of the loss of any one system.
- Help regain administrative control of equipment.
- Improve efficiency, with more information accessible at the workstation for making better and more timely decisions. A LAN can have a dramatic impact on efficiency where the data is dynamic.
- Centralize information efficiently through the LAN server concept, allowing control over who uses the network and for what purpose.
- Provide an extensive security system.
- Offer configuration flexibility: PCs and other resources can be added as and when needed.
The advantages of local area networks can be enumerated:
- Local area networks are the best means to provide a cost-effective multiuser computer environment.
- A LAN can fit any site requirements.
- It can be tailored to suit any type of application.
- Any number of users can be accommodated.
- It is flexible and growth-oriented.
- It offers existing single users a familiar Disk Operating System (DOS) environment.
- It can use existing software if the original language supports a multi-user environment.
- It offers electronic mail as an in-built facility.
- It allows sharing of mass central storage and printers.
- Data transfer rates are above 10 Mbps.
- It allows file/record locking.
- It provides foolproof security system against illegal access to data.
- It provides data integrity.
LAN as a Communication Resource
A LAN facilitates communication through its powerful Electronic Mail System (EMS) among authorized network users across time and distance. The network provides fast responses and transmits urgent notes, messages and circulars.
As a communication resource, a LAN can:
- Facilitate communication within the organization by supplying an additional channel for coordinating the work of various groups, exchanging data, sending messages and sharing information.
- Provide in-house, computer-to-computer communication at high speed.
- Provide a method of accessing remote resources, thereby facilitating communication with the world outside the immediate organization.
LAN as a Productivity tool
Productivity depends on ensuring that people have timely access to the equipment and information required to perform their jobs. A LAN increases productivity because key individuals in the organization are able to access and share databases, documents and expensive peripherals.
As a productivity tool, a LAN can:
- Enable wider distribution of information and the technologies needed to deal with it.
- Improve information retrieval, processing, storage and dissemination through a distributed database.
- Minimize or, where possible, eliminate redundant and repetitive tasks.
- Improve efficiency by facilitating the unification of systems and procedures.
- Provide graphic capabilities and other specialized applications that are not cost-effective on stand-alone micros.
LAN as a Resource Sharing Tool
A LAN eliminates the possibility of overspending by allowing workstations to share peripherals such as printers, plotters, digitizers, tape drives and hard disks. This lowers the overall cost of data processing and provides for efficient and flexible communication. By providing a facility through which a wide variety of computer equipment can be shared by many people, the local area network presents a cost-effective solution. In a LAN, the shared resources need not be just hardware; software and information also may be shared.
As a resource sharing tool, a LAN can:
- Permit sharing of expensive hardware.
- Facilitate sharing of complex programs and the information that they generate and manage.
- Aid in the integration of all aspects of information processing, particularly transforming a group of individual, not very powerful microcomputers into a powerful distributed processing system.
Characteristics of LAN
A LAN is characterized by the following:
LAN Cable
A LAN may use RG-62 coaxial cable. This is a relatively superior cable that allows for baseband transmission and is capable of transferring up to 10 Mbps. Special end connectors are used to interface with a network interface card or hub.
The advantages of coaxial cable are:
- Wider band width
- Interference resistance
- High conductivity without distortion
- Longer distance covered.
Active Hub and Passive Hub
Active Hub
An active hub is a powered distribution point with active devices that drive distant nodes up to one kilometer away. Active hubs can be cascaded, providing 8 connections to which passive hubs, file servers or other active hubs can be connected. The maximum distance covered by an active hub is about 2,000 ft.
Passive Hub
A passive hub is a distribution point that does not use power or active devices to connect up to 4 nodes within a very short distance. The maximum distance covered by a passive hub is about 300 ft.