
Bridges

Bridges were originally designed to interconnect Ethernet segments. Most bridges today support filtering and forwarding, as well as the Spanning Tree Algorithm. The IEEE 802.1D specification is the standard for bridges.

During initialization, the bridge learns about the network and the routes. Packets are passed on to other network segments based on the MAC layer. Each time the bridge is presented with a frame, the source address is stored. The bridge builds up a table which identifies the segment on which each device is located. This internal table is then used to determine the segment to which incoming frames should be forwarded. The size of this table is important, especially if the network has a large number of workstations/servers.
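
The learning-and-forwarding behaviour described above can be sketched as a small table keyed by MAC address. This is an illustrative model only: the frame fields and segment numbers are invented, and a real bridge also ages out stale entries.

```python
# Minimal sketch of a learning bridge's forwarding table. Frames are
# represented as (source MAC, destination MAC) plus the segment the
# frame arrived on; segments are plain identifiers.

class LearningBridge:
    def __init__(self):
        self.table = {}  # MAC address -> segment it was last seen on

    def receive(self, src_mac, dst_mac, in_segment, all_segments):
        """Learn the source address, then decide where to forward."""
        # Record which segment the source address was seen on.
        self.table[src_mac] = in_segment
        # If the destination is known, forward to its segment only;
        # otherwise flood to every segment except the arrival segment.
        if dst_mac in self.table:
            out = [self.table[dst_mac]]
        else:
            out = [s for s in all_segments if s != in_segment]
        # Never send a frame back onto the segment it arrived from.
        return [s for s in out if s != in_segment]
```

Note how the first frame to an unknown destination is flooded, but once the destination has transmitted anything, the bridge forwards to a single segment and traffic on the other segments is reduced.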

The advantages of bridges are:

  • they increase the number of attached workstations and network segments.
  • since bridges buffer frames, it is possible to interconnect segments which use different MAC protocols.
  • since bridges work at the MAC layer, they are transparent to higher-level protocols.
  • by subdividing the LAN into smaller segments, they increase overall reliability and make the network easier to maintain.

The disadvantages of bridges are:

  • the buffering of frames introduces network delays.
  • bridges may overload during periods of high traffic.
  • bridges which combine different MAC protocols require the frames to be modified before transmission onto the new segment. This causes delays.

Transparent bridges (defined by the spanning tree standard, IEEE 802.1D) make all routing decisions themselves. The bridge is said to be transparent (invisible) to the workstations. The bridge automatically initializes itself and configures its own routing information after it has been enabled.

Bridges are ideally used in environments where there are a number of well-defined workgroups, each operating more or less independently of the others, with occasional access to servers outside of their localized workgroup or network segment. Bridges do not offer performance improvements when used in diverse or scattered workgroups, where the majority of access occurs outside of the local segment.

Two separate network segments can be connected via a bridge. Note that each segment must have a unique network address number in order for the bridge to be able to forward packets from one segment to the other.

Ideally, if workstations on network segment A needed access to a server, the best place to locate that server is on the same segment as the workstations, as this minimizes traffic on the other segment and avoids the delay incurred by the bridge.

Repeaters

Repeaters connect multiple network segments together. They amplify the incoming signal received from one segment and send it on to all other attached segments. This allows the distance limitations of network cabling to be extended. There are limits on the number of repeaters which can be used, and each repeater counts as a single node in the maximum node count associated with the Ethernet standard.

Repeaters also allow isolation of segments in the event of failures or fault conditions. Disconnecting one side of a repeater effectively isolates the associated segments from the network. Using repeaters simply extends the network's distance limitations.

It should be noted that the network number assigned to the main network segment and the network number assigned to the other side of the repeater are the same. In addition, the traffic generated on one segment is propagated onto the other segment. This causes a rise in the total amount of traffic, so if the network segments are already heavily loaded, it is not a good idea to use a repeater.

Network Management Components

Large networks are made by combining several individual network segments together, using appropriate devices like routers and/or bridges. When network segments are combined into a single large network, paths exist between the individual network segments. These paths are called routes, and devices keep tables which define how to reach particular destinations. When a packet arrives, the router/bridge will look at the destination address of the packet and determine which network segment the packet is to be transmitted on in order to get to its destination.
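
The table lookup described above can be sketched as a mapping from destination networks to outgoing segments. The network numbers and segment names here are invented for illustration, and the first-match loop stands in for the longest-prefix match a real router would perform.

```python
# Hypothetical forwarding table: destination network -> outgoing segment.
import ipaddress

ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "segment-A",
    ipaddress.ip_network("10.2.0.0/16"): "segment-B",
}

def forward(dst_ip, routes=ROUTES, default="upstream-router"):
    """Pick the segment whose network contains the destination address."""
    addr = ipaddress.ip_address(dst_ip)
    for network, segment in routes.items():
        if addr in network:
            return segment
    return default  # no matching route: hand off to a default route
```

Because the example networks are disjoint, a simple first match suffices; with overlapping prefixes the entries would have to be checked from most specific to least specific.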

Approaches to Network Management

Managing computer networks can be a reactive process, set in motion by one or more indicators of an existing problem, or it can be a predictive process, initiated by indicators of the potential for problems in the near future. It is better to predict and avoid network faults (when possible) than to detect and repair them once they occur. This approach is called network steering because the network manager tries to steer the network away from potentially dangerous interactions. Network steering distributes the network manager's work over time, freeing resources for unpredictable faults when they arise.

Consider a trap message generated in response to some feature of a managed object's state exceeding a threshold, such as the number of packets dropped by a router due to a lack of buffer space. It may be that values of that feature as they change over time are correlated with other features of the same object's state or with features of the state of other objects in the network. If one could find such correlations and use them to predict future states of managed objects, then it would be possible to intervene before the threshold is exceeded and avoid the pathological state that would generate a trap. Note that predicted and existing faults are handled in much the same way. The isolation, diagnosis, and remediation phases following prediction or detection of a fault are the same, and the same mechanisms can be used in both cases. The advantage afforded by a predictive component is that problems are solved before they reach significant levels, thereby keeping the operation and performance of the network more stable.
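
A toy version of this predictive idea is to fit a straight line to recent samples of a monitored counter (say, dropped packets per polling interval) and extrapolate forward to estimate how soon the threshold will be crossed. This is a sketch only, not a real management algorithm; production systems use far richer models.

```python
def intervals_until_threshold(samples, threshold):
    """Least-squares slope over the samples, extrapolated forward.

    Returns the estimated number of polling intervals until the counter
    reaches the threshold, or None if it is not trending upward.
    """
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    slope /= sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # flat or falling: no intervention needed yet
    return (threshold - samples[-1]) / slope
```

A manager polling this estimate could schedule an intervention (for example, enlarging buffer space) whenever the predicted crossing falls within the next few intervals, rather than waiting for the trap itself.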

Introduction to Network Management

Network Management is a service that employs a variety of tools, applications, and devices to assist human network managers in monitoring and maintaining networks. It involves a distributed database, autopolling of network devices, and high-end workstations generating real-time graphical views of network topology changes and traffic.

Mail Server

When mail is sent, it has to reach its proper destination. For the mail to reach that destination, the destination site has to be running a program called a mail server that listens for requests to deliver mail. The mail server does one of the following:
  • Accept the message and store it in the expected mailbox.
  • Forward the message somewhere else, usually to a place specified by the owner of the mailbox, but possibly to a mailing list.
  • Reject the message as undeliverable, either because the mailbox does not exist, because the mailbox is full, or because the server is experiencing some temporary problem.
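
The three outcomes above can be sketched as a single decision function. Everything here is illustrative: mailboxes are plain lists in a dict, and the quota is an invented number standing in for a real server's storage limit.

```python
MAX_MESSAGES = 100  # hypothetical per-mailbox quota

def deliver(mailboxes, forwards, mailbox, message):
    """Return 'stored', 'forwarded', or 'rejected' for one message."""
    if mailbox in forwards:
        # Owner configured forwarding: hand the message to the target
        # mailbox (which could itself be a mailing-list expansion).
        deliver(mailboxes, forwards, forwards[mailbox], message)
        return "forwarded"
    if mailbox not in mailboxes:
        return "rejected"          # mailbox does not exist
    if len(mailboxes[mailbox]) >= MAX_MESSAGES:
        return "rejected"          # mailbox is full
    mailboxes[mailbox].append(message)
    return "stored"                # accepted and stored
```

A real server would also distinguish permanent failures (unknown mailbox) from temporary ones (full mailbox, server trouble), so the sender knows whether to retry.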

There are basically two kinds of mail servers, based on the protocol they use:

  • Basic Simple Mail Transfer Protocol (SMTP) delivery. The server translates the mailbox name into a local file name and appends the message to the file.
  • Post Office Protocol (POP) delivery. The server still stores messages somewhere, in a place derived from the mailbox name. However, it allows mail-retrieving connections from other Internet sites. The mail agent on the recipient's site knows to open an Internet connection to the POP server, request the contents of particular messages and (optionally) remove messages from the server's mailbox.
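
The POP retrieval flow just described (list the messages, fetch particular ones, optionally delete them) can be mirrored by a small in-memory model. A real client would use something like Python's standard poplib module over a network connection; this sketch only captures the shape of the session.

```python
class PopMailbox:
    """Toy stand-in for a POP server's view of one mailbox."""

    def __init__(self, messages):
        # POP identifies messages by number within the session.
        self.messages = dict(enumerate(messages, start=1))

    def list_ids(self):
        """LIST: which message numbers are available."""
        return sorted(self.messages)

    def retrieve(self, msg_id):
        """RETR: fetch the contents of one message."""
        return self.messages[msg_id]

    def delete(self, msg_id):
        """DELE: remove a message from the server's mailbox."""
        del self.messages[msg_id]
```

The client may retrieve without deleting, which is what lets the same mailbox be read later from a different machine.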

POP service is newer than SMTP service; it has the large advantage that the mail can be accessed from anywhere on the Internet, without logging into the server.

Electronic Mail

Electronic mail or e-mail involves transmission of messages over a communication network. The messages can be notes or files. Some electronic-mail systems are confined to a single computer or network, but others have gateways to other computer systems, enabling users to send electronic mail anywhere in the world. Companies that are fully computerized make extensive use of e-mail because it is fast, flexible and reliable.

Electronic communication, because of its speed and broadcasting ability, is fundamentally different from paper-based communication. Because the turnaround time can be so fast, email is more conversational than traditional paper-based media.

Most e-mail systems include a rudimentary text editor for composing and editing messages. A message is sent to the recipient by specifying the recipient's address. An address is a text string of the form mailbox@site. The second part is a string identifying a particular site on the Internet; the first part is a string identifying a particular mailbox at that site. For example, consider an address like abc_def@yahoo.com. In this case, abc_def is the mailbox (username) and yahoo.com is the site.
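
Splitting an address into its mailbox and site parts is a single partition on the "@" character. This helper is illustrative only and performs no real address validation.

```python
def parse_address(address):
    """Split 'mailbox@site' into its two parts."""
    mailbox, sep, site = address.partition("@")
    if not sep or not mailbox or not site:
        raise ValueError("expected mailbox@site: %r" % address)
    return mailbox, site
```

For example, parse_address("abc_def@yahoo.com") yields the pair ("abc_def", "yahoo.com").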

Every Internet site has an Internet Protocol (IP) address, specified as four decimal numbers (each in the range 0-255) separated by dots. The transport service sends the site name string to a Domain Name Server (DNS), which translates the name into an IP address. The transport service then opens an Internet connection to the given IP address and asks the destination site to deliver mail to the given mailbox.
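
The name-to-address step can be seen directly with the standard socket module: a site name string goes in, and a dotted-quad IP address string comes out. The result for any given name depends on the local resolver configuration.

```python
import socket

def resolve(site_name):
    """Ask the local resolver (and ultimately DNS) for the site's IP."""
    return socket.gethostbyname(site_name)
```

For instance, resolve("localhost") typically returns "127.0.0.1" on most systems, while resolving a public site name requires network access.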


A message can be sent to several users at once. This is called broadcasting. The sent messages are stored in the electronic mailboxes of the recipients. The recipient has to check the mailbox to see if mail has been received, and can decide whether to save it or remove it from the mailbox.

Different e-mail systems use different formats, but there are some emerging standards that make it possible for users on all systems to exchange messages. An important e-mail standard is MAPI. The CCITT standards organization has developed the X.400 standard, which attempts to provide a universal way of addressing messages.

Network Security

Computer security is primarily concerned with controlling how data are shared for reading and modifying. Often it becomes necessary that people inside and outside of the organization need to share information. An examination of the potential problems that can arise on a poorly secured system will help in understanding the need for security. Three basic kinds of malicious behavior are:

  1. Denial of service: This occurs when a hostile entity uses a critical service of the computer system in such a way that no service, or severely degraded service, is available to others. Denial of service is a difficult attack to detect and protect against. An example is an Internet attack in which an attacker requests a large number of connections to an Internet server. By deliberately failing to complete the connection protocol, the attacker can leave a number of the connections half open. Most systems can handle only a small number of half-open connections before they are no longer able to communicate with other systems on the net. The attack completely disables the Internet server.
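
The half-open exhaustion described above can be modeled with a few lines: the server has a small backlog of pending connections, and once an attacker fills it, legitimate requests are refused. The backlog size here is an invented illustration; real values and timeout behavior vary by system.

```python
class Server:
    """Toy model of a connection backlog under a half-open flood."""

    def __init__(self, backlog=5):
        self.backlog = backlog   # hypothetical half-open connection limit
        self.half_open = 0

    def syn(self):
        """A connection request arrives; accept only if a slot remains."""
        if self.half_open >= self.backlog:
            return False  # table full: this client is refused service
        self.half_open += 1
        return True

    def complete(self):
        """Handshake finishes (or times out), freeing a half-open slot."""
        self.half_open -= 1
```

An attacker who sends requests but never completes them keeps half_open pinned at the limit, so every legitimate request returns False; real systems mitigate this with timeouts and related defenses.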

  2. Compromising the integrity of the information: Most people consider that the information stored on the computer system is accurate. If the information loses its accuracy, the consequences can be extreme. For example, if competitors hacked in to a company's data base and deleted customer records, a significant loss of revenues could result. Users must be able to trust that data are accurate and complete.

  3. Disclosure of information: Probably the most serious attack is disclosure of information. If the information taken off a system is important to the success of an organization, it has considerable value to a competitor. Corporate espionage is a real threat, especially from foreign companies, where legal reprisals are much more difficult to enforce. Insiders also pose a significant threat. Limiting user access to the information needed to perform specific jobs increases data security dramatically.

However, most secure systems are difficult to work with and require extra development time. Networks connect large numbers of users to share information and resources, but network security depends heavily on the cooperation of each user. Security is only as strong as the weakest link.

Organizations should have a security program to ensure that each automated system has a level of security commensurate with the risk and magnitude of the harm that could result from the loss, misuse, disclosure or modification of the information contained in the system. Each system's level of security must protect the confidentiality, integrity and availability of the information. Specifically, this requires that the organization have appropriate technical, personnel, administrative, environmental and telecommunications safeguards; a cost-effective security approach; and adequate resources to support critical functions and to provide continuity of operation in the event of a disaster.

Companies continue to flock to the Internet in ever-increasing numbers, despite the fact that the overall and underlying environment is not secure. To further complicate the matter, vendors, standards bodies, security organizations and practitioners cannot agree on a standard, compliant and technically available approach. As a group of investors concerned with the success of the Internet for business purposes, it is critical to pool the collective resources and work together to quickly establish and support interoperable security standards; open security interfaces to existing security products and security control mechanisms within other program products; and hardware and software solutions within heterogeneous operating systems which will facilitate smooth transitions.

Having the tools and solutions available within the marketplace is a beginning, but strategies and migration paths are also needed to accommodate and integrate Internet, intranet and World Wide Web (WWW) technologies into the existing IT infrastructure. While there are always emerging challenges, introductions of newer technologies, and customers with challenging and perplexing problems to solve, this approach should help maximize the effectiveness of existing security investments, while bridging the gap to the long-awaited and always sought-after perfect solution.

Security solutions are slowly emerging, but interoperability, universally accepted security standards, application programming interfaces (APIs) for security, vendor support and cooperation, and multi-platform security products are still problematic. Where there are products and solutions, they tend to have niche applicability, be vendor-centric, or address only one of a larger set of security problems and requirements. For the most part, no single vendor or even software/vendor consortium has addressed the overall security problem within "open" systems and public networks. This indicates that the problem is very large.

It is important to keep in mind, as with any new and emerging technology, Internet, intranet and WWW technologies do not necessarily bring new and unique security concerns, risks and vulnerabilities, but rather introduce new problems, challenges and approaches within the existing security infrastructure.

Security requirements, goals and objectives remain the same, while the application of security, control mechanisms and solution sets are different and require the involvement and cooperation of multi-disciplined technical and functional area teams. As in any distributed environment, there are more players and it is more difficult to find or interpret the overall requirements, or even talk to anyone who sees or understands the big picture. More people are involved than ever before, emphasizing the need to communicate both strategic and tactical security plans broadly and effectively throughout the entire enterprise. The security challenges and the resultant problems are larger and more complex in this environment. Management must be kept up to date and must thoroughly understand the overall risk to the corporation's information assets posed by the implementation of, or decisions to implement, new technologies. They must also understand, fund and support the influx of resources required to manage the security environment.