Thursday, July 28, 2016

Network Switching

Switches can be a valuable asset to networking. Overall, they can increase the capacity and speed of your network. However, switching should not be seen as a cure-all for network issues. Before incorporating network switching, you must first ask yourself two important questions: First, how can you tell if your network will benefit from switching? Second, how do you add switches to your network design to provide the most benefit?
This tutorial is written to answer these questions. Along the way, we’ll describe how switches work, and how they can both harm and benefit your networking strategy. We’ll also discuss different network types, so you can profile your network and gauge the potential benefit of network switching for your environment.

What is a Switch?

Switches occupy the same place in the network as hubs. Unlike hubs, switches examine each packet and process it accordingly rather than simply repeating the signal to all ports. Switches map the Ethernet addresses of the nodes residing on each network segment and then allow only the necessary traffic to pass through the switch. When a packet is received by the switch, the switch examines the destination and source hardware addresses and compares them to a table of network segments and addresses. If the segments are the same, the packet is dropped or “filtered”; if the segments are different, then the packet is “forwarded” to the proper segment. Additionally, switches prevent bad or misaligned packets from spreading by not forwarding them.
Filtering packets and regenerating forwarded packets enables switching technology to split a network into separate collision domains. The regeneration of packets allows for greater distances and more nodes to be used in the total network design, and dramatically lowers the overall collision rates. In switched networks, each segment is an independent collision domain. This also allows for parallelism, meaning up to one-half of the computers connected to a switch can send data at the same time. In shared networks all nodes reside in a single shared collision domain.
Easy to install, most switches are self learning. They determine the Ethernet addresses in use on each segment, building a table as packets are passed through the switch. This “plug and play” element makes switches an attractive alternative to hubs.
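As a rough sketch (class and method names are hypothetical), the learn-then-filter-or-forward decision described above can be modeled like this:

```python
# Minimal sketch of a self-learning switch's forward/filter decision.
# The address table maps MAC address -> port, built as packets arrive.

class LearningSwitch:
    def __init__(self, num_ports):
        self.table = {}              # MAC address -> port number
        self.num_ports = num_ports

    def handle(self, src, dst, in_port):
        """Return the list of ports the packet is sent out on."""
        self.table[src] = in_port    # learn the source's segment
        out_port = self.table.get(dst)
        if out_port is None:         # unknown destination: flood
            return [p for p in range(self.num_ports) if p != in_port]
        if out_port == in_port:      # same segment: filter (drop)
            return []
        return [out_port]            # different segment: forward
```

Until a destination address is learned, the switch must flood the packet to all other ports; once both ends of a conversation are in the table, same-segment traffic is filtered and cross-segment traffic goes out exactly one port.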
Switches can connect different network types (such as Ethernet and Fast Ethernet) or networks of the same type. Many switches today offer high-speed links, like Fast Ethernet, which can be used to link the switches together or to give added bandwidth to important servers that get a lot of traffic. A network composed of a number of switches linked together via these fast uplinks is called a “collapsed backbone” network.
Dedicating ports on switches to individual nodes is another way to speed access for critical computers. Servers and power users can take advantage of a full segment for one node, so some networks connect high traffic nodes to a dedicated switch port.
Full duplex is another method to increase bandwidth to dedicated workstations or servers. To use full duplex, both network interface cards used in the server or workstation and the switch must support full duplex operation. Full duplex doubles the potential bandwidth on that link.

Network Congestion

As more users are added to a shared network, or as applications requiring more data are added, performance deteriorates. This is because all users on a shared network compete for the Ethernet bus. A moderately loaded 10 Mbps Ethernet network can sustain utilization of about 35 percent and throughput in the neighborhood of 2.5 Mbps after accounting for packet overhead, inter-packet gaps and collisions. A moderately loaded Fast Ethernet or Gigabit Ethernet network sustains roughly 25 Mbps or 250 Mbps of real data under the same circumstances. With shared Ethernet and Fast Ethernet, the likelihood of collisions increases as more nodes and/or more traffic are added to the shared collision domain.
Ethernet itself is a shared media, so there are rules for sending packets to avoid conflicts and protect data integrity. Nodes on an Ethernet network send packets when they determine the network is not in use. It is possible that two nodes at different locations could try to send data at the same time. When both PCs are transferring a packet to the network at the same time, a collision will result. Both packets are retransmitted, adding to the traffic problem. Minimizing collisions is a crucial element in the design and operation of networks. Increased collisions are often the result of too many users or too much traffic on the network, which results in a great deal of contention for network bandwidth. This can slow the performance of the network from the user’s point of view. Segmenting, where a network is divided into different pieces joined together logically with switches or routers, reduces congestion in an overcrowded network by eliminating the shared collision domain.
Collision rates measure the percentage of packets that are collisions. Some collisions are inevitable; a rate below 10 percent is common in well-running networks.
The Factors Affecting Network Efficiency
  • Amount of traffic
  • Number of nodes
  • Size of packets
  • Network diameter
Measuring Network Efficiency
  • Average to peak load deviation
  • Collision Rate
  • Utilization Rate
Utilization rate is another widely accessible statistic about the health of a network. This statistic is available in Novell’s console monitor and the Windows NT Performance Monitor, as well as in optional LAN analysis software. Average utilization above 35 percent indicates potential problems. This 35 percent figure is near optimum for many networks, but some networks have higher or lower optimums due to factors such as packet size and peak load deviation.
A switch is said to work at “wire speed” if it has enough processing power to handle full Ethernet speed at minimum packet sizes. Most switches on the market are well ahead of network traffic capabilities, supporting the full “wire speed” of Ethernet, 14,880 pps (packets per second), and Fast Ethernet, 148,800 pps.
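The wire-speed figures follow from frame timing: a minimum-size 64-byte frame occupies 84 bytes on the wire once the 8-byte preamble and 12-byte inter-frame gap are counted. A quick sketch of the arithmetic (function name hypothetical):

```python
def wire_speed_pps(bits_per_second, frame_bytes=64):
    """Maximum packets per second at a given link rate.

    Each minimum-size frame occupies its own 64 bytes plus an
    8-byte preamble and a 12-byte inter-frame gap on the wire.
    """
    bits_on_wire = (frame_bytes + 8 + 12) * 8   # 84 bytes = 672 bit times
    return bits_per_second // bits_on_wire

print(wire_speed_pps(10_000_000))    # 10 Mbps Ethernet
print(wire_speed_pps(100_000_000))   # Fast Ethernet
```

The Fast Ethernet result comes out to 148,809 pps exactly; 148,800 is the commonly rounded figure.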

Routers

Routers work in a manner similar to switches and bridges in that they filter out network traffic. Rather than filtering by packet address, however, they filter by protocol. Routers were born out of the necessity for dividing networks logically instead of physically. An IP router can divide a network into various subnets so that only traffic destined for particular IP addresses can pass between segments. Routers recalculate the checksum and rewrite the MAC header of every packet. The price paid for this intelligent forwarding and filtering is usually measured in latency, the delay a packet experiences inside the router; such filtering takes more time than that of a switch or bridge, which looks only at the Ethernet address. In more complex networks, however, routers can improve overall efficiency. An additional benefit of routers is their automatic filtering of broadcasts, but overall they are complicated to set up.
Switch Benefits
  • Isolates traffic, relieving congestion
  • Separates collision domains, reducing collisions
  • Creates segments, restarting the distance and repeater rules
Switch Costs
  • Price: currently 3 to 5 times the price of a hub
  • Packet processing time is longer than in a hub
  • Monitoring the network is more complicated

General Benefits of Network Switching

Switches replace hubs in networking designs, and they are more expensive. So why is the desktop switching market doubling every year, with huge numbers sold? The price of switches is declining precipitously, while hubs are a mature technology with small price declines. As a result, the gap between switch costs and hub costs keeps narrowing.
Since switches are self learning, they are as easy to install as a hub. Just plug them in and go. And they operate on the same hardware layer as a hub, so there are no protocol issues.
There are two reasons for including switches in network designs. First, a switch breaks one network into many small networks, so the distance and repeater limitations are restarted. Second, this same segmentation isolates traffic and reduces collisions, relieving network congestion. It is very easy to identify the need for distance and repeater extension, and to understand this benefit of network switching. The second benefit, relieving network congestion, is harder to identify, and it is harder still to gauge the degree to which switches will help performance. Since all switches add small latency delays to packet processing, deploying switches unnecessarily can actually slow down network performance. The next section therefore covers the factors that determine the impact of adding switches to congested networks.

Network Switching

The benefits of switching vary from network to network. Adding a switch for the first time has different implications than increasing the number of switched ports already installed. Understanding traffic patterns is very important to network switching – the goal being to eliminate (or filter) as much traffic as possible. A switch installed in a location where it forwards almost all the traffic it receives will help much less than one that filters most of the traffic.
Networks that are not congested can actually be negatively impacted by adding switches. Packet processing delays, switch buffer limitations, and the retransmissions that can result can sometimes slow performance compared with the hub-based alternative. If your network is not congested, don’t replace hubs with switches. How can you tell whether performance problems are the result of network congestion? Measure utilization factors and collision rates.
Good Candidates for Performance Boosts from Switching
  • Utilization more than 35%
  • Collision rates more than 10%
Utilization load is the amount of total traffic as a percentage of the theoretical maximum for the network type: 10 Mbps for Ethernet, 100 Mbps for Fast Ethernet. The collision rate is the number of packets with collisions as a percentage of total packets.
Network response times (the user-visible part of network performance) suffer as the load on the network increases, and under heavy loads small increases in user traffic often result in significant decreases in performance. This is similar to freeway dynamics: increasing load raises throughput up to a point, after which further increases in demand cause true throughput to deteriorate rapidly. In Ethernet, collisions increase as the network is loaded, causing retransmissions and additional load, which in turn cause even more collisions. The resulting network overload slows traffic considerably.
Using the network utilities found on most server operating systems, network managers can determine utilization and collision rates. Both peak and average statistics should be considered.
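A minimal sketch (function names hypothetical) of how these two statistics might be computed from interface counters, applying the 35% and 10% thresholds above:

```python
def utilization(bits_observed, interval_seconds, link_bps):
    """Traffic as a fraction of the link's theoretical maximum."""
    return bits_observed / (interval_seconds * link_bps)

def collision_rate(collisions, total_packets):
    """Packets with collisions as a fraction of all packets."""
    return collisions / total_packets

def switching_candidate(util, coll):
    """True if either rule-of-thumb threshold is exceeded."""
    return util > 0.35 or coll > 0.10

# Example: 4.2 gigabits observed over 1000 s on a 10 Mbps link,
# with 120 collisions out of 1000 packets.
u = utilization(4_200_000_000, 1000, 10_000_000)   # 0.42
c = collision_rate(120, 1000)                      # 0.12
print(switching_candidate(u, c))
```

Run against both peak and average counters, as the text advises; a network that only crosses the thresholds at peak may still be a good switching candidate.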

Replacing a Central Hub with a Switch

This switching opportunity is typified by a fully shared network, where many users are connected in a cascading hub architecture. The two main impacts of switching will be faster network connection to the server(s) and the isolation of non-relevant traffic from each segment. As the network bottleneck is eliminated performance grows until a new system bottleneck is encountered – such as maximum server performance.

Adding Switches to a Backbone Switched Network

Congestion on a switched network can usually be relieved by adding more switched ports, and increasing the speed of these ports. Segments experiencing congestion are identified by their utilization and collision rates, and the solution is either further segmentation or faster connections. Both Fast Ethernet and Ethernet switch ports are added further down the tree structure of the network to increase performance.

Designing for Maximum Benefit

Changes in network design tend to be evolutionary rather than revolutionary; rarely is a network manager able to design a network completely from scratch. Usually, changes are made slowly with an eye toward preserving as much of the usable capital investment as possible while replacing obsolete or outdated technology with new equipment.
Fast Ethernet is very easy to add to most networks. A switch or bridge allows Fast Ethernet to connect to existing Ethernet infrastructures to bring speed to critical links. The faster technology is used to connect switches to each other, and to switched or shared servers to ensure the avoidance of bottlenecks.
Many client/server networks suffer from too many clients trying to access the same server, which creates a bottleneck where the server attaches to the LAN. Fast Ethernet, in combination with switched Ethernet, creates a cost-effective solution for avoiding slow client/server networks by allowing the server to be placed on a fast port.
Distributed processing also benefits from Fast Ethernet and switching. Segmentation of the network via switches brings big performance boosts to distributed traffic networks, and the switches are commonly connected via a Fast Ethernet backbone.
Tips for Designing for Maximum Switching Benefit
  • Important to know network demand per node
  • Try to group users with the nodes they communicate with most often on the same segment
  • Look for departmental traffic patterns
  • Avoid switch bottlenecks with fast uplinks
  • Move users between segments in an iterative process until all segments see less than 35% utilization

Advanced Switching Technology Issues

There are some technology issues with switching that do not affect 95% of all networks. Major switch vendors and the trade publications are promoting new competitive technologies, so some of these concepts are discussed here.

Managed or Unmanaged

Management provides benefits in many networks. Large networks with mission-critical applications are managed with many sophisticated tools, using SNMP to monitor the health of devices on the network. Networks using SNMP or RMON (an extension to SNMP that provides much more data while using less network bandwidth to do so) will either manage every device, or just the more critical areas. VLANs are another benefit of management in a switch. A VLAN allows the network to group nodes into logical LANs that behave as one network, regardless of physical connections. The main benefit is managing broadcast and multicast traffic. An unmanaged switch will pass broadcast and multicast packets through to all ports. If the network has logical groupings that differ from its physical groupings, then a VLAN-based switch may be the best bet for traffic optimization.
Another benefit of management in switches is the Spanning Tree Algorithm. Spanning Tree allows the network manager to design in redundant links, with switches attached in loops. Loops would normally defeat the self-learning aspect of switches, since traffic from one node would appear to originate on different ports. Spanning Tree is a protocol that allows the switches to coordinate with each other so that traffic is carried on only one of the redundant links (unless there is a failure, in which case the backup link is automatically activated). Network managers with switches deployed in critical applications may want redundant links; in this case management is necessary. For the rest of the networks an unmanaged switch would do quite well, and is much less expensive.

Store-and-Forward vs. Cut-Through

LAN switches come in two basic architectures: cut-through and store-and-forward. A cut-through switch examines only the destination address before forwarding the packet on to its destination segment. A store-and-forward switch, on the other hand, accepts and analyzes the entire packet before forwarding it to its destination. Examining the entire packet takes more time, but it allows the switch to catch certain packet errors and collisions and keep bad packets from propagating through the network.
Today, the speed of store-and-forward switches has caught up with cut-through switches to the point where the difference between the two is minimal. Also, there are a large number of hybrid switches available that mix both cut-through and store-and-forward architectures.
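The store-and-forward check can be sketched as follows: read the whole frame, then verify its checksum before forwarding. This illustrative sketch (function names hypothetical) uses zlib’s CRC-32 to stand in for the Ethernet FCS, which uses the same polynomial but different bit-ordering conventions on the wire:

```python
import zlib

def append_fcs(payload: bytes) -> bytes:
    """Append an illustrative CRC-32 checksum (standing in for the FCS)."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def store_and_forward_ok(frame: bytes) -> bool:
    """A store-and-forward switch buffers the whole frame, then verifies
    the trailing checksum; only valid frames are forwarded."""
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == fcs
```

A cut-through switch, by contrast, begins forwarding after reading only the 6-byte destination address, so it never gets the chance to run this check.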

Blocking vs. Non-Blocking Switches

Add up all of a switch’s ports at their theoretical maximum speed and you have the theoretical sum total of the switch’s throughput. If the switching bus or switching components cannot handle that theoretical total of all ports, the switch is considered a “blocking switch”. There is debate over whether all switches should be designed non-blocking, but the added cost of doing so is only reasonable on switches designed for the largest network backbones. For almost all applications, a blocking switch with an acceptable and reasonable throughput level will work just fine.
Consider an eight-port 10/100 switch. Since each port can theoretically handle 200 Mbps (full duplex), there is a theoretical need for 1600 Mbps, or 1.6 Gbps. But in the real world each port will not exceed 50% utilization, so an 800 Mbps switching bus is adequate. Weighing total throughput against real-world port demand validates that the switch can handle the loads of your network.
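The arithmetic above can be captured in a small sketch (function names hypothetical):

```python
def is_blocking(num_ports, port_mbps, bus_mbps, full_duplex=True):
    """True if the switching bus cannot carry every port at its
    theoretical maximum simultaneously."""
    per_port = port_mbps * (2 if full_duplex else 1)
    return num_ports * per_port > bus_mbps

def bus_adequate(num_ports, port_mbps, bus_mbps, expected_load=0.5):
    """True if the bus covers the expected real-world demand
    (about 50% utilization per port in the example above)."""
    demand = num_ports * port_mbps * 2 * expected_load
    return bus_mbps >= demand

# Eight-port 10/100 switch with an 800 Mbps bus: technically
# blocking (800 < 1600), but adequate at real-world loads.
print(is_blocking(8, 100, 800), bus_adequate(8, 100, 800))
```

The same check applies when evaluating any switch datasheet: compare the backplane figure against both the theoretical port total and a realistic load estimate.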

Switch Buffer Limitations

As packets are processed in the switch, they are held in buffers. If the destination segment is congested, the switch holds on to the packet as it waits for bandwidth to become available on the crowded segment. Buffers that are full present a problem. So some analysis of the buffer sizes and strategies for handling overflows is of interest for the technically inclined network designer.
In real-world networks, crowded segments cause many problems, and networks should be designed to eliminate them; for most users, then, buffer behavior has little impact on switch selection. There are two strategies for handling full buffers. One is “backpressure flow control,” which pushes packets back upstream to the source nodes of packets that find a full buffer. The alternative is simply to drop the packet and rely on the network’s integrity features to retransmit automatically. One solution spreads the problem from one segment to others, propagating it; the other causes retransmissions, and the resulting increase in load is not optimal. Neither strategy solves the problem, so switch vendors use large buffers and advise network managers to design switched network topologies that eliminate the source of the problem: congested segments.
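A toy sketch of the two overflow strategies (class and method names hypothetical):

```python
from collections import deque

class SwitchBuffer:
    """Bounded output-port buffer with the two overflow strategies
    described above: drop the packet, or signal backpressure."""

    def __init__(self, capacity, policy="drop"):
        self.queue = deque()
        self.capacity = capacity
        self.policy = policy          # "drop" or "backpressure"

    def enqueue(self, packet):
        """Return "queued", "dropped", or "backpressure"."""
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return "queued"
        # Buffer full: either discard (upper layers retransmit)
        # or push the problem upstream to the sender.
        return "dropped" if self.policy == "drop" else "backpressure"
```

Either way, the overflow outcome is undesirable, which is why the practical advice is large buffers plus topologies that avoid congested segments in the first place.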

Layer 3 Switching

A hybrid device is the latest improvement in internetworking technology. Combining the packet handling of routers and the speed of switching, these multilayer switches operate on both layer 2 and layer 3 of the OSI network model. The performance of this class of switch is aimed at the core of large enterprise networks. Sometimes called routing switches or IP switches, multilayer switches look for common traffic flows and switch these flows on the hardware layer for speed. For traffic outside the normal flows, the multilayer switch uses routing functions. This keeps the higher-overhead routing functions only where they are needed, and strives for the best handling strategy for each network packet.
Many vendors are working on high-end multilayer switches, and the technology is definitely a “work in progress”. As networking technology evolves, multilayer switches are likely to replace routers in most large networks.

Tuesday, July 19, 2016

Part III: Sharing Devices


A Look at Device Server Technology
Device networking starts with a device server, which allows almost any device with serial connectivity to connect to Ethernet networks quickly and cost-effectively. These products include all of the elements needed for device networking and, because of their scalability, do not require a server or gateway.
This tutorial provides an introduction to the functionality of a variety of device servers.  It will cover print servers, terminal servers and console servers, as well as embedded and external device servers.  For each of these categories, there will also be a review of specific Lantronix offerings.

An Introduction to Device Servers

A device server is characterized by a minimal operating architecture that requires no per-seat network operating system license, and client access that is independent of any operating system or proprietary protocol. In addition, the device server is a “closed box” that delivers extreme ease of installation and minimal maintenance, and it can be managed by the client remotely via a web browser.
By virtue of their independent operating systems, protocol independence, small size and flexibility, device servers are able to meet the demands of virtually any network-enabling application. The demand for device servers is rapidly increasing because organizations need to leverage their networking infrastructure investment across all of their resources. Many currently installed devices lack network ports or require dedicated serial connections for management — device servers allow those devices to become connected to the network.
Device servers are currently used in a wide variety of environments in which machinery, instruments, sensors and other discrete devices generate data that was previously inaccessible through enterprise networks. They are also used for security systems, point-of-sale applications, network management and many other applications where network access to a device is required.
As device servers become more widely adopted and implemented into specialized applications, we can expect to see variations in size, mounting capabilities and enclosures. Device servers are also available as embedded devices, capable of providing instant networking support for developers of future products where connectivity will be required.
Print servers, terminal servers, remote access servers and network time servers are examples of device servers which are specialized for particular functions. Each of these types of servers has unique configuration attributes in hardware or software that help them to perform best in their particular arena.

External Device Servers

External device servers are stand-alone serial-to-wireless (802.11b) or serial-to-Ethernet device servers that can put just about any device with serial connectivity on the network in a matter of minutes so it can be managed remotely.

External Device Servers from Lantronix

Lantronix external device servers provide the ability to remotely control, monitor, diagnose and troubleshoot equipment over a network or the Internet.  By opting for a powerful external device with full network and web capabilities, companies are able to preserve their present equipment investments.
Lantronix offers a full line of external device servers:  Ethernet or wireless, advanced encryption for maximum security, and device servers designed for commercial or heavy-duty industrial applications.
Wireless:
Providing a whole new level of flexibility and mobility, these devices allow users to connect devices that are inaccessible via cabling.  Users can also add intelligence to their businesses by putting mobile devices, such as medical instruments or warehouse equipment, on networks.
Security:
Ideal for protecting data such as business transactions, customer information, financial records, etc., these devices provide enhanced security for networked devices.
Commercial:
These devices enable users to network-enable their existing equipment (such as POS devices, AV equipment, medical instruments, etc.) simply and cost-effectively, without the need for special software.
Industrial:
For heavy-duty factory applications, Lantronix offers a full complement of industrial-strength external device servers designed for use with manufacturing, assembly and factory automation equipment. All models support Modbus industrial protocols.

Embedded Device Servers

Embedded device servers integrate all the required hardware and software into a single embedded device.  They use a device’s serial port to web-enable or network-enable products quickly and easily without the complexities of extensive hardware and software integration. Embedded device servers are typically plug-and-play solutions that operate independently of a PC and usually include a wireless or Ethernet connection, operating system, an embedded web server, a full TCP/IP protocol stack, and some sort of encryption for secure communications.

Embedded Device Servers from Lantronix

Lantronix recognizes that design engineers are looking for a simple, cost-effective and reliable way to seamlessly embed network connectivity into their products.  In a fraction of the time it would take to develop a custom solution, Lantronix embedded device servers provide a variety of proven, fully integrated products.  OEMs can add full Ethernet and/or wireless connectivity to their products so they can be managed over a network or the Internet.
Module
These devices allow users to network-enable just about any electronic device with Ethernet and/or wireless connectivity.
Board-Level: 
Users can integrate networking capabilities onto the circuit boards of equipment like factory machinery, security systems and medical devices.
Single-Chip Solutions: 
These powerful, system-on-chip solutions help users address networking issues early in the design cycle to support the most popular embedded networking technologies.

Terminal Servers

Terminal servers are used to enable terminals to transmit data to and from host computers across LANs, without requiring each terminal to have its own direct connection. And while the terminal server’s existence is still justified by convenience and cost considerations, its inherent intelligence provides many more advantages. Among these is enhanced remote monitoring and control. Terminal servers that support protocols like SNMP make networks easier to manage.
Devices that are attached to a network through a server can be shared between terminals and hosts at both the local site and throughout the network. A single terminal may be connected to several hosts at the same time (in multiple concurrent sessions), and can switch between them. Terminal servers are also used to network devices that have only serial outputs. A connection between serial ports on different servers is opened, allowing data to move between the two devices.
Given its natural translation ability, a multi-protocol server can perform conversions between the protocols it knows such as LAT and TCP/IP. While server bandwidth is not adequate for large file transfers, it can easily handle host-to-host inquiry/response applications, electronic mailbox checking, etc. In addition, it is far more economical than the alternatives — acquiring expensive host software and special-purpose converters. Multiport device and print servers give users greater flexibility in configuring and managing their networks.
Whether it is moving printers and other peripherals from one network to another, expanding the dimensions of interoperability or preparing for growth, terminal servers can fulfill these requirements without major rewiring. Today, terminal servers offer a full range of functionality, ranging from 8 to 32 ports, giving users the power to connect terminals, modems, servers and virtually any serial device for remote access over IP networks.

Print Servers

Print servers enable printers to be shared by other users on the network. Supporting either parallel and/or serial interfaces, a print server accepts print jobs from any person on the network using supported protocols and manages those jobs on each appropriate printer.
The earliest print servers were external devices, which supported printing via parallel or serial ports on the device. Typically, only one or two protocols were supported. The latest generations of print servers support multiple protocols, have multiple parallel and serial connection options and, in some cases, are small enough to fit directly on the parallel port of the printer itself. Some printers have embedded or internal print servers. This design has an integral communication benefit between printer and print server, but lacks flexibility if the printer has physical problems.
Print servers generally do not contain a large amount of memory; they simply hold jobs in a queue and, when the desired printer becomes available, allow the host to transmit the data to the appropriate printer port on the server. The print server then queues and prints each job in the order in which print requests are received, regardless of the protocol used or the size of the job.
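The first-in, first-out behavior described above can be sketched as follows (class and method names hypothetical):

```python
from collections import deque

class PrintServer:
    """Jobs are held in a queue and printed strictly in arrival
    order, regardless of protocol or job size."""

    def __init__(self):
        self.queue = deque()

    def submit(self, job):
        """Accept a print job from any host on the network."""
        self.queue.append(job)

    def printer_ready(self):
        """Called when the printer becomes available; hand over
        the next queued job, or None if the queue is empty."""
        return self.queue.popleft() if self.queue else None
```

A large PostScript job submitted first still prints before a small text job submitted second; arrival order is the only ordering rule.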

Device Server Technology in the Data Center

The IT/data center is considered the pulse of any modern business.  Remote management enables users to monitor and manage global networks, systems and IT equipment from anywhere and at any time.  Device servers play a major role in allowing for the remote capabilities and flexibility required for businesses to maximize personnel resources and technology ROI.

Console Servers

Console servers provide the flexibility of both standard and emergency remote access via attachment to the network or to a modem. Remote console management serves as a valuable tool to help maximize system uptime and minimize system operating costs.
Secure console servers provide familiar tools to leverage the console or emergency management port built into most serial devices, including servers, switches, routers, telecom equipment – anything in a rack – even if the network is down. They also supply complete in-band and out-of-band local and remote management for the data center with tools such as telnet and SSH that help manage the performance and availability of critical business information systems.

Console Management Solutions from Lantronix

Lantronix provides complete in-band and out-of-band local and remote management solutions for the data center. Lantronix secure console management products give IT managers unsurpassed ability to securely and remotely manage serial devices, including servers, switches, routers, telecom equipment – anything in a rack – even if the network is down.

Conclusion

The ability to manage virtually any electronic device over a network or the Internet is changing the way the world works and does business. With the ability to remotely manage, monitor, diagnose and control equipment, a new level of functionality is added to networking — providing business with increased intelligence and efficiency.  Lantronix leads the way in developing new network intelligence and has been a tireless pioneer in machine-to-machine (M2M) communication technology.
We hope this introduction to networking has been helpful and informative. This tutorial was meant to be an overview and not a comprehensive guide that explains everything there is to know about planning, installing, administering and troubleshooting a network. There are many Internet websites, books and magazines available that explain all aspects of computer networks, from LANs to WANs, network hardware to running cable. To learn about these subjects in greater detail, check your local bookstore, software retailer or newsstand for more information.

Sunday, July 17, 2016

Part II: Adding Speed to a Network

The phrase “you can never get too much of a good thing” can certainly be applied to networking. Once the benefits of networking are demonstrated, there is a thirst for even faster, more reliable connections to support a growing number of users and highly-complex applications.

How to obtain that added bandwidth can be an issue. While repeaters allow LANs to extend beyond normal distance limitations, they still limit the number of nodes that can be supported.
Bridges and switches on the other hand allow LANs to grow significantly larger by virtue of their ability to support full Ethernet segments on each port. Additionally, bridges and switches selectively filter network traffic to only those packets needed on each segment, significantly increasing throughput on each segment and on the overall network.
Network managers continue to look for better performance and more flexibility for network topologies, bridges and switches. To provide a better understanding of these and related technologies, this tutorial will cover:
  • Bridges
  • Ethernet Switches
  • Routers
  • Network Design Criteria
  • When and Why Ethernets Become Too Slow
  • Increasing Performance with Fast and Gigabit Ethernet

Bridges

Bridges connect two LAN segments of similar or dissimilar types, such as Ethernet and Token Ring. This allows two Ethernet segments to behave like a single Ethernet, so that any pair of computers on the extended Ethernet can communicate. Bridges are transparent; the computers do not know whether a bridge separates them.
Bridges map the Ethernet addresses of the nodes residing on each network segment and allow only necessary traffic to pass through the bridge. When a packet is received by the bridge, the bridge determines the destination and source segments. If the segments are the same, the packet is dropped, or "filtered"; if the segments are different, the packet is "forwarded" to the correct segment. Additionally, bridges do not forward bad or misaligned packets.
Bridges are also called “store-and-forward” devices because they look at the whole Ethernet packet before making filtering or forwarding decisions. Filtering packets and regenerating forwarded packets enables bridging technology to split a network into separate collision domains. Bridges are able to isolate network problems; if interference occurs on one of two segments, the bridge will receive and discard an invalid frame keeping the problem from affecting the other segment. This allows for greater distances and more repeaters to be used in the total network design.
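As a rough sketch of the filter/forward decision described above, the following toy class learns which segment each source address lives on and decides what to do with each frame. The MAC strings and segment names are invented for illustration; a real bridge works on raw frames, not strings.

```python
# A minimal sketch (not a real bridge) of the learning and
# filter/forward behavior described above.

class LearningBridge:
    def __init__(self):
        self.table = {}  # MAC address -> segment it was last seen on

    def handle(self, src, dst, arrived_on):
        """Return 'forward', 'filter', or 'flood' for one frame."""
        self.table[src] = arrived_on          # learn the sender's segment
        known = self.table.get(dst)
        if known is None:
            return "flood"                    # unknown destination: send everywhere
        if known == arrived_on:
            return "filter"                   # same segment: drop the frame
        return "forward"                      # different segment: pass it through

bridge = LearningBridge()
print(bridge.handle("aa:aa", "bb:bb", "segment-1"))  # flood (destination unknown)
print(bridge.handle("bb:bb", "aa:aa", "segment-1"))  # filter (same segment)
print(bridge.handle("cc:cc", "aa:aa", "segment-2"))  # forward (crosses segments)
```

Note that an unknown destination is flooded to all segments, which is also how real bridges behave before their address table fills in.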

Dealing with Loops

Most bridges are self-learning: they determine the Ethernet addresses in use on each segment by building a table as packets pass through the network. However, this self-learning capability dramatically raises the potential for network loops in networks that have many bridges. A loop presents conflicting information about which segment a specific address is located on and forces the device to forward all traffic. The Spanning Tree Protocol (defined in the IEEE 802.1D specification) describes how switches and bridges can communicate to avoid network loops.
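The first step of spanning-tree operation is for the bridges to agree on a single root, after which redundant paths toward that root can be blocked. A minimal sketch of the root election, using invented (priority, MAC) bridge IDs rather than real BPDU exchanges:

```python
# Illustrative root election for spanning tree: the bridge with the
# lowest (priority, MAC) identifier becomes the root, as in IEEE 802.1D.

def elect_root(bridge_ids):
    """Lowest (priority, MAC) pair wins; priority compares first."""
    return min(bridge_ids)

bridges = [(32768, "00:0b:22:33:44:55"),
           (4096,  "00:0c:99:88:77:66"),   # lowest priority -> becomes root
           (32768, "00:0a:11:22:33:44")]
print(elect_root(bridges))  # (4096, '00:0c:99:88:77:66')
```

Tuple comparison makes the MAC address the tie-breaker when priorities are equal, which mirrors how real bridge IDs are compared.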

Ethernet Switches

Ethernet switches are an expansion of the Ethernet bridging concept. The advantage of using a switched Ethernet is parallelism. Up to one-half of the computers connected to a switch can send data at the same time.
LAN switches link multiple networks together and have two basic architectures: cut-through and store-and-forward. In the past, cut-through switches were faster because they examined the packet destination address only before forwarding it on to its destination segment. A store-and-forward switch works like a bridge in that it accepts and analyzes the entire packet before forwarding it to its destination.
Historically, store-and-forward took more time to examine the entire packet, although one benefit was that it allowed the switch to catch certain packet errors and keep them from propagating through the network. Today, the speed of store-and-forward switches has caught up with cut-through switches so the difference between the two is minimal. Also, there are a large number of hybrid switches available that mix both cut-through and store-and-forward architectures.
Both cut-through and store-and-forward switches separate a network into collision domains, allowing network design rules to be extended. Each of the segments attached to an Ethernet switch has a full 10 Mbps of bandwidth shared by fewer users, which results in better performance (as opposed to hubs that only allow bandwidth sharing from a single Ethernet). Newer switches today offer high-speed links, either Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet or ATM. These are used to link switches together or give added bandwidth to high-traffic servers. A network composed of a number of switches linked together via uplinks is termed a “collapsed backbone” network.
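The latency difference between the two architectures comes down to how much of a frame must arrive before forwarding can begin. A rough illustration (not real switch firmware), using the standard 1518-byte maximum frame:

```python
# How many bytes each switch architecture must receive before it can
# start forwarding, per the description above.

FRAME_BYTES = 1518      # maximum standard Ethernet frame
DEST_ADDR_BYTES = 6     # destination MAC is the first field of the frame

def bytes_before_forwarding(architecture, frame_len=FRAME_BYTES):
    if architecture == "cut-through":
        return DEST_ADDR_BYTES       # only the destination address is needed
    if architecture == "store-and-forward":
        return frame_len             # whole frame received and checked first
    raise ValueError(architecture)

for arch in ("cut-through", "store-and-forward"):
    print(arch, bytes_before_forwarding(arch))
```

This is also why only the store-and-forward design can drop corrupted frames: the error check covers the whole packet, which it alone waits for.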

Routers

A router is a device that forwards data packets along networks, and determines which way to send each data packet based on its current understanding of the state of its connected networks. Routers are typically connected to at least two networks, commonly two LANs or WANs or a LAN and its Internet Service Provider’s (ISPs) network. Routers are located at gateways, the places where two or more networks connect.
Routers filter out network traffic by specific protocol rather than by packet address. Routers also divide networks logically instead of physically. An IP router can divide a network into various subnets so that only traffic destined for particular IP addresses can pass between segments. Network speed often decreases due to this type of intelligent forwarding. Such filtering takes more time than that exercised in a switch or bridge, which only looks at the Ethernet address. However, in more complex networks, overall efficiency is improved by using routers.
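The subnet-based forwarding decision described above can be sketched with Python's standard ipaddress module. The subnets and host addresses below are invented private-range examples, and the sketch assumes every source belongs to one of the two subnets:

```python
# A minimal sketch of logical subnet division: a router forwards
# between segments only when the destination lies in a different
# IP subnet than the source.

import ipaddress

engineering = ipaddress.ip_network("192.168.1.0/24")
sales       = ipaddress.ip_network("192.168.2.0/24")

def should_route(src, dst):
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    src_net = engineering if src in engineering else sales
    return dst not in src_net            # cross-subnet traffic gets routed

print(should_route("192.168.1.10", "192.168.1.20"))  # False: stays local
print(should_route("192.168.1.10", "192.168.2.20"))  # True: crosses subnets
```

The membership test (`dst not in src_net`) is the prefix match a real IP router performs, just without a routing table behind it.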

Network Design Criteria

Ethernets and Fast Ethernets have design rules that must be followed in order to function correctly. The maximum number of nodes, number of repeaters and maximum segment distances are defined by the electrical and mechanical design properties of each type of Ethernet media.
A network using repeaters, for instance, must function within the timing constraints of Ethernet. Although electrical signals travel along the Ethernet media at near the speed of light, it still takes a finite amount of time for a signal to travel from one end of a large Ethernet to the other. The Ethernet standard assumes it will take roughly 50 microseconds for a signal to reach its destination.
Ethernet is subject to the “5-4-3” rule of repeater placement: the network can only have five segments connected; it can only use four repeaters; and of the five segments, only three can have users attached to them; the other two must be inter-repeater links.
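The "5-4-3" constraints above can be captured in a small design check. This is a sketch under the assumption of a simple end-to-end path, where N repeaters join N+1 segments:

```python
# A toy validator for the 5-4-3 repeater placement rule described above.

def valid_5_4_3(segments, repeaters, populated_segments):
    return (segments <= 5 and
            repeaters <= 4 and
            populated_segments <= 3 and
            segments == repeaters + 1)   # N repeaters join N+1 segments in a path

print(valid_5_4_3(segments=5, repeaters=4, populated_segments=3))  # True: the classic maximum
print(valid_5_4_3(segments=6, repeaters=5, populated_segments=3))  # False: too many repeaters
```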
If the design of the network violates these repeater and placement rules, then timing guidelines will not be met and sending stations will resend packets. This can lead to lost packets and excessive retransmissions, which slow network performance and create trouble for applications. Newer Ethernet standards (Fast Ethernet, GigE and 10 GigE) have modified repeater rules, since the minimum packet size takes less time to transmit than in regular Ethernet; the shorter network links in turn allow fewer repeaters. In Fast Ethernet networks, there are two classes of repeaters. Class I repeaters have a latency of 0.7 microseconds or less and are limited to one repeater per network. Class II repeaters have a latency of 0.46 microseconds or less and are limited to two repeaters per network. The following are the distance (diameter) characteristics for these types of Fast Ethernet repeater combinations:
Fast Ethernet             Copper    Fiber
No Repeaters              100m      412m*
One Class I Repeater      200m      272m
One Class II Repeater     200m      272m
Two Class II Repeaters    205m      228m
* Full Duplex Mode: 2 km
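The diameter limits above can be encoded as a lookup for checking a proposed Fast Ethernet collision domain. A sketch, with the half-duplex distances in meters taken from the table:

```python
# Fast Ethernet maximum collision-domain diameters (half duplex),
# encoded from the repeater table above.

MAX_DIAMETER = {                       # (copper, fiber) in meters
    "no repeater":   (100, 412),
    "one class I":   (200, 272),
    "one class II":  (200, 272),
    "two class II":  (205, 228),
}

def fits(config, media, proposed_m):
    copper, fiber = MAX_DIAMETER[config]
    limit = copper if media == "copper" else fiber
    return proposed_m <= limit

print(fits("one class I", "copper", 180))   # True: within the 200 m limit
print(fits("two class II", "fiber", 250))   # False: exceeds the 228 m limit
```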
When conditions require greater distances or an increase in the number of nodes/repeaters, then a bridge, router or switch can be used to connect multiple networks together. These devices join two or more separate networks, allowing network design criteria to be restored. Switches allow network designers to build large networks that function well. The reduction in costs of bridges and switches reduces the impact of repeater rules on network design.
Each network connected via one of these devices is referred to as a separate collision domain in the overall network.

When and Why Ethernets Become Too Slow

As more users are added to a shared network, or as applications requiring more data are added, performance deteriorates. This is because all users on a shared network compete for the Ethernet bus. A moderately loaded 10 Mbps Ethernet network shared by 30-50 users will sustain throughput only in the neighborhood of 2.5 Mbps after accounting for packet overhead, interpacket gaps and collisions.
Increasing the number of users (and therefore packet transmissions) creates a higher collision potential. Collisions occur when two or more nodes attempt to send information at the same time. When they realize that a collision has occurred, each node shuts off for a random time before attempting another transmission. With shared Ethernet, the likelihood of collision increases as more nodes are added to the shared collision domain of the shared Ethernet. One of the steps to alleviate this problem is to segment traffic with a bridge or switch. A switch can replace a hub and improve network performance. For example, an eight-port switch can support eight Ethernets, each running at a full 10 Mbps. Another option is to dedicate one or more of these switched ports to a high traffic device such as a file server.
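The random wait after a collision follows truncated binary exponential backoff: after the nth collision, a station waits a random number of slot times drawn from 0 to 2^min(n, 10) - 1. A sketch of that schedule, in slot units rather than seconds:

```python
# Truncated binary exponential backoff, as used by CSMA/CD Ethernet
# after a collision. Waits are in slot times, not seconds.

import random

def backoff_slots(collision_count, rng=random):
    k = min(collision_count, 10)          # the exponent is capped at 10
    return rng.randrange(2 ** k)          # 0 .. 2**k - 1 slot times

random.seed(0)                            # deterministic for the demo
for n in (1, 2, 3, 16):
    print(f"after collision {n}: wait {backoff_slots(n)} slots")
```

The widening range is why a lightly loaded Ethernet recovers quickly from the occasional collision, while a heavily loaded one spends more and more time backing off.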
Greater throughput is required to support multimedia and video applications. When added to the network, Ethernet switches provide a number of enhancements over shared networks that can support these applications. Foremost is the ability to divide networks into smaller and faster segments. Ethernet switches examine each packet, determine where that packet is destined and then forward that packet to only those ports to which the packet needs to go. Modern switches are able to do all these tasks at “wirespeed,” that is, without delay.
Aside from deciding when to forward or filter the packet, Ethernet switches also completely regenerate the Ethernet packet. This regeneration and re-timing allows each port on a switch to be treated as a complete Ethernet segment, capable of supporting the full cable length along with all of the repeater restrictions. In half-duplex Gigabit Ethernet, the standard 512-bit slot time used by CSMA/CD is too short to support collision detection over 100 m of copper, so Carrier Extension is used to guarantee a 512-byte slot time.
Additionally, bad packets are identified by Ethernet switches and immediately dropped from any future transmission. This “cleansing” activity keeps problems isolated to a single segment and keeps them from disrupting other network activity. This aspect of switching is extremely important in a network environment where hardware failures are to be anticipated. Full duplex doubles the bandwidth on a link, and is another method used to increase bandwidth to dedicated workstations or servers. Full duplex modes are available for standard Ethernet, Fast Ethernet, and Gigabit Ethernet. To use full duplex, special network interface cards are installed in the server or workstation, and the switch is programmed to support full duplex operation.

Increasing Performance with Fast and Gigabit Ethernet

Implementing Fast or Gigabit Ethernet to increase performance is the next logical step when Ethernet becomes too slow to meet user needs. Higher traffic devices can be connected to switches or each other via Fast Ethernet or Gigabit Ethernet, providing a great increase in bandwidth. Many switches are designed with this in mind, and have Fast Ethernet uplinks available for connection to a file server or other switches. Eventually, Fast Ethernet can be deployed to user desktops by equipping all computers with Fast Ethernet network interface cards and using Fast Ethernet switches and repeaters.
With an understanding of the underlying technologies and products in use in Ethernet networks, the next tutorial will advance to a discussion of some of the most popular real-world applications.

Saturday, July 16, 2016

Part I: Networking Basics

Computer networking has become an integral part of business today. Individuals, professionals and academics have also learned to rely on computer networks for capabilities such as electronic mail and access to remote databases for research and communication purposes. Networking has thus become an increasingly pervasive, worldwide reality because it is fast, efficient, reliable and effective. Just how all this information is transmitted, stored, categorized and accessed remains a mystery to the average computer user.

This tutorial will explain the basics of some of the most popular technologies used in networking, and will include the following:
  • Types of Networks – including LANs, WANs and WLANs
  • The Internet and Beyond – The Internet and its contributions to intranets and extranets
  • Types of LAN Technology – including Ethernet, Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet,
    ATM, PoE and Token Ring
  • Networking and Ethernet Basics – including standard code, media, topographies, collisions and CSMA/CD
  • Ethernet Products – including transceivers, network interface cards, hubs and repeaters

Types of Networks

In describing the basics of networking technology, it will be helpful to explain the different types of networks in use.

Local Area Networks (LANs)

A network is any collection of independent computers that exchange information with each other over a shared communication medium. Local Area Networks or LANs are usually confined to a limited geographic area, such as a single building or a college campus. LANs can be small, linking as few as three computers, but can often link hundreds of computers used by thousands of people. The development of standard networking protocols and media has resulted in worldwide proliferation of LANs throughout business and educational organizations.

Wide Area Networks (WANs)

Often the elements of a network are widely separated physically. Wide area networking combines multiple LANs that are geographically separate. This is accomplished by connecting the LANs with dedicated leased lines such as a T1 or a T3, by dial-up phone lines (both synchronous and asynchronous), by satellite links and by data packet carrier services. A WAN can be as simple as a modem and a remote access server for employees to dial into, or as complex as hundreds of branch offices globally linked. Special routing protocols and filters minimize the expense of sending data over vast distances.

Wireless Local Area Networks (WLANs)

Wireless LANs, or WLANs, use radio frequency (RF) technology to transmit and receive data over the air. This minimizes the need for wired connections. WLANs give users mobility as they allow connection to a local area network without having to be physically connected by a cable. This freedom means users can access shared resources without looking for a place to plug in cables, provided that their terminals are mobile and within the designated network coverage area. With mobility, WLANs give flexibility and increased productivity, appealing to both entrepreneurs and to home users. WLANs may also enable network administrators to connect devices that may be physically difficult to reach with a cable.
The Institute of Electrical and Electronics Engineers (IEEE) developed the 802.11 specification for wireless LAN technology. 802.11 specifies the over-the-air interface between a wireless client and a base station, or between two wireless clients. The WLAN 802.11 standards also include security protocols that were developed to provide the same level of security as that of a wired LAN.
The first of these protocols is Wired Equivalent Privacy (WEP). WEP provides security by encrypting data sent over radio waves from end point to end point.
The second WLAN security protocol is Wi-Fi Protected Access (WPA). WPA was developed as an upgrade to the security features of WEP. It works with existing WEP-enabled products but provides two key improvements: improved data encryption through the Temporal Key Integrity Protocol (TKIP), which scrambles the keys using a hashing algorithm and adds integrity checking to ensure that keys have not been tampered with; and user authentication through the Extensible Authentication Protocol (EAP).
Wireless Protocols
Specification              Data Rate                                                            Modulation Scheme                         Security
802.11                     1 or 2 Mbps in the 2.4 GHz band                                      FHSS, DSSS                                WEP and WPA
802.11a                    54 Mbps in the 5 GHz band                                            OFDM                                      WEP and WPA
802.11b/High Rate/Wi-Fi    11 Mbps (with a fallback to 5.5, 2, and 1 Mbps) in the 2.4 GHz band  DSSS with CCK                             WEP and WPA
802.11g/Wi-Fi              54 Mbps in the 2.4 GHz band                                          OFDM above 20 Mbps, DSSS with CCK below   WEP and WPA

The Internet and Beyond

More than just a technology, the Internet has become a way of life for many people, and it has spurred a revolution of sorts for both public and private sharing of information. The most popular source of information about almost anything, the Internet is used daily by technical and non-technical users alike.

The Internet:  The Largest Network of All

With the meteoric rise in demand for connectivity, the Internet has become a major communications highway for millions of users. It is a decentralized system of linked networks that are worldwide in scope. It facilitates data communication services such as remote log-in, file transfer, electronic mail, the World Wide Web and newsgroups. It consists of independent hosts of computers that can designate which Internet services to use and which of their local services to make available to the global community.
Initially restricted to military and academic institutions, the Internet now operates on a three-level hierarchy composed of backbone networks, mid-level networks and stub networks. It is a full-fledged conduit for any and all forms of information and commerce. Internet websites now provide personal, educational, political and economic resources to virtually any point on the planet.

Intranet:  A Secure Internet-like Network for Organizations

With advancements in browser-based software for the Internet, many private organizations have implemented intranets. An intranet is a private network utilizing Internet-type tools, but available only within that organization. For large organizations, an intranet provides easy access to corporate information for designated employees.

Extranet:  A Secure Means for Sharing Information with Partners

While an intranet is used to disseminate confidential information within a corporation, an extranet is commonly used by companies to share data in a secure fashion with their business partners. Internet-type tools are used by content providers to update the extranet. Encryption and user authentication means are provided to protect the information, and to ensure that designated people with the proper access privileges are allowed to view it.

Types of LAN Technology

Ethernet

Ethernet is the most popular physical layer LAN technology in use today. It defines the number of conductors that are required for a connection, the performance thresholds that can be expected, and provides the framework for data transmission. A standard Ethernet network can transmit data at a rate up to 10 Megabits per second (10 Mbps). Other LAN types include Token Ring, Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet, Fiber Distributed Data Interface (FDDI), Asynchronous Transfer Mode (ATM) and LocalTalk.
Ethernet is popular because it strikes a good balance between speed, cost and ease of installation. These benefits, combined with wide acceptance in the computer marketplace and the ability to support virtually all popular network protocols, make Ethernet an ideal networking technology for most computer users today.
The Institute of Electrical and Electronics Engineers developed an Ethernet standard known as IEEE Standard 802.3. This standard defines rules for configuring an Ethernet network and also specifies how the elements in an Ethernet network interact with one another. By adhering to the IEEE standard, network equipment and network protocols can communicate efficiently.

Fast Ethernet

The Fast Ethernet standard (IEEE 802.3u) has been established for Ethernet networks that need higher transmission speeds. This standard raises the Ethernet speed limit from 10 Mbps to 100 Mbps with only minimal changes to the existing cable structure. Fast Ethernet provides faster throughput for video, multimedia, graphics, Internet surfing and stronger error detection and correction.
There are three types of Fast Ethernet: 100BASE-TX for use with level 5 UTP cable; 100BASE-FX for use with fiber-optic cable; and 100BASE-T4 which utilizes an extra two wires for use with level 3 UTP cable. The 100BASE-TX standard has become the most popular due to its close compatibility with the 10BASE-T Ethernet standard.
Network managers who want to incorporate Fast Ethernet into an existing configuration must make several decisions: how many users at each site need the higher throughput; which segments of the backbone need to be reconfigured specifically for 100BASE-T; and what hardware is necessary to connect the 100BASE-T segments with existing 10BASE-T segments. Gigabit Ethernet promises a migration path beyond Fast Ethernet, so the next generation of networks will support even higher data transfer speeds.

Gigabit Ethernet

Gigabit Ethernet was developed to meet the need for faster communication networks with applications such as multimedia and Voice over IP (VoIP). Also known as “gigabit-Ethernet-over-copper” or 1000Base-T, GigE is a version of Ethernet that runs at speeds 10 times faster than 100Base-T. It is defined in the IEEE 802.3 standard and is currently used as an enterprise backbone. Existing Ethernet LANs with 10 and 100 Mbps cards can feed into a Gigabit Ethernet backbone to interconnect high performance switches, routers and servers.
From the data link layer of the OSI model upward, the look and implementation of Gigabit Ethernet is identical to that of Ethernet. The most important differences between Gigabit Ethernet and Fast Ethernet include the additional support of full duplex operation in the MAC layer and the data rates.

10 Gigabit Ethernet

10 Gigabit Ethernet is the fastest and most recent of the Ethernet standards. IEEE 802.3ae defines a version of Ethernet with a nominal rate of 10Gbits/s that makes it 10 times faster than Gigabit Ethernet.
Unlike other Ethernet systems, 10 Gigabit Ethernet is based entirely on the use of optical fiber connections. This developing standard is moving away from a LAN design that broadcasts to all nodes, toward a system which includes some elements of wide area routing. As it is still very new, which of the standards will gain commercial acceptance has yet to be determined.

Asynchronous Transfer Mode (ATM)

ATM is a cell-based fast-packet communication technique that can support data-transfer rates from sub-T1 speeds to 10 Gbps. ATM achieves its high speeds in part by transmitting data in fixed-size cells and dispensing with error-correction protocols. It relies on the inherent integrity of digital lines to ensure data integrity.
ATM can be integrated into an existing network as needed without updating the entire network. Its fixed-length cell-relay operation offers more predictable performance than variable-length frames. ATM networks are also extremely versatile: an ATM network can connect points in a building or across the country and still be treated as a single network.

Power over Ethernet (PoE)

PoE is a solution in which an electrical current is run to networking hardware over the Ethernet Category 5 cable or higher. This solution does not require an extra AC power cord at the product location. This minimizes the amount of cable needed as well as eliminates the difficulties and cost of installing extra outlets.
LAN Technology Specifications
Name                      IEEE Standard   Data Rate   Media Type          Maximum Distance
Ethernet                  802.3           10 Mbps     10Base-T            100 meters
Fast Ethernet/100Base-T   802.3u          100 Mbps    100Base-TX          100 meters
                                                      100Base-FX          2000 meters
Gigabit Ethernet/GigE     802.3z          1000 Mbps   1000Base-T          100 meters
                                                      1000Base-SX         275/550 meters
                                                      1000Base-LX         550/5000 meters
10 Gigabit Ethernet       802.3ae         10 Gbps     10GBase-SR          300 meters
                                                      10GBase-LX4         300m MMF/10km SMF
                                                      10GBase-LR/ER       10km/40km
                                                      10GBase-SW/LW/EW    300m/10km/40km

Token Ring

Token Ring is another form of network configuration. It differs from Ethernet in that all messages are transferred in one direction along the ring at all times. Token Ring networks sequentially pass a “token” to each connected device. When the token arrives at a particular computer (or device), the recipient is allowed to transmit data onto the network. Since only one device may be transmitting at any given time, no data collisions occur. Access to the network is guaranteed, and time-sensitive applications can be supported. However, these benefits come at a price. Component costs are usually higher, and the networks themselves are considered to be more complex and difficult to implement. Various PC vendors have been proponents of Token Ring networks.

Networking and Ethernet Basics

Protocols

After a physical connection has been established, network protocols define the standards that allow computers to communicate. A protocol establishes the rules and encoding specifications for sending data. This defines how computers identify one another on a network, the form that the data should take in transit, and how this information is processed once it reaches its final destination. Protocols also define procedures for determining the type of error checking that will be used, the data compression method, if one is needed, how the sending device will indicate that it has finished sending a message, how the receiving device will indicate that it has received a message, and the handling of lost or damaged transmissions or “packets”.
The main types of network protocols in use today are: TCP/IP (for UNIX, Windows NT, Windows 95 and other platforms); IPX (for Novell NetWare); DECnet (for networking Digital Equipment Corp. computers); AppleTalk (for Macintosh computers), and NetBIOS/NetBEUI (for LAN Manager and Windows NT networks).
Although each network protocol is different, they all share the same physical cabling. This common method of accessing the physical network allows multiple protocols to peacefully coexist over the network media, and allows the builder of a network to use common hardware for a variety of protocols. This concept is known as “protocol independence,” which means that devices which are compatible at the physical and data link layers allow the user to run many different protocols over the same medium.

The Open System Interconnection Model

The Open System Interconnection (OSI) model specifies how dissimilar computing devices such as Network Interface Cards (NICs), bridges and routers exchange data over a network by offering a networking framework for implementing protocols in seven layers. Beginning at the application layer, control is passed from one layer to the next. The following describes the seven layers as defined by the OSI model, shown in the order they occur whenever a user transmits information.
Layer 7: Application
This layer supports the application and end-user processes. Within this layer, user privacy is considered and communication partners, service and constraints are all identified. File transfers, email, Telnet and FTP applications are all provided within this layer.
Layer 6: Presentation (Syntax)
Within this layer, information is translated back and forth between application and network formats. This translation transforms the information into a form that both the application layer and the network can recognize, regardless of encryption and formatting.
Layer 5: Session
Within this layer, connections between applications are made, managed and terminated as needed to allow for data exchanges between applications at each end of a dialogue.
Layer 4: Transport
Complete data transfer is ensured as information is transferred transparently between systems in this layer. The transport layer also assures appropriate flow control and end-to-end error recovery.
Layer 3: Network
Using switching and routing technologies, this layer is responsible for creating virtual circuits to transmit information from node to node. Other functions include routing, forwarding, addressing, internetworking, error and congestion control, and packet sequencing.
Layer 2: Data Link
Information in data packets is encoded and decoded into bits within this layer. Using transmission protocol knowledge and management, this layer handles errors from the physical layer, flow control and frame synchronization. It consists of two sublayers: the Media Access Control (MAC) layer, which controls how networked computers gain access to the medium and permission to transmit, and the Logical Link Control (LLC) layer, which controls frame synchronization, flow control and error checking.
Layer 1: Physical
This layer enables hardware to send and receive data over a carrier such as cabling, a card or other physical means. It conveys the bitstream through the network at the electrical and mechanical level. Fast Ethernet, RS232, and ATM are all protocols with physical layer components.
This order is then reversed as information is received, so that the physical layer is the first and application layer is the final layer that information passes through.
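The descent and ascent through the layers can be sketched as encapsulation: each layer wraps the data from the layer above with its own header on the way down, and the headers are stripped in reverse order on the way up. The bracketed labels below are invented markers, not real protocol headers:

```python
# An illustrative sketch of OSI encapsulation and decapsulation.

LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link", "physical"]

def send(data):
    for layer in LAYERS:                  # top (layer 7) down to bottom (layer 1)
        data = f"[{layer}]{data}"
    return data

def receive(wire):
    for layer in reversed(LAYERS):        # bottom (layer 1) back up to top (layer 7)
        wire = wire.removeprefix(f"[{layer}]")
    return wire

frame = send("hello")
print(frame)            # outermost wrapper is the physical layer
print(receive(frame))   # the original data comes back out: hello
```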

Standard Ethernet Code

In order to understand standard Ethernet code, one must understand what each digit means. Following is a guide:
Guide to Ethernet Coding
10      at the beginning means the network operates at 10 Mbps.
BASE    means the type of signaling used is baseband.
2 or 5  at the end indicates the approximate maximum cable length in hundreds of meters.
T       at the end stands for twisted-pair cable.
X       at the end stands for full duplex-capable cable.
FL      at the end stands for fiber optic cable.
For example, 100BASE-TX indicates a Fast Ethernet connection (100 Mbps) that uses twisted pair cable capable of full-duplex transmissions.
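The decoding guide above is mechanical enough to express as a small parser. A sketch, with the suffix-to-medium mapping taken from the guide (the descriptive strings are my own wording):

```python
# Decode a standard Ethernet designation (e.g. "100BASE-TX") into
# its speed, signaling type and medium, per the guide above.

import re

MEDIA = {"T":  "twisted pair",
         "TX": "twisted pair, full duplex-capable",
         "FL": "fiber optic",
         "2":  "thin coax (roughly 200 m)",
         "5":  "thick coax (500 m)"}

def decode(name):
    m = re.fullmatch(r"(\d+)BASE-?(\w+)", name)
    speed_mbps, suffix = int(m.group(1)), m.group(2)
    return speed_mbps, "baseband", MEDIA.get(suffix, "unknown")

print(decode("100BASE-TX"))  # (100, 'baseband', 'twisted pair, full duplex-capable')
print(decode("10BASE2"))     # (10, 'baseband', 'thin coax (roughly 200 m)')
```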

Media

An important part of designing and installing an Ethernet is selecting the appropriate Ethernet medium. There are four major types of media in use today: Thickwire for 10BASE5 networks; thin coax for 10BASE2 networks; unshielded twisted pair (UTP) for 10BASE-T networks; and fiber optic for 10BASE-FL or Fiber-Optic Inter-Repeater Link (FOIRL) networks. This wide variety of media reflects the evolution of Ethernet and also points to the technology’s flexibility. Thickwire was one of the first cabling systems used in Ethernet, but it was expensive and difficult to use. This evolved to thin coax, which is easier to work with and less expensive. It is important to note that each type of Ethernet, Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet, has its own preferred media types.
The most popular wiring schemes are 10BASE-T and 100BASE-TX, which use unshielded twisted pair (UTP) cable. This is similar to telephone cable and comes in a variety of grades, with each higher grade offering better performance. Level 5 cable is the highest, most expensive grade, offering support for transmission rates of up to 100 Mbps. Level 4 and level 3 cable are less expensive, but cannot support the same data throughput speeds; level 4 cable can support speeds of up to 20 Mbps; level 3 up to 16 Mbps. The 100BASE-T4 standard allows for support of 100 Mbps Ethernet over level 3 cables, but at the expense of adding another pair of wires (4 pair instead of the 2 pair used for 10BASE-T). For most users, this is an awkward scheme and therefore 100BASE-T4 has seen little popularity. Level 2 and level 1 cables are not used in the design of 10BASE-T networks.
For specialized applications, fiber-optic, or 10BASE-FL, Ethernet segments are popular. Fiber-optic cable is more expensive, but it is invaluable in situations where electronic emissions and environmental hazards are a concern. Fiber-optic cable is often used in inter-building applications to insulate networking equipment from electrical damage caused by lightning. Because it does not conduct electricity, fiber-optic cable can also be useful in areas where heavy electromagnetic interference is present, such as on a factory floor. The Ethernet standard allows for fiber-optic cable segments up to two kilometers long, making fiber-optic Ethernet perfect for connecting nodes and buildings that are otherwise not reachable with copper media.
Cable Grade Capabilities
Cable Name | Makeup | Frequency Support | Data Rate | Network Compatibility
Cat-5 | 4 twisted pairs of copper wire, terminated by RJ45 connectors | 100 MHz | Up to 1000 Mbps | ATM, Token Ring, 1000Base-T, 100Base-TX, 10Base-T
Cat-5e | 4 twisted pairs of copper wire, terminated by RJ45 connectors | 100 MHz | Up to 1000 Mbps | 10Base-T, 100Base-TX, 1000Base-T
Cat-6 | 4 twisted pairs of copper wire, terminated by RJ45 connectors | 250 MHz | Up to 1000 Mbps | 10Base-T, 100Base-TX, 1000Base-T
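The cable-grade table above can be expressed as a simple lookup for picking the lowest listed grade that meets a target data rate. The values are taken from the table; the function name and selection logic are our own sketch:

```python
# Sketch built from the cable-grade table above: pick the first (lowest)
# listed grade whose rated data rate meets the requirement. The data comes
# from the table; the helper itself is illustrative only.

CABLE_GRADES = {
    # name: (max frequency in MHz, max data rate in Mbps)
    "Cat-5":  (100, 1000),
    "Cat-5e": (100, 1000),
    "Cat-6":  (250, 1000),
}

def lowest_suitable_grade(required_mbps):
    """Return the first listed grade whose rated data rate meets the need."""
    for name, (_freq, rate) in CABLE_GRADES.items():
        if rate >= required_mbps:
            return name
    return None

print(lowest_suitable_grade(100))   # Cat-5
print(lowest_suitable_grade(1000))  # Cat-5
```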

Topologies

Network topology is the geometric arrangement of nodes and cable links in a LAN. Two general configurations are used, bus and star. These two topologies define how nodes are connected to one another in a communication network. A node is an active device connected to the network, such as a computer or a printer. A node can also be a piece of networking equipment such as a hub, switch or a router.
A bus topology consists of nodes linked together in a series with each node connected to a long cable or bus. Many nodes can tap into the bus and begin communication with all other nodes on that cable segment. A break anywhere in the cable will usually cause the entire segment to be inoperable until the break is repaired. Examples of bus topology include 10BASE2 and 10BASE5.

General Topology Configurations

10BASE-T Ethernet and Fast Ethernet use a star topology in which each node connects over its own segment to a central hub or switch. Generally a computer is located at one end of the segment, and the other end is terminated in a central location with a hub or a switch. Because UTP is often run in conjunction with telephone cabling, this central location can be a telephone closet or other area where it is convenient to connect the UTP segment to a backbone. The primary advantage of this type of network is reliability: if one of these ‘point-to-point’ segments has a break, it will only affect the two nodes on that link. Other computer users on the network continue to operate as if that segment were non-existent.
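The reliability claim above can be modeled in a few lines: because each node has its own point-to-point segment to the hub, one broken segment isolates only that node. The node names below are invented for the example:

```python
# Toy model of the star-topology reliability point above. Each node has its
# own segment to the hub, so marking one segment broken leaves every other
# node reachable. Node names are hypothetical.

star_segments = {"pc1": "ok", "pc2": "ok", "pc3": "ok", "printer": "ok"}

def working_nodes(segments):
    """Nodes whose point-to-point segment to the hub is still intact."""
    return sorted(n for n, state in segments.items() if state == "ok")

star_segments["pc2"] = "broken"       # a break on pc2's segment only
print(working_nodes(star_segments))   # ['pc1', 'pc3', 'printer']
```

In a bus topology the equivalent model would mark every node on the segment unreachable, which is exactly the contrast the text draws.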

Collisions

Ethernet is a shared medium, so there are rules for sending packets of data to avoid conflicts and to protect data integrity. Nodes determine when the network is available for sending packets. It is possible that two or more nodes at different locations will attempt to send data at the same time. When this happens, a packet collision occurs.
Minimizing collisions is a crucial element in the design and operation of networks. Increased collisions are often the result of too many users on the network, which leads to competition for network bandwidth and can slow the performance of the network from the user’s point of view. Segmenting the network, i.e., dividing it into separate pieces logically joined together with a bridge or switch, is one way of relieving an overcrowded network.

CSMA/CD

In order to manage collisions Ethernet uses a protocol called Carrier Sense Multiple Access/Collision Detection (CSMA/CD). CSMA/CD is a type of contention protocol that defines how to respond when a collision is detected, that is, when two devices attempt to transmit packets simultaneously. Ethernet allows each device to send messages at any time without having to wait for network permission; thus, there is a high possibility that devices may try to send messages at the same time.
After detecting a collision, each device that was transmitting a packet delays a random amount of time before re-transmitting the packet. If another collision occurs, the device waits twice as long before trying to re-transmit.
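The doubling wait described above is known as binary exponential backoff, and it can be sketched in a few lines. The 51.2 microsecond slot time (512 bit times at 10 Mbps) and the 16-attempt limit follow the classic Ethernet rules; the function itself is only an illustration:

```python
import random

# Simplified sketch of the binary exponential backoff described above.
# The slot time and attempt limit follow classic 10 Mbps Ethernet; the
# helper itself is an illustration, not a driver implementation.

SLOT_TIME_US = 51.2   # one 512-bit slot time on 10 Mbps Ethernet
MAX_ATTEMPTS = 16

def backoff_delay(collision_count):
    """Microseconds to wait before the next retransmission attempt."""
    if collision_count > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame is dropped")
    # The random wait window doubles after each collision, capped at 2**10 slots.
    slots = random.randrange(2 ** min(collision_count, 10))
    return slots * SLOT_TIME_US

# After one collision a station waits 0 or 1 slot times; after a second
# collision, anywhere from 0 to 3 slot times; the window doubles from there.
print(backoff_delay(1), backoff_delay(2))
```

Randomizing the wait makes it unlikely that the same two stations collide again, and doubling the window adapts automatically to how busy the segment is.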

Ethernet Products

The standards and technology just discussed will help define the specific products that network managers use to build Ethernet networks. The following presents the key products needed to build an Ethernet LAN.

Transceivers

Transceivers are also referred to as Medium Access Units (MAUs). They are used to connect nodes to the various Ethernet media. Most computers and network interface cards contain a built-in 10BASE-T or 10BASE2 transceiver which allows them to be connected directly to Ethernet without the need for an external transceiver.
Many Ethernet devices provide an attachment unit interface (AUI) connector to allow the user to connect to any type of medium via an external transceiver. The AUI connector consists of a 15-pin D-shell type connector, female on the computer side, male on the transceiver side.
For Fast Ethernet networks, a new interface called the MII (Media Independent Interface) was developed to offer a flexible way to support 100 Mbps connections. The MII is a popular way to connect 100BASE-FX links to copper-based Fast Ethernet devices.

Network Interface Cards

Network Interface Cards, commonly referred to as NICs, are used to connect a PC to a network. The NIC provides a physical connection between the networking cable and the computer’s internal bus. Different computers have different bus architectures. PCI bus slots are most commonly found on 486/Pentium PCs and ISA expansion slots are commonly found on 386 and older PCs. NICs come in three basic varieties: 8-bit, 16-bit, and 32-bit. The larger the number of bits that can be transferred to the NIC, the faster the NIC can transfer data to the network cable. Most NICs are designed for a particular type of network, protocol, and medium, though some can serve multiple networks.
Many NIC adapters comply with plug-and-play specifications. On these systems, NICs are automatically configured without user intervention, while on non-plug-and-play systems, configuration is done manually through a set-up program and/or DIP switches.
Cards are available to support almost all networking standards. Fast Ethernet NICs are often 10/100 capable and will automatically set themselves to the appropriate speed. Gigabit Ethernet NICs are typically 10/100/1000 capable, auto-negotiating to the speed of the network they are attached to. Full-duplex networking is another option: with a dedicated connection to a switch, a NIC can send and receive at the same time, effectively doubling its throughput.

Hubs/Repeaters

Hubs/repeaters are used to connect together two or more Ethernet segments of any type of medium. In larger designs, signal quality begins to deteriorate as segments exceed their maximum length. Hubs provide the signal amplification required to allow a segment to be extended a greater distance. A hub repeats any incoming signal to all ports.
Ethernet hubs are necessary in star topologies such as 10BASE-T. A multi-port twisted pair hub allows several point-to-point segments to be joined into one network. One end of the point-to-point link is attached to the hub and the other is attached to the computer. If the hub is attached to a backbone, then all computers at the end of the twisted pair segments can communicate with all the hosts on the backbone. The number and type of hubs in any one-collision domain is limited by the Ethernet rules. These repeater rules are discussed in more detail later.
A very important fact to note about hubs is that they only allow users to share Ethernet. A network of hubs/repeaters is termed a “shared Ethernet,” meaning that all members of the network are contending for transmission of data onto a single network (collision domain). A hub/repeater propagates all electrical signals including the invalid ones. Therefore, if a collision or electrical interference occurs on one segment, repeaters make it appear on all others as well. This means that individual members of a shared network will only get a percentage of the available network bandwidth.
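The shared-bandwidth point above reduces to simple division. This back-of-the-envelope sketch (function name is our own) ignores collision overhead, which makes real-world throughput on a busy shared segment even lower:

```python
# Back-of-the-envelope version of the shared-Ethernet point above: every
# station on a hub contends for one collision domain, so the ideal average
# share shrinks as active stations are added. Collision overhead, ignored
# here, lowers the real figure further.

def fair_share_mbps(link_mbps, active_stations):
    """Ideal average bandwidth per station on a shared segment."""
    return link_mbps / active_stations

print(fair_share_mbps(10, 5))   # 2.0 Mbps each on a 10 Mbps hub with 5 senders
```

This is the arithmetic behind replacing hubs with switches: a switch gives each port its own collision domain, so stations no longer divide one segment's bandwidth.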
Basically, the number and type of hubs in any one collision domain for 10 Mbps Ethernet is limited by the following rules:

Network Type | Max Nodes Per Segment | Max Distance Per Segment
10BASE-T | 2 | 100 m
10BASE-FL | 2 | 2000 m
