All dynamic routing protocols are built around an algorithm. Generally, an algorithm is a step-by-step procedure for solving a problem. A routing algorithm must, at a minimum, specify the following:

- A procedure for passing reachability information about networks to other routers
- A procedure for receiving reachability information from other routers
- A procedure for determining best paths based on the reachability information it has, and for recording this information in a route table
- A procedure for reacting to, compensating for, and advertising topology changes in a network
A few issues common to any routing protocol are path determination, metrics, convergence, and load balancing.
All subnets within a network must be connected to a router, and wherever a router has an interface on a network, that interface must have an address on the network. This address is the originating point for reachability information.
Many point-to-point links are configured as "unnumbered" links (that is, no address is assigned to the connected point-to-point interface) so as to conserve addresses. But unnumbered links do not violate the rule that every interface must have an address; they use another address already configured on the router, usually a loopback address, as a proxy address.
Figure 4-1 shows a simple three-router network. Router A recognizes networks 192.168.1.0, 192.168.2.0, and 192.168.3.0 because it has interfaces on those networks with corresponding addresses and appropriate address masks. Likewise, Router B recognizes 192.168.3.0, 192.168.4.0, 192.168.5.0, and 192.168.6.0; Router C recognizes 192.168.6.0, 192.168.7.0, and 192.168.1.0. Each interface implements the data link and physical protocols of the network to which it is attached, so the router also recognizes the state of the network (up or down).
At first glance, the information-sharing procedure seems simple. Look at Router A:

1. Router A examines its IP addresses and associated masks and deduces that it is attached to networks 192.168.1.0, 192.168.2.0, and 192.168.3.0.
2. Router A enters these networks into its route table, along with some flag indicating that the networks are directly connected.
3. Router A places the information into a packet: "My directly connected networks are 192.168.1.0, 192.168.2.0, and 192.168.3.0."
4. Router A transmits copies of this update packet to Routers B and C.
Routers B and C, having performed the same steps, have sent updates with their directly connected networks to A. Router A enters the received information into its route table, along with the source address of the router that sent the update packet. Router A now recognizes all the networks and the addresses of the routers to which they are attached.
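The exchange just described can be modeled with a short sketch. This is a toy illustration, not any real protocol's packet format or table structure; the addresses are those of Figure 4-1, and the function and dictionary names are invented for the example.

```python
# Toy model of the exchange described above: each router advertises its
# directly connected networks, and a neighbor records each learned network
# along with the router it was learned from. Purely illustrative.

directly_connected = {
    "A": ["192.168.1.0", "192.168.2.0", "192.168.3.0"],
    "B": ["192.168.3.0", "192.168.4.0", "192.168.5.0", "192.168.6.0"],
    "C": ["192.168.6.0", "192.168.7.0", "192.168.1.0"],
}

def build_route_table(router, updates):
    """Start with connected routes, then add routes learned from neighbors."""
    table = {net: "directly connected" for net in directly_connected[router]}
    for neighbor, networks in updates.items():
        for net in networks:
            # Keep an existing entry (connected, or learned earlier) if present.
            table.setdefault(net, f"via {neighbor}")
    return table

# Router A processes the updates it received from Routers B and C.
table_a = build_route_table("A", {"B": directly_connected["B"],
                                  "C": directly_connected["C"]})
for net, route in sorted(table_a.items()):
    print(net, "->", route)
```

Router A ends up knowing all seven networks, just as the text describes; note that this naive version keeps only the first route it hears for each network, which is exactly why a tie-breaking metric (discussed next) is needed.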
This procedure does seem quite simple. So why are routing protocols so much more complicated than this? Look again at Figure 4-1 and consider a few questions: How should Router A pass along the information it has received from Routers B and C? How frequently should updates be sent? When two routes to the same network exist, as they do here, how is the best one chosen? What happens when a link or router fails? These questions are almost as simplistic as the preceding preliminary explanation of routing protocols, but they should give you an indication of some of the issues that contribute to the complexity of the protocols. Each routing protocol addresses these questions one way or another, as will become clear in the following sections and chapters.
When there are multiple routes to the same destination, a router must have a mechanism for calculating the best path. A metric is a variable assigned to routes as a means of ranking them from best to worst or from most preferred to least preferred. Consider the following example of why metrics are needed.
Assuming that information sharing has properly occurred in the network of Figure 4-1, Router A might have a route table that looks like Table 4-1.
This route table says that the first three networks are directly connected and that no routing is needed from Router A to reach them, which is correct. The last four networks, according to this table, can be reached via Router B or Router C. This information is also correct. But if network 192.168.7.0 can be reached via either Router B or Router C, which path is preferable? Metrics are needed to rank the alternatives.
Different routing protocols use different metrics. For example, RIP defines the "best" route as the one with the fewest router hops; EIGRP defines the "best" route based on a combination of the lowest bandwidth along the route and the total delay of the route. The following sections provide basic definitions of these and other commonly used metrics. Further complexities, such as how some routing protocols (EIGRP, for example) combine multiple parameters into a single metric and how routes with identical metric values are handled, are covered later, in the protocol-specific chapters of this book.
A hop-count metric simply counts router hops. For instance, from Router A, it is one hop to network 192.168.5.0 if packets are sent out interface 192.168.3.1 (through Router B) and two hops if packets are sent out 192.168.1.1 (through Routers C and B). Assuming hop count is the only metric being applied, the best route is the one with the fewest hops, in this case, A-B.
But is the A-B link really the best path? If the A-B link is a DS-0 link and the A-C and C-B links are T-1 links, the two-hop route might actually be best, because bandwidth plays a role in how efficiently traffic travels through the network.
A bandwidth metric would choose a higher-bandwidth path over a lower-bandwidth path. However, bandwidth by itself still might not be a good metric. What if one or both of the T1 links are heavily loaded with other traffic while the 56K link is lightly loaded? Or what if the higher-bandwidth link also has a higher delay?
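To make the trade-off concrete, the following sketch compares the two metrics on the two candidate paths from the example. The 56-kbps and T1 figures come from the text; the path names and selection logic are a simplification invented for illustration.

```python
# Two candidate paths from Router A to network 192.168.5.0: the direct
# DS-0 link to B (treated as 56 kbps, as in the text) versus two T1 links
# (1544 kbps each) through C. Which path "wins" depends on the metric.

paths = {
    "A-B":   {"hops": 1, "link_kbps": [56]},           # direct DS-0 link
    "A-C-B": {"hops": 2, "link_kbps": [1544, 1544]},   # two T1 links
}

# Hop count: fewer hops is better.
best_by_hops = min(paths, key=lambda p: paths[p]["hops"])

# Bandwidth: a path is only as fast as its slowest link, so prefer the
# path whose minimum link bandwidth is highest.
best_by_bw = max(paths, key=lambda p: min(paths[p]["link_kbps"]))

print("hop count picks:", best_by_hops)   # A-B
print("bandwidth picks:", best_by_bw)     # A-C-B
```

The two metrics disagree, which is precisely the point of the paragraph above: neither hop count nor raw bandwidth alone captures everything that matters.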
The load metric reflects the amount of traffic currently utilizing the links along the path. The best path is the one with the lowest load.
Unlike hop count and bandwidth, the load on a route changes, and, therefore, the metric will change. Care must be taken here. If the metric changes too frequently, route flapping (the frequent change of preferred routes) might occur. Route flaps can have adverse effects on the router's CPU, the bandwidth of the data links, and the overall stability of the network.
Delay is a measure of the time a packet takes to traverse a route. A routing protocol using delay as a metric would choose the path with the least delay as the best path. There might be many ways to measure delay. Delay might take into account not only the delay of the links along the route, but also such factors as router latency and queuing delay. On the other hand, the delay of a route might not be measured at all; it might be a sum of static quantities defined for each interface along the path. Each individual delay quantity would be an estimate based on the type of link to which the interface is connected.
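The "sum of static quantities" approach described at the end of the paragraph can be sketched as follows. The per-link microsecond figures and link-type names here are hypothetical defaults chosen for illustration, not values mandated by any particular protocol.

```python
# Static delay estimation: each interface contributes a fixed delay
# estimate based on its link type, and the route's delay is the sum of
# the estimates along the path. The figures below are illustrative only.

delay_us_by_link_type = {
    "serial_56k": 20000,   # hypothetical estimate for a slow serial link
    "t1":         20000,   # hypothetical estimate for a T1 serial link
    "ethernet":    1000,   # hypothetical estimate for a LAN link
}

def path_delay(link_types):
    """Total static delay of a route, in microseconds."""
    return sum(delay_us_by_link_type[t] for t in link_types)

print(path_delay(["t1", "ethernet"]))   # 21000
print(path_delay(["serial_56k"]))       # 20000
```

Note that nothing here is measured: the values never change with actual traffic, which is what distinguishes this style of delay metric from the load metric above.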
Reliability measures the likelihood that the link will fail in some way and can be either variable or fixed. Examples of variable-reliability metrics are the number of times a link has failed, or the number of errors it has received within a certain time period. Fixed-reliability metrics are based on known qualities of a link as determined by the network administrator. The path with highest reliability would be selected as best.
The cost metric is configured by a network administrator to reflect more or less preferred routes. Cost might be defined by any policy or link characteristic or might reflect the arbitrary judgment of the network administrator; therefore, "cost" is a term of convenience describing a dimensionless metric.
The term cost is often used as a generic term when speaking of route choices. For example, "RIP chooses the lowest-cost path based on hop count." Another generic term is shortest, as in "RIP chooses the shortest path based on hop count." When used in this context, lowest-cost (or highest-cost) and shortest (or longest) merely refer to a routing protocol's view of paths based on its specific metrics.
A dynamic routing protocol must include a set of procedures for a router to inform other routers about its directly connected networks, to receive and process the same information from other routers, and to pass along the information it receives from other routers. Further, a routing protocol must define a metric by which best paths might be determined.
A further criterion for routing protocols is that the reachability information in the route tables of all routers in the network must be consistent. If Router A in Figure 4-1 determines that the best path to network 192.168.5.0 is via Router C and if Router C determines that the best path to the same network is through Router A, Router A will send packets destined for 192.168.5.0 to C, C will send them back to A, A will again send them to C, and so on. This continuous circling of traffic between two or more destinations is referred to as a routing loop.
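The loop described above is easy to demonstrate with a toy forwarding function. The route tables here are deliberately inconsistent, as in the example, and TTL handling is reduced to a simple counter; real routers decrement the IP TTL field and discard the packet when it reaches zero.

```python
# Toy illustration of the routing loop described above: Router A's next
# hop for 192.168.5.0 is C, and Router C's next hop is A. The packet
# bounces between them until its TTL expires and it is discarded.

next_hop = {"A": "C", "C": "A"}   # inconsistent route tables for 192.168.5.0

def forward(start, ttl=8):
    """Follow next hops until the TTL counter runs out; return the path taken."""
    router, path = start, [start]
    while ttl > 0:
        router = next_hop[router]
        path.append(router)
        ttl -= 1
    return path   # packet dropped here; it never reached 192.168.5.0

print(forward("A"))   # bounces A, C, A, C, ... until TTL expires
```

Without the TTL safeguard, the loop would circulate the packet forever; with it, the packet is merely lost, which is why consistent route tables, and fast convergence toward them, matter.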
The process of bringing all route tables to a state of consistency is called convergence. The time it takes to share information across a network and for all routers to calculate best paths is the convergence time.
Figure 4-2 shows a network that was converged, but now a topology change has occurred. The link between the two left-most routers has failed; both routers, being directly connected, know about the failure from the data link protocol and proceed to inform their neighbors of the unavailable link. The neighbors update their route tables accordingly and inform their neighbors, and the process continues until all routers know about the change.
Notice that at time t2 the three left-most routers recognize the topology change but the three right-most routers have not yet received that information. Those three have old information and will continue to switch packets accordingly. It is during this intermediate time, when the network is in an unconverged state, that routing errors might occur. Therefore, convergence time is an important factor in any routing protocol. The faster a network can reconverge after a topology change, the better.
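The wave of updates in Figure 4-2 can be modeled as rounds of neighbor-to-neighbor propagation. The six-router chain, the router names, and the notion of a discrete "round" are simplifications standing in for real topologies and update timing.

```python
# Hop-by-hop propagation of a topology change, in the spirit of Figure 4-2:
# six routers in a chain; R1 detects a failed link, and in each round every
# router that knows about the change tells its neighbors. The round count
# is a crude stand-in for convergence time.

neighbors = {
    "R1": ["R2"], "R2": ["R1", "R3"], "R3": ["R2", "R4"],
    "R4": ["R3", "R5"], "R5": ["R4", "R6"], "R6": ["R5"],
}

def rounds_to_converge(origin):
    informed = {origin}
    rounds = 0
    while len(informed) < len(neighbors):
        # Every informed router updates its neighbors in this round.
        informed |= {n for r in informed for n in neighbors[r]}
        rounds += 1
    return rounds

print(rounds_to_converge("R1"))   # 5 rounds before all six routers know
```

Until the last round completes, routers at the far end of the chain are still forwarding on stale information, which is the window during which the routing errors described above can occur.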
Recall from Chapter 3, “Static Routing,” that load balancing is the practice of distributing traffic among multiple paths to the same destination, so as to use bandwidth efficiently. As an example of the usefulness of load balancing, consider Figure 4-1 again. All the networks in Figure 4-1 are reachable from two paths. If a device on 192.168.2.0 sends a stream of packets to a device on 192.168.6.0, Router A might send them all via Router B or Router C. In both cases, the network is one hop away. However, sending all packets on a single route probably is not the most efficient use of available bandwidth. Instead, load balancing should be implemented to alternate traffic between the two paths. As noted in Chapter 3, load balancing can be equal cost or unequal cost, and per packet or per destination.
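The per-destination and per-packet flavors mentioned at the end of the paragraph can be sketched as follows. Both functions are simplifications assuming the two equal-cost paths via Routers B and C from Figure 4-1; real routers implement these decisions in the forwarding hardware or switching path.

```python
# Sketch of per-destination versus per-packet load balancing across
# Router A's two equal-cost paths. Illustrative only.
import itertools

paths = ["via B", "via C"]

def per_destination(dst_ip):
    # Hash the destination so a given host always uses the same path
    # (packets to one destination are never reordered across paths).
    return paths[hash(dst_ip) % len(paths)]

_rr = itertools.cycle(paths)
def per_packet():
    # Strict alternation spreads traffic evenly across both links,
    # at the cost of possible packet reordering within a flow.
    return next(_rr)

print([per_packet() for _ in range(4)])   # alternates between the two paths
print(per_destination("192.168.6.10") == per_destination("192.168.6.10"))  # True
```

The trade-off shown in the comments is the usual one: per-packet balancing uses bandwidth most evenly, while per-destination balancing preserves packet ordering for each conversation.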