
Firewall Settings > QoS Mapping

Quality of Service (QoS) refers to a diversity of methods intended to provide predictable network behavior and performance. This sort of predictability is vital to certain types of applications, such as Voice over IP (VoIP), multimedia content, or business-critical applications such as order or credit-card processing. No amount of bandwidth can provide this sort of predictability, because any amount of bandwidth will ultimately be used to its capacity at some point in a network. Only QoS, when configured and implemented correctly, can properly manage traffic, and guarantee the desired levels of network service.

This section contains the following subsections: Classification, Marking, Conditioning, Site to Site VPN over QoS Capable Networks, Site to Site VPN over Public Networks, 802.1p and DSCP QoS, and Bandwidth Management.

Classification

Classification is necessary as a first step so that traffic in need of management can be identified. SonicOS Enhanced uses Access Rules as the interface to classification of traffic. This provides fine controls using combinations of Address Object, Service Object, and Schedule Object elements, allowing for classification criteria as general as all HTTP traffic and as specific as SSH traffic from hostA to serverB on Wednesdays at 2:12am.

SonicOS Enhanced on SonicWALL NSA series appliances has the ability to recognize, map, modify, and generate the industry-standard external CoS designators, DSCP and 802.1p (refer to 802.1p and DSCP QoS).

Once identified, or classified, traffic can be managed. Management can be performed internally by SonicOS’ BWM, which is perfectly effective as long as the network is a fully contained autonomous system. Once external or intermediate elements are introduced, such as foreign network infrastructures with unknown configurations, or other hosts contending for bandwidth (e.g. the Internet), the ability to offer guarantees and predictability is diminished. In other words, as long as the endpoints of the network and everything in between are within your management, BWM will work exactly as configured. Once external entities are introduced, the precision and efficacy of BWM configurations can begin to degrade.

But all is not lost. Once SonicOS Enhanced classifies the traffic, it can tag the traffic to communicate this classification to certain external systems that are capable of abiding by CoS tags; thus they too can participate in providing QoS.

Note         Many service providers do not support CoS tags such as 802.1p or DSCP. Also, most network equipment with standard configurations will not be able to recognize 802.1p tags, and could drop tagged traffic.
Although DSCP will not cause compatibility issues, many service providers will simply strip or ignore the DSCP tags, disregarding the code points.
If you wish to use 802.1p or DSCP marking on your network or your service provider’s network, you must first establish that these methods are supported. Verify that your internal network equipment can support CoS priority marking, and that it is correctly configured to do so. Check with your service provider – some offer fee-based support for QoS using these CoS methods.

Marking

Once the traffic has been classified, if it is to be handled by QoS capable external systems (e.g. CoS aware switches or routers as might be available on a premium service provider’s infrastructure, or on a private WAN), it must be tagged so that the external systems can make use of the classification, and provide the correct handling and Per Hop Behaviors (PHB).

Originally, this was attempted at the IP layer (layer 3) with RFC791’s three Precedence bits and the RFC1349 ToS (type of service) field, but this was used by a grand total of 17 people throughout history. Its successor, RFC2474, introduced the much more practical and widely used DSCP (Differentiated Services Code Point) which offered up to 64 classifications, as well as user-definable classes. DSCP was further enhanced by RFC2598 (Expedited Forwarding, intended to provide leased-line behaviors) and RFC2597 (Assured Forwarding levels within classes, also known as Gold, Silver, and Bronze levels).

DSCP is a safe marking method for traffic that traverses public networks because there is no risk of incompatibility. At the very worst, a hop along the path might disregard or strip the DSCP tag, but it will rarely mistreat or discard the packet.

The other prevalent method of CoS marking is IEEE 802.1p. 802.1p occurs at the MAC layer (layer 2) and is closely related to IEEE 802.1Q VLAN marking, sharing the same 16-bit field, although it is actually defined in the IEEE 802.1D standard. Unlike DSCP, 802.1p will only work with 802.1p capable equipment, and is not universally interoperable. Additionally, 802.1p, because of its different packet structure, can rarely traverse wide-area networks, even private WANs. Nonetheless, 802.1p is gaining wide support among Voice and Video over IP vendors, so a solution for supporting 802.1p across network boundaries (i.e. WAN links) was introduced in the form of 802.1p to DSCP mapping.

802.1p to DSCP mapping allows 802.1p tags from one LAN to be mapped to DSCP values by SonicOS Enhanced, allowing the packets to safely traverse WAN links. When the packets arrive on the other side of the WAN or VPN, the receiving SonicOS Enhanced appliance can then map the DSCP tags back to 802.1p tags for use on that LAN. Refer to 802.1p and DSCP QoS for more information.

Conditioning

The traffic can be conditioned (or managed) using any of the many policing, queuing, and shaping methods available. SonicOS provides internal conditioning capabilities with its Egress and Ingress Bandwidth Management (BWM), detailed in Bandwidth Management. SonicOS’s BWM is a perfectly effective solution for fully autonomous private networks with sufficient bandwidth, but can become somewhat less effective as more unknown external network elements and bandwidth contention are introduced. Refer to the Example Scenario for a description of contention issues.

Site to Site VPN over QoS Capable Networks

If the network path between the two endpoints is QoS aware, SonicOS can DSCP tag the inner (encapsulated) packet so that it is interpreted correctly at the other side of the tunnel, and it can also DSCP tag the outer ESP encapsulating packet so that its class can be interpreted and honored by each hop along the transit network. SonicOS can map 802.1p tags created on the internal networks to DSCP tags so that they can safely traverse the transit network. Then, when the packets are received on the other side, the receiving SonicWALL appliance can translate the DSCP tags back to 802.1p tags for interpretation and honoring by that internal network.

Site to Site VPN over Public Networks

SonicOS integrated BWM is very effective in managing traffic between VPN connected networks because ingress and egress traffic can be classified and controlled at both endpoints. If the network between the endpoints is not QoS aware, it treats all VPN ESP traffic equally. Because there is typically no control over these intermediate networks or their paths, it is difficult to fully guarantee QoS, but BWM can still help to provide more predictable behavior.

[Figure: site_to_site_QoS.jpg]

To provide end-to-end QoS, business-class service providers are increasingly offering traffic conditioning services on their IP networks. These services typically depend on the customer premise equipment to classify and tag the traffic, generally using a standard marking method such as DSCP. SonicOS Enhanced has the ability to DSCP mark traffic after classification, as well as the ability to map 802.1p tags to DSCP tags for external network traversal and CoS preservation. For VPN traffic, SonicOS can DSCP mark not only the internal (payload) packets, but the external (encapsulating) packets as well so that QoS capable service providers can offer QoS even on encrypted VPN traffic.

The actual conditioning method employed by service providers varies from one to the next, but it generally involves a class-based queuing method such as Weighted Fair Queuing for prioritizing traffic, as well a congestion avoidance method, such as tail-drop or Random Early Detection.

802.1p and DSCP QoS

The following sections detail the 802.1p standard and DSCP QoS. These features are supported on SonicWALL NSA platforms, except for the SonicWALL NSA 210 appliance.

Enabling 802.1p

SonicOS Enhanced supports layer 2 and layer 3 CoS methods for broad interoperability with external systems participating in QoS enabled environments. The layer 2 method is the IEEE 802.1p standard, wherein 3 bits of an additional 16-bit field inserted into the header of the Ethernet frame can be used to designate the priority of the frame, as illustrated in the following figure:

[Figure: 1-ethernet_data_frame.jpg]
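To make the field layout concrete, the following short Python sketch packs and unpacks the 16-bit Tag Control Information field that carries the 3-bit 802.1p priority. The layout assumed here is the standard IEEE 802.1Q one (3-bit priority, 1-bit CFI/DEI, 12-bit VLAN ID); the helper names are illustrative only, not SonicOS code.

    def pack_tci(priority, cfi, vlan_id):
        """Build the 16-bit 802.1Q Tag Control Information field."""
        assert 0 <= priority <= 7 and cfi in (0, 1) and 0 <= vlan_id <= 0xFFF
        return (priority << 13) | (cfi << 12) | vlan_id

    def unpack_tci(tci):
        """Split a 16-bit TCI into (802.1p priority, CFI, VLAN ID)."""
        return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0xFFF

    # A voice frame tagged with 802.1p priority 6 on VLAN ID 0
    # (SonicOS-inserted tags bear VLAN ID 0, as noted below):
    tci = pack_tci(priority=6, cfi=0, vlan_id=0)
    print(hex(tci), unpack_tci(tci))   # 0xc000 (6, 0, 0)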

802.1p support begins by enabling 802.1p marking on the interfaces which you wish to have process 802.1p tags. 802.1p can be enabled on any Ethernet interface on any SonicWALL appliance.

The behavior of the 802.1p field within these tags can be controlled by Access Rules. The default 802.1p Access Rule action of None will reset existing 802.1p tags to 0, unless otherwise configured (see Managing QoS Marking for details).

Enabling 802.1p marking will allow the target interface to recognize incoming 802.1p tags generated by 802.1p capable network devices, and will also allow the target interface to generate 802.1p tags, as controlled by Access Rules. Frames that have 802.1p tags inserted by SonicOS will bear VLAN ID 0.

802.1p tags will only be inserted according to Access Rules, so enabling 802.1p marking on an interface will not, at its default setting, disrupt communications with 802.1p-incapable devices.

802.1p requires specific support by the networking devices with which you wish to use this method of prioritization. Many voice and video over IP devices provide support for 802.1p, but the feature must be enabled. Check your equipment’s documentation for information on 802.1p support if you are unsure. Similarly, many server and host network cards (NICs) have the ability to support 802.1p, but the feature is usually disabled by default. On Win32 operating systems, you can check for and configure 802.1p settings on the Advanced tab of the Properties page of your network card. If your card supports 802.1p, it will list it as 802.1p QoS, 802.1p Support, QoS Packet Tagging, or something similar:

[Figure: three.jpg]

To process 802.1p tags, the feature must be present and enabled on the network interface. The network interface will then be able to generate packets with 802.1p tags, as governed by QoS capable applications. By default, general network communications will not have tags inserted so as to maintain compatibility with 802.1p-incapable devices.

Note         If your network interface does not support 802.1p, it will not be able to process 802.1p tagged traffic, and will ignore it. Make certain when defining Access Rules to enable 802.1p marking that the target devices are 802.1p capable.

It should also be noted that when performing a packet capture (for example, with the diagnostic tool Ethereal) on 802.1p capable devices, some 802.1p capable devices will not show the 802.1q header in the packet capture. Conversely, a packet capture performed on an 802.1p-incapable device will almost invariably show the header, but the host will be unable to process the packet.

Before moving on to Managing QoS Marking, it is important to introduce ‘DSCP Marking’ because of the potential interdependency between the two marking methods, as well as to explain why the interdependency exists.

Example Scenario

[Figure: qos_example1.jpg]

In the scenario above, we have Remote Site 1 connected to ‘Main Site’ by an IPsec VPN. The company uses an internal 802.1p/DSCP capable VoIP phone system, with a private VoIP signaling server hosted at the Main Site. The Main Site has a mixed gigabit and Fast-Ethernet infrastructure, while Remote Site 1 is all Fast Ethernet. Both sites employ 802.1p capable switches for prioritization of internal traffic.

  1. PC-1 at Remote Site 1 is transferring a 23 terabyte PowerPoint™ presentation to File Server 1, and the 100 Mbps link between the workgroup switch and the upstream switch is completely saturated.

  2.    At the Main Site, a caller on the 802.1p/DSCP capable VoIP Phone 10.50.165.200 initiates a call to the person at VoIP phone 192.168.168.200. The calling VoIP phone 802.1p tags the traffic with priority tag 6 (voice), and DSCP tags the traffic with a tag of 48.

  3. If the link between the Core Switch and the firewall is a VLAN, some switches will include the received 802.1p priority tag, in addition to the DSCP tag, in the packet sent to the firewall; this behavior varies from switch to switch, and is often configurable.

  4. If the link between the Core Switch and the firewall is not a VLAN, there is no way for the switch to include the 802.1p priority tag. The 802.1p priority is removed, and the packet (including only the DSCP tag) is forwarded to the firewall.

  5. When the firewall sends the packet across the VPN/WAN link, it can include the DSCP tag in the packet, but it is not possible to include the 802.1p tag. This would have the effect of losing all prioritization information for the VoIP traffic, because when the packet arrived at the Remote Site, the switch would have no 802.1p MAC-layer information with which to prioritize the traffic. The Remote Site switch would treat the VoIP traffic the same as the lower-priority file transfer because of the link saturation, introducing delay and possibly even dropped packets to the VoIP flow, resulting in call quality degradation.

    So how can critical 802.1p priority information from the Main Site LAN persist across the VPN/WAN link to Remote Site LAN? Through the use of QoS Mapping.

    QoS Mapping is a feature which converts layer 2 802.1p tags to layer 3 DSCP tags so that they can safely traverse (in mapped form) 802.1p-incapable links; when the packet arrives for delivery to the next 802.1p-capable segment, QoS Mapping converts from DSCP back to 802.1p tags so that layer 2 QoS can be honored.

    In our above scenario, the firewall at the Main Site assigns a DSCP tag (e.g. value 48) to the VoIP packets, as well as to the encapsulating ESP packets, allowing layer 3 QoS to be applied across the WAN. This assignment can occur either by preserving the existing DSCP tag, or by mapping the value from an 802.1p tag, if present. When the VoIP packets arrive at the other side of the link, the mapping process is reversed by the receiving SonicWALL, mapping the DSCP tag back to an 802.1p tag.

  6. The receiving SonicWALL at the Remote Site is configured to map the DSCP tag range 48-55 to 802.1p tag 6. When the packet exits the SonicWALL, it will bear 802.1p tag 6. The switch will recognize it as voice traffic, and will prioritize it over the file transfer, guaranteeing QoS even in the event of link saturation.

DSCP Marking

DSCP (Differentiated Services Code Point) marking uses 6-bits of the 8-bit ToS field in the IP Header to provide up to 64 classes (or code points) for traffic. Since DSCP is a layer 3 marking method, there is no concern about compatibility as there is with 802.1p marking. Devices that do not support DSCP will simply ignore the tags, or at worst, they will reset the tag value to 0.

[Figure: 4-IP_packet.jpg]

The above diagram depicts an IP packet, with a close-up on the ToS portion of the header. The ToS bits were originally used for Precedence and ToS (delay, throughput, reliability, and cost) settings, but were later repurposed by RFC2474 for the more versatile DSCP settings.
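To relate the 6-bit DSCP value to the legacy Precedence and D/T/R bits shown in the table that follows, here is a small Python sketch; the helper name is illustrative only, not part of any SonicOS interface.

    def decode_tos_byte(tos):
        """Interpret one IP ToS/DS byte both ways: as a DSCP code point and as legacy Precedence + D/T/R."""
        dscp = tos >> 2                      # upper 6 bits form the DSCP code point
        precedence = tos >> 5                # legacy 3-bit IP Precedence
        flags = []
        if tos & 0x10: flags.append("D")     # legacy Delay bit
        if tos & 0x08: flags.append("T")     # legacy Throughput bit
        if tos & 0x04: flags.append("R")     # legacy Reliability bit
        return dscp, precedence, flags

    # AF11 (DSCP 10) occupies a ToS byte of 10 << 2 = 0x28:
    print(decode_tos_byte(10 << 2))          # (10, 1, ['T']) -- Precedence 1, T bit set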

The following table shows the commonly used code points, as well as their mapping to the legacy Precedence and ToS settings.

DSCP   DSCP Description             Legacy IP Precedence          Legacy IP ToS (D, T, R)

0      Best effort                  0 (Routine – 000)             -
8      Class 1                      1 (Priority – 001)            -
10     Class 1, gold (AF11)         1 (Priority – 001)            T
12     Class 1, silver (AF12)       1 (Priority – 001)            D
14     Class 1, bronze (AF13)       1 (Priority – 001)            D, T
16     Class 2                      2 (Immediate – 010)           -
18     Class 2, gold (AF21)         2 (Immediate – 010)           T
20     Class 2, silver (AF22)       2 (Immediate – 010)           D
22     Class 2, bronze (AF23)       2 (Immediate – 010)           D, T
24     Class 3                      3 (Flash – 011)               -
26     Class 3, gold (AF31)         3 (Flash – 011)               T
28     Class 3, silver (AF32)       3 (Flash – 011)               D
30     Class 3, bronze (AF33)       3 (Flash – 011)               D, T
32     Class 4                      4 (Flash Override – 100)      -
34     Class 4, gold (AF41)         4 (Flash Override – 100)      T
36     Class 4, silver (AF42)       4 (Flash Override – 100)      D
38     Class 4, bronze (AF43)       4 (Flash Override – 100)      D, T
40     Express forwarding           5 (CRITIC/ECP – 101)          -
46     Expedited forwarding (EF)    5 (CRITIC/ECP – 101)          D, T
48     Control                      6 (Internet Control – 110)    -
56     Control                      7 (Network Control – 111)     -

DSCP marking can be performed on traffic to/from any interface and to/from any zone type, without exception. DSCP marking is controlled by Access Rules, from the QoS tab, and can be used in conjunction with 802.1p marking, as well as with SonicOS’ internal bandwidth management.

DSCP Marking and Mixed VPN Traffic

Among their many security measures and characteristics, IPsec VPNs employ anti-replay mechanisms based upon monotonically incrementing sequence numbers added to the ESP header. Packets with duplicate sequence numbers are dropped, as are packets that do not adhere to sequence criteria. One such criterion governs the handling of out-of-order packets. SonicOS Enhanced provides a replay window of 64 packets, i.e. if an ESP packet for a Security Association (SA) is delayed by more than 64 packets, the packet will be dropped.
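The following Python sketch illustrates the kind of 64-packet sliding-window check described above; it follows the standard IPsec anti-replay algorithm (for example, RFC 2401, Appendix C) and is a conceptual illustration, not SonicOS source code.

    WINDOW = 64   # the 64-packet replay window described above

    class ReplayWindow:
        """Sliding-window anti-replay check over ESP sequence numbers (conceptual sketch)."""

        def __init__(self):
            self.highest = 0          # highest sequence number accepted so far
            self.bitmap = 0           # bit i set => sequence (highest - i) already seen

        def check_and_update(self, seq):
            """Return True and record seq if acceptable, False if the packet must be dropped."""
            if seq == 0:
                return False                           # ESP sequence numbers start at 1
            if seq > self.highest:                     # newer than anything seen: slide the window
                shift = seq - self.highest
                self.bitmap = ((self.bitmap << shift) | 1) & ((1 << WINDOW) - 1)
                self.highest = seq
                return True
            offset = self.highest - seq
            if offset >= WINDOW:
                return False                           # older than the 64-packet window: drop
            if self.bitmap & (1 << offset):
                return False                           # duplicate sequence number: drop
            self.bitmap |= 1 << offset                 # in-window, unseen: accept and record
            return True

In the mixed-traffic situation described next, a best-effort ESP packet that falls more than 64 packets behind its SA’s highest delivered sequence number fails this check and is dropped.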

This should be considered when using DSCP marking to provide layer 3 QoS to traffic traversing a VPN. If a VPN tunnel is transporting a diversity of traffic, some of it DSCP tagged as high priority (e.g. VoIP) and some DSCP tagged as low priority or untagged/best-effort (e.g. FTP), your service provider will prioritize the handling and delivery of the high-priority ESP packets over the best-effort ESP packets. Under certain traffic conditions, this can result in the best-effort packets being delayed by more than 64 packets, causing them to be dropped by the receiving SonicWALL’s anti-replay defenses.

If symptoms of such a scenario emerge (e.g. excessive retransmissions of low-priority traffic), it is recommended that you create a separate VPN policy for the high-priority and low-priority classes of traffic. This is most easily accomplished by placing the high-priority hosts (e.g. the VoIP network) on their own subnet.

Configure for 802.1p CoS 4 – Controlled load

If you want to change the inbound mapping of DSCP tag 15 from its default 802.1p mapping of 1 to an 802.1p mapping of 2, it would have to be done in two steps because mapping ranges cannot overlap. Attempting to assign an overlapping mapping will give the error DSCP range already exists or overlaps with another range. First, you will have to remove 15 from its current end-range mapping to 802.1p CoS 1 (changing the end-range mapping of 802.1p CoS 1 to DSCP 14), then you can assign DSCP 15 to the start-range mapping on 802.1p CoS 2.
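The constraint is easy to see in a small sketch. The range table below is a hypothetical stand-in for the inbound DSCP-to-802.1p mapping (using the default ranges implied in this section), not the firewall’s internal representation:

    # Hypothetical stand-in for the inbound DSCP -> 802.1p range table
    # (keys are 802.1p CoS values, values are inclusive DSCP start/end ranges).
    inbound_map = {1: (8, 15), 2: (16, 23)}

    def ranges_overlap(a, b):
        return a[0] <= b[1] and b[0] <= a[1]

    # Moving DSCP 15 to CoS 2 in a single step would create an 8-15 / 15-23 overlap,
    # which is what the GUI rejects:
    assert ranges_overlap((8, 15), (15, 23))

    # Step 1: shrink the end of the 802.1p CoS 1 range from 15 to 14.
    inbound_map[1] = (8, 14)
    # Step 2: extend the start of the 802.1p CoS 2 range down to 15.
    inbound_map[2] = (15, 23)
    assert not ranges_overlap(inbound_map[1], inbound_map[2])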

QoS Mapping

The primary objective of QoS Mapping is to allow 802.1p tags to persist across non-802.1p compliant links (e.g. WAN links) by mapping them to corresponding DSCP tags before sending across the WAN link, and then mapping from DSCP back to 802.1p upon arriving at the other side:

[Figure: qos_mapping.jpg]

Note         Mapping will not occur until you assign Map as an action on the QoS tab of an Access Rule. The mapping table only defines the correspondence that will be employed by an Access Rule’s Map action.

[Figure: five.jpg]

For example, according to the default table, an 802.1p tag with a value of 2 will be outbound mapped to a DSCP value of 16, while a DSCP tag of 43 will be inbound mapped to an 802.1p value of 5.
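As a sketch of how such a correspondence can be represented, the table below encodes the default-style mapping described in this section (802.1p 2 maps outbound to DSCP 16, DSCP 40-47 map inbound to 802.1p 5, and so on). The complete set of ranges is inferred from the examples in this section, and the structure is illustrative rather than the firewall’s internal one.

    # Default-style mapping: each 802.1p CoS maps outbound to one DSCP value,
    # and inbound from an inclusive DSCP range back to that CoS.
    QOS_MAP = [
        # (802.1p CoS, outbound DSCP, inbound DSCP range)
        (0, 0,  (0, 7)),
        (1, 8,  (8, 15)),
        (2, 16, (16, 23)),
        (3, 24, (24, 31)),
        (4, 32, (32, 39)),
        (5, 40, (40, 47)),
        (6, 48, (48, 55)),
        (7, 56, (56, 63)),
    ]

    def dot1p_to_dscp(cos):
        """Outbound map: 802.1p tag -> DSCP value."""
        return next(dscp for c, dscp, _ in QOS_MAP if c == cos)

    def dscp_to_dot1p(dscp):
        """Inbound map: DSCP value -> 802.1p tag."""
        return next(c for c, _, (lo, hi) in QOS_MAP if lo <= dscp <= hi)

    print(dot1p_to_dscp(2))    # 16
    print(dscp_to_dot1p(43))   # 5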

Each of these mappings can be reconfigured. If you wanted to change the outbound mapping of 802.1p tag 4 from its default DSCP value of 32 to a DSCP value of 43, you can click the Configure icon for 4 – Controlled load and select the new To DSCP value from the drop-down box:

[Figure: Firewall_Managing_QoS00011.jpg]

You can restore the default mappings by clicking the Reset QoS Settings button.

Managing QoS Marking

QoS marking is configured from the QoS tab of Access Rules under the Firewall > Access Rules page of the management interface. Both 802.1p and DSCP marking as managed by SonicOS Enhanced Access Rules provide four actions: None, Preserve, Explicit, and Map. The default action for DSCP is Preserve and the default action for 802.1p is None.

The following table describes the behavior of each action on both methods of marking:

Action: None
802.1p (layer 2 CoS): When packets matching this class of traffic (as defined by the Access Rule) are sent out the egress interface, no 802.1p tag will be added.
DSCP (layer 3): The DSCP tag is explicitly set (or reset) to 0.
Notes: If the target interface for this class of traffic is a VLAN subinterface, the 802.1p portion of the 802.1q tag will be explicitly set to 0. If this class of traffic is destined for a VLAN and is using 802.1p for prioritization, a specific Access Rule using the Preserve, Explicit, or Map action should be defined for this class of traffic.

Action: Preserve
802.1p (layer 2 CoS): The existing 802.1p tag will be preserved.
DSCP (layer 3): The existing DSCP tag value will be preserved.

Action: Explicit
802.1p (layer 2 CoS): An explicit 802.1p tag value (0-7) can be assigned from a drop-down menu.
DSCP (layer 3): An explicit DSCP tag value (0-63) can be assigned from a drop-down menu.
Notes: If either the 802.1p or the DSCP action is set to Explicit while the other is set to Map, the explicit assignment occurs first, and then the other is mapped according to that assignment.

Action: Map
802.1p (layer 2 CoS): The mapping defined on the Firewall Settings > QoS Mapping page will be used to map from a DSCP tag to an 802.1p tag.
DSCP (layer 3): The mapping defined on the Firewall Settings > QoS Mapping page will be used to map from an 802.1p tag to a DSCP tag. An additional checkbox, Allow 802.1p Marking to override DSCP values, will be presented. Selecting this checkbox asserts the mapped 802.1p value over any DSCP value that might have been set by the client, which is useful for overriding clients that set their own DSCP CoS values.
Notes: If Map is set as the action for both DSCP and 802.1p, mapping will only occur in one direction: if the packet is from a VLAN and arrives with an 802.1p tag, then the DSCP tag will be mapped from the 802.1p tag; if the packet is destined to a VLAN, then the 802.1p tag will be mapped from the DSCP tag.

For example, refer to the following figure, which depicts a bi-directional DSCP tag action.

[Figure: qos_mapping.jpg]

HTTP access from a Web-browser on 192.168.168.100 to the Web server on 10.50.165.2 will result in the tagging of the inner (payload) packet and the outer (encapsulating ESP) packets with a DSCP value of 8. When the packets emerge from the other end of the tunnel, and are delivered to 10.50.165.2, they will bear a DSCP tag of 8. When 10.50.165.2 sends response packets back across the tunnel to 192.168.168.100 (beginning with the very first SYN/ACK packet) the Access Rule will tag the response packets delivered to 192.168.168.100 with a DSCP value of 8.

This behavior applies to all four QoS action settings for both DSCP and 802.1p marking.

One practical application for this behavior would be configuring an 802.1p marking rule for traffic destined for the VPN zone. Although 802.1p tags cannot be sent across the VPN, reply packets coming back across the VPN can be 802.1p tagged on egress from the tunnel. This requires that 802.1p tagging is active on the physical egress interface, and that the [Zone] > VPN Access Rule has an 802.1p marking action other than None.

 

After ensuring 802.1p compatibility with your relevant network devices, and enabling 802.1p marking on applicable SonicWALL interfaces, you can begin configuring Access Rules to manage 802.1p tags.

Referring to the example scenario above, the Remote Site 1 network could have two Access Rules configured as follows:

Setting                                        Access Rule 1        Access Rule 2

General Tab
Action                                         Allow                Allow
From Zone                                      LAN                  VPN
To Zone                                        VPN                  LAN
Service                                        VOIP                 VOIP
Source                                         LAN Primary Subnet   Main Site Subnets
Destination                                    Main Site Subnets    LAN Primary Subnet
Users Allowed                                  All                  All
Schedule                                       Always on            Always on
Enable Logging                                 Enabled              Enabled
Allow Fragmented Packets                       Enabled              Enabled

QoS Tab
DSCP Marking Action                            Map                  Map
Allow 802.1p Marking to override DSCP values   Enabled              Enabled
802.1p Marking Action                          Map                  Map

The first Access Rule (governing LAN>VPN) would have the following effects:

Assuming returned traffic has been DSCP tagged (CoS = 48) by the SonicWALL at the Main Site, the return traffic will be 802.1p tagged with CoS = 6 on egress.

To examine the effects of the second Access Rule (VPN>LAN), we’ll look at the Access Rules configured at the Main Site.

Setting                                        Access Rule 1           Access Rule 2

General Tab
Action                                         Allow                   Allow
From Zone                                      LAN                     VPN
To Zone                                        VPN                     LAN
Service                                        VOIP                    VOIP
Source                                         LAN Subnets             Remote Site 1 Subnets
Destination                                    Remote Site 1 Subnets   LAN Subnets
Users Allowed                                  All                     All
Schedule                                       Always on               Always on
Enable Logging                                 Enabled                 Enabled
Allow Fragmented Packets                       Enabled                 Enabled

QoS Tab
DSCP Marking Action                            Map                     Map
Allow 802.1p Marking to override DSCP values   Enabled                 Enabled
802.1p Marking Action                          Map                     Map

VoIP traffic (as defined by the Service Group) arriving from Remote Site 1 Subnets across the VPN destined to LAN Subnets on the LAN zone at the Main Site would hit the Access Rule for inbound VoIP calls. Traffic arriving at the VPN zone will not have any 802.1p tags, only DSCP tags.

Bandwidth Management

Bandwidth management (BWM) is a fully integrated QoS service: classification and shaping are performed on the SonicWALL appliance itself, eliminating the dependency on external systems and thus obviating the need for marking. It is nonetheless possible to configure BWM and QoS (layer 2 and/or layer 3 marking) settings concurrently on a single Access Rule. This allows external systems to benefit from the classification performed on the SonicWALL even after it has already shaped the traffic. For details on how to configure BWM, see Methods of Configuring Bandwidth Management.

Outbound Bandwidth Management

The available bandwidth on a WAN link is tracked by adjusting a link credit (token) pool for each packet sent. Provided that the link has not yet reached the point of saturation, the prioritized queues are deemed eligible for processing.

[Figure: fifteen.jpg]

Like class-based queuing (CBQ), SonicOS BWM is based on a class structure, where traffic queues are classified according to Access Rules (for example SSH, Telnet, or HTTP) and then scheduled according to their prescribed priority. Each participating Access Rule is assigned three values: Guaranteed bandwidth, Maximum bandwidth, and Bandwidth priority. Scheduling prioritization is achieved by assignment to one of eight priority queues, starting at 0 (zero) for the highest priority and descending to 7 (seven) for the lowest priority. The resulting queuing hierarchy can be best thought of as a node tree structure that is always one level deep, where all nodes are leaf nodes, containing no children.

Queue processing utilizes a time-division scheme of approximately 1/256th of a second per time slice. Within a time slice, evaluation begins with the priority 0 queues, and on a packet-by-packet basis transmission eligibility is determined by measuring the packet’s length against the queue credit pool. If sufficient credit is available, the packet is transmitted and the queue and link credit pools are decremented accordingly. As long as packets remain in the queue, and as long as guaranteed link and queue credits are available, packets from that queue will continue to be processed. When a queue’s guaranteed credits are depleted, the next queue at that priority is processed. The same process is repeated for the remaining priority queues, and upon completing priority 7, processing begins again with priority 0.

The scheduling for excess bandwidth is strict priority, with per-packet round-robin within each priority. In other words, if there is excess bandwidth for a given time-slice all the queues within that priority would take turns sending packets until the excess was depleted, and then processing would move to the next priority.

This credit-based method obviates the need for CBQ’s concept of overlimit, and addresses one of the largest problems of traditional CBQ, namely, bursty behavior (which can easily flood downstream devices and links). This more prudent approach spares SonicOS the wasted CPU cycles that would normally be incurred by the need for re-transmission due to the saturation of downstream devices, as well as avoiding other congestive and degrading behaviors such as TCP slow-start (see Sally Floyd’s Limited Slow-Start for TCP with Large Congestion Windows), and Global Synchronization (as described in RFC 2884):

Queue management algorithms traditionally manage the length of packet queues in the router by dropping packets only when the buffer overflows. A maximum length for each queue is configured. The router will accept packets till this maximum size is exceeded, at which point it will drop incoming packets. New packets are accepted when buffer space allows. This technique is known as Tail Drop. This method has served the Internet well for years, but has the several drawbacks. Since all arriving packets (from all flows) are dropped when the buffer overflows, this interacts badly with the congestion control mechanism of TCP. A cycle is formed with a burst of drops after the maximum queue size is exceeded, followed by a period of underutilization at the router as end systems back off. End systems then increase their windows simultaneously up to a point where a burst of drops happens again. This phenomenon is called Global Synchronization. It leads to poor link utilization and lower overall throughput. Another problem with Tail Drop is that a single connection or a few flows could monopolize the queue space, in some circumstances. This results in a lock out phenomenon leading to synchronization or other timing effects. Lastly, one of the major drawbacks of Tail Drop is that queues remain full for long periods of time. One of the major goals of queue management is to reduce the steady state queue size.

Algorithm for Outbound Bandwidth Management

Each packet passing through the SonicWALL is initially classified as either a Real Time or a Firewall packet. Firewall packets are user-generated packets that always pass through the BWM module. Real Time packets are usually firewall-generated packets that are not processed by the BWM module, and are implicitly given the highest priority. Real Time (firewall-generated) packets include:

Outbound BWM Packet Processing Path

  1. Determine that the packet is bound for the WAN zone.

  2. Determine that the packet is classifiable as a Firewall packet.

  3. Match the packet to an Access Rule to determine BWM setting.

  4. Queue the packet in the appropriate rule queue.
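A minimal Python sketch of this processing path, using hypothetical Packet and AccessRule structures rather than SonicOS internals:

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Packet:                      # hypothetical stand-in for a packet being forwarded
        dst_zone: str
        service: str
        is_real_time: bool = False     # firewall-generated traffic bypasses BWM

    @dataclass
    class AccessRule:                  # hypothetical stand-in for an Access Rule with BWM settings
        service: str
        bwm_enabled: bool = True
        def matches(self, pkt):
            return pkt.service == self.service

    rule_queues = {}                   # one FIFO of pending packets per BWM-enabled rule

    def process_outbound(packet, access_rules):
        """Steps 1-4 above: WAN-bound? Firewall packet? Match a rule, then enqueue."""
        if packet.dst_zone != "WAN":
            return "not subject to outbound BWM"                            # step 1
        if packet.is_real_time:
            return "Real Time packet: transmit immediately"                 # step 2
        rule = next((r for r in access_rules if r.matches(packet)), None)   # step 3
        if rule is None or not rule.bwm_enabled:
            return "no BWM setting: default handling"
        rule_queues.setdefault(rule.service, deque()).append(packet)        # step 4
        return "queued for scheduling"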

Guaranteed Bandwidth Processing

This algorithm depicts how all the policies use up their GBW (a code sketch follows the steps below).

  1. Start with a link credit equal to the available link BW.

  2. Initialize the class credit with the configured GBW for the rule.

  3. If the packet length is less than or equal to the class credit, transmit the packet and deduct the length from the class credit and the link credit.

  4. Choose the next packet from the queue and repeat step 3 until the class credit is exhausted or the rule queue is empty.

  5. Choose the next rule queue and repeat steps 2 through 4.
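A simplified Python sketch of the guaranteed-bandwidth pass described above, with packet sizes in bytes and hypothetical dictionary-based rule queues (an illustration under those assumptions, not firewall code):

    from collections import deque

    def gbw_pass(rule_queues, link_credit):
        """One time slice of Guaranteed Bandwidth processing.

        rule_queues: list of dicts such as
            {"name": "FTP", "gbw": 1280, "queue": deque([400, 1500]), "credit": 0}
        where "queue" holds pending packet lengths in bytes.
        """
        for rq in rule_queues:
            rq["credit"] += rq["gbw"]                    # step 2, with carry-over between slices
            while rq["queue"]:
                length = rq["queue"][0]
                if length > rq["credit"] or length > link_credit:
                    break                                # step 4: not enough class or link credit
                rq["queue"].popleft()                    # step 3: "transmit" and deduct
                rq["credit"] -= length
                link_credit -= length
        return link_credit                               # leftover bytes for the MBW pass

    # With the FTP queue from the Example of Outbound BWM below (GBW 1280, packets of 400B and 1500B):
    ftp = {"name": "FTP", "gbw": 1280, "queue": deque([400, 1500]), "credit": 0}
    print(gbw_pass([ftp], 12800))                        # 12400 -- the 1500B packet waits (credit 880)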

Maximum Bandwidth Processing

This algorithm depicts how the unutilized link BW is used up by the policies (a code sketch follows the steps below). We start with the highest priority and transmit packets from all the rule queues in a round-robin fashion until the link credit is exhausted or all queues are empty. Then we move on to the next lower priority and repeat the process.

  1. Start with a link credit equal to the leftover link BW after GBW utilization.

  2. Choose the highest priority.

  3. Initialize the class credit to (MBW - GBW).

  4. Check whether the length of a packet from the rule queue is below the class credit as well as the link credit.

  5. If yes, transmit the packet and deduct the length from the class credit and the link credit.

  6. Choose the next rule queue and repeat steps 3 through 5 until the link credit is exhausted or all the queues in this priority are empty.

  7. Choose the next lower priority and repeat steps 3 through 6.
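A matching sketch of the excess-bandwidth pass: strict priority across the eight levels, per-packet round-robin within a level, and each class limited to (MBW - GBW) extra bytes. The structures are the same hypothetical ones as in the previous sketch.

    def mbw_pass(rule_queues, link_credit):
        """Distribute leftover link credit: strict priority, per-packet round-robin within a priority.

        Each rule-queue dict additionally carries "mbw" and "priority" (0 = highest, 7 = lowest).
        """
        for prio in range(8):                            # steps 2 and 7: highest to lowest priority
            classes = [rq for rq in rule_queues if rq["priority"] == prio]
            for rq in classes:
                rq["excess"] = rq["mbw"] - rq["gbw"]     # step 3: per-class excess allowance
            progress = True
            while progress and link_credit > 0:
                progress = False
                for rq in classes:                       # step 6: round-robin, one packet per turn
                    if not rq["queue"]:
                        continue
                    length = rq["queue"][0]
                    if length > rq["excess"] or length > link_credit:
                        continue                         # step 4: over the class max or link credit
                    rq["queue"].popleft()                # step 5: "transmit" and deduct
                    rq["excess"] -= length
                    link_credit -= length
                    progress = True
        return link_credit

Under these assumptions, running gbw_pass and then mbw_pass over the four queues in the Example of Outbound BWM below mirrors that walk-through.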

Example of Outbound BWM

[Figure: sixteen.jpg]

The diagram above shows four policies configured for outbound BWM on a link with a capacity of 100 Kbps, which corresponds to 12800 bytes/sec. The table below gives the BWM values for each rule in bytes per second.

BWM values   FTP     H323    Yahoo Messenger   VNC

GBW          1280    2560    640               2560
MBW          2560    5120    1920              3200

  1.    For GBW processing, we start with the first queue in the rule queue list which is FTP. Link credit is 12800 and class credit is 1280. Pkt1 of 400B is sent out on the WAN link and link credit becomes 12400 and class credit becomes 880. Pkt2 is not sent out because there is not enough class credit to send 1500 Bytes. The remaining class credit is carried over to the next time slice.

  2. We move on to the next rule queue in this list which is for H323. Pkt1 of 1500B is sent out and link credit becomes 10900 and class credit for H323 becomes 1060. Pkt2 is also sent from queue hence link credit = 10200 and class credit = 360. Pkt3 is not sent since there is not enough class credit. The remaining class credit is carried over to the next time slice.

  3.    Now we move onto Yahoo Messenger queue. Since Pkt1 cannot be accommodated with its class credit of 640 Bytes, no packets are processed from this queue. However, its class credit is carried over to the next time slice.

  4. From VNC queue, Pkt1 and Pkt2 are sent out leaving link credit = 8000 and class credit = 360. Class credit is carried over.

  5. Since all the queues have been processed for GBW, we now move on to using up the leftover link credit of 8000.

  6.    Start off with the highest priority 0 and process all queues in this priority in a round robin fashion. H323 has Pkt3 of 500B which is sent since it can use up to max = 2560 (MBW-GBW). Now Link credit = 7500 and max = 2060.

  7.    Move to the next queue in this priority which is VNC queue. Pkt3 of 500B is sent out leaving link credit = 7000B and class max = 140 (MBW-GBW - 500).

  8. Move to the next queue in this priority. Since H323 queue is empty already we move to the next queue which is VNC again.

  9.    From VNC queue Pkt4 of 40B is sent out leaving link credit = 6960 and class max = 100. Pkt5 of 500B is not sent since class max is not enough.

  10. Now we move on to the next lower priority. Since priorities 1 through 3 are empty, we choose priority 4, which has the rule queue for FTP. Pkt2 of 1000B is sent, which leaves link credit = 5960 and class max = 280. Since there are no other queues in this priority, the FTP queue is processed again, but since class max is not enough for Pkt3 of 1500B, it is not sent.

  11. Move to the next lower priority, which is 7 for Yahoo Messenger. Pkt1 of 1200B is sent, leaving link credit = 4760 and class max = 80. Since no other queues exist in this priority, this queue is processed again. Pkt2 of 1500B is not sent since it cannot be accommodated with max = 80.

  12. At this point, all the queues under all priorities are processed for the current time slice.

Inbound Bandwidth Management

Inbound BWM can be used to shape inbound TCP and UDP traffic. TCP’s intrinsic flow-control behavior is used to manage ingress bandwidth, while CBQ is used by the ingress module to queue incoming UDP traffic. The TCP sending rate is inherently controlled by the rate at which ACKs are received; that is, TCP sends packets out on the network at the same rate as it receives ACKs. For IBWM, the sending rate of a TCP source is reduced by controlling the rate of ACKs returned to the source: by delaying an ACK, the round-trip time (RTT) for the flow is increased, thus reducing the source’s sending rate.

An ingress module monitors and records the ingress rate for each traffic class. It also monitors the egress ACKs and queues them if the ingress rate has to be reduced. The ACKs are then released according to ingress BW availability and the average rate.

[Figure: 17-inbound_flow.jpg]

Algorithm for Inbound Bandwidth Management

IBWM maintains eight priority queues, each holding the rule queues at that priority for which IBWM is enabled. The IBWM pool is processed from the highest to the lowest priority, further shaping the traffic. IBWM employs three key algorithms:

Ingress Rate Update

This algorithm processes each packet from the WAN and updates the ingress rate of the class to which it belongs. It also marks the traffic class if it has over utilized the link.

  1.    Determine that the packet is from the WAN zone and is a firewall packet.

  2. Add the packet length to the sum of packet lengths received so far in the current time slice. Deduct the minimum of (GBW, packet length) from link’s credit.

  3.    If the sum is greater than the class’s credit, mark the class to be over utilizing the link.

  4.    If the packet length is greater than the link’s credit, mark the link as well as the class to be over utilized.
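A small Python sketch of this per-packet bookkeeping, using illustrative per-class and per-link dictionaries (not firewall structures):

    def ingress_rate_update(cls, link, pkt_len):
        """Steps 1-4 above: account for one WAN ingress Firewall packet of pkt_len bytes."""
        cls["bytes_this_slice"] += pkt_len                # step 2: running ingress total
        link["credit"] -= min(cls["gbw"], pkt_len)        # step 2: charge the link credit
        if cls["bytes_this_slice"] > cls["credit"]:
            cls["over_utilized"] = True                   # step 3: class over its credit
        if pkt_len > link["credit"]:
            link["over_utilized"] = True                  # step 4: link (and class) over-utilized
            cls["over_utilized"] = True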

Egress ACK monitor

This algorithm depicts how the egress ACKs are monitored and processed.

  1. Determine that the packet is to the WAN zone and is a TCP ACK.

  2.    If class or interface is marked as over utilizing, queue the packet in the appropriate ingress rule queue.

Process ACKs

This algorithm is used to update the BW parameters per class according to the amount of BW usage in the previous time slice. Amount of BW usage is given by the total number of bytes received for the class in the previous time slice. The algorithm is also used to process the packets from the ingress module queues according to the available credit for the class.

Credit-Based Processing

A class is in debt when its BW usage exceeds its GBW for a particular time slice. All the egress ACKs for the class are then queued until the debt is reduced to zero. At each successive time slice, the debt is reduced by GBW, and if link BW is left over, by (MBW – GBW) as well.

At the start of each time slice, compute the BW usage in the previous time slice, then:

  1. Compute the average ingress rate using the amount of BW usage by the class.

  2. If the BW usage is more than the class credit, record the difference as debt. If link BW is left over, deduct (MBW - GBW) from the debt.

  3. Compute the class and link credit for the current time slice.

  4. Process packets from the ingress pool from the highest priority to the lowest priority.

  5. Record the class credit as the remaining credit.

  6. If the remaining credit is greater than or equal to the average rate, process the ACK packet and deduct the average rate from the remaining credit.

  7. Repeat step 6 until the remaining credit is not enough or the ingress ACK queue is empty.

  8. Repeat steps 5 through 7 for the next rule queue.

  9. Repeat steps 5 through 8 for the next lowest priority.
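A condensed Python sketch of the credit and debt handling and the ACK release loop for a single class (steps 1-3 and 5-7 above), using the same kind of hypothetical dictionaries; send_ack is a stand-in for handing the ACK back to the egress path, and "ack_queue" is assumed to be a deque of queued egress ACKs:

    def update_class_credit(cls, link_bw_left):
        """Steps 1-3: fold the previous slice's usage into this slice's credit and debt."""
        usage = cls["bytes_prev_slice"]
        if usage:
            cls["avg_rate"] = usage                      # step 1: last measured ingress rate
        if usage > cls["credit"]:                        # step 2: over-use becomes debt
            debt = usage - cls["gbw"]
            if link_bw_left > 0:                         # spare link BW absorbs up to (MBW - GBW)
                debt = max(0, debt - (cls["mbw"] - cls["gbw"]))
            cls["credit"] = cls["gbw"] - debt            # step 3: credit left for this slice
        else:
            cls["credit"] += cls["gbw"] - usage          # unused credit carries over, as in the example
        cls["bytes_prev_slice"] = 0

    def send_ack(ack):
        print("releasing ACK", ack)                      # stand-in for the real egress path

    def release_acks(cls):
        """Steps 5-7: release queued egress ACKs while the credit covers the average rate."""
        remaining = cls["credit"]
        while cls["ack_queue"] and remaining >= cls["avg_rate"]:
            send_ack(cls["ack_queue"].popleft())
            remaining -= cls["avg_rate"]

Fed the numbers from the worked example that follows (GBW = 640, MBW - GBW = 640, 1300 bytes of ingress in one slice), this sketch should yield the credit values 620, 1260, and 1900 shown in the table, and release the queued ACK only once the credit reaches the 1300-byte average rate.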

Example of Inbound Bandwidth Management

Consider a class with GBW = 5 Kbps, MBW = 10 Kbps and Link BW = 100 Kbps. In terms of bytes per second we have GBW=640, excess BW = (MBW - GBW) = 640 and link BW = 12800.

No.   Ingress   Egress   Credit   Debt   Rate   Link BW   #Acks

1.    0         0        640      0      0      12800     0
2.    1300      0        620      0      1300   12780     0
2a.   0         40       620      0      1300   12780     1
3.    0         0        1260     0      1300   12800     1
4.    0         0        1900     0      1300   12800     0

  1. The class credit starts at 640. In row 2, 1300 bytes were received for this class in the previous time slice. Since this is more than the class credit, debt = 20 (1300 - GBW - excess BW). For the current time slice, class credit = 620 (GBW - debt), debt = 0, and link BW = 12780, since 20 bytes of debt have already been used up from the GBW for the class.

  2. Row 2a shows an egress ACK for the class. Since the class credit is less than the rate, this packet is queued in the appropriate ingress queue, and it will not be processed until the class credit is at least equal to the rate.

  3. In the following time slices, the class credit accumulates until it matches the rate. Hence, after two time slices the class credit becomes 1900 (620 + 640 + 640). The queued ACK packet is processed from the ingress pool at this point.

In row 2a, an ACK packet is received that needs to be sent to the TCP source on the WAN zone. Sending this ACK immediately would cause the TCP source to send more packets immediately. By queuing the ACK and sending it only after the class credit reaches the average rate, the TCP sending rate is reduced; that is, the ingress rate is slowed down.

Glossary