Consider the following scenarios:
(1) You notice during large network uploads or downloads that other network applications are unresponsive.
(2) You run a busy web or FTP server that is frequently overloaded, resulting in spotty access.
(3) You are using a Macintosh to share your Internet connection and would like the sharing to be fairer so that one user's large file transfers don't disrupt other users.
TCP/IP is designed to be adaptive so it can take advantage of a wide range of network media, from slow dial-up to fast Ethernet. The basic behavior of a TCP/IP connection during bulk data transfer is to grab as much bandwidth as it can get until it detects congestion in the form of lost data or retransmit timeouts, and then back off conservatively to avoid exacerbating the congestion. Eventually, it usually stabilizes to use whatever bandwidth is available. If congestion becomes frequent, however, as additional users try to gain access, network throughput can suffer.
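As a rough illustration, the toy model below (a deliberate simplification, not the behavior of any particular TCP stack) shows this probe-and-back-off cycle: the sender's window grows steadily until it overruns a notional link capacity, then is cut roughly in half before probing again.

    # Toy additive-increase / multiplicative-decrease model of TCP's probing behavior.
    # The capacity figure and the halving rule are simplifying assumptions.
    def simulate_aimd(capacity_segments, rounds=40):
        cwnd = 1.0                            # congestion window, in segments
        history = []
        for _ in range(rounds):
            history.append(round(cwnd, 1))
            if cwnd > capacity_segments:      # congestion detected (loss or timeout)
                cwnd = max(1.0, cwnd / 2.0)   # back off conservatively
            else:
                cwnd += 1.0                   # probe for more bandwidth
        return history

    # With a notional capacity of 20 segments, the window saw-tooths around the
    # available bandwidth rather than settling exactly on it.
    print(simulate_aimd(20))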
IPNetSentryX and IPNetRouterX provide a TCP Rate Limiting feature to divide bandwidth evenly among multiple competing connections without congestion, and to limit or reserve the bandwidth available to certain services so that others can remain responsive. This application note shows how it works.
Example:
Consider a host on a LAN behind an Internet access router attached to a cable modem. We might want to limit the Internet bandwidth consumed by each service on this host to minimize congestion and leave room for others. The nominal upstream bandwidth of my cable modem is 2 Mbps. The figure below shows the data transferred while uploading a 1.1 MB file over an unrestricted connection:
By adding firewall rules like the ones below, we can limit the Internet bandwidth used by external connections.
Rule 2.7 is used for grouping. In rule 2.7.1 we match outbound traffic to our Internet access router by its MAC address, obtained with the Address Scan tool. The effect is to reshape the connection traffic as follows:
The first picture (far left) shows that the traffic is spread evenly over a longer period. The second picture (middle left) is the TCP Info tool, which shows control packets in orange and any duplicate or retransmitted segments in blue (the scale is exponential to reveal even a single retransmit). The transfer is nearly 100% efficient with no retransmissions. 512 Kbps (bits/sec) is 64 KBps (bytes/sec), so the actual transfer rate measured with common tools (IPNRx 1.1c7 and Fetch to my web site) corresponds to the limit we set. Bumping the rate limit to "1M" provides a corresponding increase, as shown in the third picture (middle right). Finally, we can limit inbound as well as outbound traffic by setting a corresponding inbound limit (rule 2.7.2), as shown in the fourth picture (far right). In this case the sender is many Internet hops away, but the rate limit is still effective.
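A quick back-of-the-envelope check of these numbers (the file size and limit are taken from the example above; K and M are treated as 1000 and 1,000,000):

    # Convert the 512K rate limit to bytes/second and estimate the upload time
    # for the 1.1 MB file from the example.
    rate_limit_bps = 512_000                      # "512K" limit, bits per second
    file_size_bytes = 1.1 * 1_000_000             # approximate file size

    rate_limit_Bps = rate_limit_bps / 8           # 64,000 bytes per second
    expected_seconds = file_size_bytes / rate_limit_Bps

    print(f"{rate_limit_Bps / 1000:.0f} KB/s, roughly {expected_seconds:.0f} s for the upload")
    # -> 64 KB/s, roughly 17 s for the upload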
How It Works
The Rate Limit In/Out action allows you to specify the maximum bandwidth (bits/second) available to matching TCP/IP connections. The bandwidth is specified in the parameter field. You can use K or M immediately following a string of digits to specify Kilobits or Megabits, as in 100K or 1M. IPNetRouterX provides TCP rate limiting (pacing) by adjusting the advertised window size of packets from corresponding connections. As packets match a Rate Limit rule, the corresponding connection table entry is set to point to that rule. When the connection table is aged, a tally of the active connections (those that exceed a traffic threshold) is calculated for that rule, and the available bandwidth is divided evenly among the corresponding connections. In other words, each active TCP/IP connection matching this filter rule gets an even share of the bytes/second available while avoiding congestion.
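The division step might look something like the sketch below. The function name, threshold value, and data structures are illustrative assumptions, not the product's internal code:

    # Divide a rule's bandwidth limit evenly among its currently active connections.
    ACTIVE_THRESHOLD_BPS = 1_000   # connections below this recent rate are ignored (assumed value)

    def per_connection_share(rule_limit_bps, connection_rates_bps):
        active = [r for r in connection_rates_bps if r >= ACTIVE_THRESHOLD_BPS]
        if not active:
            return rule_limit_bps           # nothing active: a new connection may use the full limit
        return rule_limit_bps / len(active)

    # Three busy uploads sharing a 512K limit each get ~170 Kbps;
    # when one finishes, the remaining two get ~256 Kbps each.
    print(per_connection_share(512_000, [300_000, 250_000, 200_000]))
    print(per_connection_share(512_000, [300_000, 250_000]))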
Possible Applications
Maintain network responsiveness during heavy Email or file transfers
During the holiday season many people like to send Email greetings to their friends with pictures or other images, but uploading Email is not urgent. It doesn't matter whether these messages are delivered in 5 minutes or 30 minutes when traffic is heavy. By limiting the upload rate for Email (SMTP), we can avoid slowing the network to a crawl during heavy Email periods. Similarly, we might limit the bandwidth for FTP transfers so that our network remains responsive while downloading the latest software updates.
Improve network utilization for uploading and downloading at the same time
Many Cable or DSL systems impose an upload rate cap and have transmit buffers that collect data until the system is ready to accept it. As the buffer fills, protocol acknowledgements are delayed waiting behind the data in the buffer, which can cause retransmissions. By limiting the send rate, we can increase overall network efficiency.
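As a rough illustration (the buffer size and upstream rate below are assumed figures, not measurements from any particular modem), an acknowledgement queued behind a full transmit buffer waits roughly the buffer size divided by the upstream rate before it can leave:

    # Estimate how long an ACK waits behind buffered upload data in the modem.
    buffer_bytes = 64 * 1024        # assumed transmit buffer size in the modem
    upstream_bps = 256_000          # assumed upstream rate cap

    ack_delay_s = (buffer_bytes * 8) / upstream_bps
    print(f"ACKs wait roughly {ack_delay_s:.1f} s behind a full buffer")   # ~2.0 s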
You can visualize network packets as cars travelling on a highway, but with a speed limit measured in bits per second instead of miles per hour. As the network becomes congested, traffic slows regardless of the actual speed limit. Rate limiting controls when cars are allowed to enter the highway so that the maximum amount of traffic can flow at or near the speed limit. In severe cases, similar to stop-and-go traffic, rate limiting can actually increase the average transfer rate.
Related link: "Simultaneous download and upload performance"
Improve server responsiveness when frequently accessed
Imagine you are running a web server behind a T1 line (1.544 Mbps) and you receive two requests for a large web page simultaneously. On Mac OS X, the TCP send window defaults to 32K, so the sender ramps up to 32K in flight for each request. At T1 speeds it will take roughly a third of a second to send the 64K bytes in flight. Other traffic or connection requests will have to wait behind this backlog.
With a rate limit in effect, IPNetSentryX limits the total send window to 1/10 of the specified rate limit. If we set a rate limit of 1.5 Mbps, our total send window would be limited to about 20K bytes, or roughly 100 ms of backlog, allowing our server to remain more responsive. With rate limiting, our server still responds right away and pages arrive just as quickly. The difference is that we parcel out the data more slowly to avoid creating a large backlog in the send buffer. Our server can now handle more requests per second and still remain responsive.
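The arithmetic behind both paragraphs, sketched in Python (the window sizes are the figures quoted above, not values queried from the system):

    T1_bps = 1_544_000

    # Unrestricted case: two 32 KB send windows in flight at once.
    backlog_bytes = 2 * 32 * 1024
    backlog_ms = backlog_bytes * 8 / T1_bps * 1000
    print(f"unrestricted backlog: ~{backlog_ms:.0f} ms")    # ~340 ms

    # Rate-limited case: total send window capped at 1/10 of a 1.5 Mbps limit.
    rate_limit_bps = 1_500_000
    capped_window_bytes = rate_limit_bps / 10 / 8           # ~18,750 bytes ("about 20K")
    capped_ms = capped_window_bytes * 8 / T1_bps * 1000
    print(f"rate-limited backlog: ~{capped_ms:.0f} ms")     # ~97 ms, i.e. about 100 ms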
Allocate bandwidth among different users
If we know the total bandwidth available from our Internet provider, we might use this bandwidth as our rate limit to avoid congestion by forcing all TCP/IP connections to share the available bandwidth evenly. If our Internet connection is shared among multiple users, we can limit the bandwidth available to one or more users on our network.
Reserving Bandwidth
In addition to limiting bandwidth for a class of traffic, we want to be able to reserve bandwidth without leaving it unused when it is not needed. We do this by making the last rate-limit-in and last rate-limit-out rules adjustable based on how much bandwidth any previous rate limit rules consumed. Each rate limit rule reserves bandwidth for matching traffic up to its specified limit. If more than half of a rule's limit is used, the full specified limit is subtracted from the limit of the corresponding last rate limit rule. If less than half is used, only the amount actually used is subtracted.
The last rate limit rule becomes a "catch all" that limits everything else based on the amount actually used by any previous rules. Thus previous rate limit rules effectively reserve bandwidth when needed, but still allow it to be used by other traffic when not needed.
Suppose we are using Voice over IP (VoIP) and want to reserve some bandwidth to ensure high-quality voice connections. Each voice call needs about 128 Kbps. We can create a rule that matches UDP traffic, or more specifically Skype traffic, with an upstream limit of 128K. We then create a last rule that matches everything else, specifying the total upstream bandwidth available. If the Skype rule sees traffic, it will reduce the bandwidth available to everything else. If it doesn't, the corresponding unused bandwidth remains available for other traffic.
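The adjustment could be sketched as follows. This is illustrative only; the function name, the 2 Mbps total, and the usage figures are assumptions, with the "more than half used" rule taken from the description above:

    # Compute the limit left for the "catch all" last rule after earlier rules reserve their share.
    def catch_all_limit(total_bps, earlier_rules):
        """earlier_rules: list of (limit_bps, used_bps) for preceding rate limit rules."""
        remaining = total_bps
        for limit, used in earlier_rules:
            if used > limit / 2:
                remaining -= limit          # rule is busy: reserve its full limit
            else:
                remaining -= used           # rule is mostly idle: give back the unused portion
        return max(remaining, 0)

    total_upstream = 2_000_000              # assumed 2 Mbps upstream
    voip_busy = (128_000, 120_000)          # Skype/VoIP rule busy on a call
    voip_idle = (128_000, 4_000)            # no call in progress
    print(catch_all_limit(total_upstream, [voip_busy]))   # 1,872,000 bps left for everything else
    print(catch_all_limit(total_upstream, [voip_idle]))   # 1,996,000 bps left for everything else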
Deployment Options
You can use TCP rate limiting to throttle one or more services such as SMTP or FTP on a single system. In this case IPNetSentryX might be appropriate.
If you want to control an entire network, you need a two-port bridge or gateway. You can add a second Ethernet port to a Mac Mini or other compact Macintosh using a USB-to-Ethernet adaptor as described here: <http://sustworks.com/site/news_usb_ethernet.html>. Once you have a two-port machine, you can enable Ethernet bridging to create a transparent firewall, or use IPNetRouterX to configure a full-blown NAT gateway.
Some Technical Perspective
TCP/IP is a sliding-window protocol, which means the receiver tells the transmitter how much buffer space is available to receive data (the window size), and the sender cannot send more data than the advertised buffer space until previous data is acknowledged. Notice the window has both a "window start", which corresponds to the last byte acknowledged, and a "window limit", which determines how much data can be sent before waiting for an acknowledgement that advertises more space. Once the rate limit for each connection is determined, we modify the advertised window size to limit the rate of subsequent data transfer by controlling when and how much the window limit moves.
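In simplified form (the function and parameter names are illustrative; the real work happens inside a kernel extension, not application code), clamping the advertised window so that no more than rate times round-trip time bytes can be outstanding paces the sender toward the target rate:

    # Cap the receive window we advertise so the sender cannot keep more than
    # target_rate * RTT bytes in flight.
    def clamped_window(advertised_window_bytes, target_rate_bps, rtt_seconds):
        max_in_flight = int(target_rate_bps * rtt_seconds / 8)   # bytes allowed per round trip
        return min(advertised_window_bytes, max_in_flight)

    # A 64 KB advertised window clamped for a 512 Kbps share over a 100 ms path
    # lets only ~6,400 bytes be outstanding at a time.
    print(clamped_window(64 * 1024, 512_000, 0.100))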
This technique uses TCP/IP's built-in flow control mechanism and avoids adding latency that can reduce network efficiency. Most packet-shaping solutions use pipes or queues (weighted fair queuing) that were originally intended to simulate impaired network connections. Since TCP/IP is adaptive, it will self-adjust to the impaired network bandwidth available. While much easier to implement, these solutions have some limitations:
- Queuing (buffering and waiting) adds latency and does not proactively prevent congestion before it occurs.
- They cannot limit inbound traffic through the last mile or critical network segment, since that traffic hasn't reached the queue yet.
- Shaping amounts to reordering and dropping packets.
- They offer a limited number of queues compared to TCP itself, which controls the rate of every connection.
On UNIX systems, it is common to use an ipfw divert socket to intercept packets at the data link layer and then modify them at the application layer. While this approach is easier to implement and port among UNIX environments, shuffling packets between kernel and application address spaces imposes high overhead that limits throughput (the same issue applies to UNIX natd). IPNetRouterX uses a Network Kernel Extension to extract maximum performance on Mac OS X.
Finally, a precondition for being able to apply bandwidth management is to first classify traffic so you can limit the specific data flows of interest. IPNetSentryX's hierarchical filter rules are designed with this flexibility in mind.
Key Features
TCP Rate Limiting is a powerful tool that allows you to allocate network resources to reflect your business priorities, or work around congestion problems that might otherwise sap network efficiency. TCP Rate Limiting is a first step toward more comprehensive bandwidth management in IPNetSentryX and IPNetRouterX.
Please send questions, comments, or suggestions using our general requests form:
http://www.sustworks.com/site/sup_questions.html