
Re: Using state and routing inbound traffic



Ah, I think I get what you mean. You don't want to rate-limit your
outgoing replies to achieve this effect on incoming traffic. Instead,
you simply rate-limit the incoming traffic to some rate X, assuming the
peer will converge to send at exactly that rate through the feedback
effects of TCP. Is that it?
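Just to make "rate-limit the incoming traffic to some rate X" concrete: on the
firewall that boils down to a policer or shaper. A toy token-bucket policer
might look like the following Python sketch. It's purely illustrative -- the
class name, rates and burst size are all made up, and a real firewall does
this in the kernel, not in a script:

    import time

    class TokenBucket:
        """Toy policer: pass at most rate_bps bits per second,
        allowing short bursts up to burst_bits."""
        def __init__(self, rate_bps, burst_bits):
            self.rate = rate_bps
            self.capacity = burst_bits
            self.tokens = burst_bits
            self.last = time.monotonic()

        def allow(self, packet_bits):
            now = time.monotonic()
            # refill tokens for the time that has passed since the last packet
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bits:
                self.tokens -= packet_bits
                return True   # forward the packet
            return False      # drop it; the sender's TCP will notice the loss

    # e.g. police traffic to 2 Mbit/s with a 64 KB burst allowance
    policer = TokenBucket(rate_bps=2_000_000, burst_bits=64 * 1024 * 8)

Packets that exceed the budget are simply dropped; it's the resulting loss
that the sending side's TCP reacts to.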
Like, you have a firewall with two links, one internal and one external.
Let's assume the external link has more bandwidth (that's not true for
most people, I guess, but let's assume it is for this example):
      ----\                                    /----
      ----| internal |---- fw ----| external |----
      ----/                                    \----
Say the internal link has 10 Mbps of bandwidth and the external one 30
Mbps.
Now, let's assume all links are idle except for an internal host
downloading a large file from an external server over a TCP-based protocol
like HTTP. In the best case, the internal client will receive the file
at a steady rate of 10 Mbps. The external server may start out sending
slower (or faster), but thanks to TCP's feedback mechanisms it will
eventually discover that optimal rate and soon settle on a steady 10
Mbps. (Always assuming that the uplink chain to the server has >= 30
Mbps of bandwidth all the way through.)
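To see why that convergence is plausible, here's a crude
additive-increase/multiplicative-decrease simulation in Python. It's only a
sketch -- simulate_aimd, its loss model and all the constants are invented --
but it shows the sender's rate climbing until it hits whatever the bottleneck
allows, then backing off and probing again:

    def simulate_aimd(bottleneck_mbps=10.0, rounds=200,
                      increase=0.5, decrease=0.5):
        """Crude AIMD model: each round the sender adds increase Mbps;
        whenever it overshoots the bottleneck it sees loss and cuts
        its rate by the factor decrease."""
        rate = 1.0                      # the sender starts out slowly
        history = []
        for _ in range(rounds):
            if rate > bottleneck_mbps:  # queue overflows -> packet loss
                rate *= decrease
            else:
                rate += increase        # no loss -> probe for more bandwidth
            history.append(rate)
        return history

    rates = simulate_aimd()
    # the tail of the run is the familiar TCP sawtooth around the
    # 10 Mbps bottleneck: ramp up, overshoot, get cut back, ramp up again
    print(f"average over the last 50 rounds: {sum(rates[-50:]) / 50:.1f} Mbps")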
By the same theory, if you artificially rate-limit outgoing packets on
the internal interface of the firewall to, say, 2 Mbps, the external
server will "tune in" and send a steady stream of 2 Mbps, leaving 8 Mbps
of internal bandwidth idle.
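In terms of the little simulation above, that's just
simulate_aimd(bottleneck_mbps=2.0): the simulated sender ends up oscillating
around the 2 Mbps limit, and the remaining 8 Mbps of the internal link simply
goes unused.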
That's the theory, right?
I'm not sure how well this works in practice. The TCP mechanisms are
designed to achieve this, but the convergence is neither immediate nor
perfect. The sender will regularly try to increase its rate (think of the
perfectly normal case where a download starts slowly because of other
concurrent downloads and speeds up as those complete), and it reacts
rather drastically to any lost packets. So there are bursts in the stream
the server sends, and there are feedback effects. TCP window sizes, window
scaling and SACK all affect this.
We're certainly not the first ones to discuss this; there must be
volumes of papers on TCP dynamics like these. Maybe someone can
comment on whether this simple strategy is supposed to work like that :)
And, no, this won't work for UDP at all, unless the application using
UDP emulates TCP-like flow control itself.
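For completeness, the most primitive form such application-level flow control
could take is a stop-and-wait loop: send a chunk, wait for an acknowledgement,
retransmit on timeout. A hypothetical Python sketch (stop_and_wait_send and
its framing are made up, just to show that the feedback loop has to be built
by the application, since UDP itself provides none):

    import socket

    def stop_and_wait_send(data, addr, chunk=1024, timeout=0.5):
        """Send data over UDP one chunk at a time, waiting for an
        ACK carrying the chunk's sequence number before moving on."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        seq = 0
        for off in range(0, len(data), chunk):
            packet = seq.to_bytes(4, "big") + data[off:off + chunk]
            while True:
                sock.sendto(packet, addr)
                try:
                    ack, _ = sock.recvfrom(4)
                    if int.from_bytes(ack, "big") == seq:
                        break          # chunk acknowledged, move on
                except socket.timeout:
                    pass               # packet or ACK lost: send it again
            seq += 1
        sock.close()

A real protocol would add a sliding window and some rate adaptation on top of
this, which is exactly the machinery TCP already gives you for free.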
Daniel