
difficulty queueing fragments

  i'm trying to set up a simple pf.conf for a machine that is the
  YP master, NFS server, and Samba server.  most of my nfs traffic
  is coming across the wire as fragments, so i'm trying to catch
  those fragments into the nfs queue with the keyword 'fragment'.
  i put a label of 'fragment' on that rule, and per pfctl -vsl
  i can see packets incrementing the counters on that label, but
  the packets are being queued into the 'default' queue instead.
  i checked the pf.conf man page and didn't see any warnings about
  queueing fragments, other than a note that you might need to keep
  the rule very vague and that you can't keep/match state on fragments;
  i checked the queueing section and it didn't say a packet has to be
  stateful to be queued, and since the fragments are being put into the
  default queue, that implies queueing is happening to them (i think).
  here is my entire pf.conf currently.  i have a more elaborate one, but
  this is a slimmed-down pf.conf i made for testing that does reproduce
  the issue:
doublewide.hklocal.net $ sudo cat /etc/pffrag.conf
trustedhosts="{" "<VPN>" "<HKLOCAL>" "}"
table <VPN>     persist const {}
table <HKLOCAL> persist const {$e:network}
table <DOUBLEWIDE> persist const {$e $e:broadcast}
altq on $e priq bandwidth 100Mb queue {q-nfs q-bulk q-ack}
queue q-nfs     priority 7 priq
queue q-bulk    priority 4 priq(default)
queue q-ack     priority 8 priq
block return log on $e all
pass on $e proto {icmp icmp6} all keep state queue q-bulk
pass on $e all keep state queue (q-bulk q-ack)
pass in on $e inet proto udp from $trustedhosts to <DOUBLEWIDE> port 2049 \
	keep state queue q-nfs label "nfs"
pass out on $e inet proto udp from <DOUBLEWIDE> port 2049 to $trustedhosts \
	keep state queue q-nfs label "nfs"
pass on $e all fragment queue q-nfs label "fragment"
  here is pfctl -vsl and -vsq output:
doublewide.hklocal.net $ sudo pfctl -vsl
icmp 24438 0 0
icmp 24438 0 0
bulk 24438 43 2968
nfs 24438 0 0
nfs 24414 0 0
nfs 24430 0 0
nfs 0 0 0
fragment 24441 24417 33951340
doublewide.hklocal.net $ sudo pfctl -vsq
queue q-nfs priority 7
  [ pkts:      11368  bytes:    8741512  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue q-bulk priority 4 priq( default )
  [ pkts:      26356  bytes:   36511292  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue q-ack priority 8
  [ pkts:          1  bytes:         90  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
  also if i watch with -vvsl or in pftop, i can see the majority of my
  nfs traffic being queued into the default queue.
  i checked tcpdump, and here's what a typical nfs exchange looks like:
doublewide.hklocal.net $ sudo tcpdump -ni fxp0 udp
tcpdump: listening on fxp0
13:19:10.794335 > xid 0x9d096e1a reply ok 96
13:19:10.794820 > xid 0x9d096eae 1472 write [|nfs] (frag 26549:1480@0+)
13:19:10.794992 > (frag 26549:1480@1480+)
13:19:10.795176 > (frag 26549:1480@2960+)
13:19:10.795356 > (frag 26549:1480@4440+)
13:19:10.795541 > (frag 26549:1480@5920+)
13:19:10.795619 > (frag 26549:...@7400)
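  as i understand ip fragmentation, only the fragment at offset 0 carries
  the udp header, so any rule that matches on ports can only ever see that
  first fragment.  assuming a 1500-byte mtu on fxp0 and an example datagram
  size of 8880 bytes (both assumptions, just to illustrate the layout):

```shell
# illustrative only: fragment layout for an assumed 8880-byte udp datagram
# over a 1500-byte mtu.  each fragment carries up to 1480 bytes of ip
# payload (1500 minus the 20-byte ip header), and only the fragment at
# offset 0 contains the 8-byte udp header with the port numbers.
awk 'BEGIN {
    total = 8880        # assumed datagram size (udp header + nfs write)
    step  = 1480        # ip payload per fragment on a 1500-byte mtu link
    for (off = 0; off < total; off += step) {
        len = (total - off < step) ? total - off : step
        printf "frag %4d@%-4d%s\n", len, off, \
               (off == 0) ? "  <- udp header here" : ""
    }
}'
```

  that gives six fragments, matching the six packets in the trace above,
  with ports visible only in the first one.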
  so that first packet, at 13:19:10.794820, seems to be queued into
  the nfs queue, but the subsequent 5 fragments end up in default.  i did
  a bit of math on them: proportionately, the 5 subsequent fragments add
  up to about 82% of the nfs traffic.  if i add up the bytes in the -vsq
  output above and do the math there, the 36511292 bytes in bulk are also
  about 80% of the total traffic across bulk and nfs.  there are currently
  other things in bulk too, because of the simple pf.conf, but those
  numbers do seem to corroborate one another.
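  that 80% figure can be checked directly from the -vsq byte counters
  (a quick awk one-liner; both numbers come from the output above):

```shell
# fraction of the bulk+nfs bytes that landed in the default (q-bulk)
# queue, using the byte counters from the pfctl -vsq output above
awk -v bulk=36511292 -v nfs=8741512 \
    'BEGIN { printf "%.1f%%\n", 100 * bulk / (bulk + nfs) }'
# -> 80.7%
```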
  is my 'label "fragment"' rule just written wrong?  i also tried
  including 'fragment reassemble' in the rule, but couldn't figure out
  how to get it in without incurring a syntax error.
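  if i'm reading pf.conf(5) right, 'fragment reassemble' goes on a scrub
  rule rather than on a pass rule, which would explain the syntax error.
  a minimal sketch of that (untested on my end, so take it as my guess at
  the intended usage):

```
# reassemble fragments before filtering; the rebuilt datagram then
# carries the full udp header, so the port/state rules above can match it
scrub in on $e all fragment reassemble
```

  with reassembly in place, i'd expect the packets to no longer hit the
  'fragment' rule at all, since pf would only filter complete datagrams.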
[ openbsd 3.6 GENERIC ( nov 4 ) // i386 ]