From: Alexander E. Patrakov on 26 Mar 2010 13:00

26.03.2010 21:02, Alan Cox wrote:
> You can then do this
>
> static int ml_explode = 1;
> module_param(ml_explode, int, 0600);
> MODULE_PARM_DESC(ml_explode, "Set this to zero to disable multilink \
> fragmentation when talking to cisco devices");
>
> which will let you load the module with the option ml_explode=0 if you
> want that property.
>
> Making it runtime per link selectable would be nicer but that's a bit
> more work.

Doesn't it work already via echoing values to
/sys/module/ppp_generic/parameters/ml_explode in the above code?

--
Alexander E. Patrakov
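[For illustration, a minimal sketch of how such a parameter could gate the fragmentation decision. This is not the actual ppp_generic code; the two helpers are hypothetical placeholders for the real transmit paths. The 0600 permission is what exposes the value read/write at runtime under /sys/module/.../parameters/.]

#include <linux/module.h>
#include <linux/skbuff.h>

static int ml_explode = 1;
module_param(ml_explode, int, 0600);
MODULE_PARM_DESC(ml_explode,
		 "Set this to zero to disable multilink fragmentation");

static void ppp_mp_xmit_sketch(struct sk_buff *skb)
{
	if (!ml_explode) {
		/* Fragmentation disabled: send the whole frame down
		 * the next free channel instead of splitting it. */
		send_on_next_channel(skb);	/* hypothetical helper */
		return;
	}
	fragment_across_channels(skb);		/* hypothetical helper */
}

[At runtime, writing 0 to the sysfs file turns fragmentation off for the whole module — which, as Alan notes below, is module-wide rather than per link.]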
From: Ben McKeegan on 31 Mar 2010 06:30
>>> Making it runtime per link selectable would be nicer but that's a bit
>>> more work.
>>
>> Doesn't it work already via echoing values to
>> /sys/module/ppp_generic/parameters/ml_explode in the above code?
>
> That's runtime (and why I set 0600 in the permissions for the example)
> but not per link.

I needed to do something similar a while back and I took a very different
approach, which I think is more flexible. Rather than implement a new
round-robin scheduler, I simply introduced a target minimum fragment size
into the fragment size calculation, as a per-bundle parameter that can be
configured via a new ioctl. This modifies the algorithm so that it tries
to limit the number of fragments such that each fragment is at least the
minimum size. If the minimum size is greater than the packet size, the
packet is not fragmented at all but is instead sent down the next
available channel. A pppd plugin issues the ioctl, allowing this to be
tweaked per connection. It is more flexible in that you can still have
the larger packets fragmented if you wish.

We've used a variant of this patch on our ADSL LNS pool for a few years
now with varying results. We originally did it to save bandwidth: we have
a per-packet overhead, and fragmenting tiny packets such as VoIP across a
bundle of 4 lines made no sense at all. We've experimented with higher
minimum settings, up to and above the link MTU, thus achieving the
equivalent of Richard's patch. In some cases this has improved
performance; in others it makes things worse. It depends a lot on the
lines and traffic patterns, and it is certainly not a change we would
wish to have on by default. Any solution going into the mainline kernel
would need to be tunable per connection.

One of the issues seems to be poor recovery from packet loss on
low-volume, highly delay-sensitive traffic on large bundles of lines.
With Linux at both ends you are relying on received sequence numbers to
detect loss. When packets are being fragmented across all channels and a
fragment is lost, the receiving system is able to spot the lost fragment
fairly quickly. Once you start sending some multilink frames down
individual channels, it takes a lot longer for the receiver to notice the
packet loss on an individual channel. Until another fragment is
successfully received on the lossy channel, the fragments of the
incomplete frame sit in the queue, clogging up the other channels (the
receiver is attempting to preserve the original packet order and is still
waiting for the lost fragment).

Original patch attached. It almost certainly needs updating to take
account of other more recent changes in the multilink algorithm, but it
may provide some inspiration.

Regards,
Ben.
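[As an illustration of the approach Ben describes — not the attached patch itself — here is a hedged sketch of a per-bundle minimum fragment size set through a new ioctl and consulted when choosing the fragment count. All names (PPPIOCSMINFRAG, struct ppp_bundle, the two functions) and the ioctl number are hypothetical.]

#include <linux/errno.h>
#include <linux/ioctl.h>
#include <linux/types.h>

#define PPPIOCSMINFRAG	_IOW('t', 200, int)	/* hypothetical ioctl */

struct ppp_bundle {
	int nfree;		/* channels currently free */
	int min_frag_size;	/* 0 = no minimum (original behaviour) */
};

/* ioctl handler fragment: store the per-bundle minimum */
static int ppp_set_min_frag(struct ppp_bundle *b, int value)
{
	if (value < 0)
		return -EINVAL;
	b->min_frag_size = value;
	return 0;
}

/* Decide how many fragments to split a frame of 'len' bytes into. */
static int ppp_mp_nfrags(struct ppp_bundle *b, int len)
{
	int nfrags = b->nfree;

	if (b->min_frag_size > 0) {
		int max_frags = len / b->min_frag_size;

		if (max_frags < 1)
			max_frags = 1;	/* frame smaller than the minimum:
					 * send it whole on one channel */
		if (nfrags > max_frags)
			nfrags = max_frags;
	}
	return nfrags;
}

[With min_frag_size set at or above the link MTU, ppp_mp_nfrags() always returns 1 and each frame travels whole on a single channel, which reproduces the pure round-robin behaviour Ben mentions as the equivalent of Richard's patch.]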