Coldy wrote:
I think the scheduler algorithm needs to be modified: it should switch to user threads with waiting sockets right after the system threads finish.
Hmm, not sure. Any idea how much time we currently 'lose'?
Quote:
My driver does not receive foreign packets (multicast/broadcast/promiscuous receive is disabled). I also tried to increase the count of Rx descriptors, but this does not help.
Right, well, to really check the performance of the driver, it would be better to use iperf in UDP mode.
There is "udpserv" demo program on the SVN which should be enough to test IIRC
Quote:
As far as I know, not all cards support this feature.
True, but currently it is not used even on those cards that do support it.
I have not done any performance checks on the checksum validation routines in the kernel yet (IPv4/TCP/...).
In any case, there are some settings you can play with before compiling the kernel.
Just remember, bigger is not always better! (https://www.bufferbloat.net/)
Here are the settings that come to mind (see the sketch after the list):
NET_BUFFERS in network/stack.inc - the number of NET_BUFFs (as used in your driver, for example) that the kernel creates at boot. It is the maximum number of NET_BUFFs that can be in use at any time. (Increase it if you see packet overruns.)
SOCKET_BUFFER_SIZE in network/stack.inc - the size of the buffer that the kernel copies all incoming data into before it is copied into the application buffer during a recv call. Only used for STREAM sockets like TCP.
SOCKET_QUEUE_SIZE in network/socket.inc - the maximum number of packets that can be queued for transfer into the application buffer. Like the above, but with one less copy. Used for stateless sockets like UDP.
TCP_MAX_WIN in network/tcp.inc - the maximum TCP window size (it should fit within SOCKET_BUFFER_SIZE).
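To make the knobs concrete, here is a hedged sketch of what the definitions might look like in fasm syntax; the constant names and files match the list above, but the values are purely illustrative placeholders, not the shipped defaults:
Code:
; network/stack.inc
NET_BUFFERS        = 512             ; illustrative: raise if you see packet overruns
SOCKET_BUFFER_SIZE = 64 shl 10       ; illustrative: per-socket buffer for STREAM sockets

; network/socket.inc
SOCKET_QUEUE_SIZE  = 10              ; illustrative: max packets queued per stateless socket

; network/tcp.inc
TCP_MAX_WIN        = 64 shl 10 - 1   ; illustrative: must not exceed SOCKET_BUFFER_SIZE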