Hello!
What is your download speed on real hardware? Also, please write which network card you have and its bandwidth.
To test, run /sys/Network/Dl and enter http://speedtest.tele2.net/100MB.zip
Next, run /sys/Network/Netstat and look at the Download (kb/s) field.
I am testing the driver for my card, which has a bandwidth of 100 Mbit/s. The maximum download speed is a little over 1 MB/s, and sometimes it drops sharply. At the same time, the download progress in Dl shows stalls.
In Windows, downloading from this link on the same card runs at about 2.2 MB/s.
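As a sanity check on those numbers: 100 Mbit/s is 12.5 MB/s raw, and roughly 11.9 MB/s of TCP payload once typical Ethernet/IP/TCP header overhead is subtracted, so both results are well below link capacity. A quick back-of-the-envelope calculation (the overhead figures are approximations, not measurements of this driver):

```python
# Rough upper bound on TCP payload throughput over 100 Mbit/s Ethernet.
# Assumptions: 1500-byte MTU, 40 bytes of IP+TCP headers, 38 bytes of
# Ethernet framing (preamble, header, FCS, inter-frame gap).

LINK_BITS_PER_S = 100_000_000
MTU = 1500
IP_TCP_HEADERS = 40
ETHERNET_FRAMING = 38

raw_bytes_per_s = LINK_BITS_PER_S / 8                        # 12.5 MB/s
payload_fraction = (MTU - IP_TCP_HEADERS) / (MTU + ETHERNET_FRAMING)
payload_bytes_per_s = raw_bytes_per_s * payload_fraction

print(f"raw link rate:    {raw_bytes_per_s / 1e6:.1f} MB/s")
print(f"max TCP payload:  {payload_bytes_per_s / 1e6:.1f} MB/s")
print(f"Windows result:   2.2 MB/s ({2.2e6 / payload_bytes_per_s:.0%} of max)")
print(f"tested driver:    1.0 MB/s ({1.0e6 / payload_bytes_per_s:.0%} of max)")
```

So even the Windows figure uses only about a fifth of the link; the gap is not the wire.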
File download speed
Yes, it's a known issue.
Some reasons:
1. The TCP implementation is unfinished. The download freezes you observe in Dl's progress, for example, are an effect of the congestion control algorithm.
2. Nothing is really optimized or "tuned" for performance so far:
- Some drivers work only in promiscuous mode or use very few descriptors.
- There is still some unnecessary copying of the data.
- Checksum offloading is not enabled.
- ...
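To illustrate the first point, here is a toy Reno-style congestion-window model, not the kernel's actual algorithm: after a loss the window collapses back to slow start, and the receiver sees the throughput dip as a "freeze" in the download progress.

```python
# Toy TCP congestion-window model (Reno-like timeout behaviour), showing
# how loss recovery produces visible throughput dips at the receiver.
# This is an illustration only, not the kernel's implementation.

ssthresh = 32       # slow-start threshold, in segments
cwnd = 1.0          # congestion window, in segments

history = []
for rtt in range(40):
    history.append(cwnd)
    if rtt in (15, 30):                 # pretend a segment is lost here
        ssthresh = max(int(cwnd / 2), 2)
        cwnd = 1.0                      # timeout: back to slow start
    elif cwnd < ssthresh:
        cwnd *= 2                       # slow start: exponential growth
    else:
        cwnd += 1                       # congestion avoidance: linear growth

peak = max(history)
for rtt, w in enumerate(history):
    bar = "#" * int(40 * w / peak)
    print(f"RTT {rtt:2d}  cwnd={w:5.1f} seg  {bar}")
```

The bar chart shows the window climbing, then crashing to one segment at each simulated loss; during those RTTs almost no data reaches the application.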
"Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction." Albert Einstein
"...is an effect of the congestion control algorithm"
I think the scheduler algorithm needs to be modified: it should switch to user threads that are waiting on sockets as soon as the system threads finish.
"Some drivers work only in promiscuous mode or use very few descriptors."
My driver does not receive foreign packets (multicast/broadcast/promiscuous receive is disabled). I also tried increasing the number of Rx descriptors, but that does not help.
"Checksum offloading is not enabled."
As far as I know, not all cards support this feature.
Coldy wrote: "I think the scheduler algorithm needs to be modified: it should switch to user threads that are waiting on sockets as soon as the system threads finish."
Hmm, not sure; any idea how much time we currently 'lose'?
Coldy wrote: "My driver does not receive foreign packets (multicast/broadcast/promiscuous receive is disabled). I also tried increasing the number of Rx descriptors, but that does not help."
Right; actually, to check the performance of the driver it would be better to use iperf in UDP mode.
There is a "udpserv" demo program in the SVN which should be enough for testing, IIRC.
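The idea of a UDP sink can be sketched in a few lines; this is a minimal loopback throughput probe in the spirit of such a test, not the udpserv protocol itself (the payload size, packet count, and end marker are this sketch's own choices):

```python
# Minimal UDP throughput probe over loopback: a receiver thread counts
# payload bytes while the main thread blasts datagrams at it.

import socket
import threading
import time

PAYLOAD = b"\x00" * 1400          # stay under a typical 1500-byte MTU
NUM_PACKETS = 2000

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
recv_sock.bind(("127.0.0.1", 0))              # let the OS pick a free port
recv_sock.settimeout(2.0)                     # don't hang if packets drop
addr = recv_sock.getsockname()

received = [0]                                # mutable cell for the thread

def sink() -> None:
    """Count payload bytes until the b'END' marker (or a timeout)."""
    while True:
        try:
            data, _ = recv_sock.recvfrom(2048)
        except socket.timeout:
            break
        if data == b"END":
            break
        received[0] += len(data)

t = threading.Thread(target=sink)
t.start()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
start = time.perf_counter()
for _ in range(NUM_PACKETS):
    send_sock.sendto(PAYLOAD, addr)
send_sock.sendto(b"END", addr)
t.join()
elapsed = time.perf_counter() - start

print(f"received {received[0]} of {NUM_PACKETS * len(PAYLOAD)} bytes "
      f"in {elapsed:.3f} s -> {received[0] / elapsed / 1e6:.1f} MB/s")
```

Since UDP gives no retransmission, the received/sent ratio directly exposes drops in the receive path, which is exactly what makes it useful for driver testing.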
Coldy wrote: "As far as I know, not all cards support this feature."
True, but currently it is not used even on those cards that do support it.
I have not done any performance checks on the kernel's checksum validation routines (IPv4/TCP/...) yet.
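For reference, the per-packet work those routines do is the RFC 1071 Internet checksum: a one's-complement sum of 16-bit words, complemented. A pure-Python sketch (the header bytes are a standard textbook IPv4 example, unrelated to this kernel):

```python
# RFC 1071 Internet checksum, as used by IPv4 headers and (with a
# pseudo-header) TCP/UDP. Pure-Python sketch of what a kernel
# validation routine has to compute for every packet.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF

# A header whose checksum field holds the correct value sums to zero.
# (Well-known example 20-byte IPv4 header; checksum field is 0xB861.)
header = bytes.fromhex("45000073000040004011b861c0a80001c0a800c7")
print(hex(internet_checksum(header)))              # → 0x0
```

Touching every byte of every packet like this is one reason checksum offloading to the NIC matters for throughput.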
In any case, there are some settings you can play with before compiling the kernel.
Just remember, bigger is not always better! (https://www.bufferbloat.net/)
Here are the things that come to mind:
NET_BUFFERS in network/stack.inc - the number of NET_BUFFs (as used in your driver, for example) that the kernel creates at boot. It is the maximum number of NET_BUFFs that can be in use at any time. (Increase it if you see packet overruns.)
SOCKET_BUFFER_SIZE in network/stack.inc - the size of the buffer into which the kernel copies all data before it is copied into the application buffer during a recv call. Only used for STREAM sockets like TCP.
SOCKET_QUEUE_SIZE in network/socket.inc - the maximum number of packets that can be queued to be transferred into the application buffer. Like the above, but with one less copy phase. Used for stateless sockets like UDP.
TCP_MAX_WIN in network/tcp.inc - Maximum TCP window size (should fit in SOCKET_BUFFER_SIZE)
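One way to pick sensible values for the last two settings: the bandwidth-delay product gives the TCP window needed to keep a link fully utilized, and TCP_MAX_WIN (and hence SOCKET_BUFFER_SIZE) must accommodate it. A small sketch; the 64 KiB buffer size here is an illustrative assumption, not the kernel's actual default:

```python
# Bandwidth-delay product: the TCP window needed to keep a link busy.
# RTTs and the socket buffer size below are illustrative values only.

def bdp_bytes(link_bits_per_s: int, rtt_seconds: float) -> int:
    """Window (bytes) needed to keep the link fully utilized."""
    return int(link_bits_per_s / 8 * rtt_seconds)

LINK = 100_000_000                  # 100 Mbit/s, as in the report above
SOCKET_BUFFER_SIZE = 64 * 1024      # assumed value for this sketch

for rtt_ms in (1, 5, 20, 100):
    win = bdp_bytes(LINK, rtt_ms / 1000)
    fits = "fits" if win <= SOCKET_BUFFER_SIZE else "exceeds"
    print(f"RTT {rtt_ms:3d} ms -> window {win // 1024:5d} KiB "
          f"({fits} a {SOCKET_BUFFER_SIZE // 1024} KiB socket buffer)")
```

On a LAN the window stays small, but at 20 ms RTT a 100 Mbit/s link already needs about 244 KiB in flight, so a too-small window caps throughput regardless of the driver.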
I can't find udpserv in the SVN. Please tell me where it is located.
Coldy wrote: "I can't find udpserv in the SVN. Please tell me where it is located."
Added in #9228.
But after reading the iPerf documentation, I'm afraid my proposed setup will not work with UDP, and we need something more sophisticated to test with UDP...
Coldy
That was a good idea to add such a simple test, so I have added it to the System Panel in #9246.
Rough, simple and useful.
Attachments: Screenshot_1.png (38.76 KiB)
From chaos to cosmos
Leency, you're welcome