Old 11-17-2007, 04:12 PM  

It's a matter of LAN versus WAN. A 100 Mbit connection serving remote 10 Mbit users is a very different thing from a gigabit LAN.

I don't believe send/recv buffers go above 64 KB, so I'm not sure that really makes much of a difference, but I haven't looked into it on Windows or in ioFTPD specifically; I'm just going on what I know of generic socket implementations. FTP clients often default to 32 KB as well, for the same reasons I outlined earlier. It is true that the Internet is, for the most part, FAR more reliable about packet delivery than it was a decade ago, so perhaps maxing out the buffer sizes makes sense now.
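To see what the OS actually does with these sizes, here's a minimal sketch (generic BSD-style sockets, not ioFTPD's code) that requests 64 KB send/receive buffers and reads back what the kernel granted. Note the kernel may round, clamp, or (on Linux) report double the requested value for bookkeeping:

```python
import socket

# Request 64 KiB buffers, the cap mentioned above, and compare against
# what the kernel actually grants. Values and defaults are OS-specific.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
default_snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
granted_snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(f"default: {default_snd}, granted: {granted_snd}")
s.close()
```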

Transfer buffers, on the other hand, are internal, and raising them may produce a noticeable difference, especially on multi-core CPUs. In particular, if the server is fast enough to max out a drive's bandwidth when streaming data, then the larger I/O requests probably help with disk caching and reduce user/kernel transition overhead. I'd definitely raise the transfer buffer size, and if we get a few other data points I might increase the default.
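The point about fewer, larger I/O requests can be illustrated with a toy count of read calls (an in-memory stand-in for a file, not ioFTPD's actual transfer loop): each read call crosses the user/kernel boundary once, so an 8x larger buffer means 8x fewer crossings for the same data.

```python
import io

def count_reads(data: bytes, bufsize: int) -> int:
    """Stream `data` in bufsize chunks and return how many read calls it took."""
    src, calls = io.BytesIO(data), 0
    while src.read(bufsize):
        calls += 1
    return calls

data = bytes(4 * 1024 * 1024)            # simulate a 4 MiB file
print(count_reads(data, 32 * 1024))      # 128 reads with a 32 KiB buffer
print(count_reads(data, 256 * 1024))     # 16 reads with a 256 KiB buffer
```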

The one drawback is that ioFTPD doesn't use memory pools for those large allocations, so memory fragmentation and growth will go up with such large buffers. I wouldn't worry about that, though, since even in bad cases the server still uses very little memory.
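For what it's worth, the pooling idea is simple: keep a free list of large buffers and hand them out instead of allocating and freeing per transfer, which is what drives fragmentation. A minimal sketch (the class name and sizes are illustrative, not anything from ioFTPD):

```python
from collections import deque

class BufferPool:
    """Free-list pool: reuse large transfer buffers across transfers
    instead of allocating/freeing each one, avoiding heap fragmentation."""

    def __init__(self, bufsize: int, prealloc: int = 4):
        self.bufsize = bufsize
        self._free = deque(bytearray(bufsize) for _ in range(prealloc))

    def acquire(self) -> bytearray:
        # Reuse a pooled buffer if available, else grow the pool.
        return self._free.popleft() if self._free else bytearray(self.bufsize)

    def release(self, buf: bytearray) -> None:
        self._free.append(buf)

pool = BufferPool(256 * 1024)
buf = pool.acquire()    # reused for the transfer, then returned
pool.release(buf)
```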
Yil