Quote:
Originally posted by DYN_DaTa
This method is, according to the RFC draft, based on the ZLIB Compression Engine, so it could be interesting; anyway I would like to see real benchmarks of this method applied to normal FTP operations.
It's all about how you define the word 'real'. Mode Z has its advantages on a small, not-so-busy server with enough CPU power to take advantage of the whole bandwidth even with compression on. On large servers it will degrade performance if the compressed output is not cached (and building that cache from a stream isn't the easiest task to do). My own biased opinion is that the lifetime of 'Mode Z' will be comparable to what disk-compression utilities had, perhaps even shorter. As usual this approach was developed by unix people, who know nothing about scalability
(sorry, but that's the way it is... apache2 has been in development for years, and they still haven't got a stable release :\)
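To make the caching point concrete, here's a minimal sketch of my own (not taken from any actual ftpd): keep a pre-deflated '.z' sidecar next to the file and serve that on later MODE Z requests, so the deflate cost is paid once instead of on every download. The function name open_mode_z_source and the '.z' naming are made up for illustration, and the awkward part (teeing the compressed stream into the cache while the first client is being served) is left out.

```c
#include <stdio.h>
#include <fcntl.h>

/* Returns an fd whose contents should be sent on the data connection;
 * *predeflated tells the caller whether those bytes are already
 * zlib-compressed or still need deflating on the fly. */
static int open_mode_z_source(const char *path, int *predeflated)
{
    char zpath[4096];
    snprintf(zpath, sizeof zpath, "%s.z", path);   /* sidecar cache file */

    int fd = open(zpath, O_RDONLY);
    if (fd >= 0) {                 /* cache hit: send the cached bytes as-is */
        *predeflated = 1;
        return fd;
    }
    *predeflated = 0;              /* cache miss: deflate while sending, and */
    return open(path, O_RDONLY);   /* ideally tee the output into zpath too  */
}
```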
Summa summarum:
- Increases network efficiency on files that compress well; decreases it on files that don't (already-compressed data just wastes CPU and can even grow slightly)
- Adds latency in not-so-well-programmed implementations
- Increases per-transfer CPU usage by hundreds of percent
- Adds memory-bandwidth usage (buffer copying)
- Not usable on gigabit networks, if the server is to take advantage of the full bandwidth
- Works very well on 100 Mbit and slower connections with a properly coded daemon and client implementation. (A plain read(), zip(), send() loop is not such an implementation... see the sketch after this list :])
- For best efficiency requires more than one CPU... (in a well-coded application)
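And to illustrate the read(), zip(), send() remark: a minimal sketch of that naive per-transfer loop, using zlib directly since the draft says MODE Z is zlib-based. The names (send_file_mode_z, fd_in, fd_out) are mine, and error/partial-write handling is trimmed. The point is that every download burns the full deflate cost again, and every step blocks on the previous one.

```c
#include <unistd.h>
#include <zlib.h>

/* Naive MODE Z sender: read a block, deflate it, push it to the socket,
 * repeat until EOF.  Sequential and uncached -- the approach criticized
 * in the list above. */
static int send_file_mode_z(int fd_in, int fd_out)
{
    unsigned char in[16384], out[16384];
    z_stream zs = {0};
    int flush, rc = Z_OK;

    if (deflateInit(&zs, Z_DEFAULT_COMPRESSION) != Z_OK)
        return -1;

    do {
        ssize_t n = read(fd_in, in, sizeof in);    /* blocks on the disk */
        if (n < 0) { deflateEnd(&zs); return -1; }
        flush = (n == 0) ? Z_FINISH : Z_NO_FLUSH;
        zs.next_in  = in;
        zs.avail_in = (uInt)n;

        do {
            zs.next_out  = out;
            zs.avail_out = sizeof out;
            rc = deflate(&zs, flush);              /* burns CPU on every  */
                                                   /* single transfer     */
            /* write() blocks on the data socket; no overlap with deflate */
            if (write(fd_out, out, sizeof out - zs.avail_out) < 0) {
                deflateEnd(&zs);
                return -1;
            }
        } while (zs.avail_out == 0);
    } while (flush != Z_FINISH);

    deflateEnd(&zs);
    return (rc == Z_STREAM_END) ? 0 : -1;
}
```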
.... but then again, what do I know