Old 10-27-2005, 08:12 PM  
_panic_
Senior Member
 
Join Date: Jul 2005
Posts: 153

wow, thanks for such a thorough post.

let me see if i understand, since there is a lot going on here.

you're running a benchmark tool called HD Tach. when you run this benchmark with no users connected, you get the first graph (~74MB/sec).

in your second graph, with ten users connected, you run HD Tach again and get the middle benchmark (~40MB/sec). the users themselves are only pulling ~10MB/sec, so ~24MB/sec of capacity is unaccounted for.

when the users ramp up to ~25MB/sec of traffic, your benchmark drops to 6.5MB/sec, so now ~42MB/sec of capacity has gone missing somewhere.
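just to sanity-check the arithmetic above, here's a trivial python sketch with the numbers read off your graphs (treat them as approximate readings, not exact figures):

```python
# bandwidth accounting for the three scenarios (all numbers in MB/s,
# read approximately off the posted graphs)
idle_bench = 74.0      # HD Tach alone, no users connected
loaded_bench = 40.0    # HD Tach with ten mostly-idle users connected
user_traffic = 10.0    # what those users are actually pulling

unaccounted = idle_bench - (loaded_bench + user_traffic)
print(f"unaccounted capacity: ~{unaccounted:.0f} MB/s")

# second scenario: the users are actively downloading
busy_bench = 6.5
busy_users = 25.0
lost = idle_bench - (busy_bench + busy_users)
print(f"lost capacity: ~{lost:.1f} MB/s")
```

the point being that the "missing" bandwidth nearly doubles once the users generate real traffic, which smells like seek contention rather than simple bandwidth sharing.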

does that fairly summarize your problem?

i have a couple of questions. are the users some kind of "control", or are they just random users who happened to be connected? i ask because i'm trying to figure out how much of this is a controlled measurement versus random chance. this matters for i/o contention: the pattern should look different if 10 people are grabbing the same file than if they're all grabbing different files.
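if you want to separate those two cases yourself, here's a rough python sketch: time ten threads all reading one file versus each reading its own file. the file names and sizes here are made up for illustration, and the os page cache will blur the difference on files this small — on the real server you'd point it at files on the ftp volume that are bigger than ram:

```python
import os
import tempfile
import threading
import time

MB = 1024 * 1024

def make_file(path, mb=8):
    # write mb megabytes of random data (random so it can't be compressed away)
    with open(path, "wb") as f:
        f.write(os.urandom(MB) * mb)

def read_all(path):
    # sequentially read the whole file in 1MB chunks
    with open(path, "rb") as f:
        while f.read(MB):
            pass

def timed_reads(paths):
    # start one reader thread per path and time until they all finish
    threads = [threading.Thread(target=read_all, args=(p,)) for p in paths]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

tmp = tempfile.mkdtemp()
shared = os.path.join(tmp, "shared.bin")
make_file(shared)
own = [os.path.join(tmp, f"own{i}.bin") for i in range(10)]
for p in own:
    make_file(p)

same = timed_reads([shared] * 10)  # ten readers, one file
diff = timed_reads(own)            # ten readers, ten files
print(f"same file: {same:.2f}s, different files: {diff:.2f}s")
```

on a spinning disk with uncached files, the ten-different-files case should take noticeably longer, because the head is seeking between ten positions instead of streaming one region.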

second, did a problem prompt you to run the benchmark, or did you only get concerned after you saw the benchmark result? and what does running two HD Tach instances at the same time report? i'm trying to figure out whether the issue is ioftpd itself, or whether ioftpd is just a canary for a deeper configuration/contention problem.
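if HD Tach refuses to run twice at once, you can approximate the "two benchmarks" test with a sketch like this: measure sequential read throughput with one reader, then with two concurrent readers on the same disk. again, the paths and sizes are illustrative, and freshly written files will sit in the os cache — for real numbers use pre-existing files larger than ram:

```python
import os
import tempfile
import threading
import time

MB = 1024 * 1024

def make_file(path, mb=16):
    with open(path, "wb") as f:
        for _ in range(mb):
            f.write(os.urandom(MB))

def read_throughput(path):
    # sequentially read the file and return MB/s
    start = time.perf_counter()
    n = 0
    with open(path, "rb") as f:
        chunk = f.read(MB)
        while chunk:
            n += len(chunk)
            chunk = f.read(MB)
    return n / MB / (time.perf_counter() - start)

tmp = tempfile.mkdtemp()
a = os.path.join(tmp, "a.bin")
b = os.path.join(tmp, "b.bin")
make_file(a)
make_file(b)

solo = read_throughput(a)  # one reader by itself

# now two readers at once on the same disk
results = {}
def worker(name, path):
    results[name] = read_throughput(path)

t1 = threading.Thread(target=worker, args=("a", a))
t2 = threading.Thread(target=worker, args=("b", b))
t1.start(); t2.start()
t1.join(); t2.join()
pair = results["a"] + results["b"]
print(f"one reader: {solo:.0f} MB/s, two readers combined: {pair:.0f} MB/s")
```

if two readers together get far less than one alone, the drive is thrashing on seeks rather than splitting its bandwidth fairly, and ioftpd is probably just the messenger.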