
ioFTPD General: New releases, comments, questions regarding the latest version of ioFTPD.

04-11-2010, 01:06 PM   #1
isteana
Senior Member
Join Date: Mar 2006
Posts: 110

The issue of auto wiping

First of all, thank you for the recent updates that have fixed small bugs and added a few new features.

However, I don't think the work is being prioritized efficiently: the parts that are core to running a server 24 hours a day unattended should come first.

Right now the biggest problem for running an unattended server, which I have already brought up several times in the past, is that the auto-wiping function does not work properly. I doubt there are many FTP sites that run without auto wiping. Yet even though auto wiping is such an important function, the old WARCHIVE script has a bug, and not a single new script has come out to replace it.

Nobody has written a related script since WARCHIVE, and WARCHIVE itself still has the bug. Its biggest problem is that when it deletes a release whose folder name exceeds a certain length (I don't know the exact number of characters at which it breaks), the folders and files inside are not removed automatically; the script just stops partway through, so from the user's point of view the wipe never completes.

I don't know how common such long release names are elsewhere, but in some of the sections I run there are quite a lot of directories with long names.

So if you rely on WARCHIVE alone to manage those sections, the old dirs/files with long names never get wiped, the disk fills up solid with new releases, and at some point uploading becomes impossible.

Only the WARCHIVE developer has the source, so nobody else can fix the bug or publish an update. The developer has effectively disappeared, and there is no way to be sure he will come back or even care about this.

Think about it: on a server where new files keep coming in 24 hours a day, how are you supposed to manage it if old, unneeded files are not removed automatically?

Maybe a small archive-only server could get by, but on a big server where new releases come in endlessly, expecting the admin to monitor it around the clock and wipe old, unneeded releases by hand just to keep some free space is absurd. Don't you think it is ridiculous that the admin has to do this manually?

That is why I keep bringing up this script problem: nobody is writing one, and I cannot just sit and wait.

In my opinion, a function this basic should be built into the server itself rather than modularized into yet another script, as I have said several times. I hope that is given serious consideration.

Everyone knows it would be quicker to simply move to a stable Linux-side server (like glftpd) where all of this is already in place, rather than keep waiting here. But there are admins and users who don't want to give up a Windows-based server. Doesn't that mean this server program still deserves some love? Please take the seriousness of this problem to heart.

Right now the most basic thing, making the server able to run unattended, is being set aside while updates keep focusing only on additional features. I think that is clearly a problem.

Even looking at the recent version upgrades, hasn't it been hard to point to substantial bug fixes or genuinely reworked functionality in this area?

Please set the priorities rationally and proceed accordingly.

Last edited by isteana; 04-12-2010 at 02:10 PM.
04-11-2010, 06:55 PM   #2
monk-
Member
Join Date: Oct 2007
Posts: 32

damn, the worst machine translation ever

I gather that you're complaining about a bug in warchive where it doesn't wipe folders with very long paths?
But can't you just work around it and use short paths in the VFS?
If you put your section in c:\Documents and Settings\user\My Documents\MOVIES,
you will run into the long-path problem much sooner than if it were c:\MOVIES.
04-11-2010, 07:03 PM   #3
monk-
Member
Join Date: Oct 2007
Posts: 32

It would be nice if someone could make an autowipe script in itcl with the same options as warchive.
04-11-2010, 07:47 PM   #4
isteana
Senior Member
Join Date: Mar 2006
Posts: 110

I mean a single "release" name; it has nothing to do with the path length.
For example, the release below won't get deleted by warchive:

Bullshit.This.is.Longist.Name.of.National.Geographic.Nazi.Secret.Weapons.Worlds.First.Guided.Smart.Bomb.1080p.HDTV.x264-DHDVD

Last edited by isteana; 04-11-2010 at 08:01 PM.
04-11-2010, 08:36 PM   #5
monk-
Member
Join Date: Oct 2007
Posts: 32

I think it's because there are filenames in that release with the same name as the release,
which exceeds 256 chars in total.
If they were 8.3 filenames, then the wipe would go fine;
the problem is still the full path length.
If you use warchive for a request section, you will run into this problem much faster,
because you get /REQUEST/[FILLED]-very.long.release-name/very.long.release-name/very.long.release-name.rar.
Say very.long.release-name = 90 chars, so 3 * 90 = 270 chars.
Too beaucoup.

And I think there are rules for release-name length anyway.
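For anyone who wants to check their own sections, here is a rough plain-Tcl illustration of that arithmetic (nothing ioFTPD- or warchive-specific; the drive letter and the 90-character name are made up), showing how a request-style path shoots past the classic 260-character Windows MAX_PATH limit:

Code:
# Plain Tcl, purely illustrative: build the request-section path from a
# made-up 90-character release name and print its total length.
set release [string repeat "x" 90]    ;# stand-in for a 90-char release name
set path "D:/REQUEST/\[FILLED\]-$release/$release/$release.rar"
puts "[string length $path] chars (classic Windows limit is ~260)"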

Last edited by monk-; 04-11-2010 at 09:02 PM.
04-12-2010, 04:18 PM   #6
Yil
Too much time...
FlashFXP Beta Tester
ioFTPD Administrator
Join Date: May 2005
Posts: 1,194

Here's a few thoughts on the issues...

At one point I had some code in the server to detect when disk space got below some threshold before creating a directory; it would then call a new OnDiskSpace event whose task was to make space or abort. I thought this was a simple solution, but decided against it. What really needs to happen (and will in a later release) is that the already-called OnNewDir event should be able to choose where to create the directory as well as be responsible for making sure enough space exists, or else abort the creation. The "where" is of course limited to the various possible paths of the merged directory, unless it's a pure virtual directory, in which case it could be anywhere. EVEN WITHOUT THIS CHANGE you can already make space on demand in the OnNewDir event right now...

That is the easy part; the hard part is deciding what should be deleted to make room when there is no other choice. I freed up 3 bits of every directory permission (the +x bit of owner, group, and other), and my long-term plan was to use one of these bits for this. This would allow you to make individual directory trees or single directories immune from wiping. There are many other ways to get the same result using a database, chattr directory attributes, .ioFTPD.file hidden files, etc. I just liked this idea because you could immediately see in the directory listing which dirs were exempt.

In my previous effort I had some glob patterns you could define in the .ini file to detect complete/incomplete dirs. There were also some simple rules like: any dir with no subdirs or only empty subdirs was a "release", and another set of defined glob patterns was applied to all non-empty subdirs in a directory; if they all matched, the directory was considered a "release". That's pretty common and simple stuff used by many scripts. Then the ages of the directories could be sorted and the oldest non-exempt directory removed. Compiling all that info on demand all the time would be really bad, so it really needs to be stored in a database, which is why ioFTPD itself will never auto-wipe stuff but must rely on a script to do it.
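To make that rule set concrete, here is a minimal plain-Tcl sketch (not nxTools or warchive code; the section path, keep count and the exempt-marker file name are invented for the example, it leaves out the glob patterns, and it sorts on mtime, which runs straight into the caveat in the next paragraph):

Code:
# Minimal sketch, plain Tcl only: treat any dir with no subdirs as a
# "release", skip dirs carrying a (made-up) exempt marker, sort the rest
# oldest-first by mtime and wipe everything beyond a keep limit.
set section "D:/site/XVID"     ;# example section path
set keep    200                ;# newest releases to keep
set exempt  ".noWipe"          ;# hypothetical marker file: never wipe this dir

proc isRelease {dir} {
    # Simplest rule from above: no subdirectories at all.
    expr {[llength [glob -nocomplain -types d -directory $dir *]] == 0}
}

set releases {}
foreach dir [glob -nocomplain -types d -directory $section *] {
    if {[isRelease $dir] && ![file exists [file join $dir $exempt]]} {
        lappend releases [list [file mtime $dir] $dir]
    }
}

# Oldest first; lrange leaves the $keep newest entries untouched.
foreach entry [lrange [lsort -integer -index 0 $releases] 0 end-$keep] {
    puts "wiping [lindex $entry 1]"
    file delete -force [lindex $entry 1]
}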

It's immediately obvious however that using that simple set of rules above could lead to some real problems. Rescanning/updating of imdb links in an XVID directory might adjust the directory timestamps in such a way that new stuff got deleted first. So it would be important to use the creation time rather than the last modified disk timestamp, but internally ioFTPD doesn't care or track creation time. A simple query against something like the nxTools dupedb which is based on creation time to get the oldest non-exempt directories on a particular disk would be the simplest solution. But what if you wanted to keep some directory trees longer than others but they are on the same disk? That's the specification problem and it's tricky. I don't presume to know the best solution, but that's why having a script do it makes sense since it would be easier to customize.
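To show what such a dupedb lookup could look like from Tcl, here is a hypothetical sketch; the database path, table and column names below are assumptions for illustration only and do not reflect the actual nxTools schema, which would need to be checked first:

Code:
package require sqlite3

# Hypothetical only: the file, table and column names are invented; check
# the real nxTools dupe database schema before doing anything like this.
sqlite3 dupedb "D:/ioFTPD/scripts/data/DupeDirs.db"

foreach {stamp path} [dupedb eval {
    SELECT TimeStamp, DirPath FROM DupeDirs
    WHERE DirPath LIKE 'D:/site/%'
    ORDER BY TimeStamp ASC
    LIMIT 10
}] {
    puts "[clock format $stamp -format {%Y-%m-%d}] $path"
}
dupedb close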

The most important point here is that everything but spanning disks on directory creation can be done today by just adding the functionality into something like nxTools. There really isn't any reason why the simple solution outlined above wouldn't solve the problem for 90% of users...
04-13-2010, 02:42 AM   #7
isteana
Senior Member
Join Date: Mar 2006
Posts: 110

In the nxDupe case, it can be solved with "site rebuild".
I think most people already run that command from the scheduler at least once a day
to keep the dir/file information accurate anyway.

Anyway, the big point is: how can we expect improvement if we do nothing but worry?
I don't think this is something we can just leave as it is.

Even if we run into difficulties, we should add at least one new function
and then correct it step by step; that is the right way forward for now.

And I think the reason people don't raise this kind of problem is
that, when they need professional functionality, it is faster to move to glftpd on a Linux server
than to wait here for something to arrive.
04-13-2010, 05:22 AM   #8
pion
Senior Member
Join Date: Feb 2006
Posts: 138

I also don't see why it is ioFTPD's task to make sure the hard drive always has free space, especially when this can already be handled by scripts such as warchive today. warchive isn't perfect, but that is mainly due to bugs which can be worked around by making sure the configuration is accurate.

The only real reason to move to glftpd rather than ioFTPD to date is, in my opinion, stability issues (which are really severe in io 7+, 7.3.3 included). But I enjoy the fact that ioFTPD is open source and actively maintained, compared to glftpd.

Last edited by pion; 04-13-2010 at 08:11 AM.
04-13-2010, 03:31 PM   #9
monk-
Member
Join Date: Oct 2007
Posts: 32

Though I agree with isteana: nxTools is the only usable site script.
With warchive doing all the autowiping, nxDupe has no notion of what was wiped, so you need to site rebuild all the time to keep site search accurate.
So an autowipe script plugged into the nxTools SQLite db should complete the whole package.
But I asked neoxed about this before, and I sensed no enthusiasm for writing such a script.
I made some adjustments to nxTools before, like an hour offset instead of a day offset on nxnewdate, or adding nukes and unnukes to the dupedb,
but a whole autowipe script, I don't see myself writing that :P
Maybe someone else has the enthusiasm to script such a thing.
04-13-2010, 05:39 PM   #10
monk-
Member
Join Date: Oct 2007
Posts: 32

http://code.google.com/p/nxscripts/issues/detail?id=1
Good?
If anyone knows of any bugs in nxTools, you can also use code.google for that.
Now is the time to report those bugs so we end up with a fully working site script. You too, Yil.
Maybe even with autowipe.

Last edited by monk-; 04-13-2010 at 05:50 PM.
04-14-2010, 07:40 PM   #11
isteana
Senior Member
Join Date: Mar 2006
Posts: 110

Generally, logging what was auto-wiped into the nxDupe database is not so important for now,
and as I said, it can be handled with site rebuild for the present.
I think Yil is worrying about this needlessly, and that wastes time that could go into adding features.