Well, I actually don't read the userfiles myself. My main plugins are simple command-line based plugins that get group/user/directory/filename/speed/size/... parameters as arguments from ioFTPD. So as long as that is still possible, I have very little to do.
I also do not fiddle with the .ioftpd files myself in FTPLogger, OnDirCreated and OnDirDeleted; I simply check whether these plugins are running as a subprocess of ioFTPD. If so, I know I can use shared memory to recache a directory, as my separate ioRecache does. That forces io to simply reread the dir and make a new .ioftpd file, like so:
Code:
....
// Now mark the dir as dirty so io rereads it and rebuilds the .ioftpd file
lpMessage->dwIdentifier = DC_DIRECTORY_MARKDIRTY;
strcpy(PathToRefresh, Path);
DWORD dwRes = SendMessage(hIoFTPDWindow, WM_SHMEM, NULL, (LPARAM)hMemory);
if (!dwRes) WaitForSingleObject(hEvent, 2000);
// Unlock shared memory
ShMem_Unlock(lpMessage);
...
There is a little piece in FTPLogger that force-deletes a .ioftpd file if needed, and I use printf("!vfs:chattr 1 \"%s\" \"%s\"",dir,TagType); on the tag that is created to set the tag's owner and rights to those of the last uploaded and completed file. I use the USER and GROUP environment variables as well as the UID and GID. I can easily do that thanks to the check for being a subprocess of ioFTPD.
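In plugin terms that step is tiny. A minimal sketch, assuming ioFTPD exposes the uploader's identity to the subprocess through the USER/GROUP/UID/GID environment variables as described above; dir and TagType are just placeholder names here, and the chattr line is printed exactly as in FTPLogger:
Code:
#include <stdio.h>
#include <stdlib.h>

static void set_tag_owner(const char *dir, const char *TagType)
{
    const char *user  = getenv("USER");   /* name of the last uploader     */
    const char *group = getenv("GROUP");  /* primary group of the uploader */
    const char *uid   = getenv("UID");    /* numeric ids, if preferred     */
    const char *gid   = getenv("GID");

    if (!user || !group)
        return;   /* not started as an ioFTPD subprocess: skip the chattr */

    /* the line FTPLogger prints; io is expected to pick it up from the
       plugin's output and change the attributes of the new tag */
    printf("!vfs:chattr 1 \"%s\" \"%s\"", dir, TagType);

    (void)uid; (void)gid;  /* how the ids feed in is left out of this sketch */
}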
Now for the other apps: ioSiteKill, ioRecache, ioSiteWho, ioGUI, ioVFSDLL and ioSymlinkRemover.
ioSymlinkRemover has never been publicly released; it altered the .ioftpd file itself, so if the format changes I don't really mind. I can simply recode it, as long as I know the structure. But I see no real reason to use it if there is another, better mechanism.
ioVFSDLL has never been released either; it is a DLL that handles the VFS filesystem and the .ioftpd files, so the same goes here.
ioSiteWho does read the user and group files. It basically uses shared memory to loop through all the active users, grabs their structure, and then grabs the needed group and user data from the group and userfiles. It uses that text data for pattern matching, after which it decides what info to show. Basically it is the well-known sitewho, but somewhat extended.
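The core of ioSiteWho is just that loop. A rough sketch, where GetOnlineEntry, LookupUserName, LookupGroupName and MatchPattern are hypothetical stand-ins for the real shared-memory calls and user/group file lookups, not actual ioFTPD API names:
Code:
#include <stdio.h>

typedef struct {
    int  iUid;          /* user id of the connection */
    int  iGid;          /* primary group id          */
    char szPath[256];   /* current directory/action  */
} ONLINE_ENTRY;         /* simplified stand-in for io's online structure */

/* hypothetical helpers */
int         GetOnlineEntry(int iPos, ONLINE_ENTRY *out);  /* 0 past the last slot */
const char *LookupUserName(int uid);                      /* from the userfile    */
const char *LookupGroupName(int gid);                     /* from the groupfile   */
int         MatchPattern(const char *pattern, const char *text);

void ShowWho(const char *pattern)
{
    ONLINE_ENTRY entry;

    /* walk every connection slot exposed through shared memory */
    for (int iPos = 0; GetOnlineEntry(iPos, &entry); iPos++) {
        const char *user  = LookupUserName(entry.iUid);
        const char *group = LookupGroupName(entry.iGid);

        /* wildcard match on the text fields decides what gets shown */
        if (MatchPattern(pattern, user) || MatchPattern(pattern, group))
            printf("%-12s %-12s %s\n", user, group, entry.szPath);
    }
}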
Then there is ioSiteKill, which was built for ioFTPD 5.x. It also loops through the connections using shared memory, then grabs user and group data from the appropriate user and group files. Then, using wildcards and a built-up string, I kill a connection, e.g. sitekill user!=sitebot path=/test/dummy/* "idletime>120"
So basically, for a user I get the full user data from the user name, then get the group name, the groups he is in, and also the admin groups. The kill finally happens with a SendMessage using WM_KILL and the iPos (the running number in the connection loop).
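The last step is then just one message per matching connection. Sketched here with a hypothetical MatchesFilters() doing the wildcard work, and with the WM_KILL parameter layout assumed rather than copied from the real source:
Code:
#include <windows.h>
#include <stdio.h>

/* hypothetical: true when user/group/path/idletime match the sitekill args */
int MatchesFilters(int iPos, int argc, char **argv);

void KillMatching(HWND hIoFTPDWindow, UINT uKillMsg,   /* the WM_KILL message */
                  int nConnections, int argc, char **argv)
{
    for (int iPos = 0; iPos < nConnections; iPos++) {
        if (!MatchesFilters(iPos, argc, argv))
            continue;
        /* iPos is the running number from the connection loop; io kicks
           that connection when it gets the message */
        SendMessage(hIoFTPDWindow, uKillMsg, (WPARAM)iPos, 0);
        printf("killed connection %d\n", iPos);
    }
}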
ioRecache is rather simple: it allocates the shared memory and then, where needed (based on the parameters), does a SendMessage with WM_SHMEM, which basically forces io to reread the dir and remake the .ioftpd file.
So basically, if you make big changes, I simply branch off a new version of my tools, though I may drop support for older versions if people don't use them anymore. So far ioSiteKill is the most complicated one.
The idle time and such I just retrieve from io's memory structure.
Now if it were up to me, I would definitely make it simpler and have it act like a database. The following routines would be my choice:
Give total number of connections
Give connection info for connection(x) including username
Give userinfo for user(y) including group(s)
Give groupinfo for group(z)
That would handle most loops. Since a user can log out while one program is looping, you'd need an active marker on a connection, but that was already there in some form in version 4.x.
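In rough C terms, those four routines would look something like the sketch below; every name and struct is made up for illustration, none of it exists in ioFTPD today:
Code:
typedef struct {
    char szUser[64];
    char szPath[256];
    int  iIdleTime;
    int  bActive;        /* the "active marker" mentioned above */
} CONN_INFO;

typedef struct {
    char szName[64];
    int  aGids[32];      /* groups the user is in  */
    int  nGids;
    int  aAdminGids[32]; /* groups the user admins */
    int  nAdminGids;
} USER_INFO;

typedef struct {
    char szName[64];
    char szDescription[128];
} GROUP_INFO;

int GetConnectionCount(void);                          /* total number of connections  */
int GetConnectionInfo(int x, CONN_INFO *out);          /* connection(x) incl. username  */
int GetUserInfo(const char *user, USER_INFO *out);     /* user(y) incl. group(s)        */
int GetGroupInfo(const char *group, GROUP_INFO *out);  /* group(z)                      */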
Alternatively, you could make semaphore-like commands:
Lock Connection, Lock User and Lock Group, and build command queues per connection, user and group. So if connection 1 uploads a file, it locks the connection, then finds the user, locks the user, adds some data, unlocks the user, locks the groups, changes some data where needed, unlocks the groups and unlocks the connection. Another connection of that same user can then have an upload with its own changes next in the queue. All you'd need in io is to avoid deadlocking, which a timeout would handle.
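Spelled out for one upload, with LockConnection/LockUser/LockGroup and their Unlock counterparts as the hypothetical commands, each one failing after a timeout instead of deadlocking:
Code:
/* hypothetical lock/unlock commands proposed above; a zero return means
   the lock was not obtained within the timeout */
int  LockConnection(int iConn, int timeoutMs);
void UnlockConnection(int iConn);
int  LockUser(const char *user, int timeoutMs);
void UnlockUser(const char *user);
int  LockGroup(const char *group, int timeoutMs);
void UnlockGroup(const char *group);

/* hypothetical bookkeeping done while the locks are held */
void AddUserCredits(const char *user, long long bytes);
void AddGroupStats(const char *group, long long bytes);

#define LOCK_TIMEOUT_MS 2000

int OnUploadComplete(int iConn, const char *user, const char *group, long long bytes)
{
    if (!LockConnection(iConn, LOCK_TIMEOUT_MS))
        return 0;                            /* give up rather than deadlock */

    if (LockUser(user, LOCK_TIMEOUT_MS)) {
        AddUserCredits(user, bytes);         /* add some data                */
        UnlockUser(user);
    }
    if (LockGroup(group, LOCK_TIMEOUT_MS)) {
        AddGroupStats(group, bytes);         /* change data where needed     */
        UnlockGroup(group);
    }

    UnlockConnection(iConn);                 /* next queued change can run   */
    return 1;
}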
Storing the data in a database, an ini file, the registry, remotely in MySQL or wherever would then only need some plugin to read/write/update/delete, and that could even be written simply in Tcl, Lua, VBScript or PowerShell. Anyway, those are just my ideas of course (I've got tons of ideas as usual...).
I purposely built my tools to support more FTP servers and to be as flexible as possible, so only the tools that start with io are ioFTPD-only...