06-08-2004, 03:32 PM  
neoxed
Too much time...
 
Join Date: May 2003
Posts: 1,326

Quote:
Originally posted by jeza
me dont like mysql and mysql dont like me
Whether it's MySQL, MS-SQL, Oracle, etc., using a database server would be the most appropriate solution. With a database, you can lock records to prevent multiple simultaneous updates to the same user or group (this way a site is forced to use the latest user and group data). Perhaps the biggest advantage of a database is the central location: you only have to update one place rather than 3, 5, 9+ sites.
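For example, here is a minimal Tcl sketch of the locking idea, assuming the mysqltcl extension, an InnoDB table, and a made-up users(username, credits) schema; the connection details are placeholders only:

[code]
package require mysqltcl

# Placeholder connection details and a hypothetical users(username, credits) table.
set db [mysqlconnect -host localhost -user io -password secret -db sitesync]

# Lock the row while updating so two sites can't change the same user at once.
mysqlexec $db "START TRANSACTION"
mysqlsel  $db "SELECT credits FROM users WHERE username='someuser' FOR UPDATE"
set credits [lindex [mysqlnext $db] 0]
mysqlexec $db "UPDATE users SET credits = credits + 1024 WHERE username='someuser'"
mysqlexec $db "COMMIT"

mysqlclose $db
[/code]

Every site reads and writes that one table, so there is nothing to push around between sites afterwards.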

Quote:
Originally posted by jeza
no need to monitor nuke,pre,...
when u add a array with commands to check
on pre_event site the script just checks if the command is in the array and execute on all others sites else return
That's exactly my point: if you understood anything about efficiency, you would realize that it's bad design and a bad idea to write this using TCL.

You would have to monitor all credit and stats changes; the only *successful* way to do this in TCL would be as follows:

1. Save all userfiles, groupfiles, and the lists of UIDs (Tcl command: [user list]) and GIDs (Tcl command: [group list]) into memory. (Use ioFTPD's 'var set' command so the variables are available to other interpreters, since ioFTPD uses several.)
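A rough sketch of step 1: [user list], [group list] and 'var set' are the ioFTPD Tcl commands mentioned above, while get_userfile/get_groupfile are hypothetical helpers standing in for however your ioFTPD version actually reads a userfile or groupfile:

[code]
# Step 1 sketch: snapshot the UIDs, GIDs, userfiles and groupfiles into
# shared memory so a later run can diff against them.
proc snapshot_save {} {
    set uids [user list]
    set gids [group list]

    # 'var set' makes the values visible to ioFTPD's other interpreters.
    var set sync:uids $uids
    var set sync:gids $gids

    foreach uid $uids {
        var set sync:user:$uid [get_userfile $uid]    ;# hypothetical helper
    }
    foreach gid $gids {
        var set sync:group:$gid [get_groupfile $gid]  ;# hypothetical helper
    }
}
[/code]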

2. Create a script that compares the new userfiles and groupfiles with the older ones saved in memory whenever there may have been a change (a comparison sketch follows):
  a. Retrieve a list of the current UIDs and GIDs and compare them with the older ones to see if any users or groups have been added or removed.
  b. Have the script find the differences in ALL userfiles and groupfiles between the updated ones and the older ones.
  c. You would have to check ALL users and groups in case there were changes to several users or groups (e.g. SITE CHANGE * BLAH or SITE NUKE).
  d. If there's a difference in credits or stats, calculate the numerical difference between them.
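The comparison itself is plain Tcl. A sketch, assuming each cached userfile has already been parsed into a flat "field value" list (the real ioFTPD userfile format is line-based and would need parsing first):

[code]
# Step 2 sketch: find added/removed IDs and the numerical credit difference.
proc diff_ids {oldIds newIds} {
    set added {}
    set removed {}
    foreach id $newIds {
        if {[lsearch -exact $oldIds $id] == -1} { lappend added $id }
    }
    foreach id $oldIds {
        if {[lsearch -exact $newIds $id] == -1} { lappend removed $id }
    }
    return [list $added $removed]
}

proc diff_credits {oldFile newFile} {
    # Both arguments are flat "field value" lists, e.g. {credits 1000 ratio 3}.
    array set old $oldFile
    array set new $newFile
    return [expr {$new(credits) - $old(credits)}]
}
[/code]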

3. You would have two options for triggering this script (see the timer sketch below):
  a. Commands: after all uploads, deletes, SITE CHANGE and other related site commands.
  b. Timer: create an ioFTPD TCL timer or schedule the script to run at even intervals (e.g. every 5 minutes), FAR better than after each command.
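To illustrate the timer option: the [Scheduler] line in the comment is only a guess at ioFTPD's syntax (check the documentation before using it), and 'var get' is assumed to be the counterpart of 'var set':

[code]
# Step 3 sketch: an entry point the scheduler would run every 5 minutes.
#
#   [Scheduler]  (hypothetical ioFTPD.ini entry, verify the real syntax)
#   SiteSync = 0,5,10,15,20,25,30,35,40,45,50,55 * * * TCL ..\scripts\sitesync.tcl
#
proc sitesync_run {} {
    set oldUids [var get sync:uids]   ;# assumes a 'var get' counterpart to 'var set'
    set newUids [user list]
    foreach {added removed} [diff_ids $oldUids $newUids] break
    # ...diff the userfiles (step 2), push the changes (step 4),
    #    then refresh the snapshot (step 5)...
}
sitesync_run
[/code]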

4. Finally, if a change in a userfile or groupfile was found, you would have to sync the other sites (sketched below):
  a. Have a listening socket on all sites; connect to it and send only the specific parts of the userfile or groupfile that have changed.
  b. In the event that connecting to a site fails, you would need a fail-safe mechanism, such as a spinlock to ensure that the site was updated; if the spinlock fails 5 times, break the loop.
  c. Another option would be to connect to ioFTPD's Telnet or FTP service and issue the changes from there (although this method is much slower than the previous option).
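A pure-Tcl sketch of option (a) with the retry loop from option (b); the port, password and "ADJ" line format are invented for illustration:

[code]
# Step 4 sketch: push changed fields to one remote site, retrying up to 5 times.
proc push_changes {host port changes} {
    set attempts 0
    while {$attempts < 5} {
        if {[catch {
            set sock [socket $host $port]
            fconfigure $sock -buffering line
            puts $sock "AUTH persite-password"
            foreach {user field delta} $changes {
                puts $sock "ADJ $user $field $delta"   ;# e.g. "ADJ someuser credits +1024"
            }
            puts $sock "QUIT"
            close $sock
        } err]} {
            incr attempts
            after 2000    ;# wait 2 seconds before retrying
        } else {
            return 1      ;# site updated
        }
    }
    return 0              ;# mark the site as not-updated and retry later
}
[/code]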

5. After the update, save all userfiles, groupfiles, UIDs and GIDs into memory once again.


Problems with this:
  • The updated userfile information exchanged between sites would be unencrypted, a HUGE security issue. You could use a TCL SSL/TLS socket wrapper or an HMAC-MD5/Blowfish scheme with shared keys to encrypt the data between sites (a TLS sketch follows this list).
  • Either a socket wrapper or an HMAC scheme would increase the load even more, and this script would already create a tremendous load to begin with.
  • If you used listening sockets on the sites, this would be another security issue, unless you created a simple authentication system using hostname/IP-specific passwords. Each site should have different authentication credentials to avoid any form of hijacking in case a siteop/owner goes rogue.
  • When connecting to the listening sockets, if a connection fails, have a spinlock retry 5 times (for example) before breaking the loop. Mark the site as not-updated and retry at a later time (in case the site was down at the moment).
  • If you use the scheduled update method, there may be syncing problems. If a site shuts down unexpectedly before updating the other sites, it would be out of sync (by out of sync, I mean that the userfiles wouldn't be the same on all sites). You would then need a method to ensure all sites are up-to-date.
  • However, if you updated each site after each command you would never have this issue, but updating on a command basis would create a tremendous load. Updating at even intervals is still the better idea despite the problems that may arise from an unexpected shutdown.
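For the encryption point above, a minimal sketch of the listening side using the Tcl 'tls' extension; the certificate paths, port and password check are placeholders, and an HMAC/Blowfish scheme would instead sign and encrypt each line before writing it to a plain socket:

[code]
package require tls

# Accept only TLS connections; each site uses its own certificate and password.
proc accept_conn {sock addr port} {
    fconfigure $sock -buffering line
    if {[gets $sock] ne "AUTH persite-password"} {
        close $sock
        return
    }
    # ...read the "ADJ user field delta" lines and apply them here...
}

tls::socket -server accept_conn -certfile site.pem -keyfile site.key 12345
vwait forever   ;# enter the event loop so accept_conn can fire

# The connecting side would simply replace [socket $host $port] in the step 4
# sketch with [tls::socket $host $port].
[/code]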

In conclusion:
Writing such a script in TCL would be like killing a fly with a hydrogen bomb. Using ioFTPD's module system, you would be able to create a FAR more efficient and robust script. The user and group module system exists for this exact reason; I'm sure darkone, Mouton and the other developers on this board would agree with me.

PS: I'm not speaking from an observer's perspective; I've been there and attempted to write such a script in TCL. I took a distributed approach, updating all sites instead of having a "hub" site. I found it created an excessive load on the system and wasn't reliable enough to use in a production environment, which is why I never released it publicly. I dubbed the script ioMSS, aka ioMultiSiteSync (the name originated from Turranius' MSS for glFTPD).

Whew! I haven't ranted in a long time; I was overdue. There are a few threads on this board related to this that may be of some interest to you.