02-25-2005, 02:58 PM
andreag
Junior Member
 
Join Date: Feb 2005
Posts: 4
Resume issues... a practical example!

Sorry for the double post, something went wrong here with Internet Explorer and the old session.

About your answer:

>
> To test the file integrity would require a test at the end of the file. The best solution would be during the rollback; however, the corruption could have occurred when the connection was lost and the rest of the file might be OK. So if the integrity check is done on the data that is going to be replaced, and the data doesn't match and we overwrite it, the user could end up losing GBs of data that was in fact flawless.
>

Good point. That's why you should ENFORCE a rollback of at least 1-2 KBytes (maybe more? You have more experience than me here) and then (just an example) check only the last 4 KBytes of the rolled-back file, so the rolled-back data itself is never part of the comparison!

Example:

I transfer file "A" (2000 KBytes long) from an FTP server, but something breaks after 1000 KBytes and the transfer stops.

File "A" on the client is 1000 KBytes, on the FTP server is 2000 KBytes.

Then, when resuming:

1. Take 2 KBytes (maybe more? maybe it's better to simply use the "rollback" parameter we already have in FlashFXP) off the end of the local file, to cut away any trailing corruption caused by the network problem. The local file "A" is now (logically) 998 KBytes long, supposed free of trailing corruption.

2. Take the last 4 KBytes of local file "A" (the 995th to 998th KByte) and put them in a small memory buffer (an array of bytes).

3. Ask the FTP server to restart (REST) the REMOTE file "A" from the position of KByte 995 and read the first 4 KBytes it sends into another memory buffer.

4. Compare the 4 KBytes in the buffer from the LOCAL file "A" with the ones from the REMOTE file "A".

If the compare is OK, then it should be safe to continue with the resume (at this point the memory buffers get written to disk). If it is not, we should overwrite, because the files are different for sure: the rollback already cut out any eventual network errors (provided we used enough rollback KBytes to avoid trailing corruption).
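Here is a rough sketch of the check in Python, just to make it concrete (using the standard ftplib; the names ROLLBACK, CHECK, fetch_remote_block and safe_to_resume are mine, not anything from FlashFXP):

import os
import ftplib

ROLLBACK = 2 * 1024   # bytes trimmed off the local tail (assumed enough
                      # to discard corruption from the broken connection)
CHECK = 4 * 1024      # bytes compared between local and remote

def fetch_remote_block(ftp, remote_name, offset, count):
    # RETR starting at 'offset' via REST, read 'count' bytes, then drop
    # the data connection (an aborted transfer, so expect a 426 reply).
    ftp.voidcmd("TYPE I")
    conn = ftp.transfercmd("RETR " + remote_name, rest=offset)
    block = b""
    while len(block) < count:
        chunk = conn.recv(count - len(block))
        if not chunk:
            break
        block += chunk
    conn.close()
    try:
        ftp.voidresp()
    except ftplib.all_errors:
        pass
    return block

def safe_to_resume(ftp, remote_name, local_path):
    # True when the tail of the rolled-back local file matches the same
    # byte range of the remote file.
    size = os.path.getsize(local_path)
    if size <= ROLLBACK + CHECK:
        return False                  # too small to check: just redownload
    resume_at = size - ROLLBACK       # logical end of file after rollback
    offset = resume_at - CHECK        # where the compared block starts
    with open(local_path, "rb") as f:
        f.seek(offset)
        local_block = f.read(CHECK)
    return fetch_remote_block(ftp, remote_name, offset, CHECK) == local_block

If safe_to_resume() answers yes, truncate the local file to resume_at and continue with REST resume_at; if it answers no, restart from the first byte. (In a real client the 4 KBytes would simply keep coming from the same RETR instead of an aborted one, exactly as described above, so no extra transfer is wasted when the resume goes on.)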

If you follow this logic, you can't go wrong. With this simple addition, you won't end up with wrong resume operations, at least in situations like mine (where it has happened to me many times for sure).

Moreover, if you "remembered" the REMOTE size of file "A" when you started to download it, a changed size could be used as an alert that the resume operation is pointless at this time, but I think the decision should be left to the user (if the remote file changed its size, there is real doubt that it is still the same file as in the previous download session).
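Checking the remembered size would be trivial, something like this (again just a sketch; remembered_size would come from wherever FlashFXP stores the queue):

import ftplib

def remote_size_changed(ftp, remote_name, remembered_size):
    ftp.voidcmd("TYPE I")          # SIZE usually wants binary mode
    try:
        return ftp.size(remote_name) != remembered_size
    except ftplib.all_errors:
        return None                # no SIZE support: can't tell

If it returns True, warn the user that the remote file probably changed and let him decide.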

Understood?

Let me know!

>
> If we were to test any other part of the file it would require starting/stopping and then resuming from the end of the file, which could lead to other problems.
>


True, but even so it could be useful for very big files and archive files, to spot the "directory" differences immediately (the list of files is often in the very first KBytes of archives).
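The same fetch_remote_block() from the sketch above works here too, just pointed at the start of the file (with the caveat that it costs one extra aborted RETR, the start/stop you mentioned):

def headers_match(ftp, remote_name, local_path, count=8 * 1024):
    # Archive directories / file lists usually sit in the first KBytes,
    # so comparing them spots a changed archive right away.
    with open(local_path, "rb") as f:
        local_head = f.read(count)
    return fetch_remote_block(ftp, remote_name, 0, len(local_head)) == local_head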

But my "system" explained before isn't requesting more "FTP" commands than the normal situation if the resume operation can really go on. It's obvious that if the buffers are different I have to overwrite the file, so I must restart from the first byte, but it would be the right choice in that case.

>
> Future versions of FlashFXP will support XCRC, which will allow us to validate the download to ensure it's identical to the copy on the ftp server.
>


But only a small number of FTP sites seem to support it... Having it will surely be helpful, though.
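For the servers that do support it, the check could look something like this (XCRC is non-standard, so the exact command syntax and reply format here are an assumption; I am guessing a "250 <hex>" style answer):

import zlib

def crc_matches(ftp, remote_name, local_path):
    # CRC32 of the whole local file, computed in chunks.
    crc = 0
    with open(local_path, "rb") as f:
        for chunk in iter(lambda: f.read(64 * 1024), b""):
            crc = zlib.crc32(chunk, crc)
    reply = ftp.sendcmd("XCRC " + remote_name)   # e.g. "250 B0A1C3F2"
    return int(reply.split()[-1], 16) == (crc & 0xFFFFFFFF)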


>
> Starting with build 1070, for the "Auto-Save on change" of the auto-generated queue file, saves are now spaced out to eliminate the performance loss when transferring lots of small files. The original method of always saving after a transfer was probably not the best way to do this, simply because saving so often can result in the queue file being corrupted if Windows crashes/blue-screens at the exact same time, which is one of the reasons no new option was made to allow saving after every transfer.
>

Good point here too!
But then how do you save the information that a particular file is being downloaded at the moment? And how could you save the information about the remote file (size, time, date, and I would also suggest the CRC of the first 4-8 KBytes of both the local and the remote file, which would be really useful to spot different archives immediately)?
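Just to make the idea concrete, here is how I imagine such a queue record and the spaced-out save could look (pure speculation on my side, of course; the real queue format and save interval are your business):

import json
import os
import time
import zlib

_last_save = 0.0

def queue_entry(remote_name, remote_size, remote_mtime, local_path):
    # CRC of the first 8 KBytes: cheap to compute, and enough to spot a
    # different archive carrying the same name.
    with open(local_path, "rb") as f:
        head_crc = zlib.crc32(f.read(8 * 1024)) & 0xFFFFFFFF
    return {
        "remote_name": remote_name,
        "remote_size": remote_size,
        "remote_mtime": remote_mtime,
        "head_crc32": "%08X" % head_crc,
    }

def save_queue(entries, path, min_interval=5.0):
    # Spaced-out saves (the build 1070 behaviour), plus write-to-temp and
    # atomic rename so a crash mid-save can't corrupt the queue file.
    global _last_save
    now = time.time()
    if now - _last_save < min_interval:
        return
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(entries, f)
    os.replace(tmp, path)
    _last_save = now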

OK, OK, maybe I asked too much... but the simple double memory buffer trick, done as I described above, would really solve a lot of problems for me (wrong resumes that corrupt files and force a second full download of the remote file because it changed in the meantime between subsequent download sessions).

Thanks a lot.

Andrea