PDA

View Full Version : Custom command improvement/discussion #2


DayCuts
02-24-2015, 01:57 AM
Hi bigstar, I've encountered an issue with some custom commands that is likely related to the error fail-safe recently introduced. Tested on build 3808.

I have a custom command that recurses some directories and cleans up some old or unwanted files (using /select and /delete selected). It does some other stuff as well but it is only the delete operations that seem to be affected here.

If I have a previously failed transfer (e.g. a folder manually marked as failed) at the top of the queue, then any delete operation added after it in the queue is not processed. I assume this is a side effect of the fail-safe that was put in place, however it doesn't stop the entire queue; it only marks the delete operations as failed when it gets to them and continues with anything else (in my case there are also raw commands, mkdir, etc. interspersed, which are still processed). There is nothing in the log related to the delete operations, and the queue is not stopped as would be expected from the fail-safe behavior.

I accept that having failed items already in the queue can complicate things, and that it is my responsibility to use custom commands appropriately knowing this (always remember to open a new session). It does, however, point to unexpected and inconsistent fail-safe behavior. It may be worth mentioning that in my custom command I do not initiate the queue; I let it run to queue all the operations, then review the queue before starting it (as I am still refining these particular commands).

Doing some testing I confirmed that everything functions as expected if I do not have those queue items already marked as failed.

A. If those failed items are at the top, then they are left alone (still failed) and all following delete operations are not processed (each is marked failed as the queue reaches it), but the queue is never stopped, as might be expected if the command execution fail-safe were triggered.
B. If those failed items are anywhere else in the queue (the middle, for example; manually moved down for testing purposes), then they seem to be reset (no longer marked failed) when the queue begins, and the queue then processes normally. If those items fail again (in my case they were folders queued for download that had been moved, awaiting me to correct/requeue them, so they did fail), they are of course marked as failed, and the behavior then continues as described in A above. This 'reset' does not occur if the failed items are at the top, confirmed by the status log not attempting to process them. I am assuming that this behavior is also tied to the fail-safe, perhaps resetting so that FlashFXP can fully attempt to process a queue it thinks is entirely related to the custom command.

In short the issues I see here are...
1. Under these circumstances the fail-safe seems inconsistent, as only delete operations are marked failed (with no attempt to process them) while the rest are performed. This could cause issues if other queue operations rely on those files being removed (e.g. uploading a new version, or moving/renaming another file to that filename/location). Given that avoiding such potential problems was the point of the fail-safe, it should cease processing the queue entirely.
2. It seems unwise that any items marked as failed should be reset, regardless of position, if there are un-failed items in the queue. I have not noticed this behavior in a regular transfer-only queue, only during custom commands that add non-transfer queue items. I have also not tested whether this is a wider queue-reset issue that would do the same if I had manually added a non-transfer queue item to a list of failed transfer items, as I do not have time at the moment. My guess is that this is in fact a wider issue with the reset behavior of the queue, which may not be taking non-transfer items into consideration when determining whether the queue should be reset, but the different behavior depending on the position of the items makes this unclear. Again, I have not tested, but my guess is that any transfer items at the top would remain un-reset, while any further down (after the first non-transfer item) would likely be reset. Will look closer at this when I have some more time.

Again, clearly I need to remember to always use a fresh session, but the above behaviors could just as easily arise in a clean session using custom commands that mix transfer and non-transfer operations (e.g. if a transfer operation fails at some stage it will then cause behavior B, rather than stopping entirely).

bigstar
02-24-2015, 11:23 PM
I split this into a new thread for simplicity, and so that I don't get confused with the long-winded posts in the previous thread.

Tomorrow I will investigate this further but off the top of my head I have to wonder if some settings such as
"Options > Preferences > Transfers > On Transfer Complete > Reset and retry failed transfers" and/or
"Options > Preferences > Transfers > Retry failed transfers > Move to bottom before retrying" are active and causing this inconsistent behavior you noticed.

DayCuts
02-24-2015, 11:30 PM
Haha sorry about that.

Both of these options are unchecked.

bigstar
02-25-2015, 07:16 PM
If the queue is started and all the entries in the queue are marked as failed then these entries are reset and then transferred. (this is by design)

The sanity check on the delete operation currently resets all failed entries that appear above the delete operation, even if they belong to a different server. In fact it's too broad, and I will change it to be limited to the same origin server.

We only need to prevent the delete operation from deleting the origin file before we have a chance to transfer it.

I will change the delete operation to abort the queue if there is a failed transfer from the same origin that cannot be retried.

bigstar
03-03-2015, 11:00 AM
I just wanted to let you know that these changes have been implemented in 5.1.0 (build 3812)
The beta release can be downloaded via Live Update (Make sure you're checking for stable and beta updates)

I am working on a couple new commands for the next build though I haven't completely hashed out all the details.

/queue [-f | -d] <source> [target]

This will allow you to queue a file/folder directly if you already know the name.

-f defines the item as a file and -d as a directory
<source> using the active side as the source; this can either be an absolute path or relative to the current directory.
<target> using the opposite side as the target; to use the current directory and original filename omit this field, otherwise provide an absolute path+filename

This command doesn't validate the source or target, or whether or not they exist. This will be done later when the item is actually transferred.
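Putting the syntax above together, usage might look something like this (the paths and filenames are hypothetical examples, and the command itself is still proposed, not final):

```
/queue -f report.zip                  (file in the current directory, same name/location on the opposite side)
/queue -d /pub/releases/v5            (directory by absolute path, default target)
/queue -f notes.txt /backup/old.txt   (file with an explicit absolute target path+filename)
```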

/busybox

This will display a small busy dialog (a progress marquee and a cancel button). I thought this might make sense for long operations.

https://oss.azurewebsites.net/images/forum/busybox.png

graham.ranson1
03-08-2015, 06:23 AM
/queue [-f | -d] <source> [target]

That sounds interesting. So would that allow me to generate a queue of files to download without having to know the file sizes?

What I am trying to do is this:

There is an FTP site with over 400 directories, each containing 1000 files numbered sequentially, i.e. /dir1/001001.ext, /dir99/099001.ext, etc.
I have a list of the numbered files that I want to download that are spread out throughout the 400+ directories.
To actually go to each directory and add a few files to the queue would be a nightmare.
I can easily write a queue file (externally - Delphi) but that way I need to know the file sizes (I think - correct me if I'm wrong).
I can write a batch file the same way to use the command line. But that would mean FlashFXP stopping and starting a LOT of times.
It would be much better if I could write a queue file without needing to know the file sizes in advance.
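As a rough sketch of that external-generation idea (in Python rather than Delphi, just for illustration): the script below emits one line per wanted file using the proposed /queue macro from the post above. The mapping of the file number's first three digits to the directory name (001001.ext -> /dir1, 099001.ext -> /dir99) is my assumption from the two examples given, and the .ext extension is a placeholder.

```python
# Hypothetical sketch: turn a list of wanted six-digit file numbers into
# "/queue -f" command lines (the proposed FlashFXP macro, not yet shipping).
# Assumption: file NNNMMM.ext lives in /dirN, with leading zeros dropped,
# e.g. 001001.ext -> /dir1/001001.ext and 099001.ext -> /dir99/099001.ext.

def queue_commands(file_numbers, ext="ext"):
    """Map file numbers to /queue command lines, one per file."""
    commands = []
    for num in file_numbers:
        dir_no = int(num[:3])  # first three digits give the directory number
        commands.append(f"/queue -f /dir{dir_no}/{num}.{ext}")
    return commands

for line in queue_commands(["001001", "099001"]):
    print(line)
```

The resulting lines could then be pasted into (or saved as) a custom command, avoiding both the per-directory browsing and the need to know file sizes.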

Unless I've got something wrong this sounds like what I need.

bigstar
03-08-2015, 02:29 PM
/queue [-f | -d] <source> [target]
What I am trying to do is this:


Either method would work, though I think generating the queue file externally would probably be the best method.

As for the file size, you should be able to get away with defining it as 0 in the queue file.

This command macro does not define the size for remote files and I haven't seen any problems with this, though further testing may be needed to determine if there are some unexpected side effects from this.

graham.ranson1
03-08-2015, 02:48 PM
As for the file size, you should be able to get away with defining it as 0 in the queue file.


Ah, that sounds like it's worth a try. That would be ideal for what I want to do.
Thank you very much.

In fact thank you for FlashFXP, I love it. I had tried v4 but found it rather slow for small files (what I tend to be using most of the time). I recently noticed that you had released v5 and tried it again - and bought it! It is so much faster now. Great work.

Edit: Excellent! Made a queue file in Delphi using "0" for file size, worked perfectly. You can actually see the file size being filled in in the queue window as it starts to download the file.

DayCuts
03-12-2015, 09:07 AM
Another method would be to import the file list into FlashFXP's marked items list, then use a simple command that goes through the root folders and uses /selectmarked and /queue selected. But I agree, generating an external queue file is the easiest.

Re: /busybox, very useful indeed.