Prevent infinite post processing with Keep Original Files enabled and ... #629

Closed
pyrocumulus wants to merge 5 commits into midgetspy:development from pyrocumulus:autoprocess_infinite_loop2

Conversation

@pyrocumulus

Issue 1538 - 'Keep Original Files' + 'Scan and Process' = infinite processing/notification loop
https://code.google.com/p/sickbeard/issues/detail?id=1538

With the Keep Original Files option enabled, this change makes the post processor create an empty file alongside each video it copies to the new location, with the same name as the video but ending in .processed.

In turn, when ProcessTV.processDir() checks videos for processing, it will skip any video that already has that empty helper file next to it, so the video is not post processed again.

This change is very important for users running a black hole torrent setup with a torrent provider that requires them to keep seeding for good ratios. Without it, the post processor processes all the found videos again and again, on every run.
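A minimal sketch of the marker-file approach described above (the helper names here are hypothetical, not the actual patch code):

```python
import os

# Extension used for the empty helper file (".processed" per this PR)
PROCESSED_SUFFIX = ".processed"

def mark_processed(video_path):
    """Create an empty marker file next to the copied video."""
    with open(video_path + PROCESSED_SUFFIX, "w"):
        pass  # the file's mere presence is the signal; it stays empty

def already_processed(video_path):
    """True if a previous run left a marker, so the video can be skipped."""
    return os.path.isfile(video_path + PROCESSED_SUFFIX)
```

On each scan, processDir() would call `already_processed()` before handing a video to the post processor, and `mark_processed()` after a successful copy.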

@pisoj1

pisoj1 commented Apr 1, 2013

Bit of a noob here... if I want to add this, how can I do it? Should I just try to copy and paste your code into those files at those spots? I use it on Windows and these files don't exist. I'm guessing I need to pull all the Sick Beard files here and compile it or something?

@pisoj1

pisoj1 commented Apr 1, 2013

Got it to work. I installed from source rather than the installer, then copied the files over... is there a smarter way to do that? Wouldn't mind knowing for next time.

Also it works really well, thanks!

@pyrocumulus
Author

Well, I think it can be done differently, but you'd have to have Git installed. Then check out midgetspy's development branch and fetch this pull request into your local copy. However, I'm still quite new to Git(hub) myself, so I can't yet fully explain how to do that.

The problem with overwriting your local files with the files from this pull request is that you might overwrite too much once my pull request gets outdated. Thanks for testing it, btw; good to know it works well! :-)

@joshschmille

I love you. Seriously. I had written a bash script to do something with the same idea, but this is much better.

@pisoj1

pisoj1 commented Apr 4, 2013

It would be incredible if it could update the torrent client's info on where the file is being seeded. Probably completely out of the question to do, sadly :(

@pyrocumulus
Author

@RestingCoder thanks! :)

@pisoj1 Yes, that would be quite difficult to do. It would have been awesome to ask the torrent client for seeding information (is $file still being seeded, yes/no?), but that's also quite difficult and requires much more work. Plus, most torrent clients do not support such things AFAIK. This is the best I could do for this situation :)

@jimbul

jimbul commented Apr 6, 2013

Superb :-) I think you've halved my electricity bill.

@ausey00

ausey00 commented Apr 28, 2013

On first execution, should this copy everything (as it was before) then begin skipping all my downloaded files? Or should it skip them instantly after making these changes?

Sickbeard is still copying everything on my install....

@joshschmille

It should copy everything, and create a new file for each copied file named [filename].processed. On the next process, it will check if there is a .processed file, and only copy/rename the file if it does not find it.

The easiest way to tell if it is working is to see if there are .processed files inside the directory where your original files are.

@pyrocumulus
Author

What Josh said is exactly how it's supposed to be :)

@ausey00

ausey00 commented Apr 28, 2013

Hmm...

This is what it's up to at the moment. It's doing this for all the TV in my seeding folder.
http://pastebin.com/raw.php?i=bVDdmPVD

I can only see .ignore files in my seeding folder, but I think they're from CouchPotato...

Thanks for helping by the way :)

EDIT: Wow, I'm a tool. I was looking in the /seeding/ directory; I can see all the .processed files in /seeding/exampleTVshow/... Ignore me :)

@pyrocumulus
Author

By the looks of it, I would think that my change isn't being used at all. Are you sure you are running my pull request? Having no .processed files present is strange and quite unlikely.

Did you set post processing to Keep Original Files and Copy?


@ausey00

ausey00 commented Apr 28, 2013

I merged yours with another pull request that handles failed SABnzbd downloads.
It is working; I can see the .processed files, as I said in my edit above 👍

@pyrocumulus
Author

Ah! I was responding to you by email, I couldn't see that edit. Glad to hear it's working! :)

@joshschmille

Just to let you know, I have been having some minor issues with it processing (and creating the .processed file) before a download is done. So I might add the file size inside the .processed file and compare the sizes to avoid that. If you want, I'll send you my edits, but I imagine you could do that yourself if desired :P
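A sketch of that size-check variant (hypothetical helpers, not Josh's actual edits): the marker records the video's size at processing time, and a mismatch on the next run means the file was still growing when it was first processed.

```python
import os

PROCESSED_SUFFIX = ".processed"

def mark_processed(video_path):
    """Record the video's current size in the marker file."""
    with open(video_path + PROCESSED_SUFFIX, "w") as marker:
        marker.write(str(os.path.getsize(video_path)))

def already_processed(video_path):
    """Skip the video only if a marker exists AND the recorded size
    still matches; a mismatch means the download grew afterwards."""
    marker_path = video_path + PROCESSED_SUFFIX
    if not os.path.isfile(marker_path):
        return False
    with open(marker_path) as marker:
        recorded = int(marker.read().strip() or "0")
    return recorded == os.path.getsize(video_path)
```

Note that this does not cover clients that preallocate the full file size up front, as discussed in the follow-up comments.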

@pyrocumulus
Author

Ah, to be honest, I haven't encountered that situation, because my completed downloads get moved to the designated folder before seeding, so all the files in my watched folder are already done.

I do wonder though if checking the file size resolves this issue for all situations. I have known torrent clients to claim the entire file size before downloading; in that case the file size never changes. But I'll see what I can do to make this pull request even more robust :)

@joshschmille

Yeah, I will have to set it up and see what happens with the file size being reported by rtorrent. Haven't really looked into it at all. I just started running into it in the last week or so when I download full seasons of a show, and it does an automatic scan/process before stuff gets done. With single episodes, they download fast enough that they don't get scanned in the middle.

@Ockingshay

I see this request is still open, but it's something I use, so I haven't updated Sick Beard in months... Is this likely to become at least part of the development branch, or is there now another solution?

@mr-orange
Contributor

I'm sure the code will work, but there are other, better ways to prevent infinite post processing.
The best way, IMHO, is to query the DB with the name of the dir and/or the name of the file. In the DB's tv_episodes table there is a release_name field where SB stores the release name once the file is post processed. So if you query the DB and find release_name = processing_file_name, the episode was already post processed and you can abort post processing.
You also need a second query against the same DB, this time on the history table, because you can have two different qualities of the same episode in your TV_DOWNLOAD_DIR; the first query will fail for the lower-quality episode, but it will be found in history.

@papertigers

Is this planned to be merged in eventually? Or is there another preferred way to fix this issue?
A possibly cleaner way of solving this is to take the post processed name of the file and the path it is to be moved to, and do a simple stat to see if it exists. That way we don't end up with "lock" files all over the place.

We don't even have to use stat. Since we already have all the info from

```python
self._copy(self.file_path, dest_path, new_base_name, sickbeard.MOVE_ASSOCIATED_FILES)
```

we can use

```python
os.path.isfile(os.path.join(dest_path, new_base_name))
```

This pull request was closed.