If the worker receives a message that is not JSON (or is invalid JSON), the script explodes. It is then restarted and gets the same frame again, over and over, which means no other job will be processed.
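For context, the vulnerable pattern looks roughly like this. This is a minimal sketch of the consumer loop, not the actual code in misc/background_jobs_worker.pl; $conn, process_job and the exact ack placement are assumptions for illustration:

use Modern::Perl;
use JSON qw( decode_json );
use Koha::BackgroundJobs;

while (1) {
    my $frame = $conn->receive_frame;
    next unless defined $frame;

    # If the body is not valid JSON, decode_json() dies here and the
    # worker exits before it ever reaches the ack below.
    my $args = decode_json( $frame->body );
    my $job  = Koha::BackgroundJobs->find( $args->{job_id} );

    process_job( $job, $args );

    # Never reached for a bad frame, so the broker redelivers the same
    # message to the restarted worker and the queue is blocked.
    $conn->ack( { frame => $frame } );
}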
Created attachment 144401 [details] [review]
Bug 32393: Prevent invalid job to block the job queue

I have faced a problem when testing an incorrect version of bug 32370. The frame sent to the message broker was not a correct JSON-encoded string, and its decoding was obviously failing, exploding the worker script. Additionally, as we don't send an ack for this frame, the next pull will result in processing the same message, and so in the same explosion. No more messages can be processed!

This patch logs the error and acks the message to the broker, in order not to get stuck.

Test plan:
0. Don't apply this patch
1. Enqueue a bad message
  a. Apply 32370
  b. Comment out the following line in Koha::BackgroundJob::enqueue
       $self->set_encoded_json_field( { data => $job_args, field => 'data' } );
  c. restart_all
  d. Use the batch item modification tool to enqueue a new job
  => Notice the error in the log
  => Note that the status of the job is "new"
  => Inspect the rabbitmq queue:
       % rabbitmq-plugins enable rabbitmq_management
       % rabbitmqadmin get queue=koha_kohadev-long_tasks
     You will notice there is a message in the "long_tasks" queue
2. Enqueue a good message
  a. Remove the change from 1.b
  b. restart_all
  c. Enqueue another job
  => Same error in the log
  => Both jobs are new
  => Inspect rabbitmq, there are 2 messages
3. Apply this patch
4. restart_all
=> The second (good) job is finished
=> The rabbitmq long_tasks queue is empty

We cannot mark the first job as done; we have no idea which job it was!

QA: Note that this patch also deals with another problem, not covered by this test plan: if an exception is not correctly caught by the job's ->process method, we won't crash the worker; the job will be marked as failed instead.
Comment on attachment 144401 [details] [review]
Bug 32393: Prevent invalid job to block the job queue

Review of attachment 144401 [details] [review]:
-----------------------------------------------------------------

::: misc/background_jobs_worker.pl
@@ +104,5 @@
> +        Koha::Logger->get->warn(sprintf "Job and/or frame not processed - %s", $_);
> +    } finally {
> +        $job->status('failed')->store if $job;
> +        $conn->ack( { frame => $frame } );
> +    };

The finally block is always executed regardless of the exit status of the try block. It reads wrong to set the status to failed there; that line belongs in the catch, IMHO.
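For illustration, the variant the review asks for (status set in the catch rather than in finally) would look roughly like this. It is a sketch, not the amended patch, and assumes the surrounding $frame/$conn handling and module loading of the worker:

use Try::Tiny;
use JSON qw( decode_json );
use Koha::BackgroundJobs;
use Koha::Logger;

my $job;
try {
    my $args = decode_json( $frame->body );
    $job = Koha::BackgroundJobs->find( $args->{job_id} );
    process_job( $job, $args );
} catch {
    # Only runs when the try block died, so no extra guard on the error is needed
    Koha::Logger->get->warn( sprintf "Job and/or frame not processed - %s", $_ );
    $job->status('failed')->store if $job;
} finally {
    # Always ack, success or failure, so a poison frame cannot block the queue
    $conn->ack( { frame => $frame } );
};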
Created attachment 144569 [details] [review]
Bug 32393: Prevent invalid job to block the job queue

I have faced a problem when testing an incorrect version of bug 32370. The frame sent to the message broker was not a correct JSON-encoded string, and its decoding was obviously failing, exploding the worker script. Additionally, as we don't send an ack for this frame, the next pull will result in processing the same message, and so in the same explosion. No more messages can be processed!

This patch logs the error and acks the message to the broker, in order not to get stuck.

Test plan:
0. Don't apply this patch
1. Enqueue a bad message
  a. Apply 32370
  b. Comment out the following line in Koha::BackgroundJob::enqueue
       $self->set_encoded_json_field( { data => $job_args, field => 'data' } );
  c. restart_all
  d. Use the batch item modification tool to enqueue a new job
  => Notice the error in the log
  => Note that the status of the job is "new"
  => Inspect the rabbitmq queue:
       % rabbitmq-plugins enable rabbitmq_management
       % rabbitmqadmin get queue=koha_kohadev-long_tasks
     You will notice there is a message in the "long_tasks" queue
2. Enqueue a good message
  a. Remove the change from 1.b
  b. restart_all
  c. Enqueue another job
  => Same error in the log
  => Both jobs are new
  => Inspect rabbitmq, there are 2 messages
3. Apply this patch
4. restart_all
=> The second (good) job is finished
=> The rabbitmq long_tasks queue is empty

We cannot mark the first job as done; we have no idea which job it was!

QA: Note that this patch also deals with another problem, not covered by this test plan: if an exception is not correctly caught by the job's ->process method, we won't crash the worker; the job will be marked as failed instead.
I've amended the patch with the following change:

-            $job->status('failed')->store if $job;
+            $job->status('failed')->store if $job && @_;
Hi

I tried this:

kohadev-koha@kohadevbox:/kohadevbox/koha$ perl -MTry::Tiny -e 'try { die "Boo"; } catch { warn "catch"; } finally { warn "finally" if $@; };'
catch at -e line 1.
kohadev-koha@kohadevbox:/kohadevbox/koha$ perl -MTry::Tiny -e 'try { die "Boo"; } catch { warn "catch"; } finally { warn "finally" if $_; };'
catch at -e line 1.

In both cases it never prints "finally", so I guess the error buffer is cleared after the catch block. Do you think it would work if we just move the $job->status('failed') to the catch block? It looks cleaner too.
(In reply to Tomás Cohen Arazi from comment #5)
> Hi
> 
> I tried this:
> 
> kohadev-koha@kohadevbox:/kohadevbox/koha$ perl -MTry::Tiny -e 'try { die
> "Boo"; } catch { warn "catch"; } finally { warn "finally" if $@; };'
> catch at -e line 1.
> kohadev-koha@kohadevbox:/kohadevbox/koha$ perl -MTry::Tiny -e 'try { die
> "Boo"; } catch { warn "catch"; } finally { warn "finally" if $_; };'
> catch at -e line 1.
> 
> In both cases it never prints "finally", so I guess the error buffer is
> cleared after the catch block. Do you think it would work if we just move
> the $job->status('failed') to the catch block? It looks cleaner too.

It's @_

% perl -MTry::Tiny -e 'try { die "Boo"; } catch { warn "catch"; } finally { warn "finally" if @_; };'
catch at -e line 1.
finally at -e line 1.

IMO it's better in the finally block, in case we add more stuff in catch later.
There are 2 acks sent, because of the fork I guess.
Created attachment 145064 [details] [review] Bug 32393: Remove fork
(In reply to Jonathan Druart from comment #8)
> Created attachment 145064 [details] [review] [review]
> Bug 32393: Remove fork

Just a thought; removing that fork will cause the worker's memory footprint to balloon as soon as the plugins' require is evaluated, which obviates the bug patch where you introduce the require. I think this pushes us farther in the direction of needing to ack each request before handling the job.
I agree that removing the fork is a bad idea.. it was implemented for the reasoning Kyle suggests, I believe.. to keep the memory footprint from ballooning.
I left the patch here yesterday after my work day. I was working on bug 32481 and thought that this first patch was a good base to work on. So far I am only trying to make things work correctly. IMO we should not add anything new to the worker, but focus on fixing its behaviours instead.

From bug 32481 comment 22: "The best solution I found is the 2 patches from bug 32393." I didn't want to add the patches there and add more confusion, so I just dropped them here.

I still don't have anything good to suggest, and removing the fork is indeed a very bad idea. But fixing the worker's behaviour is a higher priority than memory footprint right now anyway.
There's scope creep here.. the try encompasses the process_job which means it's also catching all sorts of other possible failures from each and every processor that's written. I think we should limit the scope of this bug to only catch bad JSON encoding and leave error handling for individual tasks to the tasks themselves.
Also.. if we do want to only acknowledge on process completion, should we not do it within the forked worker?
(In reply to Martin Renvoize from comment #12)
> I think we should limit the scope of this bug to only catch bad JSON
> encoding and leave error handling for individual tasks to the tasks
> themselves.

We should not trust the job, and we should catch potential crashes IMO, to prevent a bad job from taking the worker down.
We need to keep the fork, and catching all exceptions makes sense to me. You could say that the background job worker is really a background job server and servers really shouldn't go down. They should be reporting on the failures. Of course the tough part here is that the job ID is in the JSON, so if the JSON can't be decoded, there is no way to fail the job and it just stays stuck in "new" forever. I haven't been following the bad JSON bugs carefully enough though. Since we control both ends, we shouldn't be creating and transmitting bad JSON...
Just a simple observation:

+    my $job_id;

You are not using this variable (after all).
} finally {
    $job->status('failed')->store if $job && @_;

Looks like a typo. Did you mean $@ ? And would testing that be enough or even appropriate in a finally block ?
(In reply to Marcel de Rooy from comment #17)
> } finally {
>     $job->status('failed')->store if $job && @_;
> 
> Looks like a typo. Did you mean $@ ? And would testing that be enough or
> even appropriate in a finally block ?

Seeing comment 6 now btw ;)
(In reply to Kyle M Hall from comment #9)
> Just a thought; removing that fork will cause the worker's memory footprint
> to balloon as soon as the plugins' require is evaluated, which obviates the
> bug patch where you introduce the require. I think this pushes us farther in
> the direction of needing to ack each request before handling the job.

Looks like there is consensus on keeping the fork. The problem does not seem to be related to forking.

Did you consider moving the ->ack before the process start, as Kyle suggested? It is simpler than the current code with finally. And would it still be possible to run the worker without rabbitmq, to recover if really needed?
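For reference, "acking before processing" is just a reordering of two calls in the consumer loop; a sketch under that assumption follows. The trade-off is effectively at-most-once handling: a crash mid-job can no longer block the queue, but the broker will not redeliver that job either, so failures have to be reported through the job's status in the database.

# Acknowledge receipt first, so a crash while processing cannot leave
# the frame stuck (and endlessly redelivered) in the queue ...
$conn->ack( { frame => $frame } );

# ... then run the job; any failure is reported via the job's status.
process_job( $job, $args );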
(In reply to Marcel de Rooy from comment #19)
> (In reply to Kyle M Hall from comment #9)
> > Just a thought; removing that fork will cause the worker's memory footprint
> > to balloon as soon as the plugins' require is evaluated, which obviates the
> > bug patch where you introduce the require. I think this pushes us farther in
> > the direction of needing to ack each request before handling the job.
> 
> Looks like there is consensus on keeping the fork. The problem does not seem
> to be related to forking.
> Did you consider moving the ->ack before the process start, as Kyle
> suggested? It is simpler than the current code with finally. And would it
> still be possible to run the worker without rabbitmq, to recover if really
> needed?

See bug 32573.
(In reply to Martin Renvoize from comment #12)
> There's scope creep here.. the try encompasses the process_job which means
> it's also catching all sorts of other possible failures from each and every
> processor that's written.
> 
> I think we should limit the scope of this bug to only catch bad JSON
> encoding and leave error handling for individual tasks to the tasks
> themselves.

Actually, after looking at the code again, I think that Martin might be right. I'd have to test it, but in theory the forked process won't hit the "exit" in process_job if it throws a fatal error. Ah and that will be why you get that 2nd ACK.
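For readers following along, the fork pattern under discussion is roughly the following; this is a sketch of process_job in misc/background_jobs_worker.pl, and the details (the wait, the exact error handling) are assumptions rather than the exact tree at the time:

sub process_job {
    my ( $job, $args ) = @_;

    my $pid;
    if ( $pid = fork ) {
        # Parent: wait for the child so jobs run one at a time, and so the
        # job/plugin code loaded in the child never bloats the long-running
        # parent process.
        wait;
        return;
    }
    die "fork failed!" unless defined $pid;

    # Child: run the job, then exit so it never returns into the parent's
    # consumer loop. If ->process() dies, this exit is never reached: the
    # exception unwinds into the child's copy of the caller's try block,
    # and the child can end up acking the frame as well - one plausible
    # explanation for the duplicate ack noted in comment 7.
    $job->process($args);
    exit;
}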
So I think we should just wrap try{} around the code for getting $job. If we catch an error retrieving $job, then we log a warning.

If we want to fail a job based on a fatal error in $job->process(), then we need to put a try/catch in the "process_job" function. It does need to be separate after all.

I think we could rename this bug to "Add exception handling to background jobs worker".

(Note that this doesn't stop individual jobs from doing their own exception handling within the "process" function. This would just provide default exception handling.)
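One possible shape for that split, sketched here under the assumption that the frame is acked as soon as it has been handled and that the worker's usual decode_json/Try::Tiny imports are in scope; the actual "Split into 2 try catch blocks" patch may instead place the second block inside process_job itself:

# Block 1: frame handling. A failure here (bad JSON, unknown job_id)
# only produces a warning - without a job row there is nothing to mark
# as failed in the database.
my ( $job, $args );
try {
    $args = decode_json( $frame->body );
    $job  = Koha::BackgroundJobs->find( $args->{job_id} );
} catch {
    Koha::Logger->get->warn( sprintf "Frame not processed - %s", $_ );
};

# Ack unconditionally so a poison message cannot block the queue.
$conn->ack( { frame => $frame } );

# Block 2: job execution. An uncaught exception from the job marks it
# as failed instead of taking the worker down.
if ($job) {
    try {
        process_job( $job, $args );
    } catch {
        Koha::Logger->get->warn( sprintf "Job not processed - %s", $_ );
        $job->status('failed')->store;
    };
}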
Created attachment 145207 [details] [review] Bug 32393: Split into 2 try catch blocks
I don't understand what you are asking for, guys. I've written a follow-up patch that I think does what you want, but I don't see how it is better. Note that now we are acking early. Please provide your own version if you still disagree with the patches.
Created attachment 145216 [details] [review]
Bug 32393: Prevent invalid job to block the job queue

I have faced a problem when testing an incorrect version of bug 32370. The frame sent to the message broker was not a correct JSON-encoded string, and its decoding was obviously failing, exploding the worker script. Additionally, as we don't send an ack for this frame, the next pull will result in processing the same message, and so in the same explosion. No more messages can be processed!

This patch logs the error and acks the message to the broker, in order not to get stuck.

Test plan:
0. Don't apply this patch
1. Enqueue a bad message
  a. Apply 32370
  b. Comment out the following line in Koha::BackgroundJob::enqueue
       $self->set_encoded_json_field( { data => $job_args, field => 'data' } );
  c. restart_all
  d. Use the batch item modification tool to enqueue a new job
  => Notice the error in the log
  => Note that the status of the job is "new"
  => Inspect the rabbitmq queue:
       % rabbitmq-plugins enable rabbitmq_management
       % rabbitmqadmin get queue=koha_kohadev-long_tasks
     You will notice there is a message in the "long_tasks" queue
2. Enqueue a good message
  a. Remove the change from 1.b
  b. restart_all
  c. Enqueue another job
  => Same error in the log
  => Both jobs are new
  => Inspect rabbitmq, there are 2 messages
3. Apply this patch
4. restart_all
=> The second (good) job is finished
=> The rabbitmq long_tasks queue is empty

We cannot mark the first job as done; we have no idea which job it was!

QA: Note that this patch also deals with another problem, not covered by this test plan: if an exception is not correctly caught by the job's ->process method, we won't crash the worker; the job will be marked as failed instead.

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Created attachment 145217 [details] [review] Bug 32393: Split into 2 try catch blocks Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Testing notes:

1.b didn't work for me, I did:

-        my $json_args = $self->_json->encode($job_args);
+        #my $json_args = $self->_json->encode($job_args);
+        my $json_args=$job_args;

To check the queues I just did:

rabbitmqctl list_queues
(In reply to Jonathan Druart from comment #24)
> I don't understand what you are asking for, guys. I've written a follow-up
> patch that I think does what you want, but I don't see how it is better.

Looks good. Thanks for that.
Upping the severity, as jobs can get stuck and the worker explodes.
For me this statement is also in the scope of this report (but not touched yet):

    while ( my $job = $jobs->next ) {
        my $args = $job->json->decode($job->data);
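A sketch of how the same guard could look for that database-polling fallback loop; this is illustrative only and the "Deal with the DB fallback part" patch is the authoritative version:

use Try::Tiny;

while ( my $job = $jobs->next ) {
    my $args = try {
        $job->json->decode( $job->data );
    } catch {
        Koha::Logger->get->warn(
            sprintf "Cannot decode data for job %s - %s", $job->id, $_ );
        $job->status('failed')->store;
        return;    # try() then yields undef, so the job is skipped below
    };
    next unless $args;

    process_job( $job, $args );
}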
Note that I agree with acking earlier. We do this implicitly here. This would obsolete the separate report (32573).
Created attachment 145596 [details] [review]
Bug 32393: Prevent invalid job to block the job queue

I have faced a problem when testing an incorrect version of bug 32370. The frame sent to the message broker was not a correct JSON-encoded string, and its decoding was obviously failing, exploding the worker script. Additionally, as we don't send an ack for this frame, the next pull will result in processing the same message, and so in the same explosion. No more messages can be processed!

This patch logs the error and acks the message to the broker, in order not to get stuck.

Test plan:
0. Don't apply this patch
1. Enqueue a bad message
  a. Apply 32370
  b. Comment out the following line in Koha::BackgroundJob::enqueue
       $self->set_encoded_json_field( { data => $job_args, field => 'data' } );
  c. restart_all
  d. Use the batch item modification tool to enqueue a new job
  => Notice the error in the log
  => Note that the status of the job is "new"
  => Inspect the rabbitmq queue:
       % rabbitmq-plugins enable rabbitmq_management
       % rabbitmqadmin get queue=koha_kohadev-long_tasks
     You will notice there is a message in the "long_tasks" queue
2. Enqueue a good message
  a. Remove the change from 1.b
  b. restart_all
  c. Enqueue another job
  => Same error in the log
  => Both jobs are new
  => Inspect rabbitmq, there are 2 messages
3. Apply this patch
4. restart_all
=> The second (good) job is finished
=> The rabbitmq long_tasks queue is empty

We cannot mark the first job as done; we have no idea which job it was!

QA: Note that this patch also deals with another problem, not covered by this test plan: if an exception is not correctly caught by the job's ->process method, we won't crash the worker; the job will be marked as failed instead.

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Created attachment 145597 [details] [review] Bug 32393: Split into 2 try catch blocks Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Created attachment 145598 [details] [review] Bug 32393: Deal with the DB fallback part
(In reply to Marcel de Rooy from comment #31)
> Note that I agree with acking earlier. We do this implicitly here. This
> would obsolete the separate report (32573).

Added the dependency.
(In reply to Marcel de Rooy from comment #30)
> For me this statement is also in the scope of this report (but not touched
> yet):
> 
>     while ( my $job = $jobs->next ) {
>         my $args = $job->json->decode($job->data);

Done.
Applying: Bug 32393: Prevent invalid job to block the job queue
Using index info to reconstruct a base tree...
M	misc/background_jobs_worker.pl
Falling back to patching base and 3-way merge...
Auto-merging misc/background_jobs_worker.pl
CONFLICT (content): Merge conflict in misc/background_jobs_worker.pl

++<<<<<<< HEAD
+        my $body = $frame->body;
+        my $args = decode_json($body); # TODO Should this be from_json? Check utf8 flag.
+
+        # FIXME This means we need to have create the DB entry before
+        # It could work in a first step, but then we will want to handle job that will be created from the message received
+        my $job = Koha::BackgroundJobs->find($args->{job_id});
+
+        $conn->ack( { frame => $frame } ); # Acknowledge the message was received
+        process_job( $job, $args );
++=======
+        my $job;
+        try {
+            my $body = $frame->body;
+            my $args = decode_json($body); # TODO Should this be from_json? Check utf8 flag.
+
+            # FIXME This means we need to have create the DB entry before
+            # It could work in a first step, but then we will want to handle job that will be created from the message received
+            $job = Koha::BackgroundJobs->find($args->{job_id});
+
+            process_job( $job, $args );
+        } catch {
+            Koha::Logger->get->warn(sprintf "Job and/or frame not processed - %s", $_);
+        } finally {
+            $job->status('failed')->store if $job && @_;
+            $conn->ack( { frame => $frame } );
+        };
++>>>>>>> Bug 32393: Prevent invalid job to block the job queue

Here ack and process_job swap order again..
Created attachment 145609 [details] [review]
Bug 32393: Prevent invalid job to block the job queue

I have faced a problem when testing an incorrect version of bug 32370. The frame sent to the message broker was not a correct JSON-encoded string, and its decoding was obviously failing, exploding the worker script. Additionally, as we don't send an ack for this frame, the next pull will result in processing the same message, and so in the same explosion. No more messages can be processed!

This patch logs the error and acks the message to the broker, in order not to get stuck.

Test plan:
0. Don't apply this patch
1. Enqueue a bad message
  a. Apply 32370
  b. Comment out the following line in Koha::BackgroundJob::enqueue
       $self->set_encoded_json_field( { data => $job_args, field => 'data' } );
  c. restart_all
  d. Use the batch item modification tool to enqueue a new job
  => Notice the error in the log
  => Note that the status of the job is "new"
  => Inspect the rabbitmq queue:
       % rabbitmq-plugins enable rabbitmq_management
       % rabbitmqadmin get queue=koha_kohadev-long_tasks
     You will notice there is a message in the "long_tasks" queue
2. Enqueue a good message
  a. Remove the change from 1.b
  b. restart_all
  c. Enqueue another job
  => Same error in the log
  => Both jobs are new
  => Inspect rabbitmq, there are 2 messages
3. Apply this patch
4. restart_all
=> The second (good) job is finished
=> The rabbitmq long_tasks queue is empty

We cannot mark the first job as done; we have no idea which job it was!

QA: Note that this patch also deals with another problem, not covered by this test plan: if an exception is not correctly caught by the job's ->process method, we won't crash the worker; the job will be marked as failed instead.

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Created attachment 145610 [details] [review] Bug 32393: Split into 2 try catch blocks Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Created attachment 145611 [details] [review] Bug 32393: Deal with the DB fallback part
QAing
Created attachment 145613 [details] [review]
Bug 32393: Prevent invalid job to block the job queue

I have faced a problem when testing an incorrect version of bug 32370. The frame sent to the message broker was not a correct JSON-encoded string, and its decoding was obviously failing, exploding the worker script. Additionally, as we don't send an ack for this frame, the next pull will result in processing the same message, and so in the same explosion. No more messages can be processed!

This patch logs the error and acks the message to the broker, in order not to get stuck.

Test plan:
0. Don't apply this patch
1. Enqueue a bad message
  a. Apply 32370
  b. Comment out the following line in Koha::BackgroundJob::enqueue
       $self->set_encoded_json_field( { data => $job_args, field => 'data' } );
  c. restart_all
  d. Use the batch item modification tool to enqueue a new job
  => Notice the error in the log
  => Note that the status of the job is "new"
  => Inspect the rabbitmq queue:
       % rabbitmq-plugins enable rabbitmq_management
       % rabbitmqadmin get queue=koha_kohadev-long_tasks
     You will notice there is a message in the "long_tasks" queue
2. Enqueue a good message
  a. Remove the change from 1.b
  b. restart_all
  c. Enqueue another job
  => Same error in the log
  => Both jobs are new
  => Inspect rabbitmq, there are 2 messages
3. Apply this patch
4. restart_all
=> The second (good) job is finished
=> The rabbitmq long_tasks queue is empty

We cannot mark the first job as done; we have no idea which job it was!

QA: Note that this patch also deals with another problem, not covered by this test plan: if an exception is not correctly caught by the job's ->process method, we won't crash the worker; the job will be marked as failed instead.

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 145614 [details] [review] Bug 32393: Split into 2 try catch blocks Signed-off-by: Nick Clemens <nick@bywatersolutions.com> Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 145615 [details] [review] Bug 32393: Deal with the DB fallback part Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Created attachment 145616 [details] [review] Bug 32393: (QA follow-up) Add explicit undef response in two catch blocks Do not implicitly depend on last statement returning nothing. Make it explicit. We want $args to be null here. Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
And now we still need the job list to not crash on bad job data ;) I thought that there was a report too?
(In reply to Marcel de Rooy from comment #46) > And now we still need the job list to not crash on bad job data ;) I thought > that there was a report too? See bug 32709.
Pushed to master for 23.05. Nice work everyone, thanks!
Nice work everyone! Pushed to stable for 22.11.x
If this needs to be backported for 22.05.x then I will need some help rebasing Bug 32394.
Backported to 22.05.x for upcoming 22.05.10
depends on 32394, can't apply to 21.11.x