The question:
I’d like Celery to catch exceptions and write them to a log file instead of apparently swallowing them…
The current top answer here falls short as a professional solution. Many Python developers consider blanket error catching on a case-by-case basis a red flag. A reasonable aversion to it was well articulated in a comment:
Hang on, I’d expect there to be something logged in the worker log, at the very least, for every task that fails…
Celery does catch the exception; it just isn’t doing what the OP wanted with it (it stores it in the result backend). The following gist is the best the internet has to offer on this problem. It’s a little dated, but note the number of forks and stars.
https://gist.github.com/darklow/c70a8d1147f05be877c3
The gist takes the failure case and does something custom with it, which is a superset of the OP’s problem. Here is how to adjust the solution in the gist to log the exception:
import logging
from celery import Task

logger = logging.getLogger('your.desired.logger')

class LogErrorsTask(Task):
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # Write the exception (with traceback) to the log, then fall back to
        # Celery's default failure handling.
        logger.exception('Celery task failure!!!1', exc_info=exc)
        super(LogErrorsTask, self).on_failure(exc, task_id, args, kwargs, einfo)
You will still need to make sure all your tasks inherit from this task class, and the gist shows how to do this if you’re using the @task decorator (with the base=LogErrorsTask kwarg).
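For illustration, here is a minimal sketch of wiring a task to the class above; the app name, broker URL, and task body are assumptions for the example, not part of the gist.

from celery import Celery

app = Celery('myapp', broker='redis://localhost:6379/0')  # assumed broker URL

@app.task(base=LogErrorsTask)
def send_report(user_id):
    # Any uncaught exception raised here flows into LogErrorsTask.on_failure,
    # where it is logged before Celery records the failure as usual.
    raise RuntimeError('report generation failed for user %s' % user_id)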
The benefit of this solution is that you don’t have to nest your code in any additional try/except blocks; it piggybacks on the failure code path that Celery is already using.