Job worker doesn't seem to run jobs concurrently

Hi, I have a Job defined with the maxConcurrency variable set to 10. However, it appears only 1 job is running at a time. I can tell because I only ever see the debug output of a single job, and when I query the database for a count of jobs with job_status_running, it returns 1. When I first created all the job entries in the database, the worker ran them 10 at a time, but at some point after that it dropped down to running them sequentially. Restarting the ./start server script doesn’t fix the issue.
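For context, the job is declared roughly like this (a sketch assuming IHP’s Job typeclass; MyJob and the perform body are placeholders, not the real job from this project):

```haskell
-- Sketch: an IHP Job instance with a concurrency limit.
-- MyJob and the perform body are illustrative placeholders.
instance Job MyJob where
    perform MyJob { .. } = do
        -- the actual work happens here
        putStrLn "running job"

    -- should allow up to 10 of these jobs to run at the same time
    maxConcurrency = 10
```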

Does anyone know why this would be happening, and how to fix it?

Sounds to me like this could be a bug in the poller. The background worker has multiple triggers for starting a job:

  • watching for INSERTs to the ..._jobs table
  • polling the ..._jobs table periodically to catch up on missed changes. This also runs when the job worker starts.

It seems to me that this could be a bug in the polling logic. E.g. here

I can reproduce the issue. It looks like the database listener has died, and the worker is then relying entirely on polling. Polling has a bug that causes it to only run with concurrency=1. Working on a fix right now.
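To illustrate what the poller should be doing, here is a self-contained sketch (not IHP’s actual code) of a dispatch loop that runs simulated jobs under a semaphore, so that up to maxConcurrency of them are in flight at once rather than one at a time:

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.QSem (newQSem, waitQSem, signalQSem)
import Control.Concurrent.MVar (newMVar, modifyMVar_, readMVar)
import Control.Monad (forM_)

-- Run 'total' simulated jobs, never more than 'limit' at a time.
-- Returns the number of completed jobs. 'runJobs' is a stand-in
-- for the poller's dispatch loop, not IHP's real internals.
runJobs :: Int -> Int -> IO Int
runJobs limit total = do
    sem  <- newQSem limit          -- tokens = allowed in-flight jobs
    done <- newMVar 0              -- completed-job counter
    forM_ [1 .. total] $ \_ -> do
        waitQSem sem               -- blocks while 'limit' jobs are running
        _ <- forkIO $ do
            threadDelay 10000      -- simulate 10 ms of work
            modifyMVar_ done (pure . (+ 1))
            signalQSem sem
        pure ()
    -- crude wait until every job has signalled completion
    let waitAll = do
            n <- readMVar done
            if n == total then pure n else threadDelay 1000 >> waitAll
    waitAll

main :: IO ()
main = runJobs 10 20 >>= print
```

The buggy behavior described above is equivalent to waiting for each job to finish before dispatching the next, i.e. an effective limit of 1 regardless of maxConcurrency.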

A fix for the poller issue is now available at Worker: Fixed poller not scheduling concurrent jobs · digitallyinduced/ihp@7b8befc · GitHub

Next I’ll look into why the database watcher crashed.

Thanks for this. Any idea when your next release will be?