OptimalBits / bull · Merge request !377

Fixes double-processing issue described in https://github.com/OptimalBits/bull/issues/371#issuecomment-260158407

Merged. Administrator requested to merge github/fork/mixmaxhq/fix-double-processing into master on Nov 13, 2016.

Created by: bradvogel

Double-processing happens when two workers find out about the same job at the same time via getNextJob. One worker takes the lock, processes the job, and moves it to completed before the second worker even tries to get the lock. By the time the second worker gets around to taking the lock, the job is already in the completed state, but since it still gets the lock, it processes the job anyway.
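
For illustration, here is a minimal sketch of the processing loop this describes. The names (getNextJob, takeLock, moveToCompleted) follow the prose above rather than Bull's exact internals:

```js
// Hypothetical sketch of the racy flow, not Bull's actual worker code.
async function workerLoop(queue, handler) {
  for (;;) {
    const job = await queue.getNextJob();  // two workers can learn about the same job here
    if (!job) continue;
    const locked = await job.takeLock();   // before this fix, the lock could still be granted
    if (!locked) continue;                 // even if the job had already been completed
    await handler(job);                    // ...so the second worker runs the handler again
    await job.moveToCompleted();
  }
}
```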

The fix is for the takeLock script to verify that the job is in the active list before taking the lock. That ensures jobs that are in wait, completed, or have been removed from the queue altogether don't get double-processed. Per the discussion in #370 (closed), though, takeLock is parameterized so that the job is only required to be in active when the lock is taken for processing. In other cases, such as job.remove(), the job may be in a different state, yet we still want to be able to lock it.
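
A hedged sketch of the idea, written as a Redis Lua script so the active-list check and the lock acquisition happen atomically. The key names (bull:<queue>:<id>:lock, bull:<queue>:active), argument layout, 30 s TTL, and the ioredis wrapper are assumptions for illustration; Bull's real takeLock script differs in its details:

```js
const Redis = require('ioredis');
const client = new Redis();

const takeLockScript = `
-- KEYS[1] = lock key, KEYS[2] = active job list
-- ARGV[1] = lock token, ARGV[2] = lock TTL in ms
-- ARGV[3] = job id,     ARGV[4] = "1" to require the job to be active
if ARGV[4] == "1" then
  local found = false
  for _, id in ipairs(redis.call("LRANGE", KEYS[2], 0, -1)) do
    if id == ARGV[3] then
      found = true
      break
    end
  end
  if not found then
    return 0 -- job is waiting, completed, or gone: refuse the lock
  end
end
-- Only take the lock if nobody else holds it.
if redis.call("SET", KEYS[1], ARGV[1], "PX", ARGV[2], "NX") then
  return 1
end
return 0
`;

async function takeLock(queueName, jobId, token, ensureActive) {
  const result = await client.eval(
    takeLockScript,
    2,                                 // number of KEYS
    `bull:${queueName}:${jobId}:lock`, // KEYS[1]
    `bull:${queueName}:active`,        // KEYS[2]
    token,                             // ARGV[1]
    30000,                             // ARGV[2] lock TTL
    String(jobId),                     // ARGV[3]
    ensureActive ? '1' : '0'           // ARGV[4]
  );
  return result === 1;
}
```

A processing worker would call takeLock(queue, id, token, true) so the lock is refused once the job has left active, while code paths like job.remove() would pass false and lock the job regardless of its state.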

This fixes the existing broken unit test "should process each job once".

This also prevents the hazard described in https://github.com/OptimalBits/bull/issues/370.
