OptimalBits / bull · Issue #371 (Closed)

Issue created Nov 09, 2016 by Administrator @root (Contributor)

"Unexpected token u in JSON at position 0" while processing job

Created by: bradvogel

I'm noticing that some of our jobs are "corrupted" and the queue is throwing the following error when trying to process them:

```
Error processing job: SyntaxError: Unexpected token u in JSON at position 0
    at Object.parse (native)
    at Function.Job.fromData (/Users/brad/dev/bull-queue/node_modules/bull/lib/job.js:418:33)
    at /Users/brad/dev/bull-queue/node_modules/bull/lib/job.js:68:18
    at tryCatcher (/Users/brad/dev/bull-queue/node_modules/bluebird/js/release/util.js:16:23)
```
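For what it's worth, "Unexpected token u ... at position 0" is the signature of `JSON.parse` being handed `undefined`: the value gets coerced to the string `"undefined"`, and the leading `u` is the unexpected token. That points to the job's `data` field simply being missing:

```js
// JSON.parse coerces a missing/undefined value to the string "undefined",
// so the first character "u" is the unexpected token at position 0.
JSON.parse(undefined);
// SyntaxError: Unexpected token u in JSON at position 0
```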

Inspecting the job in Redis with `hgetall bull:test:1`, I can see that only the `attemptsMade`, `stacktrace`, and `failedReason` keys are present in the job hash.

This leads me to believe that the job failed (and `_saveAttempt` was called) after the job data had already been removed, which can only happen if the job was completed by another worker. And the only way another worker could have completed it is if the first worker failed to renew the lock.
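Here's roughly the race I have in mind, sketched against Redis directly (ioredis here; the key name and field values are only illustrative). The key detail is that `HMSET` recreates a deleted hash, which would explain why the hash exists but contains only the fields `_saveAttempt` writes:

```js
const Redis = require('ioredis');
const redis = new Redis();

async function reproduceSuspectedRace() {
  // Worker B completes the job and its data is removed.
  await redis.del('bull:test:1');

  // Worker A's lock has silently expired, but it still thinks it owns the
  // job, so its _saveAttempt-style write lands on the (now deleted) hash.
  // HMSET recreates the hash, leaving only these fields and no `data`.
  await redis.hmset('bull:test:1', {
    attemptsMade: 1,
    stacktrace: JSON.stringify(['...']),
    failedReason: 'stalled job',
  });

  // The next worker to load this job calls JSON.parse(undefined) on the
  // missing `data` field and throws the error above.
  console.log(await redis.hgetall('bull:test:1'));
}

reproduceSuspectedRace().catch(console.error);
```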

To fix this, what do you think about making Job.prototype.moveToFailed an atomic script that guarantees we own the lock before moving the job? That way, if another worker picked up the job and it got double-processed, we won't do further damage by trying to move the already-completed job back to the delayed queue.
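Something along these lines, as a rough sketch only (the key names, return codes, and the lock-token argument are assumptions for illustration, not bull's actual internals):

```js
const Redis = require('ioredis');
const redis = new Redis();

// Atomic, lock-guarded version of "move this job to failed": nothing happens
// unless we still hold the lock and the job data still exists.
redis.defineCommand('moveToFailedIfLocked', {
  numberOfKeys: 4,
  lua: `
    -- KEYS[1] lock key, KEYS[2] active list, KEYS[3] failed set, KEYS[4] job hash
    -- ARGV[1] lock token, ARGV[2] job id, ARGV[3] failed reason
    if redis.call("GET", KEYS[1]) ~= ARGV[1] then
      return -1  -- we no longer own the lock; another worker took the job
    end
    if redis.call("EXISTS", KEYS[4]) == 0 then
      return -2  -- job data is already gone (completed/removed elsewhere)
    end
    redis.call("LREM", KEYS[2], 0, ARGV[2])
    redis.call("SADD", KEYS[3], ARGV[2])
    redis.call("HSET", KEYS[4], "failedReason", ARGV[3])
    return 0
  `,
});

// Hypothetical wrapper showing how moveToFailed could call it.
function moveToFailed(queueName, jobId, lockToken, reason) {
  return redis.moveToFailedIfLocked(
    'bull:' + queueName + ':' + jobId + ':lock',
    'bull:' + queueName + ':active',
    'bull:' + queueName + ':failed',
    'bull:' + queueName + ':' + jobId,
    lockToken, jobId, reason
  );
}
```

If the lock check fails we'd just drop (or log) the write instead of resurrecting a job that another worker has already completed.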

This is related to #273 (closed).
