OptimalBits / bull · Issues · #1110
Closed
Issue created Nov 01, 2018 by Administrator (@root), Contributor

Heap out of memory error on large number of queued DELAYED jobs

Created by: ajwootto

Hi,

I currently have around 2 million tasks queued in Bull, all of which need to be processed. I'm trying to test the performance of a single worker against this queue of tasks, but shortly after I start the worker I get the following heap memory error:

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

<--- Last few GCs --->

[13:0x2d0a5c0]    82667 ms: Mark-sweep 947.4 (1508.8) -> 947.3 (1510.8) MB, 877.4 / 0.0 ms  allocation failure GC in old space requested
[13:0x2d0a5c0]    83565 ms: Mark-sweep 947.3 (1510.8) -> 947.2 (1456.8) MB, 897.2 / 0.0 ms  last resort GC in old space requested
[13:0x2d0a5c0]    84474 ms: Mark-sweep 947.2 (1456.8) -> 947.2 (1436.3) MB, 909.3 / 0.0 ms  last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x142c70c25879 <JSObject>
    1: _settlePromise [/src/node_modules/bluebird/js/release/promise.js:~542] [pc=0xac62245a421](this=0x38989dfe5681 <Promise map = 0x3f8459124389>,promise=0x38989dfed891 <Promise map = 0x3f8459124389>,handler=0x1756049822d1 <undefined>,receiver=0x1756049822d1 <undefined>,value=0x173509168221 <Number 1.5411e+12>)
    2: _drainQueue(aka _drainQueue) [/src/node_modules/bluebird/js/release/async.js...

 1: node::Abort() [node /src/dist/app.js]
 2: 0x8cbf4c [node /src/dist/app.js]
 3: v8::Utils::ReportOOMFailure(char const*, bool) [node /src/dist/app.js]
 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [node /src/dist/app.js]
 5: v8::internal::Factory::NewUninitializedFixedArray(int) [node /src/dist/app.js]
 6: 0xd801bc [node /src/dist/app.js]
 7: 0xd97a95 [node /src/dist/app.js]
 8: v8::internal::JSObject::AddDataElement(v8::internal::Handle<v8::internal::JSObject>, unsigned int, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::internal::Object::ShouldThrow) [node /src/dist/app.js]
 9: v8::internal::Object::AddDataProperty(v8::internal::LookupIterator*, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::internal::Object::ShouldThrow, v8::internal::Object::StoreFromKeyed) [node /src/dist/app.js]
10: v8::internal::Object::SetProperty(v8::internal::LookupIterator*, v8::internal::Handle<v8::internal::Object>, v8::internal::LanguageMode, v8::internal::Object::StoreFromKeyed) [node /src/dist/app.js]
11: v8::internal::Runtime_SetProperty(int, v8::internal::Object**, v8::internal::Isolate*) [node /src/dist/app.js]
12: 0xac6223042fd

To try to narrow down the problem, I've simplified my code to the point where the process function isn't actually doing anything; it just returns immediately. This leads me to believe there's an issue in the internals of the library when this many tasks are queued at once. I have also tried artificially limiting the speed of the processor using a timeout promise that resolves after 100ms, but I still hit this issue after 2-3 jobs are processed.
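For reference, a minimal sketch of the stripped-down processors described above (a hypothetical reconstruction; helper names like `sleep` and `processJobNoop` are illustrative, not the original code):

```javascript
// Hypothetical reconstruction of the two simplified job handlers
// described above; names are assumptions, not the reporter's code.

// Promise that resolves after the given number of milliseconds.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// No-op variant: returns immediately without doing any work.
async function processJobNoop(job) {
  return job.data;
}

// Throttled variant: artificially limits throughput by resolving
// only after a 100 ms timeout.
async function processJobThrottled(job) {
  await sleep(100);
  return job.data;
}
```

Either variant would be registered with `queue.process(...)`; the crash happens in both cases, which is what suggests the problem lies in the queue internals rather than in the job handler itself.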
