metaseq · Merge request !270

[WIP | Very hacky | not to be merged yet] CUDA graph incremental decoding

Open · Administrator requested to merge cuda_graph_incremental_decoding into main · Jul 28, 2022

Created by: ngoyal2707

Working version of CUDA graphs with incremental decoding. Currently, at n_best=1, even with a single CUDA graph of max_seq_len (2048), I see the following improvement for 175B on 8x Azure A100s (see the sketch after the numbers):

per-token latency, baseline: ~95 ms
per-token latency, this change: ~66 ms
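
Not metaseq's actual implementation, just a minimal sketch of the capture/replay pattern using PyTorch's `torch.cuda.CUDAGraph` API; the toy `decode_step`, `embed`, and `proj` stand in for the real decoder step, and all buffer names are made up:

```python
import torch

# Toy stand-in for the decoder's incremental step (NOT metaseq's real
# API): embed the newest token and produce next-token logits.
vocab, dim = 32000, 512
embed = torch.nn.Embedding(vocab, dim).cuda().half()
proj = torch.nn.Linear(dim, vocab).cuda().half()

@torch.no_grad()
def decode_step(tok):
    return proj(embed(tok))

# CUDA graphs replay fixed memory addresses, so inputs must be copied
# into the same static buffer before every replay.
static_tok = torch.zeros(1, dtype=torch.long, device="cuda")

# Warm up on a side stream so lazy cuBLAS init isn't baked into the graph.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        static_logits = decode_step(static_tok)
torch.cuda.current_stream().wait_stream(s)

# Capture one incremental decoding step into a single graph.
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_logits = decode_step(static_tok)

# Greedy loop: one device-side copy plus one graph launch per token,
# instead of launching every kernel individually from Python.
tok = torch.randint(vocab, (1,), device="cuda")
for _ in range(16):
    static_tok.copy_(tok)
    g.replay()
    tok = static_logits.argmax(dim=-1)
```

The win comes from collapsing per-token kernel-launch overhead into a single graph launch, which is what dominates at batch size 1 on large models.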

My guess is the savings will be larger for 16x MP serving and for any of our smaller models.

If I create a bunch more CUDA graphs and keep all of them in memory, we can get it down further, as shown below:

[Screenshot: Screen Shot 2022-07-28 at 5 41 55 PM — per-token latencies with multiple cached CUDA graphs]
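
Continuing the toy sketch above (still hypothetical, not the real metaseq code): one way to keep multiple graphs in memory is to capture one per padded cache length and replay the smallest one covering the current position, so early tokens don't pay for attention over all 2048 slots:

```python
# Static KV-ish buffer; captured graphs hold views into it, so its
# address must never change across replays.
max_len = 2048
cache = torch.zeros(max_len, dim, device="cuda", dtype=torch.half)

@torch.no_grad()
def decode_step_attn(tok, length):
    q = embed(tok)  # (1, dim)
    # Toy attention over the first `length` cache slots; a graph
    # captured at `length` bakes that shape in permanently.
    attn = (q @ cache[:length].T).softmax(-1) @ cache[:length]
    return proj(q + attn)

buckets = [256, 512, 1024, 2048]
graphs = {}
for L in buckets:
    decode_step_attn(static_tok, L)  # warm up these kernels once
    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        out = decode_step_attn(static_tok, L)
    graphs[L] = (g, out)  # each graph keeps its own static output

def step(tok, cur_len):
    # Replay the smallest captured graph whose padded length covers
    # the current sequence position.
    L = min(b for b in buckets if b >= cur_len)
    g, out = graphs[L]
    static_tok.copy_(tok)
    g.replay()
    return out.argmax(dim=-1)
```

Each captured graph pins its own memory pool, so this trades GPU memory for launch latency, which matches the "keep all of them in memory" caveat above.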
