Administrator / metaseq / Merge requests / !639

Simplify inheritance layers

Merged Administrator requested to merge Simplify-inheritance-layers into main Feb 06, 2023
Overview 2 · Commits 7 · Pipelines 0 · Changes 14

Created by: bashnick

Patch description: This PR removes unnecessary inheritance layers and flattens the class structure for better interpretability and transparency. Short summary:

  • removed the TransformerDecoder -> ModelParallelTransformerDecoder inheritance layer
  • removed the LanguageModel -> BaseModel inheritance layer
  • removed the TransformerDecoderLayer -> ModelParallelTransformerDecoderLayer inheritance layer
  • removed the MultiheadAttention -> ModelParallelMultiheadAttention inheritance layer
  • removed the transformer_lm -> transformer_lm_megatron arch split
  • updated the test gpu_tests/test_hf_compatibility.py to work with model_parallel
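The flattening applied in each of the bullets above follows the same pattern, sketched below with simplified, hypothetical classes (not the actual metaseq code): a base class that exists only to be subclassed by a single model-parallel variant is removed, and its logic is folded directly into one concrete class.

```python
# Before: an intermediate layer adds indirection without behavior of its own.
class TransformerDecoder:
    def forward(self, tokens):
        return self._decode(tokens)

    def _decode(self, tokens):
        raise NotImplementedError  # only one subclass ever implements this


class ModelParallelTransformerDecoder(TransformerDecoder):
    def _decode(self, tokens):
        return [t * 2 for t in tokens]  # stand-in for the real decode logic


# After: the hierarchy is flattened into a single concrete class, so a reader
# can follow forward() without jumping between files.
class FlatModelParallelTransformerDecoder:
    def forward(self, tokens):
        # the model-parallel logic now lives directly in forward()
        return [t * 2 for t in tokens]


before = ModelParallelTransformerDecoder().forward([1, 2, 3])
after = FlatModelParallelTransformerDecoder().forward([1, 2, 3])
assert before == after  # behavior is unchanged; only the structure is simpler
```

The point of the refactor is that behavior stays identical while the class graph gets one level shallower per bullet, which is why the testing steps below focus on end-to-end runs rather than new unit tests.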

Testing steps

  1. tested model run: python -m PROJECT_NAME.projects.MODEL_NAME.sweep_baseline -g 4 -n 1 --rsc --model-size 8m --tokenizer rsc --prefix NB000 --local --data /checkpoint/TEAM_NAME/datasets/consolidated/v4.0
  2. tested evaluations: FSD=/checkpoint/TEAM_NAME/datasets/few_shot_data python PROJECT_NAME/scripts/eval/schedule_jobs_few_shot_opt_evaluation.py -t copa cb flan_cb --model-name punitkoura_125m --model-path /checkpoint/TEAM_NAME/checkpoints/punitkoura/small_test_run/1000/checkpoint_1000.pt --model-template gptz_sharded_config --nshot 0 -o ~/MODEL_NAME/fix_scoring_001 --slurm-partition learn --combine-tasks --max-ingestible-tokens 4000
  3. tested a continued run from an existing checkpoint
Source branch: Simplify-inheritance-layers