
metaseq !298: fix --benchmark option

Merged: Administrator requested to merge github/fork/jieru-hu/bench into main on Aug 09, 2022.

Created by: jieru-hu

Patch Description

The --benchmark option of metaseq.launcher.opt_baselines appears to have been broken by commit 1d2d1525.

Testing steps

before the change

$ python -m metaseq.launcher.opt_baselines --model-size 8m --benchmark -t 1 -g 8 -n 1 -p test-8m1 --azure 
Traceback (most recent call last):
  File "/shared/home/jieru/miniconda3/envs/fairseq-20220503/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/shared/home/jieru/miniconda3/envs/fairseq-20220503/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/shared/home/jieru/fork/metaseq/metaseq/launcher/opt_baselines.py", line 342, in <module>
    cli_main()
  File "/shared/home/jieru/fork/metaseq/metaseq/launcher/opt_baselines.py", line 336, in cli_main
    sweep_main(
  File "/shared/home/jieru/fork/metaseq/metaseq/launcher/sweep.py", line 378, in main
    backend_main(get_grid, postprocess_hyperparams, args)
  File "/shared/home/jieru/fork/metaseq/metaseq/launcher/slurm.py", line 34, in main
    grid = get_grid(args)
  File "/shared/home/jieru/fork/metaseq/metaseq/launcher/opt_baselines.py", line 160, in get_grid
    hyperparam("--valid-subset", ",".join(f"valid/{ss}" for ss in valid_subsets)),
UnboundLocalError: local variable 'valid_subsets' referenced before assignment
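
For context, this is the classic conditional-definition bug: the variable is assigned on only one branch, so a later read raises UnboundLocalError whenever the other branch is taken. A minimal sketch of the pattern and the likely shape of the fix (valid_subsets is the name from the traceback; the benchmark flag and the surrounding logic are stand-ins, not the actual opt_baselines code):

# Illustration only -- not the actual metaseq code.
def get_grid_buggy(benchmark):
    if not benchmark:
        valid_subsets = ["valid"]  # defined only on the non-benchmark path
    # Raises UnboundLocalError when benchmark=True, matching the traceback above.
    return ",".join(f"valid/{ss}" for ss in valid_subsets)

def get_grid_fixed(benchmark):
    # Fix: give valid_subsets a value on every path before it is read.
    valid_subsets = [] if benchmark else ["valid"]
    return ",".join(f"valid/{ss}" for ss in valid_subsets)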

after the change

$ python -m metaseq.launcher.opt_baselines --model-size 8m --benchmark -t 1 -g 8 -n 1 -p test-8m1 --azure 
....

Launched job 84182
Launched 84182

Considerations before submitting:

  • Was this discussed/approved via a GitHub issue?
  • Did you read the contributor guideline?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?
Source branch: github/fork/jieru-hu/bench