Add Sklearn-genetic-opt

Open Administrator requested to merge github/fork/rodrigo-arenas/master into master Jun 27, 2021

Created by: rodrigo-arenas

What is this Python project?

This is an AutoML package that offers an alternative to popular hyperparameter-tuning methods in scikit-learn, such as Grid Search and Randomized Search.

Sklearn-genetic-opt uses evolutionary algorithms to choose the set of hyperparameters that optimizes the cross-validation score. It can be used for both regression and classification problems and exposes a scikit-learn-like API.

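To illustrate the scikit-learn-like API, here is a minimal sketch of what a hyperparameter search might look like; `GASearchCV`, the `sklearn_genetic.space` parameter types, and the constructor arguments shown are assumptions based on this description rather than confirmed API.

```python
# Hypothetical usage sketch; class and parameter names are assumptions.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

from sklearn_genetic import GASearchCV                  # assumed entry point
from sklearn_genetic.space import Integer, Continuous   # assumed search-space types

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Define the hyperparameter search space for the estimator.
param_grid = {
    "n_estimators": Integer(50, 300),
    "max_depth": Integer(3, 20),
    "min_samples_split": Continuous(0.01, 0.3),
}

# The evolutionary search plays the role of GridSearchCV/RandomizedSearchCV,
# evolving candidate hyperparameter sets across generations.
evolved_search = GASearchCV(
    estimator=RandomForestClassifier(),
    param_grid=param_grid,
    cv=3,
    scoring="accuracy",
    population_size=10,
    generations=15,
)

# Same fit/predict workflow as any scikit-learn search object.
evolved_search.fit(X_train, y_train)
print(evolved_search.best_params_)
print(evolved_search.score(X_test, y_test))
```
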
What's the difference between this Python project and similar ones?

  • It uses AI for the optimization process instead of a brute-force approach such as Grid Search.
  • It adds several features missing in similar packages, most notably:
    • Callbacks: Let the user monitor the optimization, save the models, and stop the training when one of several criteria is met, for example when the search has been running for too long or a metric threshold has been reached. Users can also define custom callbacks (see the sketch after this list).
    • Plotting: It has several built-in plotting functions that help the user understand the optimization process and make decisions about the models.
    • TensorBoard: With a single line of code, it can log all the evaluation metrics to a TensorBoard instance to monitor the training.
    • MLflow: With a single configuration class, it can log all the metrics, models, and hyperparameters of each run to an MLflow server.
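
As a rough illustration of the Callbacks and MLflow points above, the sketch below shows how early stopping and run logging might be wired into the search; the callback classes, the `MLflowConfig` name, and the `log_config` keyword are assumptions, not verified API.

```python
# Hypothetical sketch of callbacks and MLflow logging; all names below are assumptions.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier

from sklearn_genetic import GASearchCV                               # assumed entry point
from sklearn_genetic.space import Integer                            # assumed search-space type
from sklearn_genetic.callbacks import DeltaThreshold, TimerStopping  # assumed callback classes
from sklearn_genetic.mlflow import MLflowConfig                      # assumed config class

X, y = load_digits(return_X_y=True)

# Stop when the score improves by less than 0.001 between generations,
# or when the search has been running for more than 30 minutes.
callbacks = [DeltaThreshold(threshold=0.001), TimerStopping(total_seconds=1800)]

# One configuration object to send metrics, models, and hyperparameters
# of every run to an MLflow tracking server.
mlflow_config = MLflowConfig(
    tracking_uri="http://localhost:5000",
    experiment="sklearn-genetic-opt-demo",
    run_name="random-forest-search",
    save_models=True,
)

evolved_search = GASearchCV(
    estimator=RandomForestClassifier(),
    param_grid={"n_estimators": Integer(50, 300), "max_depth": Integer(3, 20)},
    cv=3,
    population_size=10,
    generations=15,
    log_config=mlflow_config,  # assumed keyword for attaching the MLflow configuration
)

# Callbacks are passed to fit so they can monitor or interrupt the evolution.
evolved_search.fit(X, y, callbacks=callbacks)
```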

--

Anyone who agrees with this pull request can submit an Approve review.
