Yue Zhao / pyod / Issues / #217
Closed
Issue created Aug 05, 2020 by Administrator (@root, Contributor)

Proposed change to PCA model decision_scores_ calculation

Created by: ghost

Hi there,

The current decision_scores_ calculation in the PCA model measures the Euclidean distance from each point in X to the selected principal components, weighted by the components' explained variance ratio.

For the following scaled X values, the final point X[10] (marked with a green arrow in the figure) would be expected to receive the largest anomaly score. In fact, it is scored as the least anomalous.

[figure: scatter plot of the scaled X values with the eigenvectors overlaid; X[10] is marked by a green arrow]

This is because the current decision_scores_ calculation computes the summed Euclidean distance from each point in X to the tip of each eigenvector (shown in red). Points X[0] and X[9] therefore receive the largest anomaly scores simply because they are farthest from the tips of the two eigenvectors.
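For context, the behavior described above can be sketched as follows. This is a minimal illustration of the described scoring, not PyOD's actual source; the random data and the exact form of the weighting (division by the ratio is used here) are assumptions for demonstration:

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))  # stand-in for the scaled feature matrix

pca = PCA(n_components=None).fit(X)
# Distance from every sample to every eigenvector "tip": pca.components_ has
# shape (n_components, n_features), so each row is treated as a point in
# feature space rather than as a direction.
dist = cdist(X, pca.components_)
scores = np.sum(dist / pca.explained_variance_ratio_, axis=1).ravel()
```

Because the distances are measured to fixed points (the unit-length component vectors), samples far from those points in any direction score high, regardless of how they deviate along the data's principal axes.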

In theory, large deviations of X from the mean in principal-component space should be flagged as anomalous, with deviations along components that explain less variance weighted more heavily. A revised decision_scores_ formula that satisfies this premise (using sklearn) is:

```python
import numpy as np
from sklearn.decomposition import PCA

pca = PCA(n_components=None)
X_transformed = pca.fit_transform(X)
decision_scores_ = np.sum(
    (X_transformed - X_transformed.mean(axis=0)) ** 2
    / pca.explained_variance_ratio_,
    axis=1,
).ravel()
```

where:

  • X is the scaled feature matrix
  • pca.fit_transform(X) is the data projected onto the principal components
  • pca.fit_transform(X).mean(axis=0) is the mean of the projected data; since sklearn's PCA centers the data, this mean is effectively zero, making the term redundant
  • pca.explained_variance_ratio_ is the proportion of variance explained by each principal component, used to weight the anomaly scores
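As a quick check of the revised formula, here is a self-contained example on synthetic data (illustrative only; the data is constructed to mimic the scenario in the figure): ten points along a high-variance direction plus one point displaced along the low-variance direction, analogous to X[10]:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# Ten points roughly on a line (large variance along PC1), plus one point
# displaced perpendicular to it (i.e., along the small-variance PC2).
X = np.column_stack([np.linspace(-3, 3, 10), rng.normal(scale=0.05, size=10)])
X = np.vstack([X, [0.0, 1.0]])  # the off-axis outlier, analogous to X[10]

pca = PCA(n_components=None)
X_transformed = pca.fit_transform(X)
decision_scores_ = np.sum(
    (X_transformed - X_transformed.mean(axis=0)) ** 2
    / pca.explained_variance_ratio_,
    axis=1,
).ravel()

# The squared deviation along PC2 is divided by its small explained-variance
# ratio, so the off-axis point (index 10) receives the largest score.
print(decision_scores_.argmax())  # → 10
```

Dividing by explained_variance_ratio_ is what up-weights deviations along low-variance components; without it, the endpoints of the line would dominate the scores.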