ML | MultiLabel Ranking Metrics – Coverage Error
Code: The following example computes the coverage error for a set of prediction scores against the true labels using scikit-learn's coverage_error.
# Import libraries
import numpy as np
from sklearn.metrics import coverage_error

# Create imaginary prediction-score and ground-truth datasets
y_true = np.array([[1, 0, 1], [0, 0, 1], [0, 1, 1]])
y_pred_score = np.array([[0.75, 0.5, 1], [1, 1, 1.2], [2.3, 1.2, 0.1]])

print(coverage_error(y_true, y_pred_score))
Output:
2.0

Explanation: Coverage error is the average number of top-ranked labels that must be included, per sample, so that every true label is covered. For the first sample the true labels are [1, 0, 1] and the scores are (here [0.75, 0.5, 1]): the highest-scored label (index 2, score 1) is a true label, but we must also take the second-highest (index 0, score 0.75) to cover both true labels, so this sample needs 2 labels. The second sample needs 1 label and the third needs 3, so the coverage error is (2 + 1 + 3) / 3 = 2.0.
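To make the averaging step above concrete, here is a minimal sketch that computes the same quantity by hand with NumPy and checks it against scikit-learn. The helper name manual_coverage_error is hypothetical (not part of any library); it assumes the standard definition in which a label's rank is the number of labels scored at least as high as it.

```python
import numpy as np
from sklearn.metrics import coverage_error

y_true = np.array([[1, 0, 1], [0, 0, 1], [0, 1, 1]])
y_pred_score = np.array([[0.75, 0.5, 1], [1, 1, 1.2], [2.3, 1.2, 0.1]])

def manual_coverage_error(y_true, scores):
    # ranks[i, j] = number of labels in sample i whose score is >= scores[i, j]
    # (so the top-scored label has rank 1)
    ranks = (scores[:, None, :] >= scores[:, :, None]).sum(axis=2)
    # Per-sample coverage = the worst (largest) rank among that sample's true labels
    coverage = (ranks * y_true).max(axis=1)
    # Average over samples
    return coverage.mean()

print(manual_coverage_error(y_true, y_pred_score))  # 2.0
print(coverage_error(y_true, y_pred_score))         # 2.0
```

For the first sample the ranks work out to [2, 3, 1], and taking the maximum rank over the true labels (indices 0 and 2) gives 2, matching the walk-through above.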