Load the Dataset

We will use the iris dataset, which is one of the most common benchmark datasets for classification models. Let's load it using the sklearn.datasets module.

Python3

from sklearn import datasets

# load the iris dataset and separate the feature matrix and the target labels
iris_data = datasets.load_iris()
features = iris_data.data
target = iris_data.target


In this case, we will utilise the one-vs-rest (OvR) strategy, which gives us 3 binary classification cases:

Case 1:
Positive class - Setosa
Negative class - Versicolour and Virginica

Case 2:
Positive class - Versicolour
Negative class - Setosa and Virginica

Case 3:
Positive class - Virginica
Negative class - Versicolour and Setosa

Hence, we would have 3 ROC curves, one for each case. We take the average of the scores from these 3 cases to report the overall performance of the model, as sketched below.
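As an aside, scikit-learn can compute this one-vs-rest average for us. The snippet below is only a minimal sketch of the idea and not part of this article's pipeline: the quickly fitted LogisticRegression (trained on the full data with an assumed max_iter=1000, no train/test split) is just a placeholder to obtain class probabilities, and roc_auc_score with multi_class='ovr' and average='macro' averages the three per-class AUC scores described above.

Python3

# minimal sketch of one-vs-rest ROC AUC averaging (illustrative only)
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = datasets.load_iris(return_X_y=True)

# placeholder model fitted on the full dataset, just to obtain per-class scores
clf = LogisticRegression(max_iter=1000).fit(X, y)
scores = clf.predict_proba(X)  # shape (150, 3): one probability column per class

# macro-average of the 3 one-vs-rest AUCs (one per case above)
print(roc_auc_score(y, scores, multi_class='ovr', average='macro'))

Now, let's take a look at the features.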

Python3

# first five rows of the feature matrix
features[:5, :]


Output:

array([[5.1, 3.5, 1.4, 0.2],
       [4.9, 3. , 1.4, 0.2],
       [4.7, 3.2, 1.3, 0.2],
       [4.6, 3.1, 1.5, 0.2],
       [5. , 3.6, 1.4, 0.2]])

Now, let's see the target values.

Python3

# first five target labels
target[:5]


Output:

array([0, 0, 0, 0, 0])

First, we need to binarize the target values so that each class gets its own column of 0/1 labels, one for each one-vs-rest case, as shown below.

Python3

from sklearn.preprocessing import label_binarize

# convert the integer labels into three 0/1 columns, one per class
target = label_binarize(target,
                        classes=[0, 1, 2])
target[:5]


Output:

array([[1, 0, 0],
       [1, 0, 0],
       [1, 0, 0],
       [1, 0, 0],
       [1, 0, 0]])

Model Development and Training

We will train a separate logistic regression model for each of the three cases.

Python3

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# hold out 25% of the data for evaluation
train_X, test_X,\
    train_y, test_y = train_test_split(features,
                                       target,
                                       test_size=0.25,
                                       random_state=42)

# one binary logistic regression model per one-vs-rest case
model_1 = LogisticRegression(random_state=0)\
    .fit(train_X, train_y[:, 0])
model_2 = LogisticRegression(random_state=0)\
    .fit(train_X, train_y[:, 1])
model_3 = LogisticRegression(random_state=0)\
    .fit(train_X, train_y[:, 2])

print("Model Accuracy :")
print(f"model 1 - {model_1.score(test_X, test_y[:, 0])}")
print(f"model 2 - {model_2.score(test_X, test_y[:, 1])}")
print(f"model 3 - {model_3.score(test_X, test_y[:, 2])}")


Output:

Model Accuracy :
model 1 - 1.0
model 2 - 0.7368421052631579
model 3 - 1.0

If we take the average of these accuracies, we get an overall accuracy of 91.2%.
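For reference, here is a quick check of that figure (NumPy is assumed to be available; it is only used to take the mean of the three accuracies reported above):

Python3

import numpy as np

# average of the three per-class accuracies printed above
print(np.mean([1.0, 0.7368421052631579, 1.0]))  # ≈ 0.912, i.e. about 91.2%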

