
Confusion Matrix

What is a confusion matrix?

A confusion matrix is a performance measurement tool for classification problems. It can be applied to binary classification as well as multiclass classification problems.

It is essentially a table that describes the performance of a classification model on a test dataset for which the actual values are known. For binary classification, the table contains the four possible combinations of predicted and actual values.

A confusion matrix is very useful for measuring Precision, Recall, Specificity, and Accuracy. It also underpins the AUC-ROC curve.


How do you calculate a confusion matrix?

You will first require a validation dataset (test dataset), the actual outcomes of which you already know. 

Next, you will need to make predictions for every row in your validation dataset.

After doing that, compare the predicted outcomes with the actual outcomes to count the number of accurate and inaccurate predictions for every class.

The numbers are then organized into a table (matrix), with every row of the table corresponding to a predicted class and every column corresponding to an actual class.

Finally, the totals of correct and incorrect classifications are filled into the table.
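As a minimal sketch of these steps, the snippet below builds a confusion matrix for a made-up binary example using scikit-learn's confusion_matrix (the labels are purely illustrative, not from this article). One caveat: scikit-learn lays the matrix out with rows as actual classes and columns as predicted classes, the transpose of the row = predicted, column = actual layout described above.

```python
# A minimal sketch: building a confusion matrix for a made-up binary example.
from sklearn.metrics import confusion_matrix

# Steps 1-2: actual outcomes from the validation set, and the model's predictions.
y_actual    = [1, 0, 1, 0, 1, 0, 1, 0, 0, 1]
y_predicted = [1, 1, 1, 0, 1, 0, 1, 1, 0, 0]

# Steps 3-5: count correct and incorrect predictions per class and arrange them
# in a 2x2 table. Note: scikit-learn uses rows = actual, columns = predicted,
# the transpose of the row = predicted, column = actual layout described above.
cm = confusion_matrix(y_actual, y_predicted, labels=[0, 1])
print(cm)
# [[3 2]    <- 3 True Negatives, 2 False Positives
#  [1 4]]   <- 1 False Negative, 4 True Positives
```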

The outputs of a confusion matrix

True Positive: This is when the predicted result and the actual result are both positive.


False Positive: This is when the predicted result is positive, but the actual result is negative.


True Negative: This is when the predicted result and the actual result are both negative.


False Negative: This is when the predicted result is negative but the actual result is positive.


Confusion matrices have two types of errors: False Positives are considered Type I errors, while False Negatives are called Type II errors.
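To make the four outputs concrete, the short sketch below pulls them out of the same made-up example. With scikit-learn's layout and labels [0, 1], ravel() returns the cells in the order TN, FP, FN, TP (again an illustrative assumption, not this article's own code).

```python
# Reading the four outputs out of the 2x2 matrix from the made-up example above.
from sklearn.metrics import confusion_matrix

y_actual    = [1, 0, 1, 0, 1, 0, 1, 0, 0, 1]
y_predicted = [1, 1, 1, 0, 1, 0, 1, 1, 0, 0]

# With rows = actual, columns = predicted and labels [0, 1],
# ravel() returns the cells in the order TN, FP, FN, TP.
tn, fp, fn, tp = confusion_matrix(y_actual, y_predicted, labels=[0, 1]).ravel()

print("True Positives :", tp)  # predicted positive, actually positive
print("False Positives:", fp)  # predicted positive, actually negative -> Type I error
print("True Negatives :", tn)  # predicted negative, actually negative
print("False Negatives:", fn)  # predicted negative, actually positive -> Type II error
```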

Confusion metrics

There are five metrics that help us understand the validity of our model. 


Accuracy: How often does the classifier generate the correct output?

Formula: (True Positives + True Negatives)/Total


Misclassification: How often does the classifier generate the wrong output?

Formula: (False Positives + False Negatives)/Total

This is also known as the Error Rate.


Recall (or Sensitivity): How often does it predict a positive outcome when the actual outcome is positive?

Formula: True Positives/(True Positives + False Negatives)


Specificity: How often does it predict a negative outcome when the actual outcome is negative?

Formula: True Negatives/(True Negatives + False Positives)


Precision: When the classifier predicts a positive outcome, how often is it correct?

Formula: True Positives/(True Positives + False Positives)
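The formulas above translate directly into code. Below is a small sketch that computes all five metrics from the four counts of the made-up example used earlier (TP = 4, FP = 2, TN = 3, FN = 1).

```python
# Computing the five metrics from the four counts of the made-up example above.
tp, fp, tn, fn = 4, 2, 3, 1
total = tp + fp + tn + fn

accuracy          = (tp + tn) / total   # how often the classifier is correct
misclassification = (fp + fn) / total   # error rate: how often it is wrong
recall            = tp / (tp + fn)      # sensitivity / true positive rate
specificity       = tn / (tn + fp)      # true negative rate
precision         = tp / (tp + fp)      # how often positive predictions are correct

print(f"Accuracy:          {accuracy:.2f}")           # 0.70
print(f"Misclassification: {misclassification:.2f}")  # 0.30
print(f"Recall:            {recall:.2f}")             # 0.80
print(f"Specificity:       {specificity:.2f}")        # 0.60
print(f"Precision:         {precision:.2f}")          # 0.67
```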


What is the ROC curve?

An ROC curve is a graph summarizing the performance of a classifier over all possible thresholds.

You can generate the ROC curve by plotting the True Positive Rate against the False Positive Rate.

True Positive Rate = Recall or Sensitivity

False Positive Rate refers to how often the model predicts a positive outcome when the actual outcome is negative.

Formula: False Positives/(True Negatives + False Positives)
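As a small illustrative sketch (assuming scikit-learn and matplotlib, with made-up predicted probabilities), roc_curve sweeps over all thresholds of a model's scores and returns the False Positive Rate and True Positive Rate at each one, while roc_auc_score summarizes the whole curve as a single area-under-the-curve number.

```python
# An illustrative sketch: plotting an ROC curve from made-up predicted probabilities.
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

y_actual = [1, 0, 1, 0, 1, 0, 1, 0, 0, 1]
y_scores = [0.9, 0.6, 0.8, 0.3, 0.7, 0.2, 0.65, 0.55, 0.1, 0.4]  # predicted P(positive)

# roc_curve sweeps over all possible thresholds and returns the
# False Positive Rate and True Positive Rate at each one.
fpr, tpr, thresholds = roc_curve(y_actual, y_scores)
auc = roc_auc_score(y_actual, y_scores)

plt.plot(fpr, tpr, label=f"ROC curve (AUC = {auc:.2f})")
plt.plot([0, 1], [0, 1], linestyle="--", label="random guess")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate (Recall)")
plt.legend()
plt.show()
```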


