Explain recall in simple terms

Question

with respect to the confusion matrix


Answers ( 8 )

  1. Consider an example of fire-alarm prediction.
    A model for a fire alarm will generally predict whether the alarm will ring or not, given a certain set of features.
    Evaluating the model's predictions produces a confusion matrix, which consists of True Positives (TP), True Negatives (TN),
    False Positives (FP) and False Negatives (FN).

    True Positive – The model predicted the alarm rings and it actually rings.
    True Negative – The model predicted the alarm did not ring and it actually did not ring.
    False Positive – The model predicted the alarm rings but it actually did not ring.
    False Negative – The model predicted the alarm does not ring but it actually rings.

    Recall = TP / (TP + FN) = TP / P, where P is the total number of actual positives. In other words: when the alarm actually rings, how often does the model predict that it rings?

    Recall is also known as sensitivity.
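
    A minimal Python sketch of this computation, using made-up alarm outcomes (the labels below are illustrative, not from the answer):

      # Illustrative fire-alarm outcomes: 1 = alarm rings, 0 = it does not.
      actual    = [1, 1, 1, 0, 0, 1, 0, 1]   # what really happened
      predicted = [1, 0, 1, 0, 1, 1, 0, 0]   # what the model said

      # Tally the confusion-matrix cells that recall needs.
      tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
      fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

      recall = tp / (tp + fn)  # TP / P, where P = all actual positives
      print(f"TP={tp}, FN={fn}, recall={recall:.2f}")  # TP=3, FN=2, recall=0.60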

  2. I always find it confusing to remember these formulations, so I am going to share one way to remember them that you are unlikely to forget for the rest of your life. I read this method in a Quora post.

    Imagine that your girlfriend gave you a birthday surprise every year for the last 10 years. (Sorry, I didn't intend to depress you if you don't have one. Even I don't have one.)

    However, one day, your girlfriend asks you:

    ‘Sweetie, do you remember all the birthday surprises from me?’

    This simple question puts your life in danger.

    To extend your life, you need to recall all 10 surprising events from your memory.

    So, recall is the ratio of the number of events you can correctly recall to the number of all correct events.

    If you can recall all 10 events correctly, then, your recall ratio is 1.0 (100%). If you can recall 7 events correctly, your recall ratio is 0.7 (70%).

    Now, it’s easier to map the word recall to real-life usage of that word.

    However, some of your answers might be wrong.

    For example, suppose you answer 15 times: 10 answers are correct and 5 are wrong. This means you can recall all the events, but not very precisely.

    So, precision is the ratio of the number of events you can correctly recall to the total number of events you recall (correct and wrong recalls combined). In other words, it is how precise your recall is.

    From the previous example (10 real events, 15 answers: 10 correct answers, 5 wrong answers), you get 100% recall but your precision is only 66.67% (10 / 15).

    Yes, you can guess what I'm going to say next. If a machine-learning algorithm is good at recall, it doesn't mean that algorithm is good at precision. That's why we also need the F1 score, which is the harmonic mean of recall and precision, to evaluate an algorithm.

    You can refer to the formal definition from swaplaw007’s answer.

    https://www.quora.com/What-is-the-best-way-to-understand-the-terms-precision-and-recall – This is the Quora post link
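
    A quick Python sketch of the worked example above (10 real events, 15 answers, 10 of them correct), just to confirm the arithmetic and the F1 harmonic mean:

      # 10 real surprise events; you give 15 answers, of which 10 are correct.
      real_events     = 10
      correct_answers = 10
      total_answers   = 15

      recall    = correct_answers / real_events    # 10 / 10 = 1.0
      precision = correct_answers / total_answers  # 10 / 15 = 0.667 (rounded)

      # F1 is the harmonic mean of precision and recall.
      f1 = 2 * precision * recall / (precision + recall)
      print(f"recall={recall:.2%}, precision={precision:.2%}, f1={f1:.2%}")
      # recall=100.00%, precision=66.67%, f1=80.00%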

  3. RECALL:
    1) Recall literally is how many of the actual positives were recalled (found), i.e. how many of the correct hits were found.
    2) Recall attempts to answer the question: What proportion of actual positives was identified correctly?
    recall = TP / (TP + FN)
    3) A model that produces no false negatives has a recall of 1.0

    PRECISION:
    1) Precision attempts to answer the question: What proportion of positive identifications was actually correct?
    precision = TP / (TP + FP)
    2) A model that produces no false positives has a precision of 1.0

    EXAMPLE: If we want to classify emails, Precision measures the percentage of emails flagged as spam that were actually spam, whereas Recall measures the percentage of actual spam emails that were correctly flagged.
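
    A short scikit-learn sketch of this spam example; the toy labels are made up for illustration (1 = spam, 0 = not spam):

      from sklearn.metrics import precision_score, recall_score

      # Made-up labels for 8 emails: 1 = spam, 0 = not spam.
      y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # actual classes
      y_pred = [1, 0, 1, 0, 1, 1, 0, 0]  # model's predictions

      # Precision: of the emails flagged as spam, how many really were spam?
      # Recall: of the actual spam emails, how many did we flag?
      print("precision:", precision_score(y_true, y_pred))  # 3/4 = 0.75
      print("recall:   ", recall_score(y_true, y_pred))     # 3/4 = 0.75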

  4. Recall represents sensitivity, which is the true positive rate.

    As in the previous fire-alarm answer, where
    True Positive : the model predicts the fire alarm rings and it actually rings
    True Negative : the model predicts the fire alarm doesn't ring and it actually doesn't ring
    False Positive : the model predicts the fire alarm rings but actually it doesn't
    False Negative : the model predicts the fire alarm doesn't ring but actually it does

    Recall = True Positives / (True Positives + False Negatives)

  5. Recall – The number of true positives divided by the total number of elements that actually belong to the positive class
    (i.e. the sum of true positives and false negatives)

    Recall = TP / (TP + FN)

    The result is a value between 0.0 (no recall) and 1.0 (full or perfect recall).
    Intuitively, recall is the ability of the classifier to find all the positive samples: the best value is 1 and the worst is 0.

  6. Recall = TP / (TP + FN) measures how often your model predicts positive when the case is actually positive.

  7. Fire alarm prediction:

    It can have only two outcomes: either the alarm rings or it does not. I will categorise ring as 1 and not ring as 0.
    True Positive – It was predicted that the alarm would ring and it actually rang.
    True Negative – It was predicted that the alarm would not ring and it actually did not ring.
    False Positive – It was predicted that the alarm would ring but it actually did not. This is also known as a type 1 error.
    False Negative – It was predicted that the alarm would not ring but it actually did. This is also known as a type 2 error.

    Recall (r): Out of all the actual positive cases, how many we predicted correctly: TP / (TP + FN). It is also called sensitivity.
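
    A small sketch with scikit-learn's confusion_matrix (the alarm labels are made up) showing where the type 1 and type 2 errors sit:

      from sklearn.metrics import confusion_matrix

      # Made-up alarm outcomes: 1 = ring, 0 = not ring.
      y_true = [1, 0, 1, 1, 0, 0, 1, 0]
      y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

      # For binary labels [0, 1], sklearn returns [[TN, FP], [FN, TP]].
      tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

      print("FP (type 1 error):", fp)   # predicted ring, it didn't
      print("FN (type 2 error):", fn)   # predicted no ring, it rang
      print("recall:", tp / (tp + fn))  # sensitivity = 3/4 = 0.75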

  8. A confusion matrix looks like this (rows are the actual classes, columns are the predicted classes):

                            Predicted Positive      Predicted Negative
        Actual Positive     True Positive (TP)      False Negative (FN)
        Actual Negative     False Positive (FP)     True Negative (TN)
