Notes on Results Metrics
A note on different loss functions.
Background
Why different result metrics?
The purpose of a loss function is to quantify the error between an algorithm's output and the given target value.
Say I have 100 pieces of something and I want an algorithm to predict the number of pieces available. Here, 100 pieces is the ground truth, or target value. If the algorithm predicts that there are only 90 pieces, there is a loss (error) of 10 pieces.
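The piece-counting example above can be sketched as a simple absolute-error loss. This is a minimal illustration, not any particular library's API; the function name is my own.

```python
def absolute_error(target, prediction):
    """Loss as the absolute difference between the target and the prediction."""
    return abs(target - prediction)

# Ground truth: 100 pieces; the algorithm predicts 90 → loss of 10 pieces.
loss = absolute_error(100, 90)
print(loss)  # → 10
```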
Types of Metrics
- Precision: the accuracy of positive predictions. Of all items classified as positive, how many were actually positive? Precision = True Positives / (True Positives + False Positives).
- Recall: sensitivity, or the true positive rate. Of all the actual positive instances, how many were correctly identified by the model? Recall = True Positives / (True Positives + False Negatives).
Thoughts:
Citation
BibTeX citation:
@misc{kumar2024,
author = {{Chandan Kumar}},
title = {Notes on {Results} {Metrics}},
date = {2024-01-13},
langid = {en-GB}
}
For attribution, please cite this work as:
Chandan Kumar. 2024. “Notes on Results Metrics.”