Basic PyTorch examples about error types
import torch
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
┊                   ┊      Prediction     ┊
┊                   ┊ Positive ┊ Negative ┊
├┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┈┤
┊ Ground ┊ Positive ┊    TP    ┊    FN    ┊
┊ Truth  ┊ Negative ┊    FP    ┊    TN    ┊
╰┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┈╯
Simple examples
We can also represent the same with images (or matrices):
- Ground Truth: [image omitted]
- Prediction: [image omitted]
- Output: [image omitted]
The same with matrices:
- Ground Truth:
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
┊ 1 ┊ 1 ┊
┊ 0 ┊ 0 ┊
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
- Prediction:
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
┊ 1 ┊ 0 ┊
┊ 1 ┊ 0 ┊
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
- Output:
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
┊ TP ┊ FN ┊
┊ FP ┊ TN ┊
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
We can also represent the binary cases (0s and 1s) with True and False.
To represent each error type, we can create a bool matrix:
TP
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
┊ 1 ┊ 0 ┊
┊ 0 ┊ 0 ┊
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
TN
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
┊ 0 ┊ 0 ┊
┊ 0 ┊ 1 ┊
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
FP
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
┊ 0 ┊ 0 ┊
┊ 1 ┊ 0 ┊
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
FN
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
┊ 0 ┊ 1 ┊
┊ 0 ┊ 0 ┊
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
With PyTorch tensors
Create the ground truth and prediction samples.
ground_truth = torch.zeros((2, 2))
ground_truth[0, :] = 1
ground_truth
tensor([[1., 1.],
        [0., 0.]])
prediction = torch.zeros((2, 2))
prediction[:, 0] = 1
prediction
tensor([[1., 0.],
        [1., 0.]])
Transform the ground truth and prediction samples into bool tensors. This is important for multi-class tensors/samples, where each class gets its own binary mask (see the sketch after the code below).
ground_truth_bool = torch.where(ground_truth == 1, True, False)
ground_truth_bool
tensor([[ True,  True],
        [False, False]])
prediction_bool = torch.where(prediction == 1, True, False)
prediction_bool
tensor([[ True, False],
        [ True, False]])
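As a side note, the same idea extends to the multi-class case: one bool mask per class turns the problem into several binary (one-vs-rest) problems. A minimal hypothetical sketch (the labels tensor and class ids below are invented for illustration):
# Hypothetical multi-class example: build one bool mask per class.
labels = torch.tensor([[0, 1],
                       [2, 1]])
class_1_mask = labels == 1
# tensor([[False,  True],
#         [False,  True]])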
Computing the bool matrices for TP, TN, FP, and FN.
# TP
# TP = (ground_truth_bool == True) & (prediction_bool == True)
# Simpler
TP = ground_truth_bool & prediction_bool
TP
tensor([[ True, False],
        [False, False]])
# TN
# TN = (ground_truth_bool == False) & (prediction_bool == False)
# Simpler
TN = ~ground_truth_bool & ~prediction_bool
TN
tensor([[False, False],
        [False,  True]])
# FP
# FP = (ground_truth_bool == False) & (prediction_bool == True)
# Simpler
FP = ~ground_truth_bool & prediction_bool
FP
tensor([[False, False],
        [ True, False]])
# FN
# FN = (ground_truth_bool == True) & (prediction_bool == False)
# Simpler
FN = ground_truth_bool & ~prediction_bool
FN
tensor([[False,  True],
        [False, False]])
Counting the total occurrences for each one
occurrences = {}

# TP quantity
occurrences["TP"] = TP.sum().item()
occurrences["TP"]
1
# TN quantity
occurrences["TN"] = TN.sum().item()
occurrences["TN"]
1
# FP quantity
occurrences["FP"] = FP.sum().item()
occurrences["FP"]
1
# FN quantity
occurrences["FN"] = FN.sum().item()
occurrences["FN"]
1
Explanations
Operations
When we have something like TP + FP, we can read it as the sum of the number of TP occurrences and the number of FP occurrences.
Considering green is True and red is False:
- Ground Truth: [image omitted]
- Prediction: [image omitted]
- Truth table of error types
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┈╮
┊ GT ┊ Pred ┊ Out ┊
├┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┈┤
┊ 0 ┊ 0 ┊ TN ┊
┊ 0 ┊ 1 ┊ FP ┊
┊ 1 ┊ 0 ┊ FN ┊
┊ 1 ┊ 1 ┊ TP ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┈╯
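One way to see this table in code is to encode each (GT, Pred) pair as an integer and look up its label. The encoding trick below is our own sketch, not part of the original example:
# 2*GT + Pred gives 0=TN, 1=FP, 2=FN, 3=TP, matching the table rows.
codes = ground_truth_bool.int() * 2 + prediction_bool.int()
names = ["TN", "FP", "FN", "TP"]
out = [[names[c] for c in row] for row in codes.tolist()]
# out == [['TP', 'FN'], ['FP', 'TN']]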
TN
Looking at the truth table of error types, we can simplify the operation to TN = GT nor Pred.
We can use a Karnaugh map to find the same result, where A = Ground Truth and B = Prediction:
- Truth table of the TN function
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈╮
┊ A ┊ B ┊ TN ┊
├┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┤
┊ 0 ┊ 0 ┊ 1 ┊
┊ 0 ┊ 1 ┊ 0 ┊
┊ 1 ┊ 0 ┊ 0 ┊
┊ 1 ┊ 1 ┊ 0 ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈╯
- Karnaugh map of the TN function
          ╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈╮
          ┊  not A  ┊    A    ┊
╭┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┤
┊  not B  ┊    1    ┊    0    ┊
┊    B    ┊    0    ┊    0    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈╯
- Grouping:
- G1 = (not A) and (not B)
- Solution
- Out = TN = G1 = (not A) and (not B), i.e., (not GT) and (not Pred)
- Simplify
- (not A) and (not B) = not(not((not A) and (not B))) = not(A or B) (by De Morgan) = A nor B = GT nor Pred
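We can verify this simplification with the tensors from above (a small check we add here as a sketch):
TN_nor = ~(ground_truth_bool | prediction_bool)  # GT nor Pred
assert torch.equal(TN_nor, TN)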
Union
Union = TP + FP + FN
Looking at the truth table of error types, we can simplify the operation to Union = GT or Pred.
We can use a Karnaugh map to find the same result, where A = Ground Truth and B = Prediction:
- Truth table of Union function
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┈╮
┊ A ┊ B ┊ Union ┊
├┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┈┤
┊ 0 ┊ 0 ┊ 0 ┊
┊ 0 ┊ 1 ┊ 1 ┊
┊ 1 ┊ 0 ┊ 1 ┊
┊ 1 ┊ 1 ┊ 1 ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┈╯
- Karnaugh map of Union function
          ╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈╮
          ┊  not A  ┊    A    ┊
╭┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┤
┊  not B  ┊    0    ┊    1    ┊
┊    B    ┊    1    ┊    1    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈╯
- Grouping:
- G1 = A
- G2 = B
- Solution
- Out = Union = G1 + G2 = A + B = A or B, i.e., GT or Pred
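Again we can verify this with the tensors from above (our own sketch, reusing the occurrences dict):
union_mask = ground_truth_bool | prediction_bool  # GT or Pred
assert union_mask.sum().item() == occurrences["TP"] + occurrences["FP"] + occurrences["FN"]  # 3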
Intersection (TP)
Intersection = TP
Looking at the truth table of error types, we can simplify the operation to Intersection = GT and Pred.
We can use a Karnaugh map to find the same result, where A = Ground Truth and B = Prediction:
- Truth table of Intersection function
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
┊ A ┊ B ┊ Intersection ┊
├┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┈┈┈┈┈┤
┊ 0 ┊ 0 ┊ 0 ┊
┊ 0 ┊ 1 ┊ 0 ┊
┊ 1 ┊ 0 ┊ 0 ┊
┊ 1 ┊ 1 ┊ 1 ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
- Karnaugh map of Intersection function
          ╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈╮
          ┊  not A  ┊    A    ┊
╭┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┤
┊  not B  ┊    0    ┊    0    ┊
┊    B    ┊    0    ┊    1    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈╯
- Grouping:
- G1 = AB
- Solution
- Out = Intersection = G1 = AB = A and B, i.e., GT and Pred
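And the same kind of check for the intersection (our own sketch):
intersection_mask = ground_truth_bool & prediction_bool  # GT and Pred
assert torch.equal(intersection_mask, TP)
assert intersection_mask.sum().item() == occurrences["TP"]  # 1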
FAQ
Why use PyTorch tensors instead of NumPy arrays?
Tensors can live in GPU memory (VRAM) and be computed by the GPU, while NumPy arrays and built-in lists live in main memory (RAM) and are processed by the CPU. In short, GPUs are better suited for matrix math because they are built for it at the hardware level.
Furthermore, there is another important factor: when training a deep learning model you will generally use a GPU, for the same reason explained above. So if your ground truth and predicted matrices are already in VRAM and you write code that does some math with NumPy, for example, you can easily transform a tensor into a NumPy array by calling something like .detach().cpu().numpy(), and then perform the NumPy operation. The problem is that, by doing this, you spend time moving the data (roughly, from VRAM to RAM to the CPU cache) before the CPU can compute the operation. Doing this with bigger batches of matrices is even worse, and doing it repeatedly in a loop wastes time that could be saved.
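As a rough illustration (a hedged sketch reusing the tensors from above; the CUDA device is only used if available), keeping the computation on the device avoids that round trip:
device = "cuda" if torch.cuda.is_available() else "cpu"
gt = ground_truth_bool.to(device)
pred = prediction_bool.to(device)

tp_count = (gt & pred).sum()  # stays on the device, no VRAM -> RAM copy
# Versus the costly round trip on every call:
# tp_np = (gt.detach().cpu().numpy() & pred.detach().cpu().numpy()).sum()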
Why use a PyTorch metric, or a self-written metric using tensors, instead of simply using an sklearn function?
- The same answer as in Why use PyTorch tensors instead of NumPy arrays?