Basic PyTorch examples about error types
import torch
                    ╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
                    ┊      Prediction     ┊
                    ┊ Positive ┊ Negative ┊
╭┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┈┤
┊ Ground ┊ Positive ┊    TP    ┊    FN    ┊
┊ Truth  ┊ Negative ┊    FP    ┊    TN    ┊
╰┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┈╯
Simple examples
We can also represent the same information with matrices:
- Ground Truth:
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈╮
┊    1    ┊    1    ┊
┊    0    ┊    0    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈╯
- Prediction:
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈╮
┊    1    ┊    0    ┊
┊    1    ┊    0    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈╯
- Output:
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈╮
┊   TP    ┊   FN    ┊
┊   FP    ┊   TN    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈╯
We can also represent the binary cases (0s and 1s) with True and False. To represent each error type, we can create a boolean matrix for each:
TP
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈╮
┊    1    ┊    0    ┊
┊    0    ┊    0    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈╯
TN
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈╮
┊    0    ┊    0    ┊
┊    0    ┊    1    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈╯
FP
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈╮
┊    0    ┊    0    ┊
┊    1    ┊    0    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈╯
FN
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈╮
┊    0    ┊    1    ┊
┊    0    ┊    0    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈╯
With PyTorch tensors
Create the ground truth and prediction samples.
ground_truth = torch.zeros((2, 2))
ground_truth[0, :] = 1  # first row is the positive class
ground_truth
tensor([[1., 1.],
        [0., 0.]])
prediction = torch.zeros((2, 2))
prediction[:, 0] = 1  # first column is predicted positive
prediction
tensor([[1., 0.],
        [1., 0.]])
Transform the ground truth and prediction samples into boolean tensors. This is important for multi-categorical tensors/samples, where each class needs its own boolean mask (see the sketch after the code below).
ground_truth_bool = torch.where(ground_truth == 1, True, False)
ground_truth_bool
tensor([[ True,  True],
        [False, False]])
prediction_bool = torch.where(prediction == 1, True, False)
prediction_bool
tensor([[ True, False],
        [ True, False]])
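For a multi-categorical case, the same comparison yields one boolean mask per class. A minimal sketch with a hypothetical 3-class label tensor (the values are only illustrative):
# Hypothetical labels with 3 classes (0, 1, and 2)
labels = torch.tensor([[0, 1],
                       [2, 1]])
# One boolean mask per class; here, the mask for class 1
labels == 1
tensor([[False,  True],
        [False,  True]])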
Computing the boolean matrices for TP, TN, FP, and FN.
# TP
# TP = (ground_truth_bool == True) & (prediction_bool == True)
# Simpler:
TP = ground_truth_bool & prediction_bool
TP
tensor([[ True, False],
        [False, False]])
# TN
# TN = (ground_truth_bool == False) & (prediction_bool == False)
# Simpler:
TN = ~ground_truth_bool & ~prediction_bool
TN
tensor([[False, False],
        [False,  True]])
# FP
# FP = (ground_truth_bool == False) & (prediction_bool == True)
# Simpler:
FP = ~ground_truth_bool & prediction_bool
FP
tensor([[False, False],
        [ True, False]])
# FN
# FN = (ground_truth_bool == True) & (prediction_bool == False)
# Simpler:
FN = ground_truth_bool & ~prediction_bool
FN
tensor([[False,  True],
        [False, False]])
Counting the total occurrences for each one
occurrences = {}
# TP quantity
occurrences["TP"] = TP.sum().item()
occurrences["TP"]
1
# TN quantity
occurrences["TN"] = TN.sum().item()
occurrences["TN"]
1
# FP quantity
occurrences["FP"] = FP.sum().item()
occurrences["FP"]
1
# FN quantity
occurrences["FN"] = FN.sum().item()
occurrences["FN"]
1
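As a quick sanity check (a small sketch, not part of the original outputs), the four counts must add up to the total number of elements, since every cell falls into exactly one category:
# 1 + 1 + 1 + 1 == 4 elements in our 2x2 matrices
sum(occurrences.values()) == ground_truth.numel()
True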
Explanations
Operations
When we have something like TP + FP, we can read it as the sum of the number of TP occurrences and the number of FP occurrences, as in the short example below.
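For instance, with the occurrences dictionary computed earlier:
occurrences["TP"] + occurrences["FP"]  # total number of positive predictions
2
More generally, every (GT, Pred) combination maps to exactly one error type: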
- Truth table of error types
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┈╮
┊   GT    ┊  Pred   ┊   Out    ┊
├┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┈┤
┊    0    ┊    0    ┊    TN    ┊
┊    0    ┊    1    ┊    FP    ┊
┊    1    ┊    0    ┊    FN    ┊
┊    1    ┊    1    ┊    TP    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┈╯
TN
Looking at the truth table of error types, we can simplify the operation to TN = (GT nor Pred).
We can play with a Karnaugh map to find the same result, where A = Ground Truth and B = Prediction:
- Truth table of the TN function
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈╮
┊    A    ┊    B    ┊   TN   ┊
├┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┤
┊    0    ┊    0    ┊   1    ┊
┊    0    ┊    1    ┊   0    ┊
┊    1    ┊    0    ┊   0    ┊
┊    1    ┊    1    ┊   0    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈╯
- Karnaugh map of the TN function
          ╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈╮
          ┊  not A  ┊    A    ┊
╭┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┤
┊  not B  ┊    1    ┊    0    ┊
┊    B    ┊    0    ┊    0    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈╯
- Grouping:
  - G1 = (not A) and (not B)
- Solution:
  - Out = TN = G1 = (not A) and (not B), i.e. (not GT) and (not Pred)
- Simplify (double negation, then De Morgan):
  - (not A) and (not B) = not (not ((not A) and (not B))) = not ((not not A) or (not not B)) = not (A or B) = A nor B = GT nor Pred
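We can verify this equivalence directly on the tensors built earlier. A minimal sketch, writing nor as ~(a | b), since PyTorch has no dedicated nor operator:
# TN = GT nor Pred: true where neither ground truth nor prediction is positive
TN_nor = ~(ground_truth_bool | prediction_bool)
torch.equal(TN_nor, TN)
True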
Union
Union = TP + FP + FN
Looking at the truth table of error types, we can simplify the operation to Union = GT or Pred.
We can play with a Karnaugh map to find the same result, where A = Ground Truth and B = Prediction:
- Truth table of Union function
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┈╮
┊    A    ┊    B    ┊  Union   ┊
├┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┈┤
┊    0    ┊    0    ┊    0     ┊
┊    0    ┊    1    ┊    1     ┊
┊    1    ┊    0    ┊    1     ┊
┊    1    ┊    1    ┊    1     ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┈╯
- Karnaugh map of Union function
          ╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈╮
          ┊  not A  ┊    A    ┊
╭┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┤
┊  not B  ┊    0    ┊    1    ┊
┊    B    ┊    1    ┊    1    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈╯
- Grouping:
  - G1 = A
  - G2 = B
- Solution:
  - Out = Union = G1 + G2 = A + B = A or B, i.e. GT or Pred
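We can check this on the example tensors as well (a small sketch reusing the variables defined earlier):
# Union = GT or Pred, element-wise
union = ground_truth_bool | prediction_bool
union
tensor([[ True,  True],
        [ True, False]])
# Its count matches TP + FP + FN
union.sum().item() == occurrences["TP"] + occurrences["FP"] + occurrences["FN"]
True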
Intersection (TP)
Intersection = TP
Looking at the truth table of error types, we can simplify the operation to Intersection = GT and Pred.
We can play with a Karnaugh map to find the same result, where A = Ground Truth and B = Prediction:
- Truth table of Intersection function
╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
┊    A    ┊    B    ┊ Intersection ┊
├┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┈┈┈┈┈┤
┊    0    ┊    0    ┊      0       ┊
┊    0    ┊    1    ┊      0       ┊
┊    1    ┊    0    ┊      0       ┊
┊    1    ┊    1    ┊      1       ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
- Karnaugh map of Intersection function
          ╭┈┈┈┈┈┈┈┈┈┬┈┈┈┈┈┈┈┈┈╮
          ┊  not A  ┊    A    ┊
╭┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┼┈┈┈┈┈┈┈┈┈┤
┊  not B  ┊    0    ┊    0    ┊
┊    B    ┊    0    ┊    1    ┊
╰┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈┴┈┈┈┈┈┈┈┈┈╯
- Grouping:
  - G1 = AB
- Solution:
  - Out = Intersection = G1 = AB = A and B, i.e. GT and Pred
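And the corresponding sketch for the intersection; the last lines hint at why these two quantities matter, since intersection over union is exactly the IoU metric:
# Intersection = GT and Pred, which is exactly the TP matrix
intersection = ground_truth_bool & prediction_bool
torch.equal(intersection, TP)
True
# For example, IoU = intersection / union (union as computed in the previous section)
intersection.sum().item() / union.sum().item()
0.3333333333333333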
FAQ
Why use PyTorch tensors instead of NumPy arrays?
Tensors can live in GPU memory (VRAM) and be computed by the GPU. A NumPy array, or a built-in list, lives in main memory (RAM), and operations on it run on the CPU. In short, GPUs are better suited for matrix math because they are built for it at the hardware level.
Furthermore, there is an important practical factor: when training a deep learning model you will, in general, use a GPU, for the same reason explained above. So if the ground truth and prediction matrices live in VRAM and you write code that does some math with NumPy, for example, you can easily transform a tensor into a NumPy array by calling something like .detach().cpu().numpy(), and then run the NumPy operation. The problem is that, by doing this, you spend time moving the data from VRAM to RAM to the CPU cache before the CPU can compute the operation.
- Doing this with bigger batches of matrices is worse, and doing it repeatedly in a loop wastes time that could be saved.
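A minimal sketch of this round trip, assuming a CUDA GPU is available (the device string and variable names are illustrative):
if torch.cuda.is_available():
    # Keep the data and the computation on the GPU...
    gt_gpu = ground_truth.to("cuda")
    pred_gpu = prediction.to("cuda")
    tp_count = (gt_gpu.bool() & pred_gpu.bool()).sum()  # runs on the GPU

    # ...whereas this conversion copies the data from VRAM to RAM on every call
    gt_numpy = gt_gpu.detach().cpu().numpy()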
Why use a PyTorch metric, or a metric you wrote yourself using tensors, instead of simply using a scikit-learn function?
- The same answer as for Why use PyTorch tensors instead of NumPy arrays?: scikit-learn functions operate on NumPy arrays, so the data would have to leave the GPU first.