Commit
Merge pull request #94 from eltociear/patch-1
chore: fix typo in metrics.py
Mikubill authored Feb 17, 2023
2 parents e3a2e09 + df1c200 commit 3484a30
Showing 1 changed file with 6 additions and 6 deletions.
12 changes: 6 additions & 6 deletions annotator/uniformer/mmseg/core/evaluation/metrics.py
@@ -39,7 +39,7 @@ def intersect_and_union(pred_label,
         ignore_index (int): Index that will be ignored in evaluation.
         label_map (dict): Mapping old labels to new labels. The parameter will
             work only when label is str. Default: dict().
-        reduce_zero_label (bool): Wether ignore zero label. The parameter will
+        reduce_zero_label (bool): Whether ignore zero label. The parameter will
             work only when label is str. Default: False.
     Returns:
@@ -101,7 +101,7 @@ def total_intersect_and_union(results,
         num_classes (int): Number of categories.
         ignore_index (int): Index that will be ignored in evaluation.
         label_map (dict): Mapping old labels to new labels. Default: dict().
-        reduce_zero_label (bool): Wether ignore zero label. Default: False.
+        reduce_zero_label (bool): Whether ignore zero label. Default: False.
     Returns:
         ndarray: The intersection of prediction and ground truth histogram
@@ -149,7 +149,7 @@ def mean_iou(results,
         nan_to_num (int, optional): If specified, NaN values will be replaced
             by the numbers defined by the user. Default: None.
         label_map (dict): Mapping old labels to new labels. Default: dict().
-        reduce_zero_label (bool): Wether ignore zero label. Default: False.
+        reduce_zero_label (bool): Whether ignore zero label. Default: False.
     Returns:
         dict[str, float | ndarray]:
@@ -188,7 +188,7 @@ def mean_dice(results,
         nan_to_num (int, optional): If specified, NaN values will be replaced
             by the numbers defined by the user. Default: None.
         label_map (dict): Mapping old labels to new labels. Default: dict().
-        reduce_zero_label (bool): Wether ignore zero label. Default: False.
+        reduce_zero_label (bool): Whether ignore zero label. Default: False.
     Returns:
         dict[str, float | ndarray]: Default metrics.
@@ -229,7 +229,7 @@ def mean_fscore(results,
         nan_to_num (int, optional): If specified, NaN values will be replaced
             by the numbers defined by the user. Default: None.
         label_map (dict): Mapping old labels to new labels. Default: dict().
-        reduce_zero_label (bool): Wether ignore zero label. Default: False.
+        reduce_zero_label (bool): Whether ignore zero label. Default: False.
         beta (int): Determines the weight of recall in the combined score.
             Default: False.
@@ -275,7 +275,7 @@ def eval_metrics(results,
         nan_to_num (int, optional): If specified, NaN values will be replaced
             by the numbers defined by the user. Default: None.
         label_map (dict): Mapping old labels to new labels. Default: dict().
-        reduce_zero_label (bool): Wether ignore zero label. Default: False.
+        reduce_zero_label (bool): Whether ignore zero label. Default: False.
     Returns:
         float: Overall accuracy on all images.
         ndarray: Per category accuracy, shape (num_classes, ).
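For context, the label_map and reduce_zero_label parameters documented in these hunks are applied to the ground-truth label array before any intersection/union counting happens. The snippet below is a minimal sketch of the behaviour the docstrings describe, not the vendored implementation itself; the remap_labels helper name and the choice of 255 as the ignore value are illustrative assumptions.

```python
import numpy as np

def remap_labels(label, label_map=None, reduce_zero_label=False, ignore_value=255):
    """Illustrative helper (not from metrics.py): apply label_map and
    reduce_zero_label the way the docstrings above describe them."""
    label = label.copy()
    # label_map maps old class ids to new ones, e.g. {3: 1} rewrites class 3 as 1.
    if label_map:
        for old_id, new_id in label_map.items():
            label[label == old_id] = new_id
    # reduce_zero_label treats class 0 as "ignore" and shifts every other
    # class down by one, so class 1 becomes 0, class 2 becomes 1, and so on.
    if reduce_zero_label:
        label[label == 0] = ignore_value
        label = label - 1
        label[label == ignore_value - 1] = ignore_value
    return label

gt = np.array([[0, 1, 2],
               [2, 1, 0]])
print(remap_labels(gt, reduce_zero_label=True))
# [[255   0   1]
#  [  1   0 255]]
```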

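A usage sketch for one of the touched functions, mean_iou. The gt_seg_maps argument and the exact return keys are not shown in the hunks above, so treat them as assumptions based on the upstream mmseg API; the docstring only promises a dict[str, float | ndarray].

```python
import numpy as np
from annotator.uniformer.mmseg.core.evaluation.metrics import mean_iou

# Two toy 4x4 predictions and ground truths over 3 classes (ids 0, 1, 2).
rng = np.random.default_rng(0)
results = [rng.integers(0, 3, size=(4, 4)) for _ in range(2)]
gt_seg_maps = [rng.integers(0, 3, size=(4, 4)) for _ in range(2)]

metrics = mean_iou(
    results,
    gt_seg_maps,            # assumed argument name, following upstream mmseg
    num_classes=3,
    ignore_index=255,
    nan_to_num=0,           # replace NaN (e.g. classes absent from this batch) with 0
    reduce_zero_label=False,
)
print(metrics)              # per the docstring: dict[str, float | ndarray]
```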