common.pytorch.metrics package#

Submodules#

common.pytorch.metrics.accuracy module#

Accuracy metric for PyTorch.

common.pytorch.metrics.auc module#

AUC (Area under the curve) metric for PyTorch.

common.pytorch.metrics.cb_metric module#

class common.pytorch.metrics.cb_metric.CBMetric[source]#

Bases: abc.ABC

Base class for creating metrics on CS devices.

Subclasses must override methods to provide the full functionality of the metric. These methods are meant to split the computation graph into two portions:

  1. update_on_device: Compiles and runs on the device (i.e., Cerebras).

  2. update_on_host: Runs on the host (i.e., CPU).

These metrics also support running on CPU and GPU.
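
For example, a minimal sketch of a pipeline-style subclass (the MeanAbsoluteError class and its update arguments are illustrative, not part of this package):

```
import torch

from common.pytorch.metrics.cb_metric import CBMetric, DeviceOutputs


# Illustrative metric showing the device/host split; not part of
# this package.
class MeanAbsoluteError(CBMetric):
    def init_state(self):
        self.total_error = 0.0
        self.num_samples = 0

    def update_on_device(self, predictions, labels):
        # Device portion: only define graph ops; evaluate nothing.
        error = torch.sum(torch.abs(predictions - labels))
        count = torch.tensor(labels.numel())
        return DeviceOutputs(args=[error, count])

    def update_on_host(self, error, count):
        # Host portion: tensors arrive evaluated, on CPU.
        self.total_error += error.item()
        self.num_samples += count.item()

    def compute(self):
        return self.total_error / max(self.num_samples, 1)

    def reset_state(self):
        self.init_state()
```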


__init__(name: Optional[str] = None)[source]#

Constructs a CBMetric instance.

This also registers the metric in the global pool of metrics. Therefore, it is important for subclasses to call super().__init__(). Otherwise, the metrics will not run.

Parameters

name – Name of the metric. If None or empty string, it defaults to the name of the class.

abstract compute() Any[source]#

Returns the computed metric value over many iterations.

This is the “reduction” part of the metric over all steps.

classmethod create_metric_impl_factory(pipeline_metric_cls: Optional[common.pytorch.metrics.cb_metric.CBMetric] = None, ws_metric_cls: Optional[common.pytorch.metrics.cb_metric.CBMetric] = None) common.pytorch.metrics.cb_metric.CBMetric[source]#

Returns a factory for generating the correct metric instance for the current execution strategy.

Parameters
  • pipeline_metric_cls – Optional CBMetric which specifies the compute for the pipeline execution strategy. Can be used for, and is the default for, CPU/GPU.

  • ws_metric_cls – Optional CBMetric which specifies the compute for the weight streaming execution strategy. Can be used for CPU/GPU.

Returns

(*args, **kwargs) -> CBMetric that automatically gives an instance of the correct metric given the execution strategy

Return type

metric_factory

Raises

AssertionError – if values of pipeline_metric_cls or ws_metric_cls are invalid
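
A usage sketch; PipelineMAEMetric and WSMAEMetric are hypothetical CBMetric subclasses implementing the same metric for each strategy:

```
# Hypothetical per-strategy implementations of one metric.
MAEMetric = CBMetric.create_metric_impl_factory(
    pipeline_metric_cls=PipelineMAEMetric,
    ws_metric_cls=WSMAEMetric,
)

# Callers instantiate the factory like a class; the factory picks
# the implementation matching the active execution strategy.
mae = MAEMetric(name="eval/mae")
```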

init_state()[source]#

Sets the initial state of the metric.

Subclasses should override this method to provide any metric-specific states. This method is called once as part of __init__.

property name#

Returns the name of the metric.

property num_updates#

Returns number of times the metric was updated (i.e., stepped).

on_device_state_dict() Dict[str, torch.Tensor][source]#

A hook for subclasses to inject metric state variables (WS only).

In contrast to the pipeline execution strategy, where metrics are executed on the host, in weight streaming, metrics are part of the graph and are executed on device. As such, any metric state variables that are updated need to be tracked to create a correct graph. This hook provides a mechanism for metric implementations to specify their state variables, which will appear as outputs of the compile.
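
A sketch of an override, inside a CBMetric subclass whose state lives in on-device tensors (attribute names are illustrative):

```
def on_device_state_dict(self):
    # Expose the tensors this metric mutates on device so they are
    # tracked as outputs in the compile (weight streaming only).
    return {
        "total_error": self.total_error,
        "num_samples": self.num_samples,
    }
```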

reset() None[source]#

Resets the metric state.

Instead of overriding this method, subclasses should override the reset_state method, which is called internally by this method.

reset_state() None[source]#

Resets the metric state.

Subclasses should override this method to clear any metric-specific states.

update_on_device(*args, **kwargs) common.pytorch.metrics.cb_metric.DeviceOutputs[source]#

Define the portion of the metric computation that runs on the device.

This method must return a DeviceOutputs object whose args/kwargs can only contain an item/list/tuple/dict of torch tensors or Nones. These tensors are converted to CPU tensors at the step boundary and passed to update_on_host to do the host (i.e., CPU) portion of the computation.

The default implementation is just a passthrough where the arguments are converted to host tensors as is.

This method is called for every iteration.

NOTE: No tensors should be evaluated in this method. This method merely defines the operations in the graph that runs on device.

abstract update_on_host(*args, **kwargs) None[source]#

Define the portion of the metric computation that runs on host.

This method takes as input the outputs of update_on_device, whose tensors have been evaluated and converted to CPU tensors. It can do any sort of computation on the host (e.g., updating the metric state).

This method is called for every iteration.

class common.pytorch.metrics.cb_metric.DeviceOutputs[source]#

Bases: object

Class for encapsulating the outputs of CBMetric.update_on_device.

Parameters
  • args – positional arguments which are passed to CBMetric.update_on_host once they are converted to CPU tensors.

  • kwargs – keyword arguments which are passed to CBMetric.update_on_host once they are converted to CPU tensors.

__init__(args: typing.List[typing.Any] = <factory>, kwargs: typing.Dict[str, typing.Any] = <factory>) None#
args: List[Any]#
kwargs: Dict[str, Any]#
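
A small sketch of constructing DeviceOutputs, e.g. from inside update_on_device (tensor names are illustrative):

```
import torch

from common.pytorch.metrics.cb_metric import DeviceOutputs

# args/kwargs may only contain torch tensors (or Nones); they are
# converted to CPU tensors at the step boundary and forwarded to
# CBMetric.update_on_host.
correct = torch.tensor(3)
total = torch.tensor(8)
outputs = DeviceOutputs(args=[correct], kwargs={"total": total})
```
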
common.pytorch.metrics.cb_metric.compute_all_metrics() Dict[str, Any][source]#

Computes all the registered metrics and returns them in a dict.

common.pytorch.metrics.cb_metric.get_all_metrics() Dict[str, common.pytorch.metrics.cb_metric.CBMetric][source]#

Returns all registered metrics.

common.pytorch.metrics.cb_metric.reset_all_metrics() None[source]#

Resets the internal state of all registered metrics.
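
Taken together, these module-level helpers support a simple end-of-evaluation workflow, sketched below:

```
from common.pytorch.metrics import cb_metric

# After the eval loop, reduce every registered metric to its final
# value, then clear all state before the next evaluation run.
results = cb_metric.compute_all_metrics()  # e.g. {"accuracy": 0.91}
for name, value in results.items():
    print(f"{name}: {value}")
cb_metric.reset_all_metrics()
```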

common.pytorch.metrics.dice_coefficient module#

Dice coefficient metric for PyTorch.

common.pytorch.metrics.dice_coefficient.compute_helper(confusion_matrix)[source]#

Returns the dice-coefficient as a float.
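
A minimal sketch of the reduction, assuming the macro-averaged dice 2*TP / (2*TP + FP + FN) is taken per class from the confusion matrix (not necessarily this function's exact implementation):

```
import torch

from common.pytorch.metrics.metric_utils import divide_no_nan

def dice_from_confusion_matrix(cm: torch.Tensor) -> float:
    # Per-class dice: 2*TP / (2*TP + FP + FN). With rows as real
    # labels and columns as predictions, the denominator reduces to
    # row_sum + col_sum, since each sum counts TP once.
    tp = torch.diagonal(cm).float()
    denom = cm.sum(dim=0).float() + cm.sum(dim=1).float()
    return divide_no_nan(2 * tp, denom).mean().item()
```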

common.pytorch.metrics.fbeta_score module#

F Beta Score metric for PyTorch. Confusion matrix calculation in PyTorch referenced from: https://github.com/pytorch/ignite/blob/master/ignite/metrics/confusion_matrix.py
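
For reference, a sketch of the standard macro-averaged F-beta computed from such a confusion matrix (formula only; not the package's exact code):

```
import torch

from common.pytorch.metrics.metric_utils import divide_no_nan

def fbeta_from_confusion_matrix(cm: torch.Tensor, beta: float = 1.0) -> float:
    # Rows are real labels and columns are predictions, so column
    # sums give predicted counts (precision) and row sums give
    # actual counts (recall).
    tp = torch.diagonal(cm).float()
    precision = divide_no_nan(tp, cm.sum(dim=0).float())
    recall = divide_no_nan(tp, cm.sum(dim=1).float())
    fbeta = divide_no_nan(
        (1 + beta ** 2) * precision * recall,
        beta ** 2 * precision + recall,
    )
    return fbeta.mean().item()
```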

common.pytorch.metrics.mean_iou module#

Mean Intersection-Over-Union (mIOU) metric for PyTorch. Calculates the per-step mean Intersection-Over-Union (mIOU).

common.pytorch.metrics.mean_iou.compute_helper(confusion_matrix)[source]#

Returns the mean IOU.
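
A sketch of the usual computation from a confusion matrix (per-class IOU, then the mean; not necessarily the exact source):

```
import torch

from common.pytorch.metrics.metric_utils import divide_no_nan

def mean_iou_from_confusion_matrix(cm: torch.Tensor) -> float:
    # Per-class IOU: TP / (TP + FP + FN); the union is the row sum
    # plus the column sum minus the diagonal.
    tp = torch.diagonal(cm).float()
    union = cm.sum(dim=0).float() + cm.sum(dim=1).float() - tp
    return divide_no_nan(tp, union).mean().item()
```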

common.pytorch.metrics.mean_per_class_accuracy module#

Mean per-class accuracy metric for PyTorch. Calculates the accuracy for each class, then takes the mean over classes.

common.pytorch.metrics.mean_per_class_accuracy.compute_helper(total_per_class_correct_predictions, total_per_class_tokens)[source]#
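
Given the signature, a plausible sketch of the reduction (illustrative, not the exact implementation):

```
import torch

from common.pytorch.metrics.metric_utils import divide_no_nan

def mean_per_class_accuracy(correct_per_class: torch.Tensor,
                            tokens_per_class: torch.Tensor) -> float:
    # Accuracy per class, then the unweighted mean over classes;
    # classes with zero tokens contribute zero via divide_no_nan.
    per_class = divide_no_nan(correct_per_class.float(),
                              tokens_per_class.float())
    return per_class.mean().item()
```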

common.pytorch.metrics.metric_utils module#

common.pytorch.metrics.metric_utils.compute_confusion_matrix(labels: torch.Tensor, predictions: torch.Tensor, num_classes: int, weights: Optional[torch.Tensor] = None, on_device: bool = False) torch.Tensor[source]#

Computes the confusion matrix from predictions and labels. The matrix columns represent the prediction labels and the rows represent the real labels. The confusion matrix is always a 2-D array of shape [n, n], where n is the number of valid labels for a given classification task.

If num_classes is None, then num_classes will be set to one plus the maximum value in either predictions or labels. Class labels are expected to start at 0. For example, if num_classes is 3, then the possible labels would be [0, 1, 2].

If weights is not None, then each prediction contributes its corresponding weight to the total value of the confusion matrix cell.

For example:

```
confusion_matrix([1, 2, 4], [2, 2, 4]) ==>
    [[0 0 0 0 0]
     [0 0 1 0 0]
     [0 0 1 0 0]
     [0 0 0 0 0]
     [0 0 0 0 1]]
```

Note that the possible labels are assumed to be [0, 1, 2, 3, 4], resulting in a 5x5 confusion matrix.

Parameters
  • labels – Tensor of real labels for the classification task.

  • predictions – Tensor of predictions for a given classification.

  • weights – An optional Tensor whose shape matches predictions.

  • num_classes – The possible number of labels the classification task can have. If this value is not provided, it will be calculated using both the predictions and labels arrays.

Returns

A Tensor with shape [n, n] representing the confusion matrix, where n is the number of possible labels in the classification task.

Raises

ValueError – If weights is not None and its shape doesn’t match predictions.
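
One common way to build such a matrix, sketched with torch.bincount (unweighted case; not necessarily this function's implementation):

```
import torch

def confusion_matrix_sketch(labels: torch.Tensor,
                            predictions: torch.Tensor,
                            num_classes: int) -> torch.Tensor:
    # Encode each (label, prediction) pair as a single flat index,
    # count occurrences, and reshape into an [n, n] matrix with real
    # labels as rows and predictions as columns.
    indices = labels * num_classes + predictions
    counts = torch.bincount(indices, minlength=num_classes ** 2)
    return counts.reshape(num_classes, num_classes)

# Reproduces the example above.
cm = confusion_matrix_sketch(torch.tensor([1, 2, 4]),
                             torch.tensor([2, 2, 4]),
                             num_classes=5)
```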

common.pytorch.metrics.metric_utils.divide_no_nan(num: torch.Tensor, denom: torch.Tensor) torch.Tensor[source]#

Prevents division by zero. Replicates the behavior of tf.math.divide_no_nan().
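
A minimal sketch of the same behavior (not the exact source):

```
import torch

def divide_no_nan_sketch(num: torch.Tensor,
                         denom: torch.Tensor) -> torch.Tensor:
    # Return 0 wherever the denominator is 0 instead of inf/nan,
    # mirroring tf.math.divide_no_nan.
    result = num / denom
    return torch.where(denom == 0, torch.zeros_like(result), result)
```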

common.pytorch.metrics.perplexity module#

Perplexity metric for PyTorch.
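
Conventionally, perplexity is the exponential of the average per-token cross-entropy loss; a minimal sketch of that reduction with illustrative running totals:

```
import math

total_loss = 42.7   # summed cross-entropy loss over all tokens (illustrative)
total_tokens = 10   # number of tokens contributing to the loss (illustrative)
perplexity = math.exp(total_loss / total_tokens)
```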

common.pytorch.metrics.precision_at_k module#

Precision@K metric for PyTorch.

common.pytorch.metrics.recall_at_k module#

Recall@K metric for PyTorch.

common.pytorch.metrics.rouge_score module#

Rouge Score metric for PyTorch.

common.pytorch.metrics.rouge_score.extract_text_tokens_given_cls_indices(labels, cls_indices, cls_weights, input_ids)[source]#

Extracts text tokens that belong to segments whose CLS tokens have labels equal to 1.

Example

[[CLS, label=1] Dogs, like, cats, [CLS, label=0], Cats, like, dogs] -> [Dogs, like, cats].

Parameters
  • labels – Numpy array of shape (max_cls_tokens,).

  • cls_indices – Numpy array of shape (max_cls_tokens,).

  • cls_weights – Numpy array of shape (max_cls_tokens,).

  • input_ids – Numpy array of shape (max_sequence_length,).

Returns

Numpy array with extracted input ids.

Return type

extracted_input_ids

common.pytorch.metrics.rouge_score.extract_text_words_by_token_ids(input_ids, tokenizer, max_sequence_length)[source]#

Takes input ids of tokens and converts them to a tensor of words.

Parameters
  • input_ids – Numpy array of shape (max_sequence_length,).

  • tokenizer – Tokenizer object which contains functions to convert words to token and vice versa.

  • max_sequence_length – int, maximum length of the sequence.

Returns

Numpy array with computed words padded to max seq length.

Return type

words_padded

common.pytorch.metrics.rouge_score.extract_text_words_given_cls_indices(labels, cls_indices, cls_weights, input_ids, tokenizer)[source]#

Extracts text words that belong to segments whose CLS tokens have labels equal to 1.

Example

[[CLS, label=1] Dogs, like, cats, [CLS, label=0], Cats, like, dogs] -> [Dogs, like, cats].

Parameters
  • labels – Tensor of shape (batch_size, max_cls_tokens).

  • cls_indices – Tensor of shape (batch_size, max_cls_tokens).

  • cls_weights – Tensor of shape (batch_size, max_cls_tokens).

  • input_ids – Tensor of shape (batch_size, max_sequence_length).

  • tokenizer – Tokenizer to be used.

Returns

Tensor with extracted words.

Return type

extracted_words

Module contents#