streamsight.evaluators.EvaluatorPipelineBuilder

class streamsight.evaluators.EvaluatorPipelineBuilder(ignore_unknown_user: bool = True, ignore_unknown_item: bool = True, seed: int | None = None)

Bases: Builder

Builder to facilitate construction of an Evaluator. Provides methods to set specific values for the evaluator and enforces checks so that the evaluator is constructed correctly, avoiding possible errors when the evaluator is executed.

Parameters:
  • ignore_unknown_user (bool, optional) – Ignore unknown users in the evaluation, defaults to True

  • ignore_unknown_item (bool, optional) – Ignore unknown items in the evaluation, defaults to True

  • seed (int, optional) – Seed for random number generation, defaults to None

__init__(ignore_unknown_user: bool = True, ignore_unknown_item: bool = True, seed: int | None = None)
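
A minimal usage sketch is shown below. The algorithm and metric names ("ItemKNN", "PrecisionK") are illustrative and must exist in the library's registries; `setting` is assumed to be an already-prepared Setting instance.

    from streamsight.evaluators import EvaluatorPipelineBuilder

    # `setting` is assumed to be a prepared Setting instance whose data
    # has already been split (construction details are out of scope here).
    builder = EvaluatorPipelineBuilder(
        ignore_unknown_user=True,
        ignore_unknown_item=True,
        seed=42,
    )
    builder.add_setting(setting)                        # must precede metrics
    builder.add_algorithm("ItemKNN", params={"K": 10})  # illustrative registry name
    builder.add_metric("PrecisionK")                    # illustrative registry name
    evaluator = builder.build()                         # returns an EvaluatorPipeline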

Methods

__init__([ignore_unknown_user, ...])

add_algorithm(algorithm[, params])

Add algorithm to evaluate.

add_metric(metric)

Add metric to evaluate algorithms on.

add_setting(setting)

Add setting to the evaluator builder.

build()

Build EvaluatorPipeline object.

clear_metrics()

Clear all metrics from the builder.

set_metric_K(K)

Set K value for all metrics.

Attributes

algorithm_entries

List of algorithms to evaluate

metric_entries

Dict of metrics to evaluate algorithms on.

setting

Setting to evaluate the algorithms on

ignore_unknown_user

Ignore unknown users in the evaluation

ignore_unknown_item

Ignore unknown items in the evaluation

_check_ready()

Check if the builder is ready to construct the Evaluator.

Raises:

RuntimeError – If there are invalid configurations

_check_setting_exist()

Check if setting is already set.

Raises:

RuntimeError – If setting has not been set

add_algorithm(algorithm: str | type, params: Dict[str, int] | None = None)

Add algorithm to evaluate.

The algorithm can be specified either by its class type or by its class name as a string; string names are resolved against ALGORITHM_REGISTRY.

Parameters:
  • algorithm (Union[str, type]) – Algorithm to evaluate

  • params (Optional[Dict[str, int]], optional) – Parameters for the algorithm, defaults to None

Raises:

ValueError – If algorithm is not found in ALGORITHM_REGISTRY
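
Both registration styles are sketched below; the name and import path are illustrative, not guaranteed entries of ALGORITHM_REGISTRY.

    # By registered name, resolved through ALGORITHM_REGISTRY:
    builder.add_algorithm("ItemKNN", params={"K": 10})

    # By class type (illustrative import path):
    from streamsight.algorithms import ItemKNN
    builder.add_algorithm(ItemKNN, params={"K": 10})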

add_metric(metric: str | type) → None

Add metric to evaluate algorithms on.

The metric is added to the metric_entries dict, which is converted to a list when the evaluator is constructed.

Note

If K is not yet specified, the setting’s top_K value will be used. This requires the setting to be set before adding the metric.

Parameters:

metric (Union[str, type]) – Metric to evaluate the algorithms on

Raises:
  • ValueError – If metric is not found in METRIC_REGISTRY

  • RuntimeError – If setting is not set
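
A sketch of adding metrics once the setting is registered; "PrecisionK" and "RecallK" are illustrative names assumed to exist in METRIC_REGISTRY.

    builder.add_setting(setting)      # required first: supplies the default top_K
    builder.add_metric("PrecisionK")  # K defaults to the setting's top_K
    builder.add_metric("RecallK")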

add_setting(setting: Setting) → None

Add setting to the evaluator builder.

Note

The setting should be set before adding metrics or algorithms to the evaluator.

Parameters:

setting (Setting) – Setting to evaluate the algorithms on

Raises:

ValueError – If setting is not of instance Setting
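
The ordering constraint is sketched below: adding a metric before the setting raises a RuntimeError (metric name illustrative).

    builder = EvaluatorPipelineBuilder()
    try:
        builder.add_metric("PrecisionK")  # no setting registered yet
    except RuntimeError:
        pass                              # setting must be added first
    builder.add_setting(setting)          # correct order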

algorithm_entries: List[AlgorithmEntry]

List of algorithms to evaluate

build() → EvaluatorPipeline

Build EvaluatorPipeline object.

Raises:

RuntimeError – If no metrics, algorithms, or setting are specified

Returns:

EvaluatorPipeline object

Return type:

EvaluatorPipeline
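
A sketch of the failure mode guarded by _check_ready():

    builder = EvaluatorPipelineBuilder()
    try:
        evaluator = builder.build()  # no setting, algorithms, or metrics yet
    except RuntimeError as e:
        print(e)                     # reports the invalid configuration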

clear_metrics() → None

Clear all metrics from the builder.
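
For example, to swap the metric set on an existing builder (metric names illustrative):

    builder.add_metric("PrecisionK")
    builder.clear_metrics()        # metric_entries is now empty
    builder.add_metric("RecallK")  # register a different metric set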

ignore_unknown_item

Ignore unknown items in the evaluation

ignore_unknown_user

Ignore unknown users in the evaluation

metric_entries: Dict[str, MetricEntry]

Dict of metrics to evaluate the algorithms on. A Dict is used instead of a List for fast lookup.

metric_k: int

K value set for all metrics (see set_metric_K())

seed: int

Seed for random number generation
set_metric_K(K: int) → None

Set K value for all metrics.

Parameters:

K (int) – K value to set for all metrics
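
A sketch of applying a single cutoff to all registered metrics (metric names illustrative):

    builder.add_metric("PrecisionK")  # initially uses the setting's top_K
    builder.add_metric("RecallK")
    builder.set_metric_K(20)          # every registered metric now evaluates at K=20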

setting: Setting

Setting to evaluate the algorithms on