
MAP

class ignite.metrics.rec_sys.MAP(top_k=10, ignore_zero_hits=True, output_transform=<function MAP.<lambda>>, device=device(type='cpu'), skip_unrolling=False)[source]

Calculates the Mean Average Precision (MAP) at k for Recommendation Systems.

MAP measures the mean of Average Precision (AP) across all users. AP for a single user is the sum of the precision values at every position in the ranked top-k list where a relevant item appears, divided by the number of relevant items for that user (clipped at k).

$$\text{AP}@K_i = \frac{1}{\min(R_i, K)} \sum_{j=1}^{K} \text{Precision}@j \cdot \mathbb{1}(\text{rel}_{i,j})$$

$$\text{MAP}@K = \frac{1}{N} \sum_{i=1}^{N} \text{AP}@K_i$$

where $R_i$ is the number of relevant items for user $i$, $\text{rel}_{i,j}$ is 1 if the item at rank $j$ is relevant and 0 otherwise, and $\text{Precision}@j$ is the proportion of relevant items in the top $j$ ranked predictions.
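
To make the formula concrete, here is a minimal, self-contained sketch of AP@K for a single user in plain PyTorch. It illustrates the definition above and is not the library's internal implementation; the function name is chosen for this example.

import torch

def average_precision_at_k(scores, relevance, k):
    # Rank items by descending score and keep the top-k.
    top_idx = torch.argsort(scores, descending=True)[:k]
    rel_at_rank = relevance[top_idx]  # 0/1 relevance in rank order
    num_relevant = int(relevance.sum().item())
    if num_relevant == 0:
        return 0.0  # corresponds to the ignore_zero_hits=False behaviour
    # Precision@j for j = 1..k: cumulative hits divided by the rank position.
    ranks = torch.arange(1, len(top_idx) + 1, dtype=torch.float32)
    precision_at_j = torch.cumsum(rel_at_rank, dim=0) / ranks
    # Sum Precision@j only where a relevant item appears, normalize by min(R_i, K).
    return ((precision_at_j * rel_at_rank).sum() / min(num_relevant, k)).item()

scores = torch.tensor([4.0, 2.0, 3.0, 1.0])     # induced ranking: items 0, 2, 1, 3
relevance = torch.tensor([0.0, 0.0, 1.0, 1.0])  # items 2 and 3 are relevant
print(average_precision_at_k(scores, relevance, k=4))  # 0.5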

  • update must receive output of the form (y_pred, y).

  • y_pred is expected to be raw logits or probability scores for each item in the catalog.

  • y is expected to contain binary values (only 0s and 1s), where 1 indicates a relevant item.

  • y_pred and y must both have shape (batch, num_items).

  • compute returns a list of MAP values, one for each k, ordered by the sorted values of top_k.

Parameters:
  • top_k (list[int] | int) – a single positive integer or a list of positive integers that specifies k for calculating MAP@top-k. If a single int is provided, it will be wrapped in a list. Default is 10.

  • ignore_zero_hits (bool) – if True, users with no relevant items (a ground-truth row of all zeros) are ignored when computing MAP. If set to False, such users are counted with an Average Precision of 0. Default is True.

  • output_transform (Callable) – a callable that is used to transform the Engine's process_function's output into the form expected by the metric. The output is expected to be a tuple (prediction, target) where prediction and target are tensors of shape (batch, num_items); see the sketch after this list.

  • device (str | device) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.

  • skip_unrolling (bool) – specifies whether the input should be unrolled before being processed. Should be True for multi-output models.
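
As a usage sketch for output_transform, assume an engine whose process_function returns a dict rather than a (y_pred, y) tuple; the keys "scores" and "targets" below are hypothetical names chosen for this illustration.

import torch
from ignite.engine import Engine
from ignite.metrics.rec_sys import MAP

def eval_step(engine, batch):
    # Hypothetical step that returns a dict instead of a (y_pred, y) tuple.
    y_pred, y = batch
    return {"scores": y_pred, "targets": y}

evaluator = Engine(eval_step)

# Map the dict output onto the (y_pred, y) tuple the metric expects.
metric = MAP(top_k=10, output_transform=lambda out: (out["scores"], out["targets"]))
metric.attach(evaluator, "map")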

Examples

To use with Engine and process_function, simply attach the metric instance to the engine. The output of the engine’s process_function needs to be in the format of (y_pred, y). If not, output_transform can be added to the metric to transform the output into the form expected by the metric.

For more information on how the metric works with Engine, visit Attach Engine API.

from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.metrics.clustering import *
from ignite.metrics.fairness import *
from ignite.metrics.rec_sys import *
from ignite.metrics.regression import *
from ignite.utils import *

# create default evaluator for doctests

def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests

param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`

def get_default_trainer():

    def train_step(engine, batch):
        return batch

    return Engine(train_step)

# create default model for doctests

default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
metric = MAP(top_k=[1, 2, 3, 4])
metric.attach(default_evaluator, "map")
y_pred = torch.Tensor([
    [4.0, 2.0, 3.0, 1.0],
    [1.0, 2.0, 3.0, 4.0],
])
y_true = torch.Tensor([
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
])
state = default_evaluator.run([(y_pred, y_true)])
print(state.metrics["map"])
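
Working the definitions above through by hand for this batch gives MAP@1 = 0.5, MAP@2 = 0.625, MAP@3 = 0.625, and MAP@4 = 0.75, so state.metrics["map"] should hold these four values in ascending order of k (the exact printed representation may vary across library versions).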

New in version 0.6.0.

Methods

  • compute – Computes the metric based on its accumulated state.

  • reset – Resets the metric to its initial state.

  • update – Updates the metric's state using the passed batch output.

compute()[source]

Computes the metric based on its accumulated state.

By default, this is called at the end of each epoch.

Returns:

the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.

Return type:

Any

Raises:

NotComputableError – raised when the metric cannot be computed.

reset()[source]

Resets the metric to its initial state.

By default, this is called at the start of each epoch.

Return type:

None

update(output)[source]

Updates the metric’s state using the passed batch output.

By default, this is called once for each batch.

Parameters:

output (tuple[torch.Tensor, torch.Tensor]) – the output from the engine's process function.

Return type:

None
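
For completeness, the metric can also be driven by hand without an Engine, using the standard reset/update/compute cycle shared by ignite metrics. A minimal sketch, reusing the first row of tensors from the example above:

import torch
from ignite.metrics.rec_sys import MAP

metric = MAP(top_k=[1, 2])
metric.reset()
metric.update((
    torch.tensor([[4.0, 2.0, 3.0, 1.0]]),  # scores for one user
    torch.tensor([[0.0, 0.0, 1.0, 1.0]]),  # items 2 and 3 are relevant
))
print(metric.compute())  # list holding MAP@1 and MAP@2 for the accumulated batches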
