besskge.bess.AllScoresBESS
- class besskge.bess.AllScoresBESS(candidate_sampler, score_fn, window_size=1000)[source]
Distributed scoring of (h, r, ?) or (?, r, t) queries against the entities in the knowledge graph, returning all scores to host in blocks, based on the BESS [CJM+22] inference scheme. To be used in combination with a batch sampler based on a "h_shard"/"t_shard"-partitioned triple set. Since each iteration on IPU computes only part of the scores (based on the size of the sliding window), metrics should be computed on host after aggregating data (see besskge.pipeline.AllScoresPipeline). Only to be used for inference.
Initialize AllScores BESS-KGE module.
- Parameters:
  - candidate_sampler (PlaceholderNegativeSampler) – besskge.negative_sampler.PlaceholderNegativeSampler class, specifying the corruption scheme.
  - score_fn (BaseScoreFunction) – Scoring function.
  - window_size (int) – Size of the sliding window, namely the number of negative entities scored against each query at each step on IPU and returned to host. Should be decreased with large batch sizes, to avoid an OOM error. Default: 1000.
- forward(step, relation, head=None, tail=None)[source]
Forward step.
Similarly to ScoreMovingBessKGE, candidates are scored on the device where they are gathered, then scores for the same query against candidates in different shards are collected together via an AllToAll.
- Parameters:
  - step (Tensor) – The index of the block (of size self.window_size) of entities on each IPU to score against queries.
  - relation (Tensor) – shape: (1, shard_bs,) Relation indices.
  - head (Optional[Tensor]) – shape: (1, shard_bs,) Head indices, if known. Default: None.
  - tail (Optional[Tensor]) – shape: (1, shard_bs,) Tail indices, if known. Default: None.
- Return type:
  Tensor
- Returns:
The scores for the completions.
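The sketch below only illustrates the forward signature and the role of the step index; in practice the module is compiled for IPU and driven by besskge.pipeline.AllScoresPipeline, which handles the stepping loop and the host-side aggregation. The batch sizes, the shape passed for step, and the ceil-division step count are illustrative assumptions.

    import torch

    # Hypothetical sizes, for illustration only.
    shard_bs = 8               # queries per shard in the batch
    n_entity_per_shard = 2000  # entities stored on each IPU
    window_size = 500          # must match the value passed to AllScoresBESS

    relation = torch.zeros((1, shard_bs), dtype=torch.long)  # relation indices
    head = torch.zeros((1, shard_bs), dtype=torch.long)      # known heads -> (h, r, ?) queries

    # Each step scores the queries against one block of `window_size` entities per
    # IPU, so all scores are covered after ceil(n_entity_per_shard / window_size) steps.
    n_steps = -(-n_entity_per_shard // window_size)

    score_blocks = []
    for step in range(n_steps):
        scores = bess_module(
            step=torch.tensor(step),  # block index (exact expected shape is an assumption)
            relation=relation,
            head=head,
            tail=None,                # unknown tails are the candidates being scored
        )
        score_blocks.append(scores)

    # Metrics are then computed on host after aggregating the per-step blocks,
    # which is what besskge.pipeline.AllScoresPipeline automates.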