besskge.scoring.ConvE
- class besskge.scoring.ConvE(negative_sample_sharing, sharding, n_relation_type, embedding_size, embedding_height, embedding_width, entity_initializer=[<function init_xavier_norm>, <function zeros_>], relation_initializer=[<function init_xavier_norm>], inverse_relations=True, input_channels=1, output_channels=32, kernel_height=3, kernel_width=3, input_dropout=0.2, feature_map_dropout=0.2, hidden_dropout=0.3, batch_normalization=True)[source]
ConvE scoring function [DPPR18].

Note that, unlike in [DPPR18], the scores returned by this class have not been passed through a final sigmoid layer, as we assume that this is included in the loss function.

By design, this scoring function should be used in combination with a negative/candidate sampler that only corrupts tails (possibly after including all inverse triples in the dataset; see the add_inverse_triples argument in besskge.sharding.PartitionedTripleSet.create_from_dataset()).

Initialize ConvE model.
- Parameters:
  - negative_sample_sharing (bool) – see DistanceBasedScoreFunction.__init__()
  - sharding (Sharding) – Entity sharding.
  - n_relation_type (int) – Number of relation types in the knowledge graph.
  - embedding_size (int) – Size of entity and relation embeddings.
  - embedding_height (int) – Height of the 2D-reshaping of the concatenation of head and relation embeddings.
  - embedding_width (int) – Width of the 2D-reshaping of the concatenation of head and relation embeddings.
  - entity_initializer (Union[Tensor, List[Callable[..., Tensor]]]) – Initialization functions or table for entity embeddings. If not passing a table, two functions are needed: the initializer for entity embeddings and the initializer for (scalar) tail biases.
  - relation_initializer (Union[Tensor, List[Callable[..., Tensor]]]) – Initialization function or table for relation embeddings.
  - inverse_relations (bool) – If True, learn embeddings for inverse relations. Default: True.
  - input_channels (int) – Number of input channels of the Conv2D operator. Default: 1.
  - output_channels (int) – Number of output channels of the Conv2D operator. Default: 32.
  - kernel_height (int) – Height of the Conv2D kernel. Default: 3.
  - kernel_width (int) – Width of the Conv2D kernel. Default: 3.
  - input_dropout (float) – Rate of Dropout applied before the convolution. Default: 0.2.
  - feature_map_dropout (float) – Rate of Dropout applied after the convolution. Default: 0.2.
  - hidden_dropout (float) – Rate of Dropout applied after the Linear layer. Default: 0.3.
  - batch_normalization (bool) – If True, apply batch normalization before and after the convolution and after the Linear layer. Default: True.
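The parameters above map onto the standard ConvE pipeline: reshape the concatenated head and relation embeddings into a 2D grid, convolve, flatten, project back to the embedding size, and take a dot product with the tail embedding. The following is a minimal NumPy sketch of that pipeline for a single triple, with toy dimensions; it omits dropout, batch normalization, and bias terms, and is an illustration under stated assumptions, not the besskge implementation.

```python
import numpy as np

embedding_size = 8
embedding_height, embedding_width = 4, 4  # height * width == 2 * embedding_size
kernel_height = kernel_width = 3
output_channels = 2

rng = np.random.default_rng(0)
head = rng.normal(size=embedding_size)
rel = rng.normal(size=embedding_size)
tail = rng.normal(size=embedding_size)

# 1) Concatenate head and relation embeddings, reshape to a 2D "image".
stacked = np.concatenate([head, rel]).reshape(embedding_height, embedding_width)

# 2) Valid 2D cross-correlation with `output_channels` random kernels.
kernels = rng.normal(size=(output_channels, kernel_height, kernel_width))
out_h = embedding_height - kernel_height + 1
out_w = embedding_width - kernel_width + 1
fmap = np.empty((output_channels, out_h, out_w))
for c in range(output_channels):
    for i in range(out_h):
        for j in range(out_w):
            fmap[c, i, j] = np.sum(
                stacked[i : i + kernel_height, j : j + kernel_width] * kernels[c]
            )

# 3) Flatten, project back to embedding_size with a linear layer, apply ReLU.
W = rng.normal(size=(fmap.size, embedding_size))
hidden = np.maximum(fmap.reshape(-1) @ W, 0.0)

# 4) Score = dot product with the tail embedding (sigmoid is left to the loss).
score = hidden @ tail
```

Because the feature map is produced from (head, relation) only, the expensive part of the computation can be reused when scoring one query against many candidate tails, which is why the class is designed for tail-corrupting samplers.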
- broadcasted_dot_product(v1, v2)
Broadcasted dot product of queries against sets of entities.
For each query and candidate, computes the dot product of the embeddings.
- Returns:
  shape: (batch_size, B * n_neg) if BaseScoreFunction.negative_sample_sharing, else (batch_size, n_neg).
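The two output shapes can be sketched with plain NumPy (dimensions and array names here are illustrative, not besskge's API). With negative sample sharing, all B * n_neg candidates are scored against every query in the batch; without it, each query keeps its own n_neg candidates.

```python
import numpy as np

batch_size, B, n_neg, emb = 3, 2, 4, 5
rng = np.random.default_rng(1)
queries = rng.normal(size=(batch_size, emb))

# Sharing: one pool of B * n_neg candidates, scored against every query.
shared = rng.normal(size=(B * n_neg, emb))
scores_shared = queries @ shared.T  # (batch_size, B * n_neg)

# No sharing: a separate set of n_neg candidates per query.
per_query = rng.normal(size=(batch_size, n_neg, emb))
scores_own = np.einsum("be,bne->bn", queries, per_query)  # (batch_size, n_neg)

print(scores_shared.shape, scores_own.shape)  # (3, 8) (3, 4)
```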
- forward(head_emb, relation_id, tail_emb)
- reduce_embedding(v)
Sum reduction along the embedding dimension.
- score_heads(head_emb, relation_id, tail_emb)[source]
Score sets of head entities against fixed (r,t) queries.
- Returns:
  shape: (batch_size, B * n_heads) if BaseScoreFunction.negative_sample_sharing, else (batch_size, n_heads). Scores of broadcasted triples.
- score_tails(head_emb, relation_id, tail_emb)[source]
Score sets of tail entities against fixed (h,r) queries.
- Returns:
  shape: (batch_size, B * n_tails) if BaseScoreFunction.negative_sample_sharing, else (batch_size, n_tails). Scores of broadcasted triples.
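For score_tails, the (h, r) query is processed once and then compared against every candidate tail. A minimal NumPy sketch of that final step, where `hidden` stands in for the output of the convolution + linear stack applied to (head, relation) (all names and dimensions are illustrative only):

```python
import numpy as np

batch_size, n_tails, emb = 2, 5, 4
rng = np.random.default_rng(2)
hidden = rng.normal(size=(batch_size, emb))          # one processed (h, r) query per triple
tails = rng.normal(size=(batch_size, n_tails, emb))  # candidate tails per query

# Batched dot product: each query scored against its n_tails candidates.
scores = np.einsum("be,bte->bt", hidden, tails)
print(scores.shape)  # (2, 5)
```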