Positive and negative edges in train_test_split_edges from the PyTorch Geometric package
I am trying to find an explanation for the negative and positive edges in a graph, as described in the documentation of the PyTorch Geometric function train_test_split_edges. According to the docs, the function splits a graph into "positive and negative train/val/test edges". What is the meaning of a positive edge, or of a negative edge for that matter? From the code, the positive edges seem to be the edges in the upper triangle of the graph's adjacency matrix, and the negative edges the edges in the lower triangle. So if (1, 0) is considered a positive edge, then in an undirected graph (0, 1) is a negative edge. Am I correct? I cannot find anything about the meaning of positive/negative edges when it comes to graphs.

Valiant answered 2/4, 2021 at 0:27 Comment(0)
In link prediction tasks, it is standard to treat edges that exist in the graph as positive examples, and node pairs with no edge between them (non-existent edges) as negative examples.

That is, during training/prediction you feed the network a subset of all possible edges of the complete graph, and the associated targets are:

  1. "this is a real edge" (positive), and
  2. "this is not a real edge" (negative).
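The idea can be sketched without PyTorch Geometric: positives are the graph's actual edges, and negatives are sampled from node pairs that are not connected (a toy illustration of the concept, not the library's implementation):

```python
import random

# Toy undirected graph on 4 nodes; existing edges are the positive examples.
edges = {(0, 1), (1, 2), (2, 3), (0, 3)}
num_nodes = 4

# Positive examples: edges that actually exist in the graph.
positives = sorted(edges)

# Negative examples: node pairs with NO edge between them,
# drawn from the complement of the edge set.
all_pairs = {(i, j) for i in range(num_nodes) for j in range(i + 1, num_nodes)}
non_edges = sorted(all_pairs - edges)
negatives = random.sample(non_edges, k=len(non_edges))

print(positives)       # [(0, 1), (0, 3), (1, 2), (2, 3)] -> target "real edge"
print(sorted(negatives))  # [(0, 2), (1, 3)] -> target "not a real edge"
```

In PyTorch Geometric itself, train_test_split_edges stores these splits as attributes such as train_pos_edge_index and test_neg_edge_index on the data object.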
Grigson answered 2/4, 2021 at 11:37 Comment(3)
Are self-loops considered positive or negative examples, or are they not considered at all? — Valiant
@Valiant that depends on the specific problem. — Grigson
In autoencoders, where one is interested in the reconstruction loss for existing edges, does it make sense to include negative edges in testing? — Dilator
