Abstract:
Logical reasoning over Knowledge Graphs (KGs) for first-order logic (FOL) queries performs query inference over KGs with logical operators, including conjunction, disjunction, existential quantification, and negation, in order to closely approximate true answers in embedding spaces. However, most existing work imposes strong distributional assumptions (e.g., a Beta distribution) to represent entities and queries as presumed distributional shapes, which clearly limits expressive power. Moreover, query embedding is challenging due to the relational complexities of multi-relational KGs, such as symmetry, anti-symmetry, and transitivity. To bridge this gap, we propose a logical query reasoning framework, Line Embedding (LinE), for FOL queries. First, to relax the distributional assumptions, we introduce a logic space transformation layer, a generic neural function that converts embeddings from a probabilistic distribution space to the LinE embedding space. Second, to tackle multi-relational and logical complexities, we formulate neural relation-specific projections and individual logical operators that faithfully ground LinE query embeddings in logical regularities and KG factoids. Lastly, to verify LinE embedding quality, we generate a FOL query dataset from WordNet, which richly encompasses hierarchical relations. Extensive experiments show that LinE achieves considerable performance gains in both generalization reasoning and logical entailment settings on three benchmarks (Freebase, NELL, and WordNet) against strong existing baselines, particularly for multi-hop relational queries and negation-related queries.