Knowledge graph embedding models (KGEMs) project entities and relations from knowledge graphs (KGs) into dense vector spaces, enabling tasks such as link prediction and recommendation systems. However, these embeddings typically suffer from a lack of interpretability and struggle to represent entity similarities in a way that is meaningful to humans. To address these challenges, we introduce InterpretE, a neuro-symbolic approach that generates interpretable vector spaces aligned with human-understandable entity aspects. By explicitly linking entity representations to their desired semantic aspects, InterpretE not only improves interpretability but also enhances the clustering of similar entities based on these aspects. Our experiments demonstrate that InterpretE effectively produces embeddings that are interpretable and improve the evaluation of semantic similarities, making it a valuable tool in explainable AI research by supporting transparent decision-making. By offering insights into how embeddings represent entities, InterpretE enables KGEMs to be used for semantic tasks in a more trustworthy and reliable manner.
Knowledge graphs (KGs) are structured representations of real-world entities and their relationships, organised in the form of nodes and edges, where nodes represent entities and edges illustrate the relationships between them. KGs have gained significant attention for their applications in tasks like question-answering, information retrieval, and recommender systems (Baek et al., 2023; Dietz et al., 2018; Hou & Wei, 2023; Yasunaga et al., 2021). Despite the availability of large amounts of source data and the inclusion of millions of facts, KGs remain incomplete, with missing entities or facts about entities. Knowledge graph embedding models (KGEMs) have been proposed to address this limitation. Since the early 2010s, significant advancements have been made in developing KGEMs, which aim to project entities and relations in KGs into a low-dimensional latent vector space. This representation enables machine readability and manipulation of KG data while preserving the relationships between entities. In doing so, KGEMs offer a sub-symbolic way of representing both entities and their connections within the original graph (Boschin et al., 2022).
Several types of KGEMs exist, such as translation-based models, e.g., TransE (Bordes et al., 2013) and TransH (Wang et al., 2014), and semantic matching models, e.g., RESCAL (Nickel et al., 2011) and ComplEx (Trouillon et al., 2016). These models have proven useful in various tasks, including link prediction (Rossi et al., 2021), entity alignment (Sun et al., 2020a), recommendation systems (Ristoski & Paulheim, 2016) and so on (see Wang et al., 2017 for an overview). Although KGEMs were primarily designed and trained for the task of link prediction or triple completion in KGs, there is a widespread belief that these models can also effectively capture similarities between entities, suggesting that similar entities will naturally cluster together in the vector space. As a result, KGEMs have been widely adopted for semantic tasks, including entity or relation similarity and conceptual clustering (Gad-Elrab et al., 2020; Kalo et al., 2019; Sun et al., 2020b). The assumption that KGEMs possess strong semantic capabilities was first called into question by Jain et al. (2021). In their study, the authors conducted simple yet systematic experiments, revealing that entities belonging to the same type or ontological class do not consistently cluster together in the vector space, except for the most basic entity types such as person and organisation. Subsequently, other recent studies have delved into this further, arriving at similar conclusions (Alshargi et al., 2019; Hubert et al., 2024). These findings cast doubt on the generalisability and utility of KGEMs for tasks that rely on capturing semantic relationships effectively.
A fundamental challenge for KGEMs in capturing semantic properties arises from the complexity of the underlying data. Entities in a KG possess diverse ‘aspects’ in terms of their attributes as well their relationships with other entities, all of which significantly impact their vector representations. This complexity makes it exceedingly difficult to identify the specific factors that shape the distribution of vectors within the embedding space. Given that entities have different types and numbers of connections in the KG, and the learned vectors span hundreds of dimensions, there is no clear correspondence between entity aspects and the dimensions of the resulting vectors. This absence of a direct mapping leads to a lack of semantic interpretability, making it difficult to understand why certain vectors in the embedding space are similar or to determine which entity aspects influence their representations.
Although a formal definition of interpretability has been elusive in machine learning (Murdoch et al., 2019), in this work, we align with the notion of model-based interpretability presented by Murdoch et al. (2019) as ‘models that readily provide insight into the relationships they have learned’, drawing on a more traditional definition that puts emphasis on the human-understandability of a model’s functionality (Porta, 2016).
While the ability to represent complex data in low-dimensional spaces allows for large-scale vector manipulations, and is certainly a desirable trait in KGEMs for enabling generalisation and their effective application in tasks such as link prediction, this same factor contributes to the poor semantic interpretability of these models. Nevertheless, KGEMs are still widely used in different semantic tasks, making the ability to capture and interpret the semantic features of underlying entities highly desirable. This work aims to bridge this gap by mapping the semantics of the entities with the dimensions in the vector representations of these entities, enhancing the interpretability of these embeddings and improving their utility for semantic tasks.
In this article, we propose a novel neuro-symbolic approach InterpretE that explicitly connects the embedding vectors to the desired task-driven or user-selected aspects of the KG entities. Taking inspiration from previous works on conceptual spaces (Gärdenfors, 2000), we accomplish this by deriving new vector spaces (from the vectors of a given KGEM) with interpretable dimensions that can be understood in terms of the human-understandable aspects of the entities. This understandability can help enable informed decisions in downstream semantic tasks (e.g., recommendation systems and entity clustering), debugging and comparing the models, as well as understanding hidden biases (Simhi & Markovitch, 2023). An overview of the proposed approach is shown in Figure 1. While several previous works have proposed KGEMs that attempt to capture the semantics of entities in terms of ontological information (Holter et al., 2019; Smaili et al., 2018), these approaches are limited to encapsulating only the ontological classes or types of entities (e.g., whether an entity is a person or an organisation). They are not designed to account for other relevant or application-specific aspects of the entities, for instance, the location where a person was born or the awards received by a scientist. In contrast, our approach allows for the incorporation of a broader range of existing and interesting aspects from KG data, especially for entities. Through various experiments, we demonstrate that the vector spaces generated by InterpretE can effectively encapsulate any desired semantic aspects from the KG. Moreover, our method is highly flexible, accommodating a diverse array of entity aspects in terms of both quantity and type.
Overview of the proposed InterpretE method.
It is to be noted here that InterpretE serves as a way to derive alternative embeddings from existing methods, specifically to improve their interpretability for applications where such a feature is desirable and necessary. As such, InterpretE embeddings do not intend to compete with or outperform existing KGEMs on tasks such as link prediction; rather, they serve as complementary embeddings for semantic tasks. The proposed InterpretE method serves as an effective and highly customisable way to obtain these alternative embeddings that can be tailored to fit any downstream semantic task. In view of this, the evaluation of the approach is presented in terms of the quality of the resulting clusters in the derived vector space, as well as in terms of the semantic similarity of the corresponding entities. We also make the code publicly available1 to promote further research in this important direction.
Our work is situated within the broader context of explainable AI (XAI) research, where, with the popularity of large language models (LLMs) and their increasing integration across various applications, the importance of transparency and interpretability in these models has garnered significant attention. As large models become more widespread in fields such as healthcare, finance, and autonomous systems, understanding how these models make decisions has become crucial. The importance of XAI stems from concerns related to trust, fairness, and accountability, especially given that deep learning models and KGEMs are often regarded as ‘black boxes’. To the best of our knowledge, the InterpretE framework introduced in this work represents the first effort to address this issue for KGEMs in terms of restoring semantic interpretability to entity vectors by explicitly mapping these vectors to underlying, human-understandable aspects of the entities.
The salient contributions of our work can be summarised as follows.
Presentation of a novel neuro-symbolic approach called InterpretE that can derive interpretable embeddings (from any KG embedding model) for the KG entities.
Description of the data-driven process of identifying and selecting the desired user-selected or task-oriented entity aspects from KG datasets.
Demonstration that the embeddings generated by InterpretE encapsulate the desired semantic aspects of the underlying entities, and that InterpretE is highly flexible in terms of the number and types of aspects it can work with, making it scalable across datasets and the requirements of downstream applications.
Evaluation of the approach and the resulting embeddings, both in terms of properties of the vector space and the measured semantic similarity of the entities, illustrating that InterpretE indeed leads to improved interpretability for KG embeddings.
The rest of the article is organised as follows: Section 2 introduces key concepts and background that is essential for understanding the proposed method in detail. Section 3 provides a comprehensive review and comparison with related work, highlighting the ongoing challenges addressed by our approach. In Section 4, we describe the selection process for entity aspects or features from the KG datasets, followed by a formal description of the InterpretE method in Section 5. Section 6 presents the method’s evaluation through various experiments, demonstrating its effectiveness and assessing the interpretability of the derived vectors, supported by illustrative plots and a discussion of results. Finally, Section 7 concludes the article and suggests directions for future work.
Preliminaries
Knowledge Graphs
A KG is a directed graph that represents knowledge in a structured format. It consists of nodes that correspond to real-world entities, such as people or cities, and edges that represent the relationships between these entities. The edges are labeled to indicate the nature of these relationships. More formally, a KG can be represented as G = (E, R, T), where E is the set of all entities, R is the set of relations and T is the set of triples (h, r, t) such that h ∈ E, r ∈ R and t ∈ E. Each triple (h, r, t) indicates a relationship r between the head h and the tail t. KGs play a crucial role in modelling networks of interconnected objects, such as citations, relationships among individuals, and more. Their structured representation facilitates semantic understanding, enabling both machines and humans to interpret complex relationships and contexts within the data. This capability has led to their increasing adoption in diverse fields, such as computer vision, where they have been shown to enhance performance through techniques like graph convolutional networks (Kipf & Welling, 2017). KGs are particularly useful in various applications in natural language processing. They significantly enhance question-answering systems by allowing the pruning of irrelevant information, which reduces the search space and accelerates the retrieval of accurate answers. For example, the QA-GNN framework (Yasunaga et al., 2021) showcases how KGs can improve the efficiency and effectiveness of question-answering tasks.
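Concretely, this triple-based view of a KG can be sketched in a few lines of Python; the entity and relation names below are illustrative stand-ins, not drawn from any specific dataset:

```python
# A KG as a set of (head, relation, tail) triples.
triples = {
    ("Marie_Curie", "wasBornIn", "Warsaw"),
    ("Marie_Curie", "hasWonPrize", "Nobel_Prize_in_Physics"),
    ("Warsaw", "isLocatedIn", "Poland"),
}

# The entity set E and relation set R induced by the triple set T.
entities = {h for h, _, _ in triples} | {t for _, _, t in triples}
relations = {r for _, r, _ in triples}

# Simple traversal: all tails reachable from a head via a given relation.
def objects_of(head, relation):
    return {t for h, r, t in triples if h == head and r == relation}
```

Representing the graph as a plain set of triples mirrors the formal definition directly; real KG toolkits add indices over heads, relations and tails to make such lookups efficient.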
Knowledge Graph Embeddings
KGEMs aim to represent entities and relations from KGs as continuous vectors or matrices, known as embeddings (see Cao et al., 2024; Wang et al., 2017 for an overview). The main purpose of learning these embeddings is to simplify downstream tasks, while preserving the underlying structure of the KG. A scoring function is used to evaluate how likely a predicted entity is to accurately complete a triple, ensuring that the embeddings maintain the integrity of the original graph’s relationships. Notable types of KGEMs are as follows.
Translation Distance Models
These models operate under the assumption that adding the vectors of the head and relation will result in a vector close to that of the tail. One of the earliest examples of this type of KGEMs is TransE (Bordes et al., 2013). Formally, if h, r and t denote the vectors of the head, relation and tail, respectively, then it holds that h + r ≈ t. To ensure the accuracy of the triple, the following scoring function must be minimised:
f(h, r, t) = ‖h + r − t‖,
where h, r and t are the vectors of the head, relation and tail, respectively, all residing in the same shared embedding space.
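As a concrete illustration, the TransE score under the L2 norm can be sketched in NumPy as follows; the vectors are hand-picked toy values, not trained embeddings:

```python
import numpy as np

def transe_score(h, r, t, norm=2):
    """TransE plausibility score ||h + r - t||; lower means more plausible."""
    return np.linalg.norm(h + r - t, ord=norm)

h = np.array([0.1, 0.4, -0.2])
r = np.array([0.3, -0.1, 0.5])
t = np.array([0.4, 0.3, 0.3])   # approximately h + r, so a plausible triple

corrupt_t = np.array([-1.0, 2.0, 0.0])
# The true tail should score lower (better) than a corrupted one.
assert transe_score(h, r, t) < transe_score(h, r, corrupt_t)
```

During training, a margin-based ranking loss pushes scores of observed triples below those of corrupted ones generated by replacing the head or tail.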
However, TransE struggles to capture complex relationships such as one-to-many, many-to-one and many-to-many. TransH (Wang et al., 2014) addresses this limitation by introducing a relation-specific hyperplane for each relationship, allowing entities connected through that relationship to be distinguished based on their unique semantics within that context. TransR (Lin et al., 2015) builds on a similar concept but defines relation-specific spaces instead of hyperplanes. TransR is further refined by TransD (Ji et al., 2015), which uses two embedding vectors for each entity and relation and introduces a mapping matrix that generates two distinct mapping matrices for the head and tail entities.
Semantic Matching Models
These models employ a scoring mechanism based on vector similarity, where entities are represented as vectors and relations as matrices. The core assumption is that the transformation of the head embedding will closely approximate the tail embedding, which is formalised as M_r h ≈ t, where h and t are the vectors of the head and tail, respectively, and M_r is the matrix representing the relation used for mapping. RESCAL (Nickel et al., 2011) utilises a bilinear scoring function, where each relation is represented as a matrix, and the mapping between the head and tail vectors is computed using this matrix. DistMult (Yang et al., 2015) simplifies RESCAL by constraining the relation matrix to be diagonal, which reduces the number of trainable parameters. ComplEx (Trouillon et al., 2016) extends this approach by introducing complex-valued embeddings, enabling the model to capture asymmetrical relations effectively.
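The bilinear scoring of RESCAL and its diagonal restriction in DistMult can be sketched as follows (toy vectors, purely illustrative); the final assertion highlights the head-tail symmetry of DistMult that motivates the complex-valued embeddings of ComplEx:

```python
import numpy as np

def rescal_score(h, M_r, t):
    """RESCAL bilinear score h^T M_r t; higher means more plausible."""
    return h @ M_r @ t

def distmult_score(h, r_diag, t):
    """DistMult restricts M_r to a diagonal matrix, stored as a vector."""
    return np.sum(h * r_diag * t)

h = np.array([0.2, 0.5, 0.1])
t = np.array([0.4, 0.3, 0.9])
r_diag = np.array([1.0, 0.5, 2.0])

# DistMult is exactly RESCAL with M_r = diag(r_diag), but with far fewer
# trainable parameters per relation (d instead of d^2).
assert np.isclose(distmult_score(h, r_diag, t),
                  rescal_score(h, np.diag(r_diag), t))

# DistMult is symmetric in h and t, so it cannot model asymmetric relations.
assert np.isclose(distmult_score(h, r_diag, t), distmult_score(t, r_diag, h))
```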
Among other types of models, ConvE (Dettmers et al., 2018) was the first to predict missing links in KGs using Convolutional Neural Networks (CNNs). Unlike fully connected dense layers, CNNs can train with fewer parameters, allowing them to capture complex non-linear relationships. ConvE establishes local interactions across multiple dimensions between different entities, enabling it to model intricate patterns more effectively.
Conceptual Spaces and Interpretable Dimensions
According to Gärdenfors (2000), a conceptual space is a multidimensional framework where each dimension represents a different quality or property of a concept. These dimensions serve to describe various aspects of a concept in a structured and meaningful way. For example, when considering animals, dimensions such as height, weight and colour could represent specific quality dimensions that collectively define the concept of ‘animal’. These dimensions are fundamental in understanding how concepts are represented and compared within the space. Each dimension in a conceptual space is assumed to have its own inherent structure. For instance, some dimensions, like time or weight, are one-dimensional, represented by real, non-negative values. For more complex attributes, such as colour, Gärdenfors explains that the mental model can be represented by three dimensions: hue (circular), saturation and brightness (linear), creating a cognitive conceptual space where different points correspond to specific colours.
In this context, interpretable dimensions (Derrac & Schockaert, 2015) refer to the axes or directions in the conceptual space that correspond to human-understandable properties of entities. For example, in a conceptual space representing animals, the interpretable dimensions could be height, weight, and speed. Each of these dimensions has a clear and intuitive meaning, making it easier to relate the points in the space to real-world attributes. Interpretable dimensions are critical because they allow us to map abstract vectors or mathematical representations back to meaningful, semantic concepts. To understand the semantics of conceptual spaces, consider that a language can be interpreted as a projection onto a conceptual space. In this projection, distinct elements of the language are represented as vectors, and predicates within the language correspond to regions or areas in the conceptual space. These regions can be primary, representing fundamental concepts, or secondary, derived from other regions. In a conceptual space, every point represents a possible individual, with each point consistently displaying well-defined properties based on its position along the interpretable dimensions. This structure allows for clear comparisons and distinctions between concepts, helping to identify similarities and differences based on their positions within the space (see Derrac & Schockaert, 2015 for further details).
Related Work
Explainability in Large Models
Recently, the majority of embedding spaces have emerged from the training of LLMs. However, Simhi and Markovitch (2023) highlight a significant limitation of such representations: they often exceed human comprehension. To address this issue, they propose a new method for generating a conceptual space with dynamic granularity based on demand. Their work also introduces a novel assessment technique demonstrating that the conceptualised vectors indeed reflect the semantics of the original latent representations, validated through either human raters or LLM-based raters. In relation to large models, Huben et al. (2023) discuss the concept of polysemanticity, which poses a challenge to the interpretation of neural networks. They attempt to reconstruct the internal activations of the language model to tackle this issue, which arises from neural networks having fewer neurons than the features they represent. While this line of research is important within the framework of explainable AI (Arrieta et al., 2020), our work focusses on KGEMs. Whereas Huben et al. (2023) essentially propose a way to reverse-engineer the monosemantic features from a given network, our intention with InterpretE is instead to derive new embedding vectors for the KG entities while aligning them to a set of customisable, pre-defined and desirable aspects of these entities that may be user-defined or task-driven. By striving to make representations more understandable and interpretable, we aim to address the challenges faced in downstream applications where semantics are critical, such as entity similarity and recommendation systems.
Semantics in KG Embeddings
KGEMs provide sub-symbolic representations of entities and relations in a KG, and enable vector manipulations of the data for tasks such as KG completion and triple classification. In recent literature, several critical works have questioned the widely-held assumption that KGEMs produce semantically meaningful representations of underlying entities (Hubert et al., 2024; Jain et al., 2021). In a popular previous work, Jain et al. (2021) investigated the degree to which similar entities correspond to similar vectors and concluded that this does not hold true universally. They demonstrated that entity embeddings derived from KGEMs often struggle to effectively discern entity types within a KG, with simpler statistical methods offering comparable performance. Additionally, Ilievski et al. (2024) observed consistent under-performance of KGEMs compared to simpler heuristics in tasks reliant on similarity, particularly within word embeddings. The authors argue that many properties that are heavily relied upon by KGEMs are not conducive to determining similarity, thereby introducing noise that ultimately undermines performance. Hubert et al. (2024) challenge the widely held belief that entity similarity within a graph is adequately represented in the embedding space. Their comprehensive tests assess the capacity of KGEMs to effectively group related entities and investigate the underlying characteristics of this phenomenon. However, these previous studies primarily focus on questioning the validity of the aforementioned assumption without offering concrete solutions to address the identified shortcomings, which is the focus of our work.
KG Embeddings and Ontologies
There has been considerable work on embedding ontologies in the literature (Chen et al., 2021; Guan et al., 2019; Hao et al., 2019a; Holter et al., 2019; Smaili et al., 2018, 2019). Recent techniques have aimed to develop robust and efficient methods for embedding OWL (Web Ontology Language) and OWL2 ontologies that effectively express their semantics. Holter et al. (2019) computed embeddings for OWL2 ontologies by projecting ontology axioms into a graph and creating a corpus of phrases through random walks over this graph. A neural language model generates concept embeddings from this corpus. This work addresses limitations in earlier approaches (Smaili et al., 2018, 2019) that treated each axiom as a sentence, leading to issues such as insufficient corpus size for small to medium ontologies, noise introduced by OWL constructs, and Word2Vec’s inability to differentiate between logically similar sentences. To overcome these challenges, the authors developed a system that creates a graph from the ontology, navigates the ontology graph using various techniques, constructs a corpus of phrases based on these walks and derives concept embeddings from this corpus.
Following this, Chen et al. (2021) introduced OWL2Vec*, an ontology embedding technique based on random walks and word embeddings that captures the semantics of an OWL ontology by considering its semantic information, logical constructors, and graph structure. They expanded OWL2Vec to create OWL2Vec*, a more robust embedding system. OWL2Vec* navigates the graph forms of an OWL (or OWL2) ontology to generate a corpus of three documents that encapsulate various aspects of the ontology’s semantics: (i) the graph structure and logical constructors, (ii) lexical (syntactic) information, and (iii) a combination of (i) and (ii). Ultimately, OWL2Vec* employs a word embedding model to produce word and entity embeddings from the generated corpus. While these works primarily focus on embedding the semantics represented in ontologies, their goals differ significantly from ours: they do not aim to establish clear connections between the embedding space and the underlying concepts in the ontology. Another line of work concerns creating embeddings specifically for ontologies, with the goal of enabling ontology-specific tasks such as ontology learning, reasoning and ontology-mediated question answering (Jackermeier et al., 2024; Xiong et al., 2022; Yang et al., 2025). Ontology embedding methods have also been used for vision tasks such as few-shot learning and image classification (Jayathilaka et al., 2021a, 2021b).
There are yet other works that are concerned with the integration of ontological knowledge directly into embedding models (e.g., Chen et al., 2021; d’Amato et al., 2021; Garg et al., 2019; Hao et al., 2019b; Krompaß et al., 2015; Wiharja et al., 2020; Ziegler et al., 2017), typically through modifications to the loss function during training. Indeed, while these works have the same motivation of improving the semantics in KG embedding models by leveraging the information in the ontology concepts and roles, contrary to our work, these works do not focus on the interpretability of the embedding spaces that they generate. While adding ontological information during the training of embeddings has been shown to enhance the semantic capabilities of the embeddings in some cases (Jain et al., 2021), this does not automatically entail interpretability in terms of human-understandable aspects of the entities for the generated embedding space. Moreover, the InterpretE approach is not limited to the ontological classes of KG entities. It can derive interpretable dimensions corresponding to various relevant aspects, including entity attributes (e.g., gender for person entities, genre for movie entities) and relationships with other entities (e.g., bornIn [location] for person entities, locatedIn [location] for organisation entities), or any combination thereof (which may be user-defined or task-driven). In fact, InterpretE can be applied to any of the aforementioned KG embedding techniques, generating interpretable embedding spaces with dimensions reflecting desired semantic aspects.
Interpretable Dimensions
Various approaches have focussed on constructing interpretable spaces using multiple data sources, primarily text but also images (Bouraoui et al., 2020, 2022; Derrac & Schockaert, 2015; Simhi & Markovitch, 2023; Zhu et al., 2021). As discussed in Section 2.3, conceptual spaces (Gärdenfors, 2000) represent concepts through cognitively meaningful features known as quality dimensions. These dimensions are typically derived from human judgments and serve as an intermediary representation layer between neural and symbolic representations. Derrac and Schockaert (2015) discuss techniques that facilitate a looser integration between embeddings and symbolic knowledge, deriving similarity and other forms of conceptual relatedness from vector space embeddings to support adaptable reasoning using ontologies. In another work, Bouraoui et al. (2020) demonstrate that incorporating conceptual neighbours leads to more accurate region-based representations through a straightforward technique for identifying them. Bouraoui et al. (2022) illustrate how a large corpus of text documents can be leveraged to learn essential semantic relations. While these approaches show promise for advancing explainable AI, they have not been extended to more complex datasets like KGs and their representations using KGEMs. In contrast, our proposed approach represents a first step toward identifying interpretable dimensions for such models, focussing on the underlying aspects of KG entities and thereby deriving vector spaces that are more human-understandable.
Data Analysis and Selecting Entity Aspects
In Section 5, the InterpretE method will be explained as a generalised and scalable process for obtaining entity aspects or entity features2 from a given KG dataset, as well as deriving interpretable entity vectors from it. In this section, we focus on dataset acquisition, specifically providing a detailed explanation of the data-driven analysis conducted for two KG benchmark datasets. This analysis aims to illustrate the nuances of entity feature extraction for real-world entities. To derive and categorise aspects for different entities in the KG, their type (or ontological class) information was essential. As such, we leveraged KG datasets with associated ontologies, focussing on subsets of YAGO (YAGO3-10; Mahdisoltani et al., 2013) and Freebase (FB15k-237; Toutanova & Chen, 2015). Additionally, we reused WordNet-based entity type mappings from Jain et al. (2021) to obtain the ontological classes for the entities. As a first step, the entities in the KGs were categorised by their ontological classes using WordNet types such as persons, organisations and locations. Next, for each entity type, the most representative relations were selected and their values were categorised based on their distribution in KG triples.
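The selection process described above amounts to two frequency counts: relations per entity class, and tail values per (class, relation) pair. A minimal sketch follows; the triples and type mapping are toy stand-ins for the actual YAGO3-10/FB15k-237 data and WordNet mappings:

```python
from collections import Counter

# Toy stand-ins for KG triples and the WordNet-based type mapping.
triples = [
    ("Messi", "playsFor", "FC_Barcelona"),
    ("Messi", "wasBornIn", "Rosario"),
    ("Curie", "hasWonPrize", "Nobel_Prize"),
    ("FC_Barcelona", "isLocatedIn", "Spain"),
]
entity_type = {"Messi": "person", "Curie": "person",
               "FC_Barcelona": "organisation"}

def top_relations(entity_class):
    """Most represented relations among triples whose head has this class."""
    counts = Counter(r for h, r, _ in triples
                     if entity_type.get(h) == entity_class)
    return counts.most_common()

def top_values(entity_class, relation):
    """Most represented tail values for a given (class, relation) pair."""
    counts = Counter(t for h, r, t in triples
                     if entity_type.get(h) == entity_class and r == relation)
    return counts.most_common()
```

Run over the full benchmark triples, these two counts yield exactly the kind of distributions reported in the figures below (e.g., the dominant relations for person entities, or the dominant countries for isLocatedIn).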
YAGO
An overview of the dataset analysis in terms of the most representative entity types for the YAGO3-10 dataset is shown in Figure 2. The YAGO3-10 dataset is dominated by entities of the class person. In Figure 2, it can be seen that while person is the most frequent class, various subclasses of person (at different levels of hierarchy in the ontology structure) are also frequent. For instance, player is a subclass of person, while football_player is a subclass of player. This illustrates that the person type is extensively represented throughout the dataset, ensuring sufficient data availability for this type in subsequent experiments, as the number of triples associated with it is substantial.
Top 10 most represented entity classes in YAGO3-10.
When analysing a given entity class, emphasis was placed on identifying the most represented relations. High-frequency relations are expected to be effectively captured by the embedding model, encapsulating relevant relational information within the final entity embeddings. The most significant relations for the person entities are shown in Figure 3. In this context, the relations isAffiliatedTo and playsFor emerge as the most represented for the person class. It is interesting to note that an analysis of these relations in the YAGO3-10 dataset revealed that 87.65% of the triples associated with playsFor were identical to those linked with isAffiliatedTo. Due to this redundancy, only one of these relations was retained in the experiments to reduce overlap.
Top 10 most represented relations for person entities in YAGO3-10.
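The reported 87.65% redundancy corresponds to a simple overlap of (head, tail) pairs between the two relations; a minimal sketch of that check, using hypothetical toy triples rather than the actual YAGO3-10 data:

```python
def relation_overlap(triples, rel_a, rel_b):
    """Fraction of rel_a's (head, tail) pairs that also appear under rel_b."""
    pairs_a = {(h, t) for h, r, t in triples if r == rel_a}
    pairs_b = {(h, t) for h, r, t in triples if r == rel_b}
    if not pairs_a:
        return 0.0
    return len(pairs_a & pairs_b) / len(pairs_a)

# Toy example: both playsFor pairs are duplicated under isAffiliatedTo.
triples = [
    ("Messi", "playsFor", "FC_Barcelona"),
    ("Messi", "isAffiliatedTo", "FC_Barcelona"),
    ("Xavi", "playsFor", "FC_Barcelona"),
    ("Xavi", "isAffiliatedTo", "FC_Barcelona"),
]
```

A high overlap like this signals that one of the two relations adds little information and can be dropped from feature selection.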
Most represented relations for class politician in YAGO3-10.
Most represented relations for class scientist in YAGO3-10.
This process was repeated for other classes, as shown in Figure 4 and Figure 5. To perform an in-depth analysis of various relations, the most represented values for a given relation (i.e., entities or values serving as the tail in the triplets) were examined, with the intention of finding out the values that were prominent for specific relations. For the experiments, we considered these values, coupled with the associated relation, to serve as the entity aspects (as described in Section 5). As shown in Figure 6, for entities of type organisation and the relation isLocatedIn, certain countries appeared frequently; for example, the United States accounted for 57.8% of all triples that pertained to organisation entities with the relation isLocatedIn.
Most represented values for class organisation with relation isLocatedIn in YAGO3-10.
The different types of values associated with each entity-relation pair were also examined, as illustrated in Figures 7 and 8. This analysis was aimed at informing the design of potential processes for transforming these values. It was found to be particularly valuable in instances where the distribution of values was nearly uniform, comprising a wide range of distinct entries. By understanding the type of each value, appropriate transformation strategies could be implemented. For instance, for the relation isAffiliatedTo, it was found that the most frequently represented value type was club. With this insight, methods to categorise the clubs based on various criteria, such as their geographical locations (e.g., country, continent…) or the specific sports they are associated with, could be conceptualised. Different experiments could be designed to capture such features as desired.
Most represented values for class person with the relation isAffiliatedTo in YAGO3-10.
Most represented values for class scientist with the relation hasWonPrize in YAGO3-10.
Freebase
Similar to YAGO3-10, we conducted a statistical analysis to select features for the FB15K-237 benchmark dataset. First, the most represented classes in the dataset were identified, as shown in Figure 10 (without any distinction as per their hierarchical levels in the ontology). For each type considered, we identified the most represented relations; this is detailed in Figure 11 for film entities as an example. Being the most represented relations, release_region and genre were focussed upon for the film class entities, as shown in Figure 12 and Figure 13. In a different example, Figure 9 shows the most frequent types of professions for person class entities in this dataset. As with YAGO3-10, this dataset study serves as a guideline for the experimental design, and similar figures were generated across various classes, relations, and values to extract the most pertinent and representative information from the dataset.
Most represented value types for class person in FB15K-237 with the relation profession.
Top 10 most represented entity types in FB15K-237.
Top 10 most represented relations for film entities in FB15K-237.
Most represented values for film class with relation release_region in FB15K-237.
Most represented values for film class with relation genre in FB15K-237.
It is interesting to note here that different levels of abstraction were considered for the features of the entities while designing the experiments. For example, for person entities, the relation wasBornIn (e.g., wasBornIn Paris) was found to be significant. One experiment mapped locations from specific cities to their respective countries (e.g., France), while another grouped cities by continent (e.g., Europe), allowing for evaluations across varying abstraction levels. (These experiments are presented and discussed in Section 6.) This adaptable process was primarily driven by the availability of sufficient data points for the entity features within the KG. Once features were defined, entities were labeled with binary values indicating the presence or absence of each feature. This labeled data was subsequently used for SVM training in the next phase.
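As a small illustration of this relabelling at different abstraction levels, the following sketch uses hypothetical lookup tables (a real pipeline would derive them from the KG itself or an external gazetteer); entity and city names are illustrative only:

```python
# Hypothetical lookup tables for illustration only.
city_to_country = {"Paris": "France", "Lyon": "France", "Tokyo": "Japan"}
country_to_continent = {"France": "Europe", "Japan": "Asia"}

born_in = {"e1": "Paris", "e2": "Tokyo", "e3": "Lyon"}  # entity -> wasBornIn city

def abstract(city, level):
    """Map a birth city to the chosen abstraction level (country or continent)."""
    country = city_to_country[city]
    return country if level == "country" else country_to_continent[country]

# Binary labels for the feature "wasBornIn Europe" at continent level.
labels = {e: int(abstract(c, "continent") == "Europe") for e, c in born_in.items()}
print(labels)  # {'e1': 1, 'e2': 0, 'e3': 1}
```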
LLMs for Automated Extraction of Entity Aspects
An alternative method for deriving entity aspects from the KGs was explored using large language models (LLMs), owing to their promise of capturing complex relationships in data. In this subsection, we give an overview of how LLMs were used for feature selection via a retrieval-augmented generation (RAG) pipeline (Lewis et al., 2020). The process began by converting the KG into plain text, where each triple was treated as a document chunk. These chunks were embedded into a vector space using the LlamaIndex framework (v0.10.28),3 enabling the construction of a vectorised database. Relevant chunks were then retrieved to enrich the prompt provided to the LLM (Mistral-7B; Jiang et al., 2023), with prompt design playing a crucial role in guiding the model. Figure 14 shows an example of the prompt template used to steer the LLM toward extracting salient quality dimensions from the KG. The outputs, as illustrated in Figure 15, include detailed lists of features and 2D vector projections capturing key relationships within the data. However, the LLM-generated answers were neither consistent nor explainable, which undermines our objective of deriving features that are both transparent and statistically grounded. Although our experiments with LLMs and RAG were promising, they ultimately fell short of providing the reliable, human-interpretable features needed for our analysis. We plan to revisit and expand on this automated approach in future work.
Prompt example used for the RAG pipeline.
Example results using LlamaIndex with Mistral-7B for Yago3-10.
InterpretE
In this section, we present the proposed InterpretE approach, which aligns vector representations with entity features by manipulating vector spaces to enhance interpretability. Figure 16 illustrates the components of this approach. The method begins with feature selection, leveraging data from the KG and associated ontology to select the desired entity features that are intended to be represented in the vector space. These features can be task-specific and context-driven (e.g., distinguishing players from politicians or grouping similar professions). The main idea is to guide the entity representation based on these features, ensuring that entities with shared features are positioned close together in the final derived space. The selected entity features (suppose $m$ in number), along with the original pre-trained entity vectors from a KGEM (typically having $d \ge 100$ dimensions), form the dataset. To generate interpretable embeddings, support vector machine (SVM) classifiers are trained on this dataset, with the entity features as guiding labels. This process transforms the $d$-dimensional vectors into $m$-dimensional InterpretE vectors, where each dimension explicitly corresponds to one of the entity features (as illustrated in the figure with a 2-dimensional space featuring Feature X and Feature Y). A formal representation of the approach, including the feature selection and SVM training process, is detailed below.
Different components of InterpretE.
Feature Selection
The InterpretE approach is centred around the representation of the desired aspects or features of the entities in the vector space. The purpose of the feature selection step is to extract one or more entity features (or combinations of multiple entity features) present in the KG and transform the latent vectors for these entities (from a KGEM) into a human-understandable and interpretable vector space representing these feature(s). We designed several experiments with different features to test the approach, as detailed previously in Section 4. Feature selection was crucial as it guided experiment design.4 Intuitively, the feature selection process focusses on choosing the most relevant relations and their values for each entity within a class. The most common relations per class are selected, with relations having statistically insignificant occurrences being excluded. For each selected relation, the values it takes are identified, such as specific locations for an ‘isLocatedIn’ relation. Then, binary features are created for the entities in a class, indicating whether a particular value for a given relation is present or not. These binary features are concatenated in different combinations to form the feature vector, ensuring that it represents the key characteristics of the entities while remaining compact and interpretable.
Formally, a KG is given as $\mathcal{G} = (\mathcal{E}, \mathcal{R}, \mathcal{T})$, where $\mathcal{E}$ is the set of entities, $\mathcal{R}$ is the set of relations and $\mathcal{T} \subseteq \mathcal{E} \times \mathcal{R} \times \mathcal{E}$ is the set of triples $(h, r, t)$ such that $h \in \mathcal{E}$, $r \in \mathcal{R}$, $t \in \mathcal{E}$, and head entity $h$ has relation $r$ with tail entity $t$.5 Also, let $V_r$ denote the set of values (tail entities) that are associated with a given relation $r$ in the set of triples $\mathcal{T}$.
Let $\mathcal{C}$ be the set of ontological classes (e.g., persons, organisations, locations) defined by the KG ontology (as previously shown in Figures 2 and 10 for Yago and Freebase resp.). For each class $c \in \mathcal{C}$, the entities of class $c$ are denoted as:

$\mathcal{E}_c = \{e \in \mathcal{E} \mid \mathrm{type}(e) = c\}$
Next, among all the relations associated with the entities of each class, a subset of the associated relations is chosen that is most representative of these entities (examples shown in Figures 3 and 11). To derive this, for each relation $r \in \mathcal{R}$, the number of occurrences, denoted as $n_c(r)$, is computed based on how frequently the relation appears in triples where the head entity belongs to $\mathcal{E}_c$. A threshold $\theta$ is used to select significant relations for each class $c$, meaning that only relations with a number (of occurrences) above the threshold are considered relevant. The set of selected relations for class $c$ is denoted as $\mathcal{R}_c$, and the condition for selecting a relation is:

$\mathcal{R}_c = \{r \in \mathcal{R} \mid n_c(r) > \theta\}$
It is important to note that the value of the threshold is highly dependent on the characteristics of the dataset and may vary for different relations within the dataset. Generally, the threshold was established such that values falling below this threshold were deemed irrelevant in comparison to the most frequent values. Thus, only those values exceeding the threshold were included in the analysis.
For each selected relation $r \in \mathcal{R}_c$, the corresponding values are typically given by:

$V_r = \{t \mid (h, r, t) \in \mathcal{T},\ h \in \mathcal{E}_c\}$
These values are then categorised into features; for example, for the relation isLocatedIn, the feature categories could be specific locations such as ‘Paris’ or ‘New York’.6
For each entity $e \in \mathcal{E}_c$, a binary feature is defined to indicate whether the entity has a certain value for a given relation. The feature vector is constructed by including all selected relations $r \in \mathcal{R}_c$ and their associated values $v \in V_r$. This is defined as:

$f_{r,v}(e) = \begin{cases} 1 & \text{if } (e, r, v) \in \mathcal{T} \\ 0 & \text{otherwise} \end{cases}$
This binary feature $f_{r,v}$ is defined for each selected relation $r$ and its corresponding selected values $v \in V_r$. Since different relations may have different numbers of relevant values, the total number of features for an entity depends on how many categories were obtained for each relation.
Some relations may contribute multiple binary features if multiple values are important (e.g., an isLocatedIn relation might have multiple locations, ‘Paris’, ‘London’ and so on, as relevant value categories). Other relations might contribute fewer binary features. This variability is reflected in the construction of the feature vector, ensuring that it captures all meaningful aspects of the entity while remaining interpretable. In this way, for each class $c$, a set of features $F_c = \{f_{r_1,v_1}, \ldots, f_{r_m,v_m}\}$ is defined, and the feature vector for each entity $e \in \mathcal{E}_c$ is given by:

$\mathbf{f}(e) = (f_{r_1,v_1}(e), \ldots, f_{r_m,v_m}(e))$
In this representation, the feature vector $\mathbf{f}(e)$ is the concatenation of the binary features $f_{r_i,v_i}(e)$, where each feature corresponds to a relation $r_i$ and its respective value $v_i$.
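The binary feature vector construction defined above can be sketched as follows; the triple store and (relation, value) pairs are illustrative toys, not drawn from the actual datasets:

```python
import numpy as np

# Toy triple store and selected (relation, value) pairs in a fixed order.
triples = {("e1", "isLocatedIn", "Paris"),
           ("e2", "isLocatedIn", "London"),
           ("e2", "hasGender", "female")}
features = [("isLocatedIn", "Paris"),
            ("isLocatedIn", "London"),
            ("hasGender", "female")]

def feature_vector(entity):
    """Concatenation of the binary features f_{r,v}(e) defined above."""
    return np.array([int((entity, r, v) in triples) for r, v in features])

print(feature_vector("e1"))  # [1 0 0]
print(feature_vector("e2"))  # [0 1 1]
```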
Dataset Curation
After selecting features, we pair the entities with their corresponding pre-trained KG embedding vectors $\mathbf{v}_e$ (from the KGEM). The labeled feature vectors form the training dataset:

$\mathcal{D}_c = \{(\mathbf{v}_e, \mathbf{f}(e)) \mid e \in \mathcal{E}_c\}$
The dataset is then used in the subsequent phases for deriving interpretable vector spaces.
Derivation of Interpretable Vectors
After curating the dataset with the extracted features for different types of entities, the next step is to derive interpretable vectors using SVM classifiers. For each feature identified during the feature extraction phase, a separate SVM classifier is trained to map the pre-trained KG embedding vectors to a new interpretable vector space. This approach builds on the methodology used by Derrac and Schockaert (2015), with the goal of learning dimensions in the vector space that correspond to human-understandable features of the entities.
Although the ground truth feature vectors $\mathbf{f}(e)$ are available for each entity, directly converting these into binary vectors would result in a significant loss of the detailed information encapsulated in the original KG embeddings $\mathbf{v}_e$. Instead, we employ SVM classifiers, which allow us to leverage the continuous information from the original embeddings while learning to separate entities based on the selected features.
For each feature $f_{r,v} \in F_c$ (where $F_c$ is the set of features defined for entities in class $c$), we define a binary classification problem. The binary label for each entity $e$ is derived from the feature function:

$y_e = f_{r,v}(e) \in \{0, 1\}$

with labels mapped to $\{-1, +1\}$ for SVM training, i.e., $\tilde{y}_e = 2y_e - 1$.
A separate SVM classifier is trained for each feature $f_{r,v}$, using the KG embedding vectors $\mathbf{v}_e$ as input. The objective of the SVM is to find a hyperplane that best separates the entities possessing the feature from those that do not. Formally, the SVM optimisation problem is defined as follows:

$\min_{\mathbf{w}_{r,v},\, b_{r,v},\, \boldsymbol{\xi}} \ \frac{1}{2}\lVert \mathbf{w}_{r,v} \rVert^2 + C \sum_{i=1}^{N} \xi_i \quad \text{s.t.} \quad \tilde{y}_{e_i}\left(\mathbf{w}_{r,v}^{\top}\mathbf{v}_{e_i} + b_{r,v}\right) \ge 1 - \xi_i,\quad \xi_i \ge 0$
Here, $\mathbf{w}_{r,v}$ is the weight vector corresponding to feature $f_{r,v}$, $b_{r,v}$ is the bias term, $C$ is the regularisation parameter controlling the trade-off between margin maximisation and classification error and $N$ is the number of training examples.
The weight vector $\mathbf{w}_{r,v}$ can be perceived as the direction in the embedding space that corresponds to feature $f_{r,v}$. These weights are used to define the hyperplane that separates entities having the feature from those that do not. The decision function associated with each hyperplane provides the signed distance between the estimated hyperplane and a given entity. This value represents the new coordinate for the corresponding feature. Specifically, the decision function for feature $f_{r,v}$ is given by:

$d_{r,v}(e) = \mathbf{w}_{r,v}^{\top}\mathbf{v}_e + b_{r,v}$
The sign of this decision function determines whether the entity is classified as having the feature (above the hyperplane, class 1) or not (below the hyperplane, class 0). The scalar value itself is used as the new coordinate for this feature in the derived vector space, thus encoding both the presence of the feature and its relative strength in the embedding space.
The resulting weight vector $\mathbf{w}_{r,v}$ characterises the estimated hyperplane for feature $f_{r,v}$, and the decision function $d_{r,v}$ provides the corresponding coordinate for each entity.
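The per-feature training and the use of the decision value as a coordinate can be sketched with scikit-learn (the library used in our experiments); the embeddings and labels below are random, linearly separable stand-ins rather than actual KGEM vectors:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 16))            # stand-in for pre-trained KGEM vectors
# Two hypothetical binary features, made linearly separable on purpose.
Y = np.stack([(X[:, 0] > 0).astype(int),
              (X[:, 1] > 0).astype(int)], axis=1)

# One linear SVM per feature; its decision value gives the new coordinate.
classifiers = [LinearSVC(C=1.0).fit(X, Y[:, j]) for j in range(Y.shape[1])]
Z = np.stack([clf.decision_function(X) for clf in classifiers], axis=1)

print(Z.shape)  # (40, 2): one signed-distance coordinate per feature
```

Concatenating the per-feature decision values in `Z` yields the interpretable vectors described in the next subsection.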
InterpretE Vector Space
The collection of weight vectors $\mathbf{w}_{r,v}$ associated with their biases $b_{r,v}$ for all features defines the set of estimated hyperplanes which help to transform the embeddings in the new vector space (via the decision function):

$H_c = \{(\mathbf{w}_{r_1,v_1}, b_{r_1,v_1}), \ldots, (\mathbf{w}_{r_m,v_m}, b_{r_m,v_m})\}$
For each estimated hyperplane (represented by the weight vector) the new coordinates (one for each hyperplane) are computed for each entity. $d_{r_i,v_i}(e)$ represents the coordinate linked to the value $v_i$ of the relation $r_i$ for the entity $e$. The concatenation of all coordinates forms the new vector associated to entity $e$:

$\mathbf{z}_e = (d_{r_1,v_1}(e), \ldots, d_{r_m,v_m}(e))$
Each new coordinate refers to a human-understandable feature, such that entities sharing common features are positioned close together in the transformed space. This makes the new vectors, referred to as InterpretE embeddings $\mathbf{z}_e$, more interpretable and transparent than the original KG embeddings.
The above process is described in Algorithm 1.
Experiments
In this section, we present experiments evaluating the efficacy of the proposed approach. We first specify the KG embedding models used in our experiments, then the implementation details for the SVM classifiers, followed by the assessment of the performance of the derived InterpretE embeddings in two distinct ways. We introduce metrics that capture the accuracy of the method and the consistency of the generated embedding space, providing scores and visualisations of the resulting embeddings to illustrate the results.
KG Embedding Models
Following previous works (Hubert et al., 2024; Jain et al., 2021), several popular and benchmark KGEMs were considered for the experiments to analyse the flexibility of the InterpretE approach across vector spaces generated with different methods, including ConvE (Dettmers et al., 2018), TransE (Bordes et al., 2013), DistMult (Yang et al., 2015), Rescal (Nickel et al., 2011) and ComplEx (Trouillon et al., 2016). Although more recent embedding models have been introduced in the literature, as demonstrated by Ruffinelli et al. (2023), classical embedding models remain highly competitive when paired with effective parameter optimisation. Therefore, we have chosen the most widely used and popular embedding models as representative methods, obtaining the pre-trained embeddings from previous work,7 where the best parameters were found using the LibKGE library (Ruffinelli et al., 2023). It is important to note that our approach is entirely independent of the specific KGEM used and can be applied in conjunction with any pre-trained model, as long as the embedding vectors can be extracted from it.
Classifier Training and Optimisation
To streamline the training of the SVM classifiers, a grid search with cross-validation was performed using the Scikit-learn (Pedregosa et al., 2011) library, which is based on LibSVM (Chang & Lin, 2011). This process allowed us to automatically select the optimal hyperparameters (e.g., the regularisation parameter $C$) and prevent overfitting, thereby ensuring a more generalised solution. Class imbalance, which is common in large-scale KGs as well as popular benchmark datasets, was addressed by assigning weights to entities based on the distribution of positive and negative examples for each feature. This weighting scheme helped balance the influence of underrepresented classes in the training process. A held-out test set comprising 20% of the entities (with no overlap with the training set) was used to evaluate the performance of each SVM classifier.
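A minimal sketch of this tuning setup, assuming toy data in place of the entity embeddings; `class_weight='balanced'` implements the inverse-frequency weighting described above:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 8))                  # stand-in entity embeddings
y = (X[:, 0] > 0.8).astype(int)                # imbalanced binary feature

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Grid search over C with cross-validation; balanced class weights
# counteract the skewed positive/negative ratio.
grid = GridSearchCV(SVC(kernel="linear", class_weight="balanced"),
                    param_grid={"C": [0.1, 1, 10]}, cv=5)
grid.fit(X_tr, y_tr)
print(grid.best_params_)
print(grid.score(X_te, y_te))
```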
Evaluation of InterpretE Vector Space
The derived InterpretE vector spaces are expected to yield entity vectors that are organised into clusters that align with the selected features. To assess the effectiveness of these clusters and ensure a consistent representation across different entity types, we calculated Cohen’s kappa coefficient (Cohen, 1960), i.e., the $\kappa$ score, for the test set (following Derrac & Schockaert, 2015). This metric evaluates the level of agreement between two sets of categorical labels, in this case, the predictions made by the trained SVM and the ground-truth labels for the test entities. The $\kappa$ score ranges from $-1$ to $1$, with values closer to 1 indicating a stronger alignment between the model’s classifications and the expected feature-based grouping of entities in the vector space.
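Cohen's kappa is available directly in scikit-learn; a toy example with one disagreement out of eight predictions:

```python
from sklearn.metrics import cohen_kappa_score

# SVM predictions vs. ground-truth feature labels on a toy test set.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]   # one disagreement out of eight
kappa = cohen_kappa_score(y_true, y_pred)
print(kappa)  # 0.75
```

Here observed agreement is 7/8 = 0.875 and chance agreement is 0.5, giving kappa = (0.875 - 0.5) / (1 - 0.5) = 0.75.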
The mean $\kappa$ scores across various experiments on the Yago3-10 dataset are shown in Table 1, with results for FB15k-237 provided in Table 3. As discussed in Section 4, entity features were selected in a range of combinations to explore diverse configurations and capture a variety of aspects for the entities, leading to a large number of experimental configurations. To streamline presentation, these results represent the aggregated mean values of the metrics across experiments, organised by the number of selected features. For each feature count, a representative example experiment and its corresponding scores are provided in Table 2 for Yago3-10 and Table 4 for FB15k-237. The $\kappa$ values, which are close to 1 in most cases, underscore the approach’s strong potential in effectively clustering entities by the selected features.
Table 1. κ Scores on the Test Set and simtop10 Scores on the Original and InterpretE Vectors for Different Numbers of Features (Mean Values) With Yago3-10 for the Different KGEMs.

| Number of features | Metric | ConvE | TransE | DistMult | Rescal | ComplEx |
|---|---|---|---|---|---|---|
| 1 | κ score | .93 | .88 | .92 | .95 | .91 |
| | simtop10 original | .182 | .195 | .199 | .21 | .206 |
| | simtop10 InterpretE | .233 | .223 | .233 | .234 | .232 |
| 2 | κ score | .87 | .83 | .86 | .88 | .84 |
| | simtop10 original | .177 | .231 | .230 | .240 | .229 |
| | simtop10 InterpretE | .270 | .268 | .270 | .270 | .270 |
| 3 | κ score | .62 | .49 | .61 | .63 | .6 |
| | simtop10 original | .577 | .607 | .640 | .666 | .648 |
| | simtop10 InterpretE | .928 | .914 | .918 | .924 | .893 |
| 4 | κ score | .89 | .88 | .90 | .91 | .90 |
| | simtop10 original | .665 | .695 | .691 | .696 | .707 |
| | simtop10 InterpretE | .814 | .726 | .787 | .810 | .824 |
| 6 | κ score | .75 | .71 | .75 | .74 | .75 |
| | simtop10 original | .635 | .659 | .679 | .678 | .648 |
| | simtop10 InterpretE | .938 | .888 | .945 | .923 | .936 |
| 9 | κ score | .87 | .83 | .86 | .88 | .84 |
| | simtop10 original | .343 | .353 | .337 | .347 | .345 |
| | simtop10 InterpretE | .624 | .556 | .624 | .622 | .621 |
Table 2. κ Scores on the Test Set and simtop10 Scores on the Original and InterpretE Vectors for Representative Experiments With Yago3-10 for the Different KGEMs.

| Experiment and features | Metric | ConvE | TransE | DistMult | Rescal | ComplEx |
|---|---|---|---|---|---|---|
| person: hasGender | κ score | 1 | 1 | 1 | 1 | .99 |
| | simtop10 original | .068 | .059 | .054 | .061 | .068 |
| | simtop10 InterpretE | .079 | .079 | .079 | .079 | .079 |
| person: hasGender - wasBornIn Europe | κ score | .96 | .93 | .95 | .96 | .94 |
| | simtop10 original | .456 | .496 | .492 | .507 | .504 |
| | simtop10 InterpretE | .540 | .529 | .538 | .543 | .539 |
| person: wasBornIn (Europe - Asia - North America) | κ score | .92 | .84 | .90 | .94 | .90 |
| | simtop10 original | .687 | .8 | .814 | .871 | .831 |
| | simtop10 InterpretE | .987 | .959 | .983 | .987 | .979 |
| city: isLocatedIn (Europe - Asia - (North - South) America) | κ score | .94 | .96 | .96 | .98 | .98 |
| | simtop10 original | .899 | .959 | .949 | .966 | .972 |
| | simtop10 InterpretE | .989 | .993 | .991 | .996 | .996 |
| scientist: hasWonPrize (6 top prizes) | κ score | .96 | .84 | .97 | .85 | .98 |
| | simtop10 original | .539 | .510 | .575 | .538 | .578 |
| | simtop10 InterpretE | .958 | .934 | .966 | .926 | .972 |
| person: types (player - artist - politician - scientist - officeholder - writer) | κ score | .77 | .75 | .78 | .78 | .74 |
| | simtop10 original | .745 | .772 | .805 | .794 | .662 |
| | simtop10 InterpretE | .953 | .945 | .958 | .944 | .938 |
| person: hasGender, wasBornIn (Europe - Asia - North America), types (player - artist - politician - scientist - officeholder) | κ score | .87 | .83 | .86 | .88 | .84 |
| | simtop10 original | .343 | .353 | .337 | .347 | .345 |
| | simtop10 InterpretE | .624 | .556 | .624 | .622 | .621 |
Table 3. κ Scores on the Test Set and simtop10 Scores on the Original and InterpretE Vectors for Different Numbers of Features (Mean Values) With FB15K-237 for the Different KGEMs.

| Number of features | Metric | ConvE | TransE | DistMult | Rescal | ComplEx |
|---|---|---|---|---|---|---|
| 1 | κ score | .90 | .80 | .90 | .90 | .85 |
| | simtop10 original | .211 | .210 | .214 | .215 | .210 |
| | simtop10 InterpretE | .322 | .298 | .313 | .322 | .319 |
| 2 | κ score | .89 | .8 | .9 | .9 | .89 |
| | simtop10 original | .336 | .329 | .342 | .343 | .335 |
| | simtop10 InterpretE | .484 | .480 | .493 | .514 | .509 |
| 5 | κ score | .72 | .68 | .72 | .65 | .73 |
| | simtop10 original | .561 | .538 | .545 | .523 | .547 |
| | simtop10 InterpretE | .853 | .844 | .889 | .882 | .868 |
| 6 | κ score | .84 | .73 | .83 | .88 | .84 |
| | simtop10 original | .587 | .524 | .575 | .563 | .563 |
| | simtop10 InterpretE | .952 | .918 | .936 | .956 | .932 |
Table 4. κ Scores on the Test Set and simtop10 Scores on the Original and InterpretE Vectors for Representative Experiments (Mean Values) With FB15K-237 for the Different KGEMs.

| Experiment and features | Metric | ConvE | TransE | DistMult | Rescal | ComplEx |
|---|---|---|---|---|---|---|
| person: gender - place_of_birth United States | κ score | .91 | .78 | .92 | .92 | .90 |
| | simtop10 original | .676 | .689 | .689 | .693 | .675 |
| | simtop10 InterpretE | .909 | .909 | .932 | .99 | .977 |
| organisations: locations (USA - UK - Japan - Canada - Germany) | κ score | .78 | .70 | .75 | .58 | .79 |
| | simtop10 original | .766 | .738 | .758 | .731 | .768 |
| | simtop10 InterpretE | .951 | .947 | .958 | .959 | .96 |
| film: film_release_region (USA - Sweden - France - Spain - Finland) | | | | | | |
Furthermore, to clearly illustrate the advantages of the proposed approach in generating interpretable dimensions within the vector space and to compare these with the dimensions in the original KGEM vector spaces, we visualise both in a 2D space by applying principal component analysis (PCA) (Hotelling, 1933). As depicted in Figure 17, the reduced dimensions in the original KGEM space (in this case, ComplEx) fail to convey any meaningful or human-understandable representations for the entity vectors. Moreover, the person entities are not clustered according to the hasGender and wasBornIn ‘Europe’ features. Essentially, these vectors do not yield significant dimensions and do not facilitate the identification or clustering of entities based on specific features. In contrast, the InterpretE vectors derived from the ComplEx KGEM vectors, as shown in Figure 18, reveal distinct and meaningful clusters, with the entities within each cluster sharing common features, dictated by human-understandable entity aspects as dimensions. We also present several other visualisations for different experiments in Figure 19 and Figure 20 that convey similar characteristics.
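A minimal sketch of such a 2D projection, assuming random stand-in vectors in place of the actual embeddings (InterpretE spaces with only two feature dimensions can be plotted directly, without reduction):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 32))               # stand-in for high-dimensional KGEM vectors
X2 = PCA(n_components=2).fit_transform(X)   # coordinates for a 2D scatter plot
print(X2.shape)  # (50, 2)
```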
2D projection of ComplEx vectors for class person and features hasGender and wasBornIn ‘Europe’ in Yago3-10.
2D projection of InterpretE vectors for class person and features hasGender and wasBornIn ‘Europe’ in Yago3-10.
2D projection of InterpretE vectors for class player and feature hasGender in Yago3-10.
2D projection of InterpretE vectors for class person and features gender and place_of_birth ‘United States’ in FB15k-237.
Evaluation of Semantic Similarity
InterpretE vectors are dictated by the selected features for the entities that they represent; as such, we evaluated the semantic similarity of the derived vectors (in terms of the features) to measure this desirable characteristic. We propose a simple metric simtopk to measure the similarity of entities’ neighbours. For each entity, we analyse its neighbourhood to estimate the similarity based on the corresponding feature used in the SVM experiment. The parameter $k$ represents the number of neighbours considered. The score assigned to the original entity is calculated as the mean value of the similarities computed with these neighbouring entities. This process is repeated for all entities, and the mean value of these scores is computed to serve as the final metric. The proposed simtopk metric can be formulated as:

$\mathrm{simtop}_k = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{k} \sum_{e_j \in \mathcal{N}_k(e_i)} \mathrm{sim}(e_i, e_j)$

where: $N$: the number of total entities; $k$: the number of considered neighbours; $\mathcal{N}_k(e_i)$: the $k$ closest neighbours of the $i$-th entity, determined using a Euclidean distance; $\mathrm{sim}(e_i, e_j)$: returns 1 if the two entities are similar in terms of features, 0 otherwise.
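A direct implementation of simtopk; the sketch below uses a toy set of 2D vectors and treats two entities as similar when they share the same feature label:

```python
import numpy as np

def simtopk(vectors, labels, k):
    """Mean over entities of the fraction of its k nearest (Euclidean)
    neighbours that share the same feature label."""
    vectors = np.asarray(vectors, dtype=float)
    n = len(vectors)
    dists = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude the entity itself
    scores = []
    for i in range(n):
        nbrs = np.argsort(dists[i])[:k]
        scores.append(np.mean([labels[j] == labels[i] for j in nbrs]))
    return float(np.mean(scores))

# Two tight, well-separated clusters yield perfect neighbourhood similarity.
V = [[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]]
L = [0, 0, 0, 1, 1, 1]
print(simtopk(V, L, k=2))  # 1.0
```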
The values of this metric for k=10 for the original and the derived InterpretE embeddings for the different experiments and the various embedding models are shown in Tables 1 and 2, for Yago3-10. The scores are better for InterpretE vectors as compared to the original pre-trained vectors (obtained from various KGEMs) across the board, indicating that similar entities are being represented by vectors that are closer in the new vector space, as desired. The results for FB15k-237 are presented in Tables 3 and 4. Similar to our findings with Yago3-10, we observed enhanced semantic similarity with FB15K-237. This improvement is evidenced by the higher simtopk value in the final space compared to the original space.
We also explored an alternative method to evaluate the simtopk metric by leveraging LLMs. In a limited experiment, we applied few-shot prompting with Llama3-70B (Touvron et al., 2023) and a RAG pipeline with Mistral7B (Jiang et al., 2023) using LlamaIndex. Our prompt included one positive and one negative example to assess the similarity between an entity and its neighbourhood, as illustrated in Figure 21. Although promising, the results were inconsistent and sometimes contradictory, suggesting that further investigation is needed to develop a reliable global evaluation metric, which we plan to address in future work.
Partial example of few-shot prompts with Llama3 70B using HuggingChat.
Discussion
The results from the designed experiments for each dataset demonstrate the potential of the proposed approach. However, there are several considerations for the experiment design that depend heavily on the data distributions and characteristics of the underlying KG data. For example, there is often class imbalance in entities concerning selected features (e.g., hasGender having more male representatives than female). These factors can impact the performance of the SVM classifier. Class-based weights have been applied to the data points to address this issue, but it remains a design challenge.
In some experiments, our method achieves a simtopk value very close to 1. This indicates that in the resulting space, similar entities are clustered together nearly perfectly. However, this level of clustering is not consistently observed across all experiments. The variability can be explained by the fact that other underlying features, not covered in the current experiment, could contribute to more accurately clustering similar entities. An analogy can be drawn with the well-known kernel trick used in SVMs, where additional dimensions (in our case, the consideration of new features) are introduced to better distinguish differently labeled data (in this context, non-similar entities).

Another challenge is the abstraction of features, especially if the underlying data is noisy and non-canonicalised (e.g., different labels for the same value such as ‘UK’ and ‘United Kingdom’). Resolving these issues is crucial for creating useful feature categories. A potential limitation of this approach could be scalability. As the size of the KG increases, the time complexity of training the SVM also increases; it is roughly $O(n^2 \cdot d)$, where $n$ is the number of entities and $d$ is the number of dimensions. Despite these challenges, InterpretE represents a significant step towards deriving interpretable vector spaces from KGEM vectors. It is flexible and applicable to any KGEM. We aim to further develop this approach to streamline the design and engineering process as well as improving its scalability across various datasets.
Conclusion and Future Work
This work attempts to address the often-overlooked issue of the lack of semantic interpretability in latent spaces generated by popular KG embedding techniques. The proposed InterpretE approach is shown to be capable of deriving interpretable spaces from existing KGEM vectors, with human-understandable dimensions that are based on the features in the underlying KG. Through the design and evaluation of different experiments, we have showcased the promise of the approach for encapsulating entity features in the vectors at different feature abstraction levels, customisable as per the dataset. By aiming to bridge the gap between entity representations and human-understandable features, InterpretE paves the way for enhanced understanding and utilisation of KGEMs in various applications. Future research can further explore the implications of this approach and extend its applicability to broader contexts within the field of knowledge representation and reasoning.
By providing interpretable insights into how entities are represented and clustered in KGs, the InterpretE approach aims to contribute to the broader goal of AI transparency. This can allow practitioners to trace back decisions to underlying features, identify potential biases, and ensure that AI-driven systems operate in a manner that is both reliable and ethical. This focus on explainability ensures that AI models are not only accurate but also comprehensible, making them more suitable for deployment in critical decision-making contexts.
In this study, we conducted preliminary experiments employing large language models through a retrieval-augmented generation pipeline and few-shot prompting techniques to extract features as well as to assess entity similarity. While these methodologies demonstrated potential, the outcomes were marked by inconsistency and a lack of transparency, falling short of our objective to derive human-interpretable and statistically robust features. Consequently, we have prioritised deterministic, statistical approaches in our current analysis. Nonetheless, we recognise the evolving capabilities of LLMs and intend to explore their application further as the research advances. Furthermore, it would be interesting to explore whether sparse autoencoders, as used by Huben et al. (2023) to identify monosemantic features in LLMs, can be applied to KGEMs to derive more interpretable entity representations.
Footnotes
Acknowledgements
This work was partly funded by the HE project MuseIT, which has been co-funded by the European Union under the Grant Agreement No 101061441. Views and opinions expressed are, however, those of the authors and do not necessarily reflect those of the European Union or European Research Executive Agency. The authors are also thankful for access to King’s Computational Research, Engineering and Technology Environment (CREATE). King’s College London. (2024). Retrieved 28 October 2024, from .
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
ORCID iD
Nitisha Jain
References
Alshargi, F., Shekarpour, S., Soru, T., & Sheth, A. (2019). Concept2vec: Metrics for evaluating quality of embeddings for ontological concepts. In Spring symposium on combining machine learning with knowledge engineering (AAAI-MAKE 2019). https://doi.org/10.48550/arXiv.1803.04488
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., & Chatila, R. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.
Baek, J., Aji, A. F., Lehmann, J., & Hwang, S. J. (2023). Direct fact retrieval from knowledge graphs without entity linking. In A. Rogers, J. Boyd-Graber & N. Okazaki (Eds.), Proceedings of the 61st annual meeting of the association for computational linguistics (Volume 1: Long Papers) (pp. 10038–10055). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.acl-long.558
Bordes, A., Usunier, N., Garcia-Durán, A., Weston, J., & Yakhnenko, O. (2013). Translating embeddings for modeling multi-relational data. In Proceedings of the 26th international conference on neural information processing systems - Volume 2, NIPS’13 (pp. 2787–2795). Curran Associates Inc.
Boschin, A., Jain, N., Keretchashvili, G., & Suchanek, F. M. (2022). Combining embeddings and rules for fact prediction. In International research school in artificial intelligence in Bergen.
Bouraoui, Z., Camacho-Collados, J., Espinosa-Anke, L., & Schockaert, S. (2020). Modelling semantic categories using conceptual neighborhood. In Proceedings of the AAAI conference on artificial intelligence (Vol. 34, pp. 7448–7455).
Bouraoui, Z., Gutiérrez-Basulto, V., & Schockaert, S. (2022). Integrating ontologies and vector space embeddings using conceptual spaces. In C. Bourgaux, A. Ozaki & R. Peñaloza (Eds.), International research school in artificial intelligence in Bergen (AIB 2022), Open access series in informatics (OASIcs) (Vol. 99) (pp. 3:1–3:30). Schloss Dagstuhl – Leibniz-Zentrum für Informatik. https://doi.org/10.4230/OASIcs.AIB.2022.3
Cao, J., Fang, J., Meng, Z., & Liang, S. (2024). Knowledge graph embedding: A survey from the perspective of representation spaces. ACM Computing Surveys, 56(6), 1–42.
Chang, C.-C., & Lin, C.-J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3), 1–27. https://doi.org/10.1145/1961189.1961199
Dietz, L., Kotov, A., & Meij, E. (2018). Utilizing knowledge graphs for text-centric information retrieval. In The 41st international ACM SIGIR conference on research & development in information retrieval, SIGIR ’18 (pp. 1387–1390). Association for Computing Machinery. https://doi.org/10.1145/3209978.3210187
Gad-Elrab, M. H., Stepanova, D., Tran, T.-K., Adel, H., & Weikum, G. (2020). ExCut: Explainable embedding-based clustering over knowledge graphs. In International semantic web conference (pp. 218–237). Springer.
Garg, D., Ikbal, S., Srivastava, S. K., Vishwakarma, H., Karanam, H. P., & Subramaniam, L. V. (2019). Quantum embedding of knowledge for reasoning. In NeurIPS (pp. 5595–5605).
Hao, J., Chen, M., Yu, W., Sun, Y., & Wang, W. (2019a). Universal representation learning of knowledge bases by jointly embedding instances and ontological concepts. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining (pp. 1709–1719).
21.
HaoJ.ChenM.YuW.SunY.WangW. (2019b). Universal representation learning of knowledge bases by jointly embedding instances and ontological concepts. In KDD (pp. 1709–1719).
HotellingH. (1933). Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24(6), 417.
24.
HouS.WeiD. (2023). Research on knowledge graph-based recommender systems. In 2023 3rd International symposium on computer technology and information science (ISCTIS) (pp. 737–742). https://doi.org/10.1109/ISCTIS58954.2023.10213083
25.
HubenR.CunninghamH.SmithL. R.EwartA.SharkeyL. (2023). Sparse autoencoders find highly interpretable features in language models. In The twelfth international conference on learning representations.
26.
HubertN.PaulheimH.BrunA.MonticoloD. (2024). Do similar entities have similar embeddings? In European semantic web conference (pp. 3–21). Springer.
27.
IlievskiF.ShenoyK.ChalupskyH.KleinN.SzekelyP. (2024). A study of concept similarity in Wikidata. Semantic Web, 15(3), 877–896. https://doi.org/10.3233/SW-233520
28.
JackermeierM.ChenJ.HorrocksI. (2024). Dual box embeddings for the description logic EL++. In Proceedings of the ACM on web conference 2024 (pp. 2250–2258).
29.
JainN.KaloJ.-C.BalkeW.-T.KrestelR. (2021). Do embeddings actually capture knowledge graph semantics? In R. Verborgh, K. Hose, H. Paulheim, P.-A. Champin, M. Maleshkova, O. Corcho, P. Ristoski & M. Alam (Eds.), The semantic web (pp. 143–159). Springer International Publishing. ISBN 978-3-030-77385-4.
30.
JainN.TranT.-K.Gad-ElrabM. H.StepanovaD. (2021). Improving knowledge graph embeddings with ontological reasoning. In International semantic web conference (pp. 410–426). Springer.
JayathilakaM.MuT.SattlerU. (2021b). Towards knowledge-aware few-shot learning with ontology-based n-ball concept embeddings. In 2021 20th IEEE international conference on machine learning and applications (ICMLA) (pp. 292–297). IEEE.
33.
JiG.HeS.XuL.LiuK.ZhaoJ. (2015). Knowledge graph embedding via dynamic mapping matrix. In C. Zong & M. Strube (Eds.), Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (Volume 1: Long Papers) (pp. 687–696). Association for Computational Linguistics. https://doi.org/10.3115/v1/P15-1067. https://aclanthology.org/P15-1067
34.
JiangA. Q.SablayrollesA.MenschA.BamfordC.ChaplotD. S.de las CasasD.BressandF.LengyelG.LampleG.SaulnierL.LavaudL. R.LachauxM.-A.StockP.ScaoT. L.LavrilT.WangT.LacroixT.SayedW. E. (2023). Mistral 7B.
35.
KaloJ.-C.EhlerP.BalkeW.-T. (2019). Knowledge graph consolidation by unifying synonymous relationships. In The semantic Web–ISWC 2019: 18th international semantic web conference, Auckland, New Zealand, October 26–30, 2019, Proceedings, Part I 18 (pp. 276–292). Springer.
36.
KipfT. N.WellingM. (2017). Semi-supervised classification with graph convolutional networks. In 5th International conference on learning representations, ICLR 2017, Toulon, France, April 24-26, 2017, conference track proceedings.
37.
KrompaßD.BaierS.TrespV. (2015). Type-constrained representation learning in knowledge graphs. In ISWC (pp. 640–655).
38.
LewisP.PerezE.PiktusA.PetroniF.KarpukhinV.GoyalN.KüttlerH.LewisM.YihW.-T.RocktäschelT.RiedelS.KielaD. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan & H. Lin (Eds.), Advances in neural information processing systems (Vol. 33) (pp. 9459–9474). Curran Associates, Inc.
39.
LinY.LiuZ.SunM.LiuY.ZhuX. (2015). Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the twenty-ninth AAAI conference on artificial intelligence, AAAI’15 (pp. 2181–2187). AAAI Press. ISBN 0262511290.
40.
MahdisoltaniF.BiegaJ.SuchanekF. M. (2013). Yago3: A knowledge base from multilingual wikipedias. In CIDR.
41.
MurdochW. J.SinghC.KumbierK.Abbasi-AslR.YuB. (2019). Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences, 116(44), 22071–22080. https://doi.org/10.1073/pnas.1900654116
42.
NickelM.TrespV.KriegelH.-P. (2011). A three-way model for collective learning on multi-relational data. In L. Getoor & T. Scheffer (Eds.), Proceedings of the 28th international conference on machine learning (ICML-11), ICML ’11 (pp. 809–816). ACM. ISBN 978-1-4503-0619-5.
43.
PedregosaF.VaroquauxG.GramfortA.MichelV.ThirionB.GriselO.BlondelM.PrettenhoferP.WeissR.DubourgV.VanderplasJ.PassosA.CournapeauD.BrucherM.PerrotM.DuchesnayE. (2011). Scikit-learn: Machine learning in python. Journal of Machine Learning Research, 12, 2825–2830.
RistoskiP.PaulheimH. (2016). Rdf2vec: Rdf graph embeddings for data mining. In International semantic web conference (pp. 498–514). Springer.
46.
RossiA.BarbosaD.FirmaniD.MatinataA.MerialdoP. (2021). Knowledge graph embedding for link prediction: A comparative analysis. ACM Transactions on Knowledge Discovery from Data (TKDD), 15(2), 1–49. https://doi.org/10.1145/3424672
47.
RuffinelliD.BroscheitS.GemullaR. (2023). You CAN teach an old dog new tricks! On training knowledge graph embeddings. In International conference on learning representations.
48.
SimhiA.MarkovitchS. (2023). Interpreting embedding spaces by conceptualization.
49.
SmailiF. Z.GaoX.HoehndorfR. (2018). Onto2vec: Joint vector-based representation of biological entities and their ontology-based annotations. Bioinformatics (Oxford, England), 34(13), i52–i60.
50.
SmailiF. Z.GaoX.HoehndorfR. (2019). OPA2Vec: Combining formal and informal content of biomedical ontologies to improve similarity-based prediction. Bioinformatics (Oxford, England), 35(12), 2133–2140.
51.
SunZ.ZhangQ.HuW.WangC.ChenM.AkramiF.LiC. (2020a). A benchmarking study of embedding-based entity alignment for knowledge graphs. Proceedings of the VLDB Endowment, 13(11), 2326–2340.
52.
SunZ.ZhangQ.HuW.WangC.ChenM.AkramiF.LiC. (2020b). A benchmarking study of embedding-based entity alignment for knowledge graphs, arXiv preprint arXiv:2003.07743.
53.
ToutanovaK.ChenD. (2015). Observed versus latent features for knowledge base and text inference. In Workshop on continuous vector space models and their compositionality. https://api.semanticscholar.org/CorpusID:5378837
54.
TouvronH.LavrilT.IzacardG.MartinetX.LachauxM.-A.LacroixT.RozièreB.GoyalN.HambroE.AzharF.RodriguezA. (2023). Llama: Open and efficient foundation language models, arXiv preprint arXiv:2302.13971.
55.
TrouillonT.WelblJ.RiedelS.GaussierE.BouchardG. (2016). Complex embeddings for simple link prediction. In M.F. Balcan & K.Q. Weinberger (Eds.), Proceedings of The 33rd international conference on machine learning, Proceedings of Machine Learning Research (Vol. 48, pp. 2071–2080). PMLR. https://proceedings.mlr.press/v48/trouillon16.html
56.
WangQ.MaoZ.WangB.GuoL. (2017). Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12), 2724–2743.
57.
WangZ.ZhangJ.FengJ.ChenZ. (2014). Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI conference on artificial intelligence (Vol. 28).
58.
WiharjaK.PanJ. Z.KollingbaumM. J.DengY. (2020). Schema aware iterative knowledge graph completion. Journal of Web Semant, 65, 100616.
59.
XiongB.PotykaN.TranT.-K.NayyeriM.StaabS. (2022). Faithful embeddings for EL++ knowledge bases. In International semantic web conference (pp. 22–38). Springer.
60.
YangB.YihS. W.-T.HeX.GaoJ.DengL. (2015). Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of the international conference on learning representations (ICLR) 2015.
61.
YangH.ChenJ.SattlerU. (2025). TransBox: EL++-closed Ontology Embedding. In Proceedings of the ACM web conference 2025. Association for Computing Machinery.
62.
YasunagaM.RenH.BosselutA.LiangP.LeskovecJ. (2021). QA-GNN: Reasoning with language models and knowledge graphs for question answering. In K. Toutanova, A. Rumshisky, L. Zettlemoyer, D. Hakkani-Tur, I. Beltagy, S. Bethard, R. Cotterell, T. Chakraborty & Y. Zhou (Eds.), Proceedings of the 2021 conference of the North American chapter of the association for computational linguistics: Human language technologies (pp. 535–546). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.naacl-main.45. https://aclanthology.org/2021.naacl-main.45/
63.
ZhuX.XuC.TaoD. (2021). Where and what? Examining interpretable disentangled representations. In 2021 IEEE/CVF conference on computer vision and pattern recognition (CVPR) (pp. 5857–5866). https://doi.org/10.1109/CVPR46437.2021.00580
64.
ZieglerK.CaelenO.GarcheryM.GranitzerM.He-GueltonL.JurgovskyJ.PortierP.ZwicklbauerS. (2017). Injecting semantic background knowledge into neural networks using graph embeddings. In 26th IEEE, WETICE (pp. 200–205).