Abstract
Data with a complex relational structure, in which observations may be noisy or partially missing, poses several challenges to traditional machine learning algorithms. One solution to this problem is the use of so-called probabilistic logical models (models that combine elements of first-order logic with probabilities) and corresponding learning algorithms. In this thesis we focus on directed probabilistic logical models. We show how to represent such models and develop several algorithms to learn them from data.
