Hi! I am a PhD student at the Institute for Adaptive and Neural Computation, University of Edinburgh, and part of the APRIL research lab, supervised by Dr. Antonio Vergari and Dr. Iain Murray.
My research currently focuses on the intersection of tractable probabilistic modeling and tensor factorizations for efficient machine learning.
In particular, a major aspect of my research is understanding and characterizing the expressiveness of probabilistic models that support exact yet efficient inference.
More generally, I am also interested in deep generative models and neurosymbolic methods.
Publications
For a complete list, refer to Semantic Scholar or Google Scholar.
* = Shared first authorship.
-
What is the Relationship between Tensor Factorizations and Circuits (and How Can We Exploit it)?
L. Loconte*, A. Mari*, G. Gala*, R. Peharz, C. de Campos, E. Quaeghebeur, G. Vessio, A. Vergari.
arXiv 2024.
tl;dr: We investigate the connections between tensor factorizations and circuits, and how the literature on the former can benefit from the theory of the latter, with a particular focus on tractable probabilistic modeling. We then devise a framework for building tensor factorizations and circuits that abstracts away many of the available design choices, as sketched below.
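As a minimal illustration (notation mine, not the paper's): a rank-R CP factorization writes every entry of a tensor T as

  T_{i_1 \dots i_d} = \sum_{r=1}^{R} \prod_{j=1}^{d} A^{(j)}_{i_j r},

which reads directly as a shallow circuit: one sum unit over R product units, each multiplying one input function per dimension. Hierarchical factorizations correspond to deeper circuits in the same way.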
-
Sum of Squares Circuits
L. Loconte, S. Mengel, A. Vergari.
AAAI 2025.
tl;dr: We theoretically prove an expressiveness limitation of deep subtractive mixture models learned by squaring circuits. To overcome this limitation, we propose sum of squares circuits and build an expressiveness hierarchy around them, allowing us to unify and separate many tractable probabilistic models.
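In symbols (a rough sketch of the idea, not the paper's formal definition): a sum of squares circuit represents a non-negative function as

  p(x) \propto \sum_{k=1}^{K} \big( c_k(x) \big)^2,

where each c_k is a circuit that may take negative values; K = 1 recovers a single squared circuit, and the paper analyzes when K > 1 strictly increases expressive efficiency.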
-
Subtractive Mixture Models via Squaring: Representation and Learning
L. Loconte, A. M. Sladek, S. Mengel, M. Trapp, A. Solin, N. Gillis, A. Vergari.
ICLR 2024. Spotlight (top 5%).
tl;dr: We propose to build (deep) subtractive mixture models by squaring circuits. We theoretically prove their expressiveness by deriving an exponential lower bound on the size of circuits with only positive parameters.
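Concretely (notation mine): squaring a mixture yields

  p(x) \propto \Big( \sum_{i=1}^{K} w_i f_i(x) \Big)^2 = \sum_{i=1}^{K} \sum_{j=1}^{K} w_i w_j f_i(x) f_j(x),

which stays non-negative even when some weights w_i are negative, so components can subtract probability mass; applying the same squaring to a deep circuit gives the deep subtractive mixtures studied in the paper.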
-
How to Turn Your Knowledge Graph Embeddings into Generative Models
L. Loconte, N. Di Mauro, R. Peharz, A. Vergari.
NeurIPS 2023. Oral (top 0.6%).
tl;dr: KGE models such as CP, RESCAL, TuckER, and ComplEx can be re-interpreted as circuits to unlock their generative capabilities, scaling up learning and guaranteeing the satisfaction of logical constraints by design.
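For example (a sketch in my notation): CP scores a triple (s, r, o) as

  \phi(s, r, o) = \sum_{k=1}^{K} e_{s,k} \, w_{r,k} \, e_{o,k},

a sum of products over embedding components, i.e. a shallow circuit; once its output is made non-negative (e.g. via non-negative parameters or by squaring the circuit, as in the paper), it can be normalized into a joint distribution over triples with tractable marginals.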
Software
Here is some software I have contributed to. Also check out my GitHub profile.
-
cirkit by the APRIL lab (GPL-3.0).
tl;dr: A language and framework for building, learning and reasoning about probabilistic machine learning models, such as circuits and tensor networks.
features: Support for circuits, i.e. probabilistic models supporting tractable inference operations, which are automatically compiled into efficient computational graphs that run on the GPU. Seamless integration of circuits with deep learning models and with any device supported by PyTorch. Support for user-defined layers and parameterizations that extend the symbolic circuit language. A set of templates for constructing circuits by mixing layers and structures in just a few lines of code.
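To give a flavour of what a compiled circuit looks like, here is a minimal self-contained PyTorch sketch. It does not use cirkit's actual API (see the repository for the real entry points); it just hand-writes the computational graph of a tiny circuit, a mixture of factorized Gaussians, so that it trains like any other PyTorch module.

# A self-contained PyTorch sketch (NOT cirkit's actual API) of the kind of
# model cirkit compiles: a tiny circuit whose sum and product units form an
# ordinary computational graph.
import torch
from torch import nn

class TinyCircuit(nn.Module):
    """A mixture of K fully factorized Gaussians over D variables:
    a product layer (factorization) followed by one sum (mixture) unit."""

    def __init__(self, num_vars: int, num_components: int):
        super().__init__()
        self.means = nn.Parameter(torch.randn(num_components, num_vars))
        self.log_stds = nn.Parameter(torch.zeros(num_components, num_vars))
        self.logits = nn.Parameter(torch.zeros(num_components))  # sum-unit weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Input units: per-variable Gaussian log-densities, shape (B, K, D).
        dist = torch.distributions.Normal(self.means, self.log_stds.exp())
        log_inputs = dist.log_prob(x.unsqueeze(1))
        # Product units: factorized components, i.e. sums in log-space -> (B, K).
        log_products = log_inputs.sum(dim=-1)
        # Sum unit: a normalized mixture, via log-sum-exp for stability -> (B,).
        log_weights = torch.log_softmax(self.logits, dim=0)
        return torch.logsumexp(log_weights + log_products, dim=-1)

# Training is plain PyTorch: maximize the log-likelihood of the data.
model = TinyCircuit(num_vars=4, num_components=8)
x = torch.randn(32, 4)
loss = -model(x).mean()
loss.backward()

cirkit's symbolic language and templates construct (much larger) circuits of this sum/product form and compile their inference routines automatically.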