Hi! I am a final-year PhD student at the Institute for Adaptive and Neural Computation, part of the APRIL research lab in Edinburgh, supervised by Dr. Antonio Vergari.

I am a machine learning researcher and engineer with a strong background in mathematics, deep generative models, knowledge graphs, neurosymbolic methods, and LLMs. I am experienced in developing and training custom-built machine learning methods, as well as in combining existing methods into new working systems.

Contact:

Useful links: Curriculum Vitae

Publications

For a complete list refer to Semantic Scholar or Google Scholar. * = Shared first authorship.

Fast and Expressive Multi-Token Prediction with Probabilistic Circuits
A. Grivas, L. Loconte, E. van Krieken, P. Nawrot, Y. Zhao, E. Wielewski, P. Minervini, E. Ponti, A. Vergari. ICML 2026.
tl;dr: We propose a framework for multi-token prediction using speculative decoding, allowing us to easily explore different parameterizations that balance expressiveness and efficiency, and to generalize other approaches based on tensor factorizations.
How to Square Tensor Networks and Circuits Without Squaring Them
L. Loconte, A. Javaloy, A. Vergari. ICLR 2026.
tl;dr: We derive novel circuit properties based on orthogonality to speed up marginalization in squared circuits and tensor network-based Born machines, as well as to unlock a strictly larger set of factorization structures enabling tractable marginalization.
Is Complex Query Answering Really Complex?
C. Gregucci, B. Xiong, D. Hernández, L. Loconte, P. Minervini, S. Staab, A. Vergari. ICML 2025. Spotlight (top 2.6%).
tl;dr: We highlight how common benchmarks for complex query answering with neural models are skewed towards "simple" queries, and propose new, more challenging benchmarks that address this issue.
What is the Relationship between Tensor Factorizations and Circuits (and How Can We Exploit it)?
L. Loconte*, A. Mari*, G. Gala*, R. Peharz, C. de Campos, E. Quaeghebeur, G. Vessio, A. Vergari. TMLR 2025. Featured certification.
tl;dr: We investigate the connections between tensor factorizations and circuits, and how the literature on the former can benefit from the theory of the latter. We then devise a framework to build tensor factorizations and circuits that abstracts away from the many available options.
Sum of Squares Circuits
L. Loconte, S. Mengel, A. Vergari. AAAI 2025.
tl;dr: We theoretically prove an expressiveness limitation of deep subtractive mixture models learned by squaring circuits. To overcome this limitation, we propose sum of squares circuits and build an expressiveness hierarchy around them, allowing us to unify and separate many tractable probabilistic models.
Subtractive Mixture Models via Squaring: Representation and Learning
L. Loconte, A. M. Sladek, S. Mengel, M. Trapp, A. Solin, N. Gillis, A. Vergari. ICLR 2024. Spotlight (top 5%).
tl;dr: We propose to build (deep) subtractive mixture models by squaring circuits. We theoretically prove their expressiveness by deriving an exponential lower bound on the size of circuits with positive parameters only.
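A one-line sketch of the squaring idea (my notation, not the paper's): squaring a linear combination of components keeps the model nonnegative even when the mixture weights are negative,

```latex
p(x) \;\propto\; \Big( \sum_{i} w_i \, f_i(x) \Big)^{2}, \qquad w_i \in \mathbb{R},
```

so subtractive interactions between components become representable while $p(x) \geq 0$ holds by construction.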
How to Turn Your Knowledge Graph Embeddings into Generative Models
L. Loconte, N. Di Mauro, R. Peharz, A. Vergari. NeurIPS 2023. Oral (top 0.6%).
tl;dr: KGE models such as CP, RESCAL, TuckER, and ComplEx can be reinterpreted as circuits to unlock their generative capabilities, scaling up learning and guaranteeing the satisfaction of logical constraints by design.
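As a minimal illustration of the underlying idea (my own toy sketch, not the paper's implementation): with nonnegative embeddings, a CP/DistMult-style trilinear score becomes an unnormalized measure over triples, and the factorized structure makes the normalizing constant cheap to compute, yielding a generative model over (subject, relation, object) triples.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, rank = 5, 3, 4

# Nonnegative embeddings make every score nonnegative, i.e. a valid
# unnormalized probability mass for each triple.
E = rng.random((n_entities, rank))   # entity embeddings
R = rng.random((n_relations, rank))  # relation embeddings

def cp_score(s, r, o):
    """CP/DistMult-style trilinear score for a triple (s, r, o)."""
    return float(np.sum(E[s] * R[r] * E[o]))

# The factorization lets us sum scores over ALL triples without
# enumerating them term by term: contract each factor once.
Z = float(np.einsum('ik,jk,lk->', E, R, E))

# Probability of a particular triple under the induced distribution.
p = cp_score(0, 1, 2) / Z
```

This is the circuit view in miniature: the same sum-of-products structure that scores one triple also computes the partition function `Z` in one pass.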

Software

Here is some software I have contributed to. Also check out my GitHub profile.