Papers by IMFD researchers accepted at major machine learning conference
ICML is one of the world's most important gatherings for researchers dedicated to the study and advancement of machine learning, one of the essential branches of artificial intelligence. The 2025 edition will be held in Canada and will feature several papers, two of which were co-authored by IMFD Director Juan Reutter, an academic at the UC Department of Computer Science who also holds a position at the UC Institute for Mathematical and Computational Engineering, and by Pablo Barceló, director of IMC UC and researcher at CENIA and IMFD. The papers range from graph neural networks to a new algorithm for Markov decision problems.
In 1980, Carnegie Mellon University in Pittsburgh (USA) hosted the first International Conference on Machine Learning (ICML). Over the years, the event has become one of the most important gatherings of professionals dedicated to advancing this branch of artificial intelligence, also known as "machine learning."
The 2025 edition will take place between July 13 and 19 in Vancouver, Canada. As in previous editions, the event is expected to once again be the ideal forum for presenting cutting-edge research in different areas of machine learning, which today is closely related to disciplines such as statistics and data science. Beyond these areas, machine learning also has important fields of application such as computer vision, computational biology, speech recognition, and robotics.
"ICML is one of the three conferences on machine learning where everyone who does research in this area wants to publish, along with NeurIPS and ICLR. All the major research centers at universities and companies in the United States, China, and Europe try to participate," explains the IMFD director.

The work involving Juan and Pablo Barceló is entitled "How Expressive are Knowledge Graph Foundation Models?" and was developed in collaboration with Xingyue Huang (University of Oxford), Michael M. Bronstein (University of Oxford), İsmail İlkan Ceylan (University of Oxford), Mikhail Galkin (Google), and Miguel Romero Orth (DCC UC).
As the title of the paper indicates, the study deals with graph neural networks. These are neural network models designed to operate on graphs, that is, on data representing relationships between elements of a set. The academic points out that, specifically, the research carried out by him and his colleagues focuses on "network models that can be used on graphs that are completely different from the graphs we use to train them. Something equivalent to ChatGPT, for example, which is used today for any type of text," explains Reutter. The academic adds that he and his colleagues propose a "theoretical framework for the design of these networks, and, perhaps more importantly, we provide mathematical techniques that allow us to understand the power of these networks, and in particular which ones are better than others."
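To illustrate the general idea behind graph neural networks (this is a minimal generic sketch, not the model studied in the paper): each node repeatedly aggregates feature vectors from its neighbors and combines them with its own through shared weight matrices. Because the weights are shared across all nodes rather than tied to a particular graph, the same trained layer can in principle be applied to a graph it has never seen, which is the kind of transferability the quoted research analyzes. The function name and the toy graph below are illustrative choices, not from the paper.

```python
import numpy as np

def message_passing_layer(adj, features, w_self, w_neigh):
    """One generic GNN layer: h_v' = ReLU(h_v W_self + (sum of neighbor h_u) W_neigh)."""
    neighbor_sum = adj @ features                    # aggregate messages from neighbors
    combined = features @ w_self + neighbor_sum @ w_neigh
    return np.maximum(combined, 0.0)                 # ReLU nonlinearity

# Toy graph: 3 nodes in a path 0-1-2, with 2-dimensional node features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
features = np.eye(3, 2)                              # simple initial features
rng = np.random.default_rng(0)
w_self = rng.standard_normal((2, 2))
w_neigh = rng.standard_normal((2, 2))

h = message_passing_layer(adj, features, w_self, w_neigh)
print(h.shape)  # (3, 2): one updated feature vector per node
```

Note that nothing in the layer depends on the number of nodes or edges; swapping in a different adjacency matrix of any size (with matching feature dimension) reuses the same weights, which is what "using the network on completely different graphs" means in practice.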
He explains that this line of research, focused on graph neural networks that can generalize to any domain, is becoming increasingly popular. "I see people using our study to understand how to design these networks in practical contexts, which are becoming increasingly important. On the other hand, it brings us closer to achieving a kind of ChatGPT for graphs, which is like the holy grail of machine learning in the field of graphs," says Reutter.
Interdisciplinary research
Another paper accepted at ICML 2025 is entitled "Ehrenfeucht-Haussler Rank and Chain of Thought" and was co-authored by Pablo Barceló, Alexander Kozachinskiy (CENIA), and Tomasz Steifer (Institute of Fundamental Technological Research of the Polish Academy of Sciences). Kozachinskiy and Steifer were both postdoctoral fellows at IMFD and IMC UC, and Kozachinskiy explains that the paper's acceptance at ICML is a great achievement.
"It is an A* conference and, as such, one of the most prestigious conferences in the field of machine learning. This edition of ICML received 12,107 submissions, of which only 3,260 were accepted," notes the CENIA researcher.
Kozachinskiy points out that in the article, he and his co-authors have managed to "relate the classical theory of PAC learning ('probably approximately correct' learning) to recent developments in artificial intelligence." In other words, he adds, they have "demonstrated that a notion of rank that was defined in a classic work by Ehrenfeucht and Haussler (1989) can also be defined through large language models. Thus, we have found a rather surprising link between two lines of research in machine learning."
Looking ahead, Kozachinskiy says, the ideal would be to "find more links between the field of artificial intelligence and other more developed areas of mathematics and computer science. In this way, we could deepen our research into large language models and shed more light on the significance of classical tools."

Source: IMC UC
