IMFD researchers' papers accepted at major machine learning conference

ICML is one of the world's most important meetings for researchers dedicated to the study and advancement of machine learning, one of the essential branches of artificial intelligence. The 2025 edition will be held in Canada and will feature several accepted papers, two of them co-authored by the director of the IMFD, Juan Reutter, an academic of the Department of Computer Science UC with a joint appointment at the Institute for Mathematical and Computational Engineering UC, and by Pablo Barceló, director of the IMC UC and researcher at CENIA and the IMFD. The accepted work ranges from graph neural networks to a new algorithm for Markov decision problems.

In 1980, Carnegie Mellon University in Pittsburgh (USA) hosted the first edition of the International Conference on Machine Learning (ICML). Over the years, the event has become one of the most important meetings for professionals dedicated to advancing this branch of artificial intelligence.

The 2025 edition will be held from July 13 to 19 in Vancouver, Canada. As in previous editions, the event is expected to be an ideal venue for presenting cutting-edge research across the different areas of machine learning, a field closely related today to disciplines such as statistics and data science, and with important fields of application such as computer vision, computational biology, speech recognition and robotics.

"ICML is one of the three machine learning conferences where everyone doing research in this area wants to publish, along with Neurips and ICLR. All the major research centers from universities and companies in the United States, China and Europe try to participate," explains the IMFD director.

Juan Reutter

The paper co-authored by Juan Reutter and Pablo Barceló is entitled "How Expressive are Knowledge Graph Foundation Models?" and was developed together with Xingyue Huang (U. of Oxford), Michael M. Bronstein (U. of Oxford), İsmail İlkan Ceylan (U. of Oxford), Mikhail Galkin (Google) and Miguel Romero Orth (DCC UC).

As its title indicates, the study deals with graph neural networks, models that operate on graphs, that is, structures representing relationships between the elements of a set. Reutter points out that the research he carried out with his colleagues focuses specifically on "network models that can be used on graphs that are completely different from the graphs we use to train them. Something equivalent to ChatGPT, for example, which is used today for any kind of text," he explains. He adds that they propose a "theoretical framework for the design of these networks, and, perhaps more importantly, we give mathematical techniques that allow us to understand the power of these networks, and in particular which ones are better than others".
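
As a rough illustration of the kind of model being discussed, the sketch below performs one round of relational message passing on a toy knowledge graph: each entity updates its embedding by aggregating the embeddings of its neighbours, transformed by a matrix associated with the relation type of the connecting edge. This is a generic, hypothetical example written for this article, not the architecture studied in the paper; all names, dimensions and the aggregation rule are assumptions.

    # Hypothetical sketch: one round of relational message passing on a toy
    # knowledge graph. Not the model from the paper; names and dimensions
    # are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 4

    # Toy knowledge graph given as (head, relation, tail) triples.
    entities = ["ada", "turing_award", "london"]
    relations = ["won", "born_in"]
    triples = [("ada", "won", "turing_award"), ("ada", "born_in", "london")]
    ent_idx = {e: i for i, e in enumerate(entities)}

    # One embedding per entity; one transformation matrix per relation type.
    H = rng.normal(size=(len(entities), dim))
    W = {r: rng.normal(size=(dim, dim)) for r in relations}

    def message_passing_step(H, triples):
        """Each tail entity aggregates messages W_r @ h_head from its incoming edges."""
        new_H = H.copy()
        for head, rel, tail in triples:
            new_H[ent_idx[tail]] += W[rel] @ H[ent_idx[head]]
        return np.tanh(new_H)  # nonlinearity applied after aggregation

    H = message_passing_step(H, triples)
    print(H.shape)  # (3, 4): one updated embedding per entity

In this simple sketch the parameters are still tied to specific relation names; the kind of models discussed in the paper aim to go further, so that a trained network can be applied to graphs whose entities, and even relation vocabularies, were never seen during training.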

He details that this line of research, focused on graph neural networks that can generalize to any domain, is increasingly in vogue. "I see people using our study to understand how to design these networks in practical contexts, which are becoming more and more important. On the other hand, it brings us closer to being able to achieve a kind of ChatGPT for graphs, which is kind of the holy grail of the graph machine learning area," says Reutter.

Interdisciplinary research

Another of the papers accepted at ICML 2025 is entitled "Ehrenfeucht-Haussler Rank and Chain of Thought" and is co-authored by Pablo Barceló, Alexander Kozachinskiy (CENIA) and Tomasz Steifer (Institute of Fundamental Technological Research of the Polish Academy of Sciences). Kozachinskiy, who, like Steifer, was a postdoctoral researcher at the IMFD and IMC UC, explains that the paper's acceptance at ICML is a great achievement.

"It is an A*-ranked conference and is thus one of the most prestigious conferences in the area of machine learning. This edition of ICML had 12,107 papers submitted, and only 3,260 were accepted," says the CENIA researcher.

Kozachinskiy states that in the article he and his co-authors have succeeded in "relating the classical theory of PAC learning (probably approximately correct learning) to the recent development of artificial intelligence". That is, he adds, they have "shown that a notion of rank that was defined in a classic paper by Ehrenfeucht and Haussler (1989) can also be defined through large language models. Thus, we have found a rather surprising link between two lines of research in machine learning".
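
For readers unfamiliar with the classical side of this link, the following is a standard textbook statement of PAC learnability, given only as background; it is not the specific formulation, nor the rank notion, used in the paper.

    A concept class $\mathcal{C}$ over a domain $X$ is PAC-learnable if there is an
    algorithm $A$ such that for every target $c \in \mathcal{C}$, every distribution
    $D$ over $X$ and all $\varepsilon, \delta \in (0,1)$, given
    $m \geq \mathrm{poly}(1/\varepsilon, 1/\delta)$ examples $(x_i, c(x_i))$ with the
    $x_i$ drawn i.i.d. from $D$, the algorithm outputs a hypothesis $h$ with
    \[
        \Pr\bigl[\operatorname{err}_D(h) \leq \varepsilon\bigr] \geq 1 - \delta,
        \qquad
        \operatorname{err}_D(h) = \Pr_{x \sim D}\bigl[h(x) \neq c(x)\bigr].
    \]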

In the future, says Kozachinskiy, the ideal would be "to find more links between the area of artificial intelligence and more developed areas of mathematics and computer science. In this way, we could deepen the study of large language models and further illuminate the significance of classical tools".

Source: IMC UC