Fairness in language modeling: A look at gender biases


In this talk we will look at the considerations involved in developing applications that rely on natural language models in the presence of gender bias. As a case study, we will examine the Candidate Recommender System developed by the Innovation area of the Millennium Institute Foundational Research on Data.

Finding the ideal candidate or job is a challenge for companies and professionals today. The upward trend in employee turnover and the growing demand for skilled professionals make it increasingly necessary for recruitment and selection processes to be efficient and effective, while guarding against the propagation and/or amplification of biases present in historical processes.


Camila Henríquez Beltrán. Data Scientist in the Innovation and Technology Transfer area of the Instituto Milenio Fundamento de los Datos. Bachelor of Science with a mention in Astronomy, Universidad de Chile. She currently works on the development and ongoing research of solutions in the fields of Data and Artificial Intelligence.

When and where
Wednesday, May 8th, 2024
18:00 to 19:00 GMT-4


More information