
Language models for information quality: methods and applications / Mathew, JERIN GEORGE. - (2025 Jan 23).

Language models for information quality: methods and applications

MATHEW, JERIN GEORGE
23/01/2025

Abstract

Ensuring high-quality information is fundamental to modern data-driven decision-making systems. This thesis explores the role of language models (LMs) and large language models (LLMs) in enhancing information quality (IQ), spanning tasks such as data cleaning, uncertainty estimation, on-demand data retrieval, and fairness in subjective data ranking. The first part of this work focuses on data cleaning, particularly entity resolution (ER) and entity count estimation, proposing a framework that integrates machine learning, clustering, and statistical approaches to efficiently estimate the number of distinct entities in large datasets. A sampling-based pipeline is introduced to improve scalability without compromising accuracy. The second part investigates uncertainty estimation in LLM-generated responses, proposing a Bayesian crowdsourcing framework to assess and aggregate outputs from multiple models. This enables more reliable decision-making by quantifying the confidence in generated information. Furthermore, this thesis explores the use of LLMs for automating structured data retrieval from heterogeneous sources, demonstrating their effectiveness in industrial applications where real-time insights are required. Finally, the thesis addresses ethical data quality, with a particular focus on fairness in ranking systems that rely on subjective data. A fairness assessment pipeline is introduced to measure exposure disparities across different groups in collaborative rating platforms. The proposed methodology quantifies both item-level and query-level fairness, ensuring balanced representation in ranked outputs. Through a combination of machine learning, Bayesian inference, and LLM-based techniques, this thesis advances the state of the art in ensuring reliability, fairness, and efficiency in data-driven applications. The proposed methodologies are validated through extensive experiments on real-world datasets, offering practical solutions for improving information quality across diverse domains.
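The two short Python sketches below illustrate, in simplified form, two of the techniques the abstract refers to. They are not taken from the thesis: all function names, parameter choices, and formulas are illustrative assumptions.

First, a minimal sketch of sampling-based distinct-entity count estimation. It uses the classic Chao1 estimator as a stand-in for the thesis's own statistical pipeline: the number of unseen entities is extrapolated from how many entities appear exactly once or twice in a random sample of records.

```python
from collections import Counter

def chao1_distinct_estimate(sample_entity_ids):
    """Chao1-style lower-bound estimate of the number of distinct entities,
    given the entity labels observed in a random sample of records.
    A classic statistical estimator used purely for illustration; the
    thesis builds its own sampling-based pipeline."""
    counts = Counter(sample_entity_ids)
    observed = len(counts)                           # distinct entities seen in the sample
    f1 = sum(1 for c in counts.values() if c == 1)   # entities seen exactly once
    f2 = sum(1 for c in counts.values() if c == 2)   # entities seen exactly twice
    if f2 == 0:
        # Bias-corrected form when no entity is seen exactly twice.
        return observed + f1 * (f1 - 1) / 2.0
    return observed + (f1 * f1) / (2.0 * f2)

# Toy sample of resolved entity labels drawn from a larger dataset.
print(chao1_distinct_estimate(["e1", "e1", "e2", "e3", "e3", "e4"]))  # -> 5.0
```

Second, a minimal sketch of an exposure-based fairness check for a single ranked list, using a standard logarithmic position discount; the thesis's item-level and query-level fairness measures may be defined differently.

```python
import math
from collections import defaultdict

def group_exposure(ranking, item_groups):
    """Normalized, position-discounted exposure per group for one ranked list.

    ranking: item ids ordered best-first.
    item_groups: mapping from item id to group label.
    """
    exposure = defaultdict(float)
    for rank, item in enumerate(ranking, start=1):
        # Logarithmic position discount: top ranks receive more exposure.
        exposure[item_groups[item]] += 1.0 / math.log2(rank + 1)
    total = sum(exposure.values())
    return {group: share / total for group, share in exposure.items()}

def exposure_disparity(ranking, item_groups):
    """Gap between the most- and least-exposed groups (0 = perfectly balanced)."""
    shares = group_exposure(ranking, item_groups)
    return max(shares.values()) - min(shares.values())

# Toy query-level check: four ranked items from two groups.
ranking = ["a", "b", "c", "d"]
groups = {"a": "g1", "b": "g1", "c": "g2", "d": "g2"}
print(group_exposure(ranking, groups))      # per-group exposure shares
print(exposure_disparity(ranking, groups))  # disparity for this query
```

In practice, such a query-level disparity could be averaged over many queries, and an item-level view obtained by inspecting how much exposure each individual item receives; how the thesis combines these levels is described in the full text.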
Files attached to this record

File: Tesi_dottorato_Mathew.pdf (full thesis)
Access: open access
Type: Doctoral thesis
License: All rights reserved
Size: 3.57 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1733553