
PRONTO: Prompt-Based Detection of Semantic Containment Patterns in MLMs / De Bellis, A.; Anelli, V. W.; Di Noia, T.; Di Sciascio, E. - 15232:(2024), pp. 227-246. (Paper presented at the 23rd International Semantic Web Conference, ISWC 2024, held in the USA in 2024) [10.1007/978-3-031-77850-6_13].

PRONTO: Prompt-Based Detection of Semantic Containment Patterns in MLMs

De Bellis A.; Anelli V. W.; Di Noia T.; Di Sciascio E.
2024-01-01

Abstract

Masked Language Models (MLMs) such as BERT and RoBERTa excel at predicting missing words from context, but their grasp of deeper semantic relationships is still being assessed. Although MLMs have demonstrated impressive capabilities, it remains unclear whether they merely exploit statistical word co-occurrence or capture a deeper, structured understanding of meaning, akin to how knowledge is organized in ontologies. This question is attracting increasing interest, with researchers seeking to understand how MLMs might internally represent concepts such as ontological classes and semantic containment relations (e.g., sub-class and instance-of). Unveiling this knowledge could have significant implications for Semantic Web applications, but it requires a precise understanding of how these models express such relationships. This work investigates whether MLMs capture these relationships, presenting a novel approach that automatically leverages the predictions returned by MLMs to discover semantic containment relations in unstructured text. We achieve this by constructing a verbalizer, a component that translates the model's internal predictions into classification labels. Through a comprehensive probing procedure, we assess the method's effectiveness, reliability, and interpretability. Our findings demonstrate a key strength of MLMs: their ability to capture semantic containment relationships. These insights carry significant implications for applying MLMs to ontology construction and to aligning text data with ontologies.
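The abstract's central mechanism, a verbalizer that maps MLM mask-token predictions onto classification labels, can be illustrated with a minimal sketch. Everything below is a hypothetical example, not the paper's actual implementation: the label words, the relation labels, and the mocked probabilities (which in practice would come from an MLM's fill-mask output) are all assumptions made for illustration.

```python
# Hypothetical mapping from mask-fill words to semantic containment
# relations; the real verbalizer in the paper may differ.
VERBALIZER = {
    "type": "sub-class",
    "kind": "sub-class",
    "example": "instance-of",
    "instance": "instance-of",
}

def verbalize(token_probs):
    """Aggregate mask-token probabilities per relation label and
    return the highest-scoring semantic containment relation
    (or None if no label word appears in the predictions)."""
    scores = {}
    for token, prob in token_probs.items():
        label = VERBALIZER.get(token)
        if label is not None:
            scores[label] = scores.get(label, 0.0) + prob
    return max(scores, key=scores.get) if scores else None

# Mocked MLM output for a prompt like "A dog is a [MASK] of animal."
mock_probs = {"type": 0.41, "kind": 0.22, "breed": 0.18, "example": 0.03}
print(verbalize(mock_probs))  # → sub-class
```

The key design point the abstract hints at is that the classifier never sees the MLM's raw vocabulary distribution directly: the verbalizer projects it onto a small set of relation labels, which is what makes the predictions interpretable.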
2024
23rd International Semantic Web Conference, ISWC 2024
ISBN: 9783031778490, 9783031778506
Files for this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/283063
Citations:
  • Scopus: 0