Improving Language Model Predictions via Prompts Enriched with Knowledge Graphs

dc.bibliographicCitation.bookTitle: CEUR Workshop Proceedings
dc.bibliographicCitation.seriesTitle: CEUR Workshop Proceedings ; 3342
dc.bibliographicCitation.volume: 3342
dc.contributor.author: Brate, Ryan
dc.contributor.author: Minh-Dang, Hoang
dc.contributor.author: Hoppe, Fabian
dc.contributor.author: He, Yuan
dc.contributor.author: Meroño-Peñuela, Albert
dc.contributor.author: Sadashivaiah, Vijay
dc.contributor.editor: Alam, Mehwish
dc.contributor.editor: Buscaldi, Davide
dc.contributor.editor: Cochez, Michael
dc.contributor.editor: Osborne, Francesco
dc.contributor.editor: Reforgiato Recupero, Diego
dc.date.accessioned: 2024-02-01T15:03:41Z
dc.date.available: 2024-02-01T15:03:41Z
dc.date.issued: 2023
dc.description.abstract: Despite advances in deep learning and knowledge graphs (KGs), using language models for natural language understanding and question answering remains a challenging task. Pre-trained language models (PLMs) have been shown to leverage contextual information to complete cloze prompts, next-sentence completion, and question answering tasks in various domains. Unlike structured data querying in, e.g., KGs, mapping an input question to data that may or may not be stored by the language model is not a simple task. Recent studies have highlighted the improvements in the quality of information retrieved from PLMs that can be achieved by adding auxiliary data to otherwise naive prompts. In this paper, we explore the effects on language model performance of enriching prompts with additional contextual information drawn from the Wikidata KG. Specifically, we compare the performance of naive vs. KG-engineered cloze prompts for entity genre classification in the movie domain. Selecting a broad range of commonly available Wikidata properties, we show that enriching cloze-style prompts with Wikidata information can yield significantly higher recall for the BERT-large and RoBERTa-large PLMs investigated. However, it is also apparent that the optimal level of data enrichment differs between models.
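To illustrate the comparison the abstract describes, here is a minimal sketch of naive vs. KG-enriched cloze prompting with the Hugging Face transformers fill-mask pipeline. The model choice, the example entity, and the verbalised Wikidata facts are illustrative assumptions, not the paper's exact prompts, properties, or evaluation setup.

```python
# Sketch: compare a naive cloze prompt against one enriched with
# verbalised Wikidata-style facts (e.g. director P57, cast member P161).
# Assumptions: bert-large-uncased as the PLM; "Inception" as the entity.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-large-uncased")
mask = fill.tokenizer.mask_token  # "[MASK]" for BERT, "<mask>" for RoBERTa

naive = f"Inception is a {mask} film."
enriched = (
    "Inception was directed by Christopher Nolan and stars Leonardo DiCaprio. "
    f"Inception is a {mask} film."
)

for name, prompt in [("naive", naive), ("enriched", enriched)]:
    preds = fill(prompt, top_k=5)  # top-5 predicted genre tokens
    print(name, [p["token_str"].strip() for p in preds])
```

The enriched prompt simply prepends context sentences generated from KG triples, so the masked-token prediction can condition on facts the PLM may not reliably associate with the entity name alone.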
dc.description.version: publishedVersion
dc.identifier.uri: https://oa.tib.eu/renate/handle/123456789/14436
dc.identifier.uri: https://doi.org/10.34657/13467
dc.language.iso: eng
dc.publisher: Aachen, Germany : RWTH Aachen
dc.relation.essn: 1613-0073
dc.relation.uri: https://ceur-ws.org/Vol-3342/paper-3.pdf
dc.rights.license: CC BY 4.0
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject.ddc: 004
dc.subject.other: Prompt Learning
dc.subject.other: Pre-trained Language Model
dc.subject.other: Knowledge Graph
dc.title: Improving Language Model Predictions via Prompts Enriched with Knowledge Graphs
dc.type: BookPart
dc.type: Text
dcterms.event: Workshop on Deep Learning for Knowledge Graphs (DL4KG 2022), co-located with the 21st International Semantic Web Conference (ISWC 2022), online, 24 October 2022
tib.accessRights: openAccess
wgl.contributor: FIZ KA
wgl.subject: Informatik (Computer Science)
wgl.type: Buchkapitel / Sammelwerksbeitrag (book chapter / contribution to an edited volume)
Files
Original bundle: paper-3.pdf (1.2 MB, Adobe Portable Document Format)