2020.findings-emnlp.25@ACL

Total: 1

#1 Pretrain-KGE: Learning Knowledge Representation from Pretrained Language Models

Authors: Zhiyuan Zhang; Xiaoqian Liu; Yi Zhang; Qi Su; Xu Sun; Bin He

Conventional knowledge graph embedding (KGE) often suffers from limited knowledge representation, leading to performance degradation, especially in low-resource settings. To remedy this, we propose to enrich knowledge representation by leveraging the world knowledge captured in pretrained language models. Specifically, we present a universal training framework named Pretrain-KGE consisting of three phases: a semantic-based fine-tuning phase, a knowledge extracting phase, and a KGE training phase. Extensive experiments show that our proposed Pretrain-KGE improves results over KGE models, especially in low-resource settings.
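The abstract only names the three phases, so the sketch below is a hypothetical illustration of how such a pipeline could be wired together, not the authors' implementation: encode entity/relation descriptions with a pretrained language model, extract those vectors as initial embeddings, and then run conventional KGE training (a TransE-style margin loss is assumed here; the model names, example triples, and hyperparameters are all illustrative).

```python
# Hypothetical sketch of the three-phase Pretrain-KGE idea (names are illustrative,
# not the authors' code).

import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

# Phase 1: semantic-based encoding with a pretrained LM (fine-tuning omitted for brevity).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode(texts):
    """Mean-pool the LM outputs into fixed-size vectors for entity/relation descriptions."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state          # (B, T, 768)
    mask = batch["attention_mask"].unsqueeze(-1).float()     # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)              # (B, 768)

# Phase 2: knowledge extraction -- use the LM vectors to initialize KGE embeddings.
entities = ["Barack Obama, 44th U.S. president", "Honolulu, city in Hawaii"]
relations = ["place of birth"]
ent_emb = nn.Embedding.from_pretrained(encode(entities), freeze=False)
rel_emb = nn.Embedding.from_pretrained(encode(relations), freeze=False)

# Phase 3: conventional KGE training with a TransE-style margin ranking loss.
def transe_score(h, r, t):
    # Higher score = more plausible triple (negative L1 distance).
    return -torch.norm(ent_emb(h) + rel_emb(r) - ent_emb(t), p=1, dim=-1)

optimizer = torch.optim.Adam(
    list(ent_emb.parameters()) + list(rel_emb.parameters()), lr=1e-3
)
pos = (torch.tensor([0]), torch.tensor([0]), torch.tensor([1]))  # (Obama, born_in, Honolulu)
neg = (torch.tensor([1]), torch.tensor([0]), torch.tensor([0]))  # corrupted triple
loss = torch.relu(1.0 - transe_score(*pos) + transe_score(*neg)).mean()
loss.backward()
optimizer.step()
```

In this reading, the pretrained encoder supplies semantically informed initializations, which is what would help most when a KGE model sees few training triples for an entity or relation.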