Fast and Continual Knowledge Graph Embedding via Incremental LoRA (Part 5)

Liu J, Ke W, Wang P, et al. Fast and Continual Knowledge Graph Embedding via Incremental LoRA[J]. arXiv preprint arXiv:2407.05705, 2024.
Citations: 10

Figure 1 illustrates the IncLoRA architecture for CKGE. The upper part shows a KG that keeps growing as the story around Harry Potter unfolds, while each snapshot shows the KGE model equipped with incremental LoRAs. Figure 2 shows...
5 Conclusion
This paper proposes FastKGE, a new baseline model for CKGE that effectively preserves previously learned knowledge through the IncLoRA mechanism while significantly improving fine-tuning efficiency.
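To make the idea concrete, here is a minimal sketch (not the authors' released implementation) of an incremental LoRA embedding table: embeddings learned on earlier snapshots are frozen, and the entities added in a new snapshot are represented by a low-rank product A·B, so only the small adapter matrices are trained. The class name, shapes, and initialization below are assumptions.

```python
import torch
import torch.nn as nn

class IncLoRAEmbedding(nn.Module):
    """Sketch of an incremental LoRA entity-embedding table for CKGE."""

    def __init__(self, old_embeddings: torch.Tensor, num_new: int, rank: int):
        super().__init__()
        dim = old_embeddings.size(1)
        # Embeddings from earlier snapshots are frozen (stored as a buffer)
        # to preserve old knowledge and avoid catastrophic forgetting.
        self.register_buffer("old_emb", old_embeddings)
        # New entities are parameterized by a rank-r product A @ B; only these
        # small matrices receive gradients when training on the new snapshot.
        self.A = nn.Parameter(torch.randn(num_new, rank) * 0.01)
        self.B = nn.Parameter(torch.randn(rank, dim) * 0.01)

    def forward(self, idx: torch.Tensor) -> torch.Tensor:
        # Full table: frozen old rows followed by low-rank rows for new entities.
        full = torch.cat([self.old_emb, self.A @ self.B], dim=0)
        return full[idx]
```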
To address catastrophic forgetting, a graph layering strategy is applied when new knowledge arrives, so that new and old knowledge can be effectively distinguished and stored independently.
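The layering step could look roughly like the sketch below, which groups newly added entities by their hop distance from entities already covered by earlier snapshots; the BFS-based criterion and the function name are assumptions for illustration, and entities not connected to the old KG are simply left unlayered here.

```python
def layer_new_entities(new_triples, old_entities):
    """Group new entities into layers by BFS distance from the old KG."""
    # Undirected adjacency over the triples of the new snapshot.
    adj = {}
    for h, _, t in new_triples:
        adj.setdefault(h, set()).add(t)
        adj.setdefault(t, set()).add(h)

    layers, frontier, seen = [], set(old_entities), set(old_entities)
    while True:
        nxt = {n for e in frontier for n in adj.get(e, ()) if n not in seen}
        if not nxt:
            break
        layers.append(nxt)   # layer k holds new entities k hops from the old KG
        seen |= nxt
        frontier = nxt
    return layers
```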
In addition, an adaptive rank allocation mechanism is proposed to reduce computational cost, and an incremental low-rank adapter learning method is designed on top of it.
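A minimal sketch of adaptive rank allocation under these assumptions: each layer's LoRA rank is scaled with the layer's size around a base rank and clamped to a budget. The specific scaling rule and default values are illustrative, not the paper's exact formula.

```python
def allocate_ranks(layer_sizes, base_rank=8, min_rank=2, max_rank=32):
    """Assign a LoRA rank to each entity layer, larger layers getting more."""
    total = sum(layer_sizes) or 1
    ranks = []
    for size in layer_sizes:
        # Keep the average rank near base_rank while scaling with layer size.
        r = round(base_rank * len(layer_sizes) * size / total)
        ranks.append(max(min_rank, min(max_rank, r)))
    return ranks

# Example: three layers with 500, 120, and 30 new entities.
# allocate_ranks([500, 120, 30]) -> [18, 4, 2] after clamping.
```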
Future work will investigate how to perform CKGE (continual knowledge graph embedding) when the knowledge in KGs changes.
