[Paper Review] Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks

Original paper: https://arxiv.org/abs/2005.11401

Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still lim... (arxiv.org)

1. Introduction
Pre-trained neural lan..

2025. 4. 20.