Mitigating LLM hallucination is necessarily a complex, systemic problem: we can combine different technical approaches and coordinate across multiple layers to reduce hallucination. Although no existing approach can eliminate hallucination at its root, we believe that with continued exploration the field will eventually find more effective ways to constrain LLM hallucination, and we look forward to another wave of explosive growth in LLM applications when that happens.

JD Retail has long been at the forefront of AI technology exploration. As the company continues to invest in and cultivate the AI field, we believe JD will deliver more advanced and practical technical achievements, bringing a deep and lasting impact to the industry and to society as a whole.