Paper
28 March 2024
Building end-to-end dialogue system with large language models
Jie Fan, Guojun Ma
Proceedings Volume 13091, Fifteenth International Conference on Signal Processing Systems (ICSPS 2023); 130912Q (2024) https://doi.org/10.1117/12.3022781
Event: Fifteenth International Conference on Signal Processing Systems (ICSPS 2023), 2023, Xi’an, China
Abstract
To eliminate the complex module dependencies of conventional dialogue systems and to improve the system's ability to understand deep knowledge in natural language and generate more coherent text, this paper presents an end-to-end dialogue system based on large language models. First, low-rank adaptation (LoRA) is used to fine-tune sequence-to-sequence large language models, reducing system complexity and the cost of model fine-tuning. Then, reinforcement learning from human feedback (RLHF) is applied so that the generated responses align more closely with human expectations. Finally, in-context learning is used to adapt the model to specific tasks, improving its flexibility and adaptability. Experimental results show that the system performs well in both automatic evaluation and practical use and has strong application value.
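The low-rank adaptation step the abstract describes can be illustrated with a minimal NumPy sketch. All names, shapes, and values below are illustrative assumptions, not details from the paper: a frozen pretrained weight matrix W is adapted by a trainable low-rank product B @ A, so only rank * (d_in + d_out) parameters are updated instead of the full d_out * d_in.

```python
import numpy as np

# Minimal sketch of low-rank adaptation (LoRA). Shapes and the rank are
# hypothetical; in practice LoRA is applied to attention/projection
# weights inside a sequence-to-sequence transformer.
rng = np.random.default_rng(0)

d_in, d_out, rank = 64, 64, 4            # rank << d_in keeps trainable params small
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))              # trainable up-projection, zero-init
                                         # so the adapted model starts
                                         # identical to the base model

def lora_forward(x, scale=1.0):
    """Forward pass: frozen path plus scaled low-rank update."""
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted output equals the base output.
assert np.allclose(lora_forward(x), W @ x)

# Parameter savings: full fine-tuning trains d_out * d_in weights,
# LoRA trains only rank * (d_in + d_out).
full_params = d_out * d_in               # 64 * 64 = 4096
lora_params = rank * (d_in + d_out)      # 4 * 128 = 512
print(full_params, lora_params)
```

This is why the abstract can claim reduced fine-tuning cost: here the trainable parameter count drops by a factor of 8, and the gap grows with model width since LoRA's cost scales linearly in the layer dimensions rather than quadratically.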
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Jie Fan and Guojun Ma "Building end-to-end dialogue system with large language models", Proc. SPIE 13091, Fifteenth International Conference on Signal Processing Systems (ICSPS 2023), 130912Q (28 March 2024); https://doi.org/10.1117/12.3022781
KEYWORDS: Data modeling, Education and training, Performance modeling, Systems modeling, Autoregressive models, Mathematical optimization, Machine learning