AN ANALYSIS OF THE EFFICIENCY OF GENERATIVE AI ALGORITHMS FOR CREATING NATURAL DIALOGUE
Kuznetsov Alexander, CBDO, Co-Founder at Voctiv, Manila, Philippines

Abstract
In the modern world, artificial intelligence (AI) plays an increasingly important role in many fields of human activity, and one of its most promising applications is the generation of natural dialogue. The purpose of this work is to analyze the efficiency of generative AI algorithms for creating natural dialogue. The relevance of the topic stems from the growing interest in using AI to build dialogue systems capable of interacting with people in a natural way. Natural language generation is a fundamental task in artificial intelligence, with applications ranging from chatbots to virtual assistants. This study provides a comprehensive analysis of the efficiency of various generative AI algorithms for dialogue generation: the performance of modern models in producing coherent and contextually appropriate responses is assessed using both quantitative metrics and human evaluation. Additionally, the study explores how training data size and training techniques affect the quality of generated dialogue. The results offer insight into the strengths and weaknesses of current generative AI approaches to dialogue generation and can be useful for developers of dialogue systems, researchers in the field of AI, and anyone interested in applying AI in everyday life.
Keywords
Generative models, Natural language interface, Transformer models
Copyright License
Copyright (c) 2024 Kuznetsov Alexander
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors retain the copyright of their manuscripts, and all Open Access articles are disseminated under the terms of the Creative Commons Attribution License 4.0 (CC-BY), which permits unrestricted use, distribution, and reproduction in any medium, provided that the original work is appropriately cited. The use of general descriptive names, trade names, trademarks, and so forth in this publication, even if not specifically identified, does not imply that these names are not protected by the relevant laws and regulations.