In recent years, Artificial Intelligence (AI) has made remarkable strides, particularly in language comprehension and modelling. At the forefront of this progress are Large Language Models (LLMs), which have advanced significantly since 2020. This report delves into the state of the art of LLMs and offers technical recommendations for their adaptation within the ROLEPL-AI project.
By compiling a comprehensive overview of current advancements in AI language models, this deliverable aims to serve as a "small reference book", guiding AI adaptation and application methodologies ahead of practical experimentation within the project. It builds upon the findings of our Review of the status of research in AI and education and aligns with our upcoming Recommendations for use of AI in education and ALTAI self-assessment.
The rapid progress in LLMs can be attributed to the introduction and subsequent refinement of the transformer neural network architecture. Modern LLMs exhibit a range of emergent abilities, including in-context learning, instruction following, and step-by-step reasoning. These advancements have garnered significant public interest, as evidenced by the surge in scientific publications and media coverage.
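As a minimal illustration of in-context learning, the model picks up a task from a handful of demonstrations embedded directly in the prompt, with no weight updates. The sketch below only builds such a prompt; the review texts, labels, and helper name are invented for demonstration, and a real LLM would receive the resulting string and continue it with its answer.

```python
# Minimal sketch of a few-shot (in-context learning) prompt.
# All examples, labels, and the function name are invented for
# illustration; only the prompt-formatting pattern is the point.

def build_few_shot_prompt(examples, query):
    """Format demonstration pairs plus a new query into one prompt string."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The final item has no label: the model is expected to complete it.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

examples = [
    ("A gripping story from start to finish.", "positive"),
    ("Dull characters and a predictable plot.", "negative"),
]
prompt = build_few_shot_prompt(examples, "I loved every minute of it.")
print(prompt)
```

The same pattern scales to more demonstrations or other tasks simply by changing the example pairs.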
While LLMs have demonstrated potential in role-play applications, several challenges remain. Using instruction-following benchmarks and handcrafted prompts, we evaluate a series of open-source models and survey methods for adapting LLMs, weighing each method against the data and computational resources available to the project. This analysis will inform our model adaptation process.
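To make the benchmark-based evaluation concrete, the toy sketch below scores model outputs against reference answers by exact match. The benchmark items, outputs, and function name are invented for illustration; real instruction-following evaluations use far larger suites and softer metrics than exact match.

```python
# Toy sketch of exact-match scoring on an instruction-following benchmark.
# Items and outputs are invented; this only demonstrates the scoring loop.

benchmark = [
    {"instruction": "Answer yes or no: is 7 prime?", "reference": "yes"},
    {"instruction": "Give the capital of France in one word.", "reference": "Paris"},
]

def exact_match_accuracy(outputs, items):
    """Fraction of outputs matching the reference (case-insensitive)."""
    hits = sum(
        out.strip().lower() == item["reference"].strip().lower()
        for out, item in zip(outputs, items)
    )
    return hits / len(items)

model_outputs = ["yes", "paris"]  # pretend completions from one model
score = exact_match_accuracy(model_outputs, benchmark)
print(score)  # → 1.0
```

Running the same loop over several candidate models yields comparable per-model scores.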
As AI continues to evolve, staying abreast of the latest developments and tailoring solutions to specific needs is crucial. This document not only serves as a reference for current advancements but also sets the stage for future work in integrating AI into educational and role-play contexts.
Figure: Workflow of the algorithm for Reinforcement Learning from Human Feedback (RLHF) by Zhao (2023)
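The RLHF workflow referenced in the figure can be caricatured as a tiny policy-gradient loop: a policy samples a response, a reward model standing in for human preference scores it, and a REINFORCE-style update shifts probability toward preferred responses. Everything below (the two canned responses, reward values, learning rate, and iteration count) is invented for illustration and omits the reward-model training and KL regularization used in practice.

```python
import math
import random

# Toy RLHF sketch: all responses, rewards, and hyperparameters invented.
random.seed(0)

responses = ["helpful answer", "unhelpful answer"]
reward = {"helpful answer": 1.0, "unhelpful answer": -1.0}  # stand-in for human feedback

logits = [0.0, 0.0]  # the "policy" is a softmax over two responses
lr = 0.5             # learning rate

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

for _ in range(200):
    probs = softmax(logits)
    i = random.choices(range(2), weights=probs)[0]  # sample a response
    r = reward[responses[i]]
    # REINFORCE: gradient of log-prob of the sampled action is one-hot(i) - probs
    for j in range(2):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += lr * r * grad

probs = softmax(logits)
print(round(probs[0], 3))  # probability of the preferred response, now close to 1
```

The loop steadily concentrates probability mass on the response the reward signal prefers, which is the essence of the optimization stage the figure depicts.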