Auto-survey Challenge - A&O (Apprentissage et Optimisation)
Conference paper, 2023

Auto-survey Challenge

Abstract

We present a novel platform for evaluating the capability of Large Language Models (LLMs) to autonomously compose and critique survey papers spanning a vast array of disciplines, including the sciences, humanities, education, and law. Within this framework, AI systems undertake a simulated peer-review mechanism akin to that of traditional scholarly journals, with human organizers serving in an editorial oversight capacity. Building on this framework, we organized a competition at the AutoML Conference 2023. Entrants are tasked with presenting stand-alone models capable of authoring articles from designated prompts and subsequently appraising them. Assessment criteria include clarity, reference appropriateness, accountability, and the substantive value of the content. This paper presents the design of the competition, including the implementation, baseline submissions, and methods of evaluation.
Main file

main.pdf (269.87 KB)
Origin: files produced by the author(s)

Dates and versions

hal-04206578 , version 1 (05-10-2023)
hal-04206578 , version 2 (07-10-2023)

License

Attribution

Identifiers

Cite

Thanh Gia Hieu Khuong, Benedictus Kent Rachmat. Auto-survey Challenge: Advancing the Frontiers of Automated Literature Review. JDSE 2023 - 8th Junior Conference on Data Science and Engineering, Sep 2023, Orsay, France. ⟨hal-04206578v2⟩
165 Views
63 Downloads

