Conference Paper


Benchmarks and Metrics for Evaluations of Code Generation: A Critical Review

Abstract

With the rapid development of Large Language Models (LLMs), a large number of machine learning models have been developed to assist programming tasks, including the generation of program code from natural language input. However, how to evaluate such LLMs for this task remains an open problem, despite the considerable research effort that has been devoted to evaluating and comparing them. This paper provides a critical review of the existing work on the testing and evaluation of these tools, focusing on two key aspects: the benchmarks and the metrics used in the evaluations. Based on the review, further research directions are discussed.



The full-text files of this resource are currently embargoed.
Embargo end: 2025-09-25

Authors

Paul, Debalina Ghosh
Zhu, Hong
Bayley, Ian

Oxford Brookes departments

School of Engineering, Computing and Mathematics

Dates

Year of publication: 2024
Date of RADAR deposit: 2024-06-19



“© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.”


Related resources

This RADAR resource is the Accepted Manuscript of [arXiv preprint, version 1] Benchmarks and Metrics for Evaluations of Code Generation: A Critical Review

Details

  • Owner: Daniel Croft
  • Collection: Outputs
  • Version: 1
  • Status: Live
  • Views (since Sept 2022): 355