Beyond Reference-Based Metrics:
Analyzing Behaviors of Open LLMs on Data-to-Text Generation

Zdeněk Kasner and Ondřej Dušek
Institute of Formal and Applied Linguistics
Charles University

Abstract

We analyze the behaviors of open large language models (LLMs) on the task of data-to-text (D2T) generation, i.e., generating coherent and relevant text from structured data. To avoid the issue of LLM training data contamination with standard benchmarks, we design Quintd, a tool for collecting novel structured data records from public APIs. Using a dataset collected with Quintd and leveraging reference-free evaluation, we analyze model behaviors on five D2T generation tasks. We find that recent open LLMs (Llama 2, Mistral, and Zephyr) can generate fluent and coherent text from standard data formats in zero-shot settings. However, we also show that the semantic accuracy of the outputs is a major issue: according to both our GPT-4-based metric and human annotators, more than 80% of the outputs of open LLMs contain a semantic error. We publicly release the code, data, and model outputs.
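
To illustrate the setup described in the abstract, here is a minimal sketch (not the actual Quintd code) of the core idea: download a fresh structured record from a public API, so that the record cannot have leaked into an LLM's training data, and wrap it in a zero-shot data-to-text prompt. The endpoint URL, query parameters, and prompt wording below are hypothetical placeholders.

    # Minimal sketch of the Quintd-style workflow (hypothetical, not the
    # actual Quintd implementation): collect a novel structured record from
    # a public API and build a zero-shot D2T prompt from it.
    import json
    import requests

    API_URL = "https://example.com/api/forecast"  # hypothetical public API

    def collect_record(params: dict) -> dict:
        """Download one fresh structured data record (e.g., a JSON forecast)."""
        response = requests.get(API_URL, params=params, timeout=30)
        response.raise_for_status()
        return response.json()

    def build_prompt(record: dict) -> str:
        """Wrap the raw structured record in a zero-shot D2T instruction."""
        return (
            "Based on the given data, write a short, fluent, and factually "
            "accurate text. Do not mention anything that is not in the data.\n\n"
            f"Data:\n{json.dumps(record, indent=2)}\n\nText:"
        )

    if __name__ == "__main__":
        record = collect_record({"city": "Prague"})  # hypothetical query
        print(build_prompt(record))

Because each record is collected at evaluation time, model outputs cannot be scored against pre-existing references; instead, they are evaluated reference-free, e.g., by a GPT-4-based metric or human annotators checking each output claim against the input data.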

How to cite us

@misc{kasner2024referencebased,
    title={Beyond Reference-Based Metrics: Analyzing Behaviors of Open LLMs on Data-to-Text Generation}, 
    author={Zdeněk Kasner and Ondřej Dušek},
    year={2024},
    eprint={2401.10186},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}