AQuA - Combining Experts’ and Non-Experts’ Views To Assess Deliberation Quality in Online Discussions Using LLMs
Published in The First Workshop on Language-driven Deliberation Technology (DELITE2024) LREC-COLING 2024, Torino, Italy, 2024
In this work, we introduce AQuA, an additive score that calculates a unified deliberative quality score from multiple indices for each discussion post. Unlike other singular scores, AQuA preserves information on the deliberative aspects present in comments, enhancing model transparency. We develop adapter models for 20 deliberative indices, and calculate correlation coefficients between experts' annotations and the deliberativeness perceived by non-experts to weight the individual indices into a single deliberative score. We demonstrate that the AQuA score can be computed easily from pre-trained adapters and aligns well with annotations on other datasets that have not been seen during training. The analysis of experts' vs. non-experts' annotations confirms theoretical findings in the social science literature.
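The additive combination described above can be sketched in a few lines: each adapter produces a score for one deliberative index, and the correlation-derived weights combine them into a single value. The index names and weight values below are illustrative placeholders, not the paper's actual adapters or coefficients.

```python
# Minimal sketch of an additive quality score: a weighted sum of
# per-index scores. Names and weights are hypothetical examples.
def additive_score(index_scores: dict, weights: dict) -> float:
    """Combine per-index scores into one score via a weighted sum."""
    assert index_scores.keys() == weights.keys()
    return sum(weights[k] * index_scores[k] for k in index_scores)

# Example: three hypothetical indices with made-up weights.
scores = {"justification": 0.9, "storytelling": 0.2, "sarcasm": 0.1}
weights = {"justification": 0.20, "storytelling": 0.14, "sarcasm": -0.17}

print(round(additive_score(scores, weights), 3))  # prints 0.191
```

Because the result is a plain weighted sum, each index's contribution remains inspectable, which is the transparency property the abstract highlights.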
Recommended citation: Behrendt, M., Wagner, S. S., Ziegele, M., Wilms, L., Stoll, A., Heinbach, D., Harmeling, S. (2024). "AQuA - Combining Experts' and Non-Experts' Views To Assess Deliberation Quality in Online Discussions Using LLMs". arXiv preprint. https://doi.org/10.48550/arXiv.2404.02761 http://mabehrendt.github.io/files/2024aqua.pdf