Miruna Clinciu
Ph.D. Student, Edinburgh Centre for Robotics
Verified email at hw.ac.uk
Title · Cited by · Year
BLOOM: A 176B-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
Cited by 1124 · 2023
Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions
DM Howcroft, A Belz, M Clinciu, D Gkatzia, SA Hasan, S Mahamood, ...
13th International Conference on Natural Language Generation 2020, 169-182, 2020
Cited by 167 · 2020
The GEM benchmark: Natural language generation, its evaluation and metrics
S Gehrmann, T Adewumi, K Aggarwal, PS Ammanamanchi, ...
arXiv preprint arXiv:2102.01672, 2021
Cited by 131 · 2021
A survey of explainable AI terminology
MA Clinciu, HF Hastie
1st Workshop on Interactive Natural Language Technology for Explainable …, 2019
Cited by 88 · 2019
You reap what you sow: On the challenges of bias evaluation under multilingual settings
Z Talat, A Névéol, S Biderman, M Clinciu, M Dey, S Longpre, S Luccioni, ...
Proceedings of BigScience Episode# 5--Workshop on Challenges & Perspectives …, 2022
Cited by 65 · 2022
A study of automatic metrics for the evaluation of natural language explanations
M Clinciu, A Eshghi, H Hastie
arXiv preprint arXiv:2103.08545, 2021
Cited by 45 · 2021
Underreporting of errors in NLG output, and what to do about it
E Van Miltenburg, MA Clinciu, O Dušek, D Gkatzia, S Inglis, L Leppänen, ...
arXiv preprint arXiv:2108.01182, 2021
Cited by 30 · 2021
Emergent structures and training dynamics in large language models
R Teehan, M Clinciu, O Serikov, E Szczechla, N Seelam, S Mirkin, ...
Proceedings of BigScience Episode# 5--Workshop on Challenges & Perspectives …, 2022
Cited by 12 · 2022
Needle in a haystack: An analysis of high-agreement workers on mturk for summarization
L Zhang, S Mille, Y Hou, D Deutsch, E Clark, Y Liu, S Mahamood, ...
arXiv preprint arXiv:2212.10397, 2022
Cited by 6 · 2022
Barriers and enabling factors for error analysis in NLG research
E Van Miltenburg, M Clinciu, O Dušek, D Gkatzia, S Inglis, L Leppänen, ...
Northern European Journal of Language Technology 9 (1), 2023
Cited by 5 · 2023
It's Common Sense, isn't it? Demystifying Human Evaluations in Commonsense-enhanced NLG systems
M Clinciu, D Gkatzia, S Mahamood
Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), 1-12, 2021
Cited by 5 · 2021
Needle in a haystack: An analysis of finding qualified workers on mturk for summarization
L Zhang, J Sedoc, S Mille, Y Hou, S Gehrmann, D Deutsch, E Clark, Y Liu, ...
Retrieved June 8, 2023, 2022
Cited by 3 · 2022
BLOOM: A 176B-parameter open-access multilingual language model
BS Workshop, TL Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, ...
arXiv preprint arXiv:2211.05100, 2022
Cited by 2 · 2022
I don't understand! Evaluation Methods for Natural Language Explanations
M Clinciu, A Eshghi, H Hastie
Cited by 1 · 2021
Let's Evaluate Explanations!
MA Clinciu, H Hastie
HRI 2020 Workshop on Test Methods and Metrics, 2020
Cited by 1 · 2020
On the Role of Summary Content Units in Text Summarization Evaluation
M Nawrath, A Nowak, T Ratz, DC Walenta, J Opitz, LFR Ribeiro, J Sedoc, ...
arXiv preprint arXiv:2404.01701, 2024
2024
It's Common Sense, isn't it? Demystifying Human Evaluations in Commonsense-enhanced NLG systems
S Mahamood, M Clinciu, D Gkatzia
2021
Articles 1–17