Educational Natural Language Processing (EduNLP)
The junior research group "EduNLP" is part of the research center CATALPA.
Automatic evaluation of free-text answers and automated feedback for learners and teachers: our junior research group investigates how natural language processing methods can be used to make this work reliably in the future.
Goals and research questions
This project investigates how natural language processing methods can be used to automatically score free-text answers and provide learners and teachers with automatic feedback.
A core research question of the project is how learners can be provided with formative feedback on their essays. This feedback can cover different aspects of writing, such as syntactic or lexical variety, structure and argumentation, coherence, thematic fit with the topic, or the use of figurative language.
A number of sub-questions arise from this overall goal, such as:
- How well do existing scoring algorithms perform for a given phenomenon, and how can they be adapted to a specific use case?
- What are the properties of useful formative feedback from the learners’ perspective, and how can datasets of such feedback messages be collected?
- How can we automate such feedback, for example by training a decision tree to select an appropriate feedback message from a pool of human-created messages (sketched below, after this list), or by using natural language generation methods to produce the feedback?
- How do humans judge such feedback, e.g. in terms of understandability and naturalness, and how does the feedback influence the learning outcome?
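To make the decision-tree idea from the list above concrete, here is a minimal Python sketch that selects a feedback message from a pool of human-created messages based on simple essay features. Everything in it (the feature names, training values, and message texts) is invented for illustration; it is not the project's actual system.

```python
# Minimal sketch (not the project's actual system): a decision tree that
# picks one feedback message from a pool of human-created messages.
# Feature names, training values, and feedback texts are invented here.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-essay features: [lexical_diversity, argument_count, coherence_score]
X_train = [
    [0.30, 1, 0.40],
    [0.55, 3, 0.70],
    [0.35, 2, 0.80],
    [0.70, 4, 0.90],
]
# Each label is an index into the feedback pool below.
y_train = [0, 1, 2, 3]

feedback_pool = [
    "Try to vary your word choice and add more supporting arguments.",
    "Good range of arguments; work on linking your ideas more smoothly.",
    "Your text reads coherently; consider adding another argument.",
    "Well done: varied language, several arguments, and a coherent structure.",
]

# Train the selector on (features, message index) pairs.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

# Select a feedback message for a new essay.
new_essay = [[0.40, 2, 0.60]]
print(feedback_pool[clf.predict(new_essay)[0]])
```

In practice, the features would come from NLP analyses of the essay (e.g., measures of lexical variety or argument structure), and the training labels from human annotators matching essays to appropriate feedback messages.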
Team
- Yuning Ding (Doctoral Researcher, start date: 01.01.2022)
- Viet Nguyen (Student Assistant, start date: 01.02.2022)
- Finn Brodmann (Student Assistant, start date: 01.02.2022)
- Joey Pehlke (Student Assistant, start date: 01.02.2022)
Duration
December 2021 – November 2024
Publications
2024
Journals
- Jansen, T., Meyer, J., Fleckenstein, J., Horbach, A., Keller, S., & Möller, J. (2024). Individualizing goal-setting interventions using automated writing evaluation to support secondary school students’ text revisions. Learning and Instruction, 89, 101847.
- Meyer, J., Jansen, T., Schiller, R., Liebenow, L. W., Steinbach, M., Horbach, A., & Fleckenstein, J. (2024). Using LLMs to bring evidence-based feedback into the classroom: AI-generated feedback increases secondary students’ text revision, motivation, and positive emotions. Computers and Education: Artificial Intelligence, 6, 100199.
- Schaller, N.-J., Horbach, A., Höft, L. I., Ding, Y., Bahr, J. L., Meyer, J., & Jansen, T. (2024). DARIUS: A comprehensive learner corpus for argument mining in German-language essays.
- Shin, H. J., Andersen, N., Horbach, A., Kim, E., Baik, J., & Zehner, F. (2024). Operational automatic scoring of text responses in 2016 ePIRLS: Performance and linguistic variance.
Conferences
- Bexte, M., Horbach, A., & Zesch, T. (2024). EVil-probe - a composite benchmark for extensive visio-linguistic probing. In N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, & N. Xue (Eds.), Proceedings of the 2024 joint international conference on computational linguistics, language resources and evaluation (LREC-COLING 2024) (pp. 6682–6700). ELRA; ICCL. https://aclanthology.org/2024.lrec-main.591
- Ding, Y., Kashefi, O., Somasundaran, S., & Horbach, A. (2024). When argumentation meets cohesion: Enhancing automatic feedback in student writing. Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 17513–17524.
- Shardlow, M., Alva-Manchego, F., Batista-Navarro, R. T., Bott, S., Ramirez, S. C., Cardon, R., François, T., Hayakawa, A., Horbach, A., Huelsing, A., et al. (2024). An extensible massively multilingual lexical simplification pipeline dataset using the MultiLS framework. Proceedings of the 3rd Workshop on Tools and Resources for People with REAding DIfficulties (READI) @ LREC-COLING 2024, 38–46.
Talks and Poster Presentations
- Wehrhahn, F., Ding, Y., Gaschler, R., Zhao, F., & Horbach, A. (2024, June 26–28). Argumentative essay writing practice with automated feedback and highlighting. [Poster Presentation]. EARLI SIG WRITING 2024 – ways2write, Université Paris Nanterre, France.
2023
Journals
- Horbach, A., Pehlke, J., Laarmann-Quante, R., & Ding, Y. (2023). Crosslingual content scoring in five languages using machine-translation and multilingual transformer models. International Journal of Artificial Intelligence in Education, 1–27.
- Zesch, T., Horbach, A., & Zehner, F. (2023). To score or not to score: Factors influencing performance and feasibility of automatic content scoring of text responses. Educational Measurement: Issues and Practice, 42(1), 44–58. https://doi.org/10.1111/emip.12544
Conferences
- Bexte, M., Horbach, A., & Zesch, T. (2023). Similarity-based content scoring - a more classroom-suitable alternative to instance-based scoring? Findings of the Association for Computational Linguistics: ACL 2023, 1892–1903. https://aclanthology.org/2023.findings-acl.119
- Ding, Y., Bexte, M., & Horbach, A. (2023a). CATALPA_EduNLP at PragTag-2023. In M. Alshomary, C.-C. Chen, S. Muresan, J. Park, & J. Romberg (Eds.), Proceedings of the 10th workshop on argument mining (pp. 197–201). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.argmining-1.22
- Ding, Y., Bexte, M., & Horbach, A. (2023b). Score it all together: A multi-task learning study on automatic scoring of argumentative essays. Findings of the Association for Computational Linguistics: ACL 2023, 13052–13063. https://aclanthology.org/2023.findings-acl.825
- Ding, Y., Trüb, R., Fleckenstein, J., Keller, S., & Horbach, A. (2023). Sequence tagging in EFL email texts as feedback for language learners. Proceedings of the 12th Workshop on NLP for Computer Assisted Language Learning, 53–62.
- Mousa, A., Laarmann-Quante, R., & Horbach, A. (2023). Manual and automatic identification of similar arguments in EFL learner essays. Proceedings of the 12th Workshop on NLP for Computer Assisted Language Learning, 85–93.
Proceedings
- Kochmar, E., Burstein, J., Horbach, A., Laarmann-Quante, R., Madnani, N., Tack, A., Yaneva, V., Yuan, Z., & Zesch, T. (Eds.). (2023). Proceedings of the 18th workshop on innovative use of NLP for building educational applications (BEA 2023). Association for Computational Linguistics.
Talks and Poster Presentations
- Zehner, F., Zesch, T., & Horbach, A. (2023a, February 28–March 2). Mehr als nur Technologie- und Fairnessfrage: Ethische Prinzipien beim automatischen Bewerten von Textantworten aus Tests [Paper Presentation]. 10th GEBF Annual Conference, Universität Duisburg-Essen.
- Zehner, F., Zesch, T., & Horbach, A. (2023b, February 28–March 2). To score or not to score? Machbarkeits- und Performanzfaktoren für automatisches Scoring von Textantworten [Paper Presentation]. 10th GEBF Annual Conference, Universität Duisburg-Essen.
2022
Conferences
- Bexte, M., Horbach, A., & Zesch, T. (2022). Similarity-based content scoring - how to make S-BERT keep up with BERT. Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), 118–123. https://aclanthology.org/2022.bea-1.16
- Bexte, M., Laarmann-Quante, R., Horbach, A., & Zesch, T. (2022). LeSpell - a multi-lingual benchmark corpus of spelling errors to develop spellchecking methods for learner language. Proceedings of the Language Resources and Evaluation Conference, 697–706. https://aclanthology.org/2022.lrec-1.73
- Ding, Y., Bexte, M., & Horbach, A. (2022). Don’t drop the topic - the role of the prompt in argument identification in student writing. Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), 124–133. https://aclanthology.org/2022.bea-1.17
- Horbach, A., Laarmann-Quante, R., Liebenow, L., Jansen, T., Keller, S., Meyer, J., Zesch, T., & Fleckenstein, J. (2022). Bringing automatic scoring into the classroom - measuring the impact of automated analytic feedback on student writing performance. Swedish Language Technology Conference and NLP4CALL, 72–83. https://ecp.ep.liu.se/index.php/sltc/article/view/580/550
- Laarmann-Quante, R., Schwarz, L., Horbach, A., & Zesch, T. (2022). ‘Meet me at the ribary’ – acceptability of spelling variants in free-text answers to listening comprehension prompts. Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), 173–182. https://aclanthology.org/2022.bea-1.22
Proceedings
- Kochmar, E., Burstein, J., Horbach, A., Laarmann-Quante, R., Madnani, N., Tack, A., Yaneva, V., Yuan, Z., & Zesch, T. (Eds.). (2022). Proceedings of the 17th workshop on innovative use of NLP for building educational applications (BEA 2022). Association for Computational Linguistics. https://aclanthology.org/2022.bea-1.0
Chapters in Edited Books
- Horbach, A. (2022). Werkzeuge für die automatische Sprachanalyse. In M. Beißwenger, L. Lemnitzer, & C. Müller-Spitzer (Eds.), Forschen in der Linguistik. Eine Methodeneinführung für das Germanistik-Studium. Wilhelm Fink (UTB).
2021
Journals
- Zesch, T., Horbach, A., & Laarmann-Quante, R. (2021). Künstliche Intelligenz in der Bildung. Unikate: Berichte aus Forschung und Lehre, 56: Junge Wilde - Die nächste Generation, 95–103. https://www.uni-due.de/unikate/pdf/UNIKATE_2021_056_10_Zesch.pdf
Conferences
- Bexte, M., Horbach, A., & Zesch, T. (2021). Implicit phenomena in short-answer scoring data. Proceedings of the First Workshop on Understanding Implicit and Underspecified Language.
- Haring, C., Lehmann, R., Horbach, A., & Zesch, T. (2021). C-Test Collector: A proficiency testing application to collect training data for C-tests. Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications, 180–184. https://www.aclweb.org/anthology/2021.bea-1.19
Proceedings
- Burstein, J., Horbach, A., Kochmar, E., Laarmann-Quante, R., Leacock, C., Madnani, N., Pilán, I., Yannakoudakis, H., & Zesch, T. (Eds.). (2021). Proceedings of the 16th workshop on innovative use of NLP for building educational applications. Association for Computational Linguistics. https://www.aclweb.org/anthology/2021.bea-1.0
2020
Journals
- Wahlen, A., Kuhn, C., Zlatkin-Troitschanskaia, O., Gold, C., Zesch, T., & Horbach, A. (2020). Automated scoring of teachers’ pedagogical content knowledge - a comparison between human and machine scoring. Frontiers in Education. https://www.frontiersin.org/articles/10.3389/feduc.2020.00149/pdf
Conferences
- Ding, Y., Horbach, A., Wang, H., Song, X., & Zesch, T. (2020). Chinese content scoring: Open-access datasets and features on different segmentation levels. Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing (AACL-IJCNLP 2020). https://www.aclweb.org/anthology/2020.aacl-main.37.pdf
- Ding, Y., Riordan, B., Horbach, A., Cahill, A., & Zesch, T. (2020). Don’t take "nswvtnvakgxpm" for an answer - the surprising vulnerability of automatic content scoring systems to adversarial input. Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020). https://www.aclweb.org/anthology/2020.coling-main.76.pdf
- Horbach, A., Aldabe, I., Bexte, M., Lacalle, O. de, & Maritxalar, M. (2020). Appropriateness and pedagogic usefulness of reading comprehension questions. Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC 2020). https://www.aclweb.org/anthology/2020.lrec-1.217.pdf