Dr. Andrea Horbach

Head of the junior research group EduNLP, deputy member of the leadership team

Email: andrea.horbach

Phone: +49 2331 987-1702

Universitätsstr. 27 – PRG / Building 5
Room A 107 (1st floor)
58097 Hagen

What is my role in D²L²?

As a computational linguist, I lead the junior research group EduNLP. I am fascinated by how computers can analyze, understand, and even produce human language, even though language is so complex and ambiguous. Through automatic language processing, I want to support learners in writing better texts and enable teachers to evaluate texts more efficiently.

Why D²L²?

For us humans, language is the communication medium of choice in most situations. In online teaching in particular, it has so far often been necessary to fall back on “language-free” task formats because computers can evaluate them automatically more easily. In D²L², I want to help ensure that digital teaching can be geared to the needs of learners rather than having to subordinate itself to what is technically feasible.

Curriculum vitae

    • Head of the junior research group “Educational Natural Language Processing” in the research focus area D²L² “Digitalisierung, Diversität und Lebenslanges Lernen. Konsequenzen für die Hochschulbildung“, FernUniversität in Hagen (since 12/2021)
    • Research associate, Language Technology Lab, Universität Duisburg-Essen (10/2016 – 11/2021)
    • PhD in Computational Linguistics, Universität des Saarlandes, Saarbrücken (2018)
    • Research associate and doctoral candidate at the Department of Computational Linguistics, Universität des Saarlandes, Saarbrücken (04/2010 – 09/2016)
    • Diplom in Computational Linguistics, Universität des Saarlandes, Saarbrücken (2008)

Research interests

    • Natural language processing for educational applications
    • Automatic scoring of free-text tasks
    • Task and feedback generation

Projects

    • EduNLP
    • Explaining AI
Publications

    Bexte, M., Laarmann-Quante, R., Horbach, A., & Zesch, T. (2022, to appear). LeSpell - A Multi-Lingual Benchmark Corpus of Spelling Errors to Develop Spellchecking Methods for Learner Language. In Proceedings of the 13th International Conference on Language Resources and Evaluation (LREC-2022).

    Bexte, M., Horbach, A., & Zesch, T. (2021). Implicit Phenomena in Short-answer Scoring Data. In Proceedings of the First Workshop on Understanding Implicit and Underspecified Language. https://aclanthology.org/2021.unimplicit-1.2/

    Horbach, A., Aldabe, I., Bexte, M., Lopez de Lacalle, O., & Maritxalar, M. (2020). Appropriateness and Pedagogic Usefulness of Reading Comprehension Questions. In Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC-2020). https://aclanthology.org/2020.lrec-1.217/

    Ding, Y., Riordan, B., Horbach, A., Cahill, A., & Zesch, T. (2020). Don’t take “nswvtnvakgxpm” for an answer - The surprising vulnerability of automatic content scoring systems to adversarial input. In Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020). https://aclanthology.org/2020.coling-main.76/

    Ding, Y., Horbach, A., Wang, H., Song, X., & Zesch, T. (2020). Chinese Content Scoring: Open-Access Datasets and Features on Different Segmentation Levels. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing (AACL-IJCNLP 2020). https://aclanthology.org/2020.aacl-main.37/

    Horbach, A., & Zesch, T. (2019). The Influence of Variance in Learner Answers on Automatic Content Scoring. Frontiers in Education, 4, 28. https://duepublico2.uni-due.de/servlets/MCRFileNodeServlet/duepublico_derivate_00047459/Horbach_Zesch_Influence_Variance.pdf

    Zesch, T., Horbach, A., Goggin, M., & Wrede-Jackes, J. (2018). A flexible online system for curating reduced redundancy language exercises and tests. In P. Taalas, J. Jalkanen, L. Bradley, & S. Thouësny (Eds.), Future-proof CALL: language learning as exploration and encounters – short papers from EUROCALL 2018 (pp. 319–324). https://doi.org/10.14705/rpnet.2018.26.857

    Horbach, A., Stennmanns, S., & Zesch, T. (2018). Cross-lingual Content Scoring. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications (pp. 410–419). New Orleans, LA, USA: Association for Computational Linguistics. http://www.aclweb.org/anthology/W18-0550

    Horbach, A., & Pinkal, M. (2018). Semi-Supervised Clustering for Short Answer Scoring. In Proceedings of the Language Resources and Evaluation Conference (LREC). Miyazaki, Japan. http://www.lrec-conf.org/proceedings/lrec2018/pdf/427.pdf

    Zesch, T., & Horbach, A. (2018). ESCRITO - An NLP-Enhanced Educational Scoring Toolkit. In Proceedings of the Language Resources and Evaluation Conference (LREC). Miyazaki, Japan: European Language Resources Association (ELRA). http://www.lrec-conf.org/proceedings/lrec2018/pdf/590.pdf

    Horbach, A., Ding, Y., & Zesch, T. (2017). The Influence of Spelling Errors on Content Scoring Performance. In Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA 2017) (pp. 45–53). Taipei, Taiwan: Asian Federation of Natural Language Processing. https://www.aclweb.org/anthology/W17-5908

    Horbach, A., Scholten-Akoun, D., Ding, Y., & Zesch, T. (2017). Fine-grained essay scoring of a complex writing task for native speakers. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications (pp. 357–366). Copenhagen, Denmark: Association for Computational Linguistics. https://doi.org/10.18653/v1/W17-5040

    Riordan, B., Horbach, A., Cahill, A., Zesch, T., & Lee, C. M. (2017). Investigating neural architectures for short answer scoring. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications (pp. 159–168). Copenhagen, Denmark: Association for Computational Linguistics. https://aclanthology.org/W17-5017/

    Keiper, L., Horbach, A., & Thater, S. (2016). Improving POS Tagging of German Learner Language in a Reading Comprehension Scenario. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016) (pp. 198–205). Portorož, Slovenia: European Language Resources Association (ELRA). https://www.aclweb.org/anthology/L16-1030

    Horbach, A., & Palmer, A. (2016). Investigating Active Learning for Short-Answer Scoring. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications (pp. 301–311). San Diego, CA: Association for Computational Linguistics. https://aclanthology.org/W16-0535/

    Horbach, A., Thater, S., Steffen, D., Fischer, P. M., Witt, A., & Pinkal, M. (2015). Internet corpora: A challenge for linguistic processing. Datenbank-Spektrum, 15(1), 41–47. https://link.springer.com/article/10.1007%2Fs13222-014-0172-z

    Ostermann, S., Horbach, A., & Pinkal, M. (2015). Annotating Entailment Relations for Shortanswer Questions. In Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications (pp. 49–58). Beijing, China: Association for Computational Linguistics. https://aclanthology.org/W15-4408/

    Horbach, A., Poitz, J., & Palmer, A. (2015). Using Shallow Syntactic Features to Measure Influences of L1 and Proficiency Level in EFL Writings. In Proceedings of the fourth workshop on NLP for computer-assisted language learning (pp. 21–34). Vilnius, Lithuania: LiU Electronic Press. https://www.aclweb.org/anthology/W15-1903

    Koleva, N., Horbach, A., Palmer, A., Ostermann, S., & Pinkal, M. (2014). Paraphrase Detection for Short Answer Scoring. In Proceedings of the third workshop on NLP for computer-assisted language learning (pp. 59–73). Uppsala, Sweden: LiU Electronic Press. https://www.aclweb.org/anthology/W14-3505

    Horbach, A., Palmer, A., & Wolska, M. (2014). Finding a Tradeoff between Accuracy and Rater’s Workload in Grading Clustered Short Answers. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014) (pp. 588–595). Reykjavik, Iceland: European Language Resources Association (ELRA). http://www.lrec-conf.org/proceedings/lrec2014/pdf/887_Paper.pdf

    Horbach, A., Palmer, A., & Pinkal, M. (2013). Using the text to evaluate short answers for reading comprehension exercises. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity (pp. 286–295). Atlanta, Georgia, USA: Association for Computational Linguistics. https://www.aclweb.org/anthology/S13-1041