Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false, as in the snippet below.
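A minimal sketch of the relevant setting, assuming the standard Jekyll _config.yml used by this template:

    # _config.yml
    future: false   # exclude posts with a date in the future from the build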

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Publications

How Much Do I Argue Like You? Towards a Metric on Weighted Argumentation Graphs

Published in SAFA@COMMA, 2020

When exchanging arguments with other people, it is interesting to know which of the other participants holds the opinion most similar to one’s own. In this paper, we suggest using weighted argumentation graphs that can model the relative importance of arguments and certainty of statements. We present a pseudometric to calculate the distance between two weighted argumentation graphs, which is useful for applications like recommender systems, consensus building, and finding representatives. We propose a list of desiderata which should be fulfilled by a metric for those applications and prove that our pseudometric fulfills these desiderata.
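A minimal sketch of the general idea, not the pseudometric from the paper: if each participant’s view is a weighted argumentation graph, one illustrative distance compares the two graphs edge by edge, so that a distance of 0 means two participants weight every argument identically. The graph representation and weights below are made up for the example.

    # Illustrative only: a simple edge-wise distance between two weighted
    # argumentation graphs, each represented as a dict mapping an
    # (attacker, target) edge to a weight in [0, 1]. Edges absent from a
    # graph count as weight 0.
    def graph_distance(g1, g2):
        edges = set(g1) | set(g2)
        return sum(abs(g1.get(e, 0.0) - g2.get(e, 0.0)) for e in edges)

    alice = {("a1", "a2"): 0.9, ("a3", "a2"): 0.2}
    bob = {("a1", "a2"): 0.4}
    print(graph_distance(alice, bob))  # 0.7: the larger, the less alike they argue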

Recommended citation: Brenneis, M., Behrendt, M., Harmeling, S., and Mauve, M. (2020). "How Much Do I Argue Like You? Towards a Metric on Weighted Argumentation Graphs." In SAFA@COMMA, pages 2-13. http://mabehrendt.github.io/files/2020.howmuch.pdf

How Will I Argue? A Dataset for Evaluating Recommender Systems for Argumentations

Published in Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2021

Exchanging arguments is an important part of communication, but we are often flooded with arguments for different positions or trapped in filter bubbles. Tools which can present strong arguments relevant to oneself could help to reduce those problems. To be able to evaluate algorithms which can predict how convincing an argument is, we have collected a dataset with more than 900 arguments and personal attitudes of 600 individuals, which we present in this paper. Based on this data, we suggest three recommender tasks, for which we provide two baseline results from a simple majority classifier and a more complex nearest-neighbor algorithm. Our results suggest that better algorithms can still be developed, and we invite the community to improve on our results.
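A minimal sketch of a nearest-neighbor baseline in the spirit of the one mentioned above; the data layout and values are made up, and this is not the paper’s implementation. Each user is a vector of attitude ratings, and a held-out rating is predicted from the most similar other user.

    # Illustrative only: predict user u's unknown rating of argument a by
    # copying the rating of the user who agrees with u most on all other
    # arguments (L1 distance over the remaining columns).
    import numpy as np

    ratings = np.array([  # rows: users, columns: ratings of arguments (made up)
        [1, 0, 1, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 0],
    ])

    def predict(user, arg):
        mask = np.ones(ratings.shape[1], dtype=bool)
        mask[arg] = False  # compare users on every argument except the target
        others = [i for i in range(len(ratings)) if i != user]
        nearest = min(others, key=lambda i: np.abs(ratings[i, mask] - ratings[user, mask]).sum())
        return ratings[nearest, arg]

    print(predict(0, 3))  # user 1 matches user 0 elsewhere, so predict their rating: 0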

Recommended citation: Brenneis, M., Behrendt, M., and Harmeling, S. (2021). "How Will I Argue? A Dataset for Evaluating Recommender Systems for Argumentations." In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 360–367, Singapore and Online. Association for Computational Linguistics. http://mabehrendt.github.io/files/2021.howwill.pdf

ArgueBERT: How To Improve BERT Embeddings for Measuring the Similarity of Arguments

Published in Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021), 2021

Argumentation is an important tool within human interaction, not only in law and politics but also for discussing issues, expressing and exchanging opinions and coming to decisions in our everyday life. Applications for argumentation often require the measurement of the arguments’ similarity, to solve tasks like clustering, paraphrase identification or summarization. In our work, BERT embeddings are pre-trained on novel training objectives and afterwards fine-tuned in a siamese architecture, similar to Reimers and Gurevych (2019b), to measure the similarity of arguments. The experiments conducted in our work show that a change in BERT’s pre-training process can improve the performance on measuring argument similarity.
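A minimal sketch of the siamese usage pattern with an off-the-shelf sentence-transformers model; ArgueBERT’s modified pre-training objectives are the paper’s contribution and are not reproduced here. The model name and example arguments are placeholders.

    # Illustrative only: embed two arguments with a generic sentence-BERT
    # model (not ArgueBERT) and compare them with cosine similarity, as in a
    # siamese architecture.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in, not ArgueBERT
    arguments = [
        "School uniforms reduce peer pressure among students.",
        "Uniforms in schools lessen social pressure between pupils.",
    ]
    embeddings = model.encode(arguments, convert_to_tensor=True)
    print(util.cos_sim(embeddings[0], embeddings[1]).item())  # near 1 for paraphrases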

Recommended citation: Behrendt, M. and Harmeling, S. (2021). "ArgueBERT: How To Improve BERT Embeddings for Measuring the Similarity of Arguments." In Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021), pages 28-36. http://mabehrendt.github.io/files/2021.arguebert.pdf

A Survey on Self-Supervised Representation Learning

Published in arXiv preprint, 2023

This survey paper provides a comprehensive review of self-supervised representation learning methods in a unified notation, points out similarities and differences between these methods, and proposes a taxonomy which sets them in relation to each other. Furthermore, our survey summarizes the most recent experimental results reported in the literature in the form of a meta-study. Our survey is intended as a starting point for researchers and practitioners who want to dive into the field of representation learning.

Recommended citation: Uelwer, T., Robine, J., Wagner, S. S., Höftmann, M., Upschulte, E., Konietzny, S., Behrendt, M. and Harmeling, S. (2023). "A Survey on Self-Supervised Representation Learning." In arXiv preprint. http://mabehrendt.github.io/files/2023representationsurvey.pdf

Automatic Dictionary Generation: Could Brothers Grimm Create a Dictionary with BERT?

Published in 19th Conference on Natural Language Processing (KONVENS 2023), 2023

The creation of the most famous German dictionary, also referred to as “Deutsches Wörterbuch” or in English “The German Dictionary”, by the two brothers Jacob and Wilhelm Grimm took more than a lifetime to complete (1838-1961). In our work, we pose the question of whether it would be possible for them to create a dictionary using present technology, i.e., language models such as BERT. Starting with the definition of the task of Automatic Dictionary Generation, we propose a method based on contextualized word embeddings and hierarchical clustering to create a dictionary given unannotated text corpora. We justify our design choices by running variants of our method on English texts, where ground truth dictionaries are available. Finally, we apply our approach to Shakespeare’s work and automatically generate a dictionary tailored to Shakespearean vocabulary and contexts without human intervention.
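A minimal sketch of the clustering step, under loose assumptions about the pipeline: contextualized embeddings of one word’s occurrences are grouped hierarchically, and each resulting cluster would become one sense entry. The 2-D vectors below are fake; in the paper they would come from a model such as BERT.

    # Illustrative only: cluster (fake) contextual embeddings of the word
    # "bank" into sense groups with agglomerative clustering.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    # two finance-like and two river-like occurrences (made-up vectors)
    embeddings = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])

    tree = linkage(embeddings, method="average")          # build the hierarchy
    senses = fcluster(tree, t=0.5, criterion="distance")  # cut into sense clusters
    print(senses)  # e.g. [1 1 2 2]: one dictionary entry per cluster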

Recommended citation: Weiland, H. T., Behrendt, M., and Harmeling, S. (2023). "Automatic Dictionary Generation: Could Brothers Grimm Create a Dictionary with BERT?" In Proceedings of the 19th Conference on Natural Language Processing (KONVENS 2023), pages 102-120. http://mabehrendt.github.io/files/2023.konvens.pdf

How algorithmically curated online environments influence users’ political polarization: Results from two experiments with panel data

Published in Computers in Human Behavior Reports, 2023

This study presents results from two quasi-experiments in which participants were exposed either to algorithmically selected or randomly selected arguments that were either in line or in contrast with their attitudes on two different topics. The results reveal that exposure to like-minded arguments increased participants’ attitude polarization and affective polarization more intensely than exposure to opposing arguments. Yet, contrary to popular expectations, these effects were not amplified by algorithmic selection. Still, for one topic, exposure to algorithmically selected arguments led to slightly stronger attitude polarization than randomly selected arguments.

Recommended citation: Kelm, O., Neumann, T., Behrendt, M., Brenneis, M., Gerl, K., Marschall, S., Meißner, F., Harmeling, S., Vowe, G., and Ziegele, M. (2023). "How algorithmically curated online environments influence users’ political polarization: Results from two experiments with panel data." Computers in Human Behavior Reports. http://mabehrendt.github.io/files/2023.chbr.pdf

AQuA - Combining Experts’ and Non-Experts’ Views To Assess Deliberation Quality in Online Discussions Using LLMs

Published in The First Workshop on Language-driven Deliberation Technology (DELITE 2024) at LREC-COLING 2024, Torino, Italy, 2024

In this work, we introduce AQuA, an additive score that calculates a unified deliberative quality score from multiple indices for each discussion post. Unlike other singular scores, AQuA preserves information on the deliberative aspects present in comments, enhancing model transparency. We develop adapter models for 20 deliberative indices, and calculate correlation coefficients between experts' annotations and the perceived deliberativeness by non-experts to weight the individual indices into a single deliberative score. We demonstrate that the AQuA score can be computed easily from pre-trained adapters and aligns well with annotations on other datasets that have not been seen during training. The analysis of experts' vs. non-experts' annotations confirms theoretical findings in the social science literature.
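A minimal sketch of the additive scoring idea: per-comment index scores (in the paper, predicted by adapter models) are combined by a weighted sum. The index names, scores, and weights below are invented for illustration.

    # Illustrative only: combine per-index scores for one comment into a
    # single additive quality score. Weights are hypothetical; in the paper
    # they are derived from expert/non-expert correlation coefficients.
    indices = {"rationality": 0.8, "civility": 0.9, "storytelling": 0.3}  # model outputs
    weights = {"rationality": 0.5, "civility": 0.3, "storytelling": 0.2}  # made up

    aqua = sum(weights[name] * score for name, score in indices.items())
    print(round(aqua, 2))  # 0.73: one score, with per-index contributions preserved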

Recommended citation: Behrendt, M., Wagner, S. S., Ziegele, M., Wilms, L., Stoll, A., Heinbach, D., and Harmeling, S. (2024). "AQuA - Combining Experts' and Non-Experts' Views To Assess Deliberation Quality in Online Discussions Using LLMs." In The First Workshop on Language-driven Deliberation Technology (DELITE 2024) at LREC-COLING 2024. https://doi.org/10.48550/arXiv.2404.02761 http://mabehrendt.github.io/files/2024aqua.pdf

Talks