Human-Aware Computational Argumentation
Description and Aim
Argumentation is an omnipresent method of human communication. Politicians argue for their election manifestos, colleagues argue about the best way of solving a task, and we even argue with ourselves before making an important decision. But what exactly is an argument, what does it mean that two arguments are in conflict, and how can we determine who wins a debate?
In the past 20 years, these questions have been investigated from a computational point of view within the field of artificial intelligence. Numerous theories of computational argumentation have been proposed, which formalise, for example, how arguments may be constructed from underlying knowledge, how contradictory information may lead to conflicts between arguments, and which sets of arguments may be deemed acceptable in a debate with conflicting arguments. Among other applications, these theories have proven useful for aiding decision making, for robot communication, and for providing human-understandable explanations of algorithmic solutions. Yet even though the theories of computational argumentation provide sound formal systems, there is little work on whether they actually encode concepts found in human argumentation.
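To make the idea of "conflicting arguments" and "acceptable sets" concrete, the following is a minimal sketch (an illustration, not part of the workshop description) of one well-known formalisation, Dung's abstract argumentation frameworks: arguments with a binary attack relation, and the grounded extension as one standard notion of which arguments are acceptable.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework (arguments, attacks) by iterating the characteristic
    function: an argument is accepted once every one of its attackers
    is itself attacked by an already-accepted argument."""
    # Map each argument to the set of arguments attacking it.
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted:
                continue
            # a is defended if each attacker b is attacked by some accepted d.
            if all(any((d, b) in attacks for d in accepted)
                   for b in attackers[a]):
                accepted.add(a)
                changed = True
    return accepted

# Example debate: argument a attacks b, and b attacks c.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(grounded_extension(args, atts))  # {'a', 'c'}: a is unattacked, and a defends c
```

Here a is acceptable because nothing attacks it, and c is acceptable because its only attacker b is defeated by a; b itself is rejected. Whether this notion of acceptance matches how humans actually judge debates is exactly the kind of question the workshop addresses.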
This workshop aims to bridge the gap between computational and human aspects of argumentation by bringing together experts from artificial intelligence and psychology who study argumentation, as well as researchers working on applications of argumentation in different domains, e.g. legal reasoning and medical decision-making. The goal is to understand the overlap of computational and human argumentation in more detail, to form interdisciplinary collaborations, and to formulate cross-discipline research questions that will advance the study of human-aware argumentation in the coming years.
Concrete questions to be discussed during the workshop include:
· How can we evaluate whether computational argumentation theories mirror human argumentation using experimental setups and results from psychology?
· How does human argumentation differ in different contexts, such as legal argumentation, argumentative negotiation, and everyday argumentation, and do the diverse theories of computational argumentation reflect these differences?
· How can we leverage psychological knowledge as prior information and constraints in argumentation processes?
· How can psychology and user studies be used to increase the persuasive effect of argumentation?
· How can theoretical argumentation formally explain psychological evidence?