Lorentz Center - Games, action and social software from 30 Oct 2006 through 3 Nov 2006


 
Monday Oct 30

Robot Companions and Virtual Characters in

Games as 'BDI+' Agents

 

John-Jules Meyer

 

In this talk I'll talk about recent and current work that we are doing on cognitive agents that possess, besides the usual 'rational' BDI attitudes, other cognitive attitudes as well, particularly emotions. I'll also discuss our plans for applying these 'BDI+' agents in larger projects we are currently involved in: the construction of companion robots for elderly people and of virtual characters in video games.

 

 

The logic of defeasible argumentation

 

Bart Verheij

 

In his 1958 book The Uses of Argument, Stephen Toulmin presented an influential argument model in which he distinguishes data, qualified claims, warrants, backings and rebutters. Insights underlying the argument model that are by now familiar, but still relevant, include the following (Hitchcock & Verheij 2005):

 

1. Reasoning and argument involve not only support for points of view, but also attack against them.

2. Reasoning can have qualified conclusions.

3. There are other good types of argument than those of standard formal logic.

4. Unstated assumptions linking premisses to a conclusion are better thought of as inference licenses than as implicit premisses.

5. Standards of reasoning can be field-dependent, and can themselves be the subject of argumentation.

 

In the talk, I present research focusing on the first and third points in this list. With respect to the first point, developments in the study of argument attack and defeat are discussed. In particular, John Pollock's distinction between rebutting and undercutting defeaters and Dung's directed graph approach towards argument attack are treated. A connection is made with my own work on argumentation software (ArguMed) and argumentation logic (DefLog).
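As a toy illustration of the directed-graph view of argument attack mentioned above, the following sketch (my own minimal example, not code from ArguMed or DefLog) computes the grounded extension of a Dung-style argumentation framework as the least fixpoint of the characteristic function:

```python
# Minimal sketch of Dung-style abstract argumentation (hypothetical
# example): arguments are nodes, attacks are directed edges, and the
# grounded extension is the least fixpoint of the characteristic
# function F(S) = {a | every attacker of a is attacked by some b in S}.

def grounded_extension(arguments, attacks):
    """attacks: set of (attacker, target) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(s):
        # a is acceptable w.r.t. s if each of a's attackers is attacked by s
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s)
                       for b in attackers[a])}

    s = set()
    while True:
        nxt = defended(s)
        if nxt == s:
            return s
        s = nxt

# Example: c attacks b, b attacks a. Unattacked c is in; it defeats b,
# which reinstates a.
args = {"a", "b", "c"}
atk = {("c", "b"), ("b", "a")}
print(sorted(grounded_extension(args, atk)))  # ['a', 'c']
```

Rebutting versus undercutting defeaters, as in Pollock's work, would require labelling the attack edges; the abstract graph view deliberately ignores that distinction.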

 

With respect to the third point, some work on the (semi-)formal study of argumentation is discussed that goes beyond standard formal logic. A formal analysis of rules and reasons with an eye on their specific uses in the field of law is summarized (Reason-Based Logic, developed by Jaap Hage and myself). Furthermore, it is discussed how the notion of argumentation schemes, as used in the fields of critical thinking and informal argumentation theory by Douglas Walton and others, can be approached using formal methods (see for instance recent work by Henry Prakken, Floris Bex and myself).

 

The research presented exemplifies the exploration of the possibilities and the boundaries of the use of formal methods in the field of argumentation.

 

 
Comparing Dynamic Epistemic Logic and Epistemic Temporal Logic
 
Eric Pacuit
 
Bringing together knowledge and temporal change is a natural move in modeling, but it is also a potentially dangerous one from a complexity perspective, as has been shown forcefully by Halpern & Vardi in their 1989 paper. These epistemic temporal models provide a view 'from above', as a Grand Stage where events unfold. There is also the view 'from below', found in 'dynamic epistemic logics', which construct successive new event models in definable stages. We will compare and contrast these two different approaches to modeling social situations.
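The 'from below' view can be sketched concretely for the simplest dynamic-epistemic event, a public announcement (my own toy illustration, not from the talk): announcing an atomic fact restricts the current model to the states where it holds, rather than navigating a pre-built temporal Grand Stage.

```python
# Toy DEL-style update: a public announcement of an atom restricts the
# epistemic model to the states satisfying that atom.

class EpistemicModel:
    def __init__(self, states, relations, valuation):
        self.states = set(states)      # possible worlds
        self.relations = relations     # agent -> set of (w, v) pairs
        self.valuation = valuation     # world -> set of true atoms

    def knows(self, agent, atom, world):
        # agent knows atom at world iff atom holds at all accessible worlds
        return all(atom in self.valuation[v]
                   for (w, v) in self.relations[agent] if w == world)

    def announce(self, atom):
        # public announcement: keep only states where the atom is true
        keep = {w for w in self.states if atom in self.valuation[w]}
        rels = {a: {(w, v) for (w, v) in r if w in keep and v in keep}
                for a, r in self.relations.items()}
        vals = {w: self.valuation[w] for w in keep}
        return EpistemicModel(keep, rels, vals)

# Two worlds; Alice cannot distinguish them, so she does not know p.
m = EpistemicModel(
    {"w1", "w2"},
    {"alice": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")}},
    {"w1": {"p"}, "w2": set()},
)
print(m.knows("alice", "p", "w1"))                 # False
print(m.announce("p").knows("alice", "p", "w1"))   # True
```

An epistemic temporal model would instead contain both the initial and the updated model as stages of one large tree, with the announcement as a labelled transition between them.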

 

 

DEL, ETL, and Belief Revision, part II
 
Johan van Benthem, Amsterdam & Stanford
 
Logical Dynamics is about making (inter-)actions first-class citizens in logical systems. Dynamic-epistemic logics treat information update from observed events, changing the current doxastic-epistemic model. Belief revision can be treated in the same format, using update rules for plausibility relations which have also been proposed for preference change. In this way, we obtain complete sets of axioms for particular revision mechanisms, plus a standard modal correspondence analysis for abstract revision postulates. We will discuss how all these systems fit into a temporal logic setting.

 

 

On Deriving Knowledge from Belief
 
Dov Samet
 
Defining knowledge in terms of belief is studied here in the framework of epistemic logic. It is universally agreed that knowledge cannot be defined as true belief. It is shown, however, that true belief does define knowledge without negative introspection. It is shown further that knowledge with negative introspection cannot be defined syntactically in terms of belief only, and not even in terms of belief and justification, as long as the latter is defined independently of belief. This stands in stark contrast with the relational semantics of belief, since in each Kripke model of belief there exists a unique way to associate knowledge with belief. The impossibility of defining knowledge in terms of belief is reflected semantically in non-relational frames.
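The first claim can be checked on a tiny model (my own illustration of the theme, with assumed relational semantics): starting from a KD45 belief relation and defining knowledge as true belief, K phi := phi AND B phi, the derived operator fails negative introspection, so it is S4-like rather than S5.

```python
# Two worlds; from either world the agent believes w2 is the actual one.
# This relation is serial, transitive and euclidean (KD45 belief).
BEL = {("w1", "w2"), ("w2", "w2")}

def believes(prop, world):
    # B prop at world: prop holds at every belief-accessible world
    return all(prop(v) for (w, v) in BEL if w == world)

def knows(prop, world):
    # derived knowledge as true belief: K prop := prop AND B prop
    return prop(world) and believes(prop, world)

p = lambda w: w == "w2"                 # atom p holds only at w2
not_knows_p = lambda w: not knows(p, w)

# At w1 the agent does not know p ...
print(knows(p, "w1"))             # False
# ... but also fails to know that she does not know p:
print(knows(not_knows_p, "w1"))   # False  (negative introspection fails)
```

At w1 the agent falsely believes herself to be in w2, where she does know p; hence she cannot know her own ignorance, exactly the S5 axiom that true belief fails to deliver.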

 

 

Does cognition matter in crowd behavior and riot control?

 

Rene Jorna

(in cooperation with Nanda Wijermans, Wander Jager, Tony van Vliet and Martin Helmhout)

 

In this research the central issue is the simulation of crowd behavior and riot control. The research consists of three parts: 1) the simulation of crowd behavior and its excesses (riots); 2) the simulation of the reacting activities of opposing forces such as police and military police; and 3) the development of a decision support tool, based on 1) and 2), to support commanders in simulated field activities. At the moment only the first goal is part of ongoing research.

There are three ways of dealing with the simulation of crowd behavior. The first is to see the crowd as an entity as such; the individual members are not relevant. The second is to deal with the individuals within crowds, but to consider the individuals as empty actors, as is usual in most present-day economics and organization studies. The third is the view that actors in a crowd are human information processing entities, that is to say cognitive actors. Such a group is a multi-actor system. We follow the third line of reasoning, using cognitive actors.

To understand crowd behavior one has to use three levels of analysis: the intra-individual level (or functional level), the individual level and the inter-individual or group level. The last two are quite common in social simulation research. Including an intra-individual level in a simulation model as well means working with cognitive architectures and mental representations. We used John Anderson's ACT-R architecture to simulate cognitively plausible actors in crowds and riots. Martin Helmhout (2006) adapted the individual architecture of ACT-R to implement a social cognitive actor, called Rbot, and placed it in a Multi-Rbot System (MRS). This model will be used by Nanda Wijermans to implement a simulation tool for crowd behavior.

 

 

Two Kinds of Common Knowledge

 

Boudewijn de Bruin

(joint work with Barteld Kooi)

 

Common knowledge is widely used in social philosophy to understand collective entities. In this paper, we show how Jean-Paul Sartre's subtle observations on group formation (Critique of Dialectical Reason) inspire a logical distinction between two kinds of common knowledge that mirrors different kinds of collective entities.
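For background, the standard fixpoint reading of common knowledge (general textbook material, not the Sartre-inspired distinction the paper develops) can be computed as reachability: p is common knowledge at w iff p holds at every world reachable from w by any finite chain of any agent's epistemic links.

```python
# Common knowledge as reachability in the union of the agents'
# accessibility relations (illustrative sketch).

from collections import deque

def common_knowledge(prop, world, relations):
    """relations: agent -> set of (w, v) accessibility pairs."""
    union = {pair for rel in relations.values() for pair in rel}
    seen, queue = {world}, deque([world])
    while queue:                     # breadth-first search over the union
        w = queue.popleft()
        for (x, v) in union:
            if x == w and v not in seen:
                seen.add(v)
                queue.append(v)
    # prop must hold at every reachable world (including w itself)
    return all(prop(v) for v in seen)

rels = {
    "ann": {("w1", "w2")},
    "bob": {("w2", "w3")},
}
p = lambda w: w in {"w1", "w2", "w3"}
q = lambda w: w in {"w1", "w2"}
print(common_knowledge(p, "w1", rels))  # True
print(common_knowledge(q, "w1", rels))  # False: w3 is reachable via ann, bob
```

Distinguishing kinds of common knowledge, as the paper does, amounts to varying which chains of links (and which collective entities) this quantification ranges over.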

 

 

Moral conflicts between groups of agents

 

Barteld Kooi

 

Two groups of agents face a moral conflict if both groups have an obligation such that these obligations cannot both be fulfilled. We study moral conflicts using a multi-agent deontic logic devised to represent reasoning about sentences like 'In the interest of group F, group G ought to see to it that A'. We provide a formal language and a consequentialist semantics. An illustration of our semantics with an analysis of the Prisoner's Dilemma follows. Next, necessary and sufficient conditions are given for the possibility that two groups face a moral conflict.
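A toy consequentialist reading of the Prisoner's Dilemma (my own sketch, not the paper's formal semantics) makes the conflict concrete: in the interest of the whole group each player ought to cooperate, while in each player's own interest defection is best, and these obligations cannot all be fulfilled.

```python
# 'In the interest of group F, group G ought to see to it that A' read
# naively as: choose the action profile maximizing F's summed utility.

from itertools import product

ACTIONS = ["C", "D"]
# payoffs[(a1, a2)] = (utility of player 0, utility of player 1)
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def best_for_group(interest):
    """Profiles maximizing the summed utility of the players in `interest`."""
    def value(profile):
        return sum(PAYOFFS[profile][i] for i in interest)
    best = max(value(p) for p in product(ACTIONS, repeat=2))
    return {p for p in product(ACTIONS, repeat=2) if value(p) == best}

print(best_for_group({0, 1}))  # {('C', 'C')}: in the group's interest
print(best_for_group({0}))     # {('D', 'C')}: in player 0's own interest
print(best_for_group({1}))     # {('C', 'D')}: in player 1's own interest
```

No single profile satisfies all three "oughts" at once, which is the shape of moral conflict the necessary and sufficient conditions in the talk characterize formally.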

 

 

Perception in Update Logic
 
Jan van Eijck
 
Three key ways of updating one's knowledge are
(i) perception of states of affairs,
(ii) reception of messages, 
(iii) drawing new conclusions from known facts.
 
If one represents knowledge by means of Kripke models, the implicit assumption is that drawing conclusions is immediate. This assumption of logical omniscience is a useful abstraction. It leaves the distinction between (i) and (ii) to be accounted for.
 
In current versions of Update Logic (Dynamic Epistemic Logic, Logic of Communication and Change) perception and message reception are not distinguished.  We will look at what is needed to distinguish them.

 

 

Security of Multi-party Protocols: Epistemics and Verification

 

Francien Dechesne and Yanjing Wang

 

*Security Protocols* are a special case of complex communicative actions that are specifically intended to guarantee certain security requirements (anonymity of agents, secrecy of messages and so on). Several such requirements can be formalized naturally in epistemic languages. From the perspective of dynamic epistemic logic, to verify that a security protocol has a desired property (expressed by a formula) is to check whether the corresponding formula holds after executing every possible action sequence allowed by the protocol. We aim to apply techniques from both dynamic epistemic logic and process theory to the verification problem of security protocols.

 

 

Action Emulation

 

Ji Ruan

 

A key notion of equivalence for modal and epistemic logic is bisimulation. However, to capture the update effects of action models in epistemic update logic, this notion turns out to be too strong. We propose action emulation as a notion of equivalence more appropriate for action models than bisimulation. It is proved that every bisimulation is an action emulation, but not vice versa, and that in the context of action models with propositional or modal preconditions, action emulation provides a full characterisation of update effects.
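For reference, the stronger baseline notion can be computed directly (a background illustration of bisimulation for single-relation Kripke models; action emulation itself is the weaker notion defined in the paper and is not implemented here).

```python
# Naive greatest-fixpoint computation of the largest bisimulation
# between two pointed-free Kripke models.

def bisimilar(m1, m2):
    """m = (states, relation as set of pairs, valuation: state -> frozenset).
    Returns the largest bisimulation between m1 and m2 as a set of pairs."""
    s1, r1, v1 = m1
    s2, r2, v2 = m2
    # start from all atom-matching pairs, then refine until stable
    z = {(a, b) for a in s1 for b in s2 if v1[a] == v2[b]}
    changed = True
    while changed:
        changed = False
        for (a, b) in set(z):
            forth = all(any((b, b2) in r2 and (a2, b2) in z for b2 in s2)
                        for (x, a2) in r1 if x == a)
            back = all(any((a, a2) in r1 and (a2, b2) in z for a2 in s1)
                       for (y, b2) in r2 if y == b)
            if not (forth and back):
                z.discard((a, b))
                changed = True
    return z

# A one-state loop and its two-state unfolding are bisimilar.
m1 = ({"s"}, {("s", "s")}, {"s": frozenset({"p"})})
m2 = ({"t", "u"}, {("t", "u"), ("u", "t")},
      {"t": frozenset({"p"}), "u": frozenset({"p"})})
z = bisimilar(m1, m2)
print(("s", "t") in z and ("s", "u") in z)  # True
```

Action emulation weakens the forth/back clauses by reasoning about the preconditions of actions rather than atom-for-atom matching of states, which is why it can identify action models that bisimulation separates.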

 

 

Unsafe Beliefs and Unsafe Learning:

the dangers of knowing too much

 

Alexandru Baltag

 

It has often been observed that, in both competitive and cooperative games, there are situations in which gaining some piece of new, truthful information can be damaging for a player or for the group, leading to "bad" moves or preventing "good" ones. For instance, a group of agents that started by "trusting" each other (say, by sharing a common true belief that none of them will ever lie to the others) might lose this mutual trust due to some truthful communication by one of the agents: though objectively unjustified (since nobody actually lied), this loss of trust can be well-justified from the point of view of the agents' subjective rationality. (I will give concrete examples in the lecture.)

 

I analyze such situations using a logic of knowledge and conditional beliefs, initially developed in order to incorporate an AGM-type belief-revision mechanism into the logic of "epistemic actions". In this context, "knowledge" (in the standard sense used in AI, epistemic logic and game theory, as an S5 operator quantifying over all epistemically-possible states) is equivalent to "absolutely unrevisable belief": a belief that would still be held under any conditions, i.e. after any (epistemically-)possible belief-revision. However, an old paper by Stalnaker defines "knowledge" in a subtly different way, namely as "defensible belief": belief that would still be held under any TRUTHFUL conditions, i.e. after any belief revision induced by learning truthful new information.

 

I call this second, non-standard, weaker notion of knowledge "safe belief". Safe belief is an S4 operator, but not S5: it is not negatively introspective. Safe belief is implied by knowledge, and it implies truth and (simple) belief. Different ways to model conditional beliefs give different models of safe belief. I sketch some of these different, but related, models: one in terms of plausibility preorders (with or without additional constraints, such as connectedness or "well-preordered"-ness of the relation), one in terms of ordinal plausibility rankings, and finally one in terms of (discrete) conditional probability measures. In the preorder case, the safe belief operator is very close to the "preference modality" introduced in a recent paper by J. van Benthem, S. van Otterloo and O. Roy. It turns out that (almost) all these classes of models have the same logic: I give a complete (and decidable) proof system.
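In the plausibility-rank setting this contrast can be sketched in a few lines (my own illustration, assuming the ordinal-ranking reading): the agent safely believes p at w iff p holds at every state at least as plausible as w, whereas plain belief only requires p on the most plausible states.

```python
# Plausibility model via ranks: lower rank = more plausible.
STATES = {"w1", "w2", "w3"}
RANK = {"w1": 0, "w2": 1, "w3": 2}

def believes(prop):
    # plain belief: prop holds at all most-plausible states
    best = min(RANK[w] for w in STATES)
    return all(prop(w) for w in STATES if RANK[w] == best)

def safely_believes(prop, world):
    # safe belief: prop holds at every state at least as plausible as world
    return all(prop(v) for v in STATES if RANK[v] <= RANK[world])

p = lambda w: w in {"w1", "w2"}
print(believes(p))               # True: p holds at the most plausible w1
print(safely_believes(p, "w2"))  # True: p holds at w1 and w2
print(safely_believes(p, "w3"))  # False: p fails at w3
```

Here the agent's belief in p at w3 is true but unsafe: learning truthful information that eliminates w1 and w2 would overturn it, matching the "defensible belief" reading above.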

 

To conclude, communicating more can sometimes be dangerous for a group, by overturning true unsafe beliefs: previously-held, truthful, useful, (reasonably well) justified, but "fragile" beliefs. Avoiding such "unsafe learning" should be a desirable requirement on any communication protocol between cooperative agents. Conversely, in a non-cooperative situation, a player can exploit the unsafety of some of his opponent's (true) beliefs to overturn them by communicating some truthful (but dangerous) information. In this way, the player can deceive his opponent, while still respecting the rules of the game ("never lie!").

 

This talk is based on previous joint work with Sonja Smets, and on ongoing work with Johan van Benthem and Sonja Smets.

 

 

The Many Faces of Rationalizability

 

Krzysztof Apt

 

The rationalizability concept was introduced in Bernheim 84 and Pearce 84 to assess what can be inferred by rational players in a non-cooperative game in the presence of common knowledge. However, this notion can be defined in a number of ways that differ in seemingly unimportant minor details. We shed light on these differences, explain their impact, and clarify for which games these definitions coincide. We also apply the same analysis to explain the differences and similarities between the various ways the iterated elimination of strictly dominated strategies has been defined in the literature.

 

This allows us to clarify the results of Dufwenberg and Stegeman 02 and Chen, Long and Luo 05, and to improve upon them. We also provide an epistemic analysis of the considered procedures for the iterated elimination of strategies.
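One version of the procedure under discussion can be sketched directly (my own straightforward implementation for finite two-player games, not the paper's general framework): repeatedly delete any pure strategy that is strictly dominated by another pure strategy against all remaining opponent strategies.

```python
# Iterated elimination of strictly dominated strategies (IESDS) for a
# two-player game given by payoff dictionaries.

def iesds(rows, cols, u1, u2):
    """u1[(r, c)] and u2[(r, c)] are the row and column players' payoffs."""
    rows, cols = list(rows), list(cols)
    changed = True
    while changed:
        changed = False
        for r in list(rows):  # row r is strictly dominated by some r2
            if any(all(u1[(r2, c)] > u1[(r, c)] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r); changed = True
        for c in list(cols):  # column c is strictly dominated by some c2
            if any(all(u2[(r, c2)] > u2[(r, c)] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c); changed = True
    return rows, cols

# Prisoner's Dilemma: defection strictly dominates cooperation.
u1 = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
u2 = {("C", "C"): 3, ("C", "D"): 5, ("D", "C"): 0, ("D", "D"): 1}
print(iesds(["C", "D"], ["C", "D"], u1, u2))  # (['D'], ['D'])
```

The "seemingly unimportant minor details" the talk analyzes include exactly such choices as this sketch hard-wires: elimination order, whether dominance by mixed strategies is allowed, and whether elimination must be maximal at each round.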

 

 

Interest Based Negotiation

 

Frank Dignum

 

In the negotiation literature it is advocated that one should negotiate based not on positions but on interests. In software agents research, however, one predominantly makes use of utilities, and thus of positions, within any negotiation framework. Can interest based negotiation be implemented here as well? What kinds of agents do you need for it? What are the new research questions? And above all, does it work "better"? In this presentation I will touch upon a few of these aspects.

 

 

Adaptive Agents in Socio-Economic Games

 

Han la Poutre

 

Real-world agents as observed in economics and social sciences, as well as software agents designed for business applications, are characterized by learning capabilities. Such agents are considered to learn from past events, aiming at an improved and adaptive behavior or performance.

 

Real-world agents are simulated in research areas like agent-based computational economics (ACE), and software agents are developed for application areas like e-business and e-commerce. Both areas allow for fundamental models in the form of socio-economic games. Also, in both cases, it is important to design proper learning techniques, either to mimic learning in reality and observe emergent behavior (ACE), or to allow effective learning in software agents for business applications, e.g. in the form of game strategies.

 

We address the role of learning techniques in real-world simulation as well as in systems of software agents for applications. In addition, we compare some learning techniques with each other for the above areas, point out some strengths and weaknesses, and focus on methodological issues.

 

 

Protocols for Private List Intersection, or: How to Fly to the US and Remain Unnoticed

 

Wouter Teepe

 

In this talk, I will describe a communication protocol that allows two parties to determine the intersection of their lists without disclosing any information on the list items that are not part of the intersection. This protocol could be applied to the Passenger Name Record that is currently sent to the US Department of Homeland Security for every passenger flying to and over the US, thereby restoring the privacy of passengers who are not suspected of terrorism.
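A deliberately simplified sketch conveys the flavour of private list intersection (this is NOT Teepe's protocol, and a plain salted hash offers no protection against dictionary attacks on low-entropy items such as names): both parties hash their items under a jointly chosen random salt and exchange only the hashes, so matching hashes reveal the intersection while non-matching hashes reveal nothing directly.

```python
# Naive hash-based set intersection sketch (illustrative only; real
# private-set-intersection protocols use cryptographic machinery such
# as commitments or oblivious PRFs).

import hashlib
import secrets

def blind(items, salt):
    # map each item to its salted SHA-256 digest
    return {hashlib.sha256(salt + item.encode()).hexdigest(): item
            for item in items}

salt = secrets.token_bytes(16)          # agreed on jointly in advance
alice = blind({"smith", "jones", "khan"}, salt)
bob = blind({"jones", "garcia"}, salt)

# Each side sends only its set of digests; either can compute the overlap.
shared_hashes = alice.keys() & bob.keys()
print(sorted(alice[h] for h in shared_hashes))  # ['jones']
```

Applied to passenger lists, only the matches against a watch list would surface, rather than the full Passenger Name Record of every traveller.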

 

 

Voting Systems That Combine Approval and Preference

 

Steven Brams

(joint work with M. Remzi Sanver)

 

Information on the rankings and information on the approval of candidates in an election, though related, are fundamentally different: one cannot be derived from the other. Both kinds of information are important in the determination of social choices. We propose a way of combining them in two hybrid voting systems, preference approval voting (PAV) and fallback voting (FV), that satisfy several desirable properties, including monotonicity. Both systems may give different winners from standard ranking and nonranking voting systems. PAV, especially, encourages candidates to take coherent majoritarian positions, but it is more information-demanding than FV. PAV and FV are manipulable through voters' contracting or expanding their approval sets, but a 3-candidate dynamic poll model suggests that Condorcet winners, and candidates ranked first or second by the most voters if there is no Condorcet winner, will be favored, though not necessarily in equilibrium.
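The descent idea behind fallback voting can be sketched as follows (my own simplification, in particular of tie-breaking): voters rank only the candidates they approve of; top choices are counted first, then successively lower-ranked approved choices are added until some candidate passes a majority, falling back to the most-approved candidate if none ever does.

```python
# Sketch of fallback voting (FV); ballots are approval-ordered rankings,
# best candidate first, listing only approved candidates.

def fallback_winner(ballots):
    majority = len(ballots) // 2 + 1
    depth = max(len(b) for b in ballots)
    for level in range(1, depth + 1):
        counts = {}
        for b in ballots:
            for cand in b[:level]:  # approvals down to the current level
                counts[cand] = counts.get(cand, 0) + 1
        passing = [c for c, n in counts.items() if n >= majority]
        if passing:
            # among majority candidates, pick the one with most support
            return max(passing, key=lambda c: counts[c])
    return max(counts, key=lambda c: counts[c])  # most approvals overall

# Five voters; no one has a majority of top choices, but b reaches a
# majority once second choices are counted.
ballots = [["a", "b"], ["a", "b"], ["b"], ["c"], ["c"]]
print(fallback_winner(ballots))  # b
```

PAV would additionally use the full ranking of all candidates, approved or not, which is what makes it more information-demanding than FV.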

 

 

Better Ways to Cut a Cake

 

Steven Brams

(joint work with Michael A. Jones and Christian Klamler)

 

Procedures to divide a cake among n people with n-1 cuts (the minimum number) are analyzed and compared. For 2 persons, cut-and-choose, while envy-free and efficient, limits the cutter to exactly 50% if he or she is ignorant of the chooser's preferences, whereas the chooser can generally obtain more. By comparison, a new 2-person surplus procedure (SP), which induces the players to be truthful in order to maximize their minimum allocations, leads to a proportionally equitable division of the surplus -- the part that remains after each player receives 50% -- by giving each person exactly the same proportion of the surplus as he or she values it.
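The 2-person baseline can be illustrated numerically (my own sketch of cut-and-choose, not the surplus procedure itself): the cutter cuts the cake [0, 1] at the point where his own valuation splits 50/50, and the chooser takes whichever piece she values more, so the ignorant cutter gets exactly 50% while the chooser generally gets more.

```python
# Cut-and-choose with valuations given as density functions on [0, 1].

def measure(density, a, b, steps=2000):
    # midpoint-rule integral of the valuation density over [a, b]
    width = (b - a) / steps
    return sum(density(a + (i + 0.5) * width) for i in range(steps)) * width

def halving_cut(density, lo=0.0, hi=1.0):
    # bisection for the cut point x with value of [0, x] equal to 1/2
    for _ in range(40):
        mid = (lo + hi) / 2
        if measure(density, 0.0, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

cutter = lambda x: 1.0       # values the cake uniformly (total value 1)
chooser = lambda x: 2.0 * x  # prefers the right-hand side (total value 1)

x = halving_cut(cutter)                            # cutter cuts at 0.5
left, right = measure(chooser, 0, x), measure(chooser, x, 1)
print(round(x, 3))                 # 0.5
print(round(max(left, right), 2))  # 0.75: chooser takes the right piece
```

The surplus procedure described above improves on this by dividing the part beyond each player's 50% proportionally to the players' own valuations, which cut-and-choose leaves entirely to the chooser.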

 

For n >= 3 persons, a new equitable procedure (EP) yields a maximally equitable division of a cake. This division gives all players the highest common value that they can achieve and induces truthfulness, but it may not be envy-free. The applicability of SP and EP to the fair division of a heterogeneous, divisible good, like land, is briefly discussed.

 

 

Location Based Gaming

 

Dennis Reidsma

 

Location based games are entertainment applications where the interaction between players and game is situated in an environment. Some examples are geocaching, urban games where the city is the playing board, edutainment adventures in museums or on historical locations and the game “Can You See Me Now” by Blast Theory, but also games on a smaller scale where the environment is an augmented kindergarten playroom, or your own living room.

 

An important characteristic of such applications is the fact that the interaction with the system takes place in new ways. Explicit control through buttons, spoken commands or predefined control gestures gets (partly) replaced by implicit control. The user is continuously observed (through audio, video, RFID, infrared sensors, object tracking and/or other sensors) and the system can act both reactively and proactively.

 

When you have any kind of system where interaction between agents in the broadest sense occurs, and at least one of the agents is a human, certain things will come to the foreground. Interaction with humans is seldom completely turn-based or rational, and often involves a high measure of contingency. Applications as described above, interacting with humans, need careful attention to issues of observation, reaction/interaction models, timing, and other things.

 

One of the ways in which the interaction between environment and user can be implemented is by using Virtual Humans. In the HMI group several such applications are being researched, among them a virtual conductor and a virtual dancer. This talk will present these two examples, explain how they work and then explore some aspects of location based games that are especially visible in those applications.

 

 

Social Affordances in Modelling and Facilitating Interactions
 
Kecheng Liu
 
Although every day we make a lot of physical moves, most human actions involve signs (e.g. sound, text, gesture and signal). It is plausible to imagine that some actions can be characterised as mainly sign-based, without much in the way of physical moves. Human interactions and communications are closely linked, though the latter could more often be seen as merely using signs. The possibility for a community to conduct meaningful interactions relies on the ability to share certain possessions, which are termed affordances. The jointly owned affordances allow members to display a repertoire of behaviour. An understanding of a group's repertoire of behaviour will help us in modelling interactions, hence enabling a better design of social software.

 


