Appendix chapter 02: More on the speaker’s utility function

Author: Michael Franke

The main text of Chapter 1 introduced the utility function for the pragmatic speaker as:

$$U_{S_1}(u; s) = \log P_{L_0}(s \mid u) - C(u)$$

According to this definition, utterance $u$ is good for agent $S_1$, who knows that the true world state is $s$, to the extent that $u$ has low costs and to the extent that the literal listener assigns a high probability to $s$ after updating with $u$. Since the literal listener updates his prior beliefs with the semantic meaning of $u$, any utterance which is not true of $s$ receives the lowest possible utility: negative infinity. Combined with a soft-max choice rule (on the assumption that for each $s$ there is at least one true utterance), this effectively implements Grice's Maxim of Quality, which requires speakers not to say anything false (Grice 1975). (Later chapters will show how this strong truth-only regime can be relaxed by reasoning about other utility structures, in the form of flexible Questions Under Discussion.) Furthermore, if there are two messages $u_1$ and $u_2$, both of which are true in $s$, then $u_1$ will be preferred over $u_2$ whenever $u_1$ makes the true world state more likely after literal interpretation. This effectively implements Grice's Maxim of Quantity, which requires speakers to strive towards maximization of the (relevant) information conveyed by their utterances.
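To make this concrete, here is a minimal numerical sketch in Python/NumPy (the three-state scenario with three nested utterances, the uniform prior, and all parameter values are invented for illustration). False utterances receive utility $-\infty$ and hence choice probability 0 (Quality); among true utterances, the one that makes the true state most likely for the literal listener is preferred (Quantity):

```python
import numpy as np

# Hypothetical toy scenario: three world states, three nested utterances.
states = ["s1", "s2", "s3"]
utterances = ["u_weak", "u_mid", "u_strong"]

# Semantics: meaning[i, j] = 1 iff utterance i is true in state j.
meaning = np.array([
    [1, 1, 1],   # u_weak:   true in every state
    [0, 1, 1],   # u_mid:    true in s2 and s3
    [0, 0, 1],   # u_strong: true only in s3
])

prior = np.full(len(states), 1 / len(states))  # uniform prior over states
cost = np.zeros(len(utterances))               # equal (zero) utterance costs
alpha = 1.0                                    # soft-max rationality parameter

# Literal listener: P_L0(s | u) is proportional to prior(s) * [[u]](s)
L0 = meaning * prior
L0 = L0 / L0.sum(axis=1, keepdims=True)

# Speaker utility U(u; s) = log P_L0(s | u) - C(u); log(0) = -inf for false u
with np.errstate(divide="ignore"):
    utility = np.log(L0) - cost[:, None]

# Soft-max speaker: P_S1(u | s) is proportional to exp(alpha * U(u; s))
S1 = np.exp(alpha * utility)
S1 = S1 / S1.sum(axis=0, keepdims=True)

# In s3 all three utterances are true; the strongest one is preferred.
print(dict(zip(utterances, S1[:, 2].round(3))))  # u_strong gets 6/11 = 0.545
```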

It is possible to make the relation with other formulations of Gricean Quantity from theoretical linguistics even clearer. If the set of states $S$ is finite and the literal listener's prior beliefs are uniform, i.e., if $P(s) = \frac{1}{|S|}$ for all $s \in S$, and if costs are equal, $C(u_1) = C(u_2)$, then whenever $u_1$ and $u_2$ are both true in $s$, we get:

$$U_{S_1}(u_1; s) > U_{S_1}(u_2; s) \quad \text{iff} \quad |[\![u_1]\!]| < |[\![u_2]\!]|$$

In other words, the speaker prefers one true message $u_1$ over another true message $u_2$, all else equal, iff $u_1$ is logically stronger than $u_2$ (in the sense that $u_1$ rules out more possible states). ($u_1$ and $u_2$ can still be logically independent; this is not a requirement that $u_1$ ought to imply $u_2$.)
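For completeness, here is the short derivation behind this equivalence, spelled out under the uniform-prior and equal-cost assumptions just stated. With a uniform prior, the literal listener's posterior after a true utterance $u$ is uniform over the set $[\![u]\!]$ of states in which $u$ is true:

$$P_{L_0}(s \mid u) = \frac{1/|S|}{\sum_{s' \in [\![u]\!]} 1/|S|} = \frac{1}{|[\![u]\!]|} \quad \text{for all } s \in [\![u]\!]$$

Since the cost terms are identical, it follows that

$$U_{S_1}(u_1; s) > U_{S_1}(u_2; s) \quad \text{iff} \quad \log \frac{1}{|[\![u_1]\!]|} > \log \frac{1}{|[\![u_2]\!]|} \quad \text{iff} \quad |[\![u_1]\!]| < |[\![u_2]\!]|$$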

The above definition of utilities uses information-theoretic surprisal to implement a probabilistic version of (something like) a Gricean Quantity Maxim. The surprisal-based notion can also be derived in a different way, which is interesting to look at because it justifies the choice of utility function in more complex models (such as in the second model of Chapter 2). If the speaker knows the true world state $s$, she has a degenerate probabilistic belief $\delta_s$ about world states, which assigns probability 1 to the true state $s$ and probability 0 to any other world state. After an utterance $u$, the literal listener also has a probability distribution over world states, $P_{L_0}(\cdot \mid u)$. One way of thinking about what happens in cooperative discourse that maximizes relevant information flow is that the speaker tries to choose utterances such that the listener's beliefs (after hearing an utterance) are maximally similar to the beliefs of the speaker. In other words, speakers choose to say things that assimilate the listener's belief state to their own as much as possible. A standard notion of divergence between probability distributions is the Kullback-Leibler divergence. A definition of utility in terms of minimization of KL-divergence derives the original surprisal-based definition (setting aside utterance costs), if the speaker's beliefs are degenerate:

$$U_{S_1}(u; s) = -D_{KL}\!\left(\delta_s \,\|\, P_{L_0}(\cdot \mid u)\right) = -\sum_{s'} \delta_s(s') \, \log \frac{\delta_s(s')}{P_{L_0}(s' \mid u)} = \log P_{L_0}(s \mid u)$$

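As a quick numerical sanity check of this identity, the following sketch (with made-up numbers) computes the KL-divergence from a degenerate belief explicitly and compares it to the surprisal term:

```python
import numpy as np

# Hypothetical literal-listener posterior P_L0(. | u) after some utterance u.
P_L0_given_u = np.array([0.2, 0.5, 0.3])

s = 1                                   # index of the true world state
delta_s = np.zeros(3)
delta_s[s] = 1.0                        # degenerate speaker belief

# KL(delta_s || P_L0(. | u)): only the s' = s term contributes (0 * log 0 := 0).
kl = sum(p * np.log(p / q) for p, q in zip(delta_s, P_L0_given_u) if p > 0)

print(-kl, np.log(P_L0_given_u[s]))     # both equal log P_L0(s | u) = -0.693...
```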
In the second model of Chapter 2 the speaker did not know the true world state, but only had probabilistic beliefs $P(\cdot \mid o)$ based on some possibly partial observation $o$. A definition of utilities as negative Kullback-Leibler divergence derives the same utterance choice probabilities as assumed in Chapter 2. Starting from KL-based utilities, we get choice probabilities like this:

$$P_{S}(u \mid o) \propto \exp\!\left(-\alpha \, D_{KL}\!\left(P(\cdot \mid o) \,\|\, P_{L_0}(\cdot \mid u)\right)\right)$$

Expanding the definition of KL-divergence:

$$P_{S}(u \mid o) \propto \exp\!\left(-\alpha \sum_{s} P(s \mid o) \, \log \frac{P(s \mid o)}{P_{L_0}(s \mid u)}\right)$$

which is equivalent to

$$P_{S}(u \mid o) \propto \exp\!\left(\alpha \left[\, \sum_{s} P(s \mid o) \, \log P_{L_0}(s \mid u) - \sum_{s} P(s \mid o) \, \log P(s \mid o) \,\right]\right)$$

The last summand, $\sum_{s} P(s \mid o) \log P(s \mid o)$, is just the negative entropy of $P(\cdot \mid o)$; it is constant across utterances and so cancels out under normalization in the soft-max choice rule. We end up with:

$$P_{S}(u \mid o) \propto \exp\!\left(\alpha \sum_{s} P(s \mid o) \, \log P_{L_0}(s \mid u)\right)$$

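The cancellation can also be verified numerically. The following sketch (all distributions and parameter values invented for illustration) computes the speaker's soft-max choice probabilities twice, once from the full negative-KL utilities and once from the expected-log-probability utilities; the results coincide because the two utility vectors differ only by the constant entropy term:

```python
import numpy as np

# Hypothetical speaker belief after a partial observation o (s3 ruled out),
# and literal-listener posteriors P_L0(. | u) for two candidate utterances
# (u1 true in all three states, u2 true in s1 and s2 only, uniform prior).
P_given_o = np.array([0.5, 0.5, 0.0])
L0 = np.array([
    [1/3, 1/3, 1/3],   # u1
    [0.5, 0.5, 0.0],   # u2
])
alpha = 2.0

def softmax_choice(utilities):
    p = np.exp(alpha * utilities)
    return p / p.sum()

mask = P_given_o > 0  # skip states with P(s|o) = 0, by convention 0 * log 0 = 0

# Utility 1: negative KL-divergence, -KL(P(.|o) || P_L0(.|u))
neg_kl = np.array([
    -np.sum(P_given_o[mask] * np.log(P_given_o[mask] / row[mask]))
    for row in L0
])

# Utility 2: expected log-probability, sum_s P(s|o) * log P_L0(s|u)
exp_logprob = np.array([
    np.sum(P_given_o[mask] * np.log(row[mask])) for row in L0
])

# Same choice probabilities from both utility definitions:
print(softmax_choice(neg_kl))       # [0.307..., 0.692...]
print(softmax_choice(exp_logprob))  # [0.307..., 0.692...]
```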