Self-Modification in Game Theory

I have been reading Talking Prices, Olav Velthuis’ book on the sociology and economics of the contemporary art market. Velthuis is interested in how cold and tidy economic realities are built out of messy human narratives: what do prices mean, in different contexts?

One of the interesting facts Velthuis mentions concerns the commissions gallerists receive in the Dutch gallery market. Commissions for most artists are between 40 and 50 percent. But, Velthuis says, they rarely exceed 50 percent: to do so is seen as a moral violation.

Velthuis says:

The commission that a dealer receives from an artist is not simply established on the basis of mutual bargaining power. Instead, negotiations about commissions are phrased in terms of entitlements, fairness, and, again, desert. In these negotiations, not only the time and energy of artists are at stake, but also those of dealers themselves. Says a Dutch gallery owner: “I make big efforts for my artists, among others editing and publishing small catalogues on individual artists written by recognized people in the field. That’s why I am inclined to ask 50 percent of artists. Especially given the fact that one of the most renowned artists I represent only wants to have 50 percent, it is not done to ask less of young, relatively unknown artists.” Commissions higher than 50 percent are considered “unethical,” however. Dealers who do ask more than this percentage violate a moral code, since they take advantage of the multitude of artists who are desperately looking for a gallery to represent them. Commissions lower than 50 percent are also legitimated in moral terms: “If paintings are starting to sell for 150,000 dollars, why would I take 75,000? I don’t really deserve it,” as a New York dealer explained.

Velthuis at p. 139, citations omitted.

The 50 percent “ethical” line is interesting. Many artists just starting out have little bargaining power, and would settle for a commission higher than 50 percent if it were offered. It is reminiscent of the results of the Ultimatum Game (in which one participant proposes a division of a sum between himself and a recipient, and the recipient can accept or refuse the offer; if he refuses, neither party gets anything) and the Dictator Game (in which one participant divides a sum and the recipient has no power to refuse). In both games, proposers commonly offer much more than zero. In the Ultimatum Game, recipients commonly refuse offers of less than 50 percent. In the Dictator Game, proposers are particularly likely to offer 50 percent when they are not anonymous – that is, when their reputation is at stake, as with the gallerists. Since a young artist who refuses can easily be replaced by another, the gallerist’s situation is probably more comparable to the Dictator Game.
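For concreteness, here is a minimal sketch of the two games’ payoff structures. The pot size and the responder’s acceptance threshold are illustrative assumptions, not empirical figures:

```python
POT = 100  # total sum to be divided

def ultimatum_game(offer: int, acceptance_threshold: int) -> tuple[int, int]:
    """Proposer keeps POT - offer; the responder rejects any offer below
    their threshold, in which case neither party gets anything."""
    if offer >= acceptance_threshold:
        return POT - offer, offer
    return 0, 0

def dictator_game(offer: int) -> tuple[int, int]:
    """Same division, but the recipient has no power to refuse."""
    return POT - offer, offer

# A purely self-interested responder should accept any positive offer,
# yet real participants commonly reject offers they consider unfairly low.
print(ultimatum_game(offer=20, acceptance_threshold=50))  # (0, 0): rejected
print(ultimatum_game(offer=50, acceptance_threshold=50))  # (50, 50): accepted
print(dictator_game(offer=20))                            # (80, 20): no veto exists
```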

Humans have a taste for “altruistic punishment” – punishing unfair behavior in others, even at cost to themselves. Retribution is a major motive in criminal justice; capital punishment may have little deterrent effect, but many governments impose it at great cost nonetheless, indicating that deterrence (and even incapacitation) is not the only motivation being served by criminal punishment.

Because we know that humans have a taste for altruistic punishment, we guide our actions accordingly. We act “fairly” (e.g. in the Dictator Game) even when it is not in our pecuniary interest to do so, in order to be perceived as fair, to avoid reputational costs, and to avoid punishment – even when punishment is not actually available to the other party in the particular game we are playing.

Self-Modification

There was an interesting discussion on LessWrong a few years ago, in which Yvain suggested a way to minimize harm: if someone appears to be offended, and in pain, as a result of your actions, immediately cease those actions, even if you can’t understand why the person would be in pain. Vladimir_M countered that, were this to become a norm, it would create an incentive to self-modify in order to feel pain at the slightest transgression of one’s beliefs. fburnaby sums it up:

reducing the offense I cause directly increases net utility (Yvain)
reducing the offense I cause creates a world with stronger incentives for offense-taking, which is likely to substantially decrease net utility in the long-term (Vladimir_M)
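
To make Vladimir_M’s point concrete, here is a toy model of the incentive dynamic. The concession value, complaint cost, and adaptation rate are all made-up parameters for illustration; nothing here comes from the original thread:

```python
def simulate(accommodation_rate: float, rounds: int = 50) -> float:
    """Return the eventual propensity to express offense, given how often
    expressions of offense are accommodated."""
    propensity = 0.1        # initial fraction of interactions where offense is expressed
    concession_value = 1.0  # assumed benefit when an expression of offense is accommodated
    complaint_cost = 0.3    # assumed fixed social cost of complaining
    for _ in range(rounds):
        # Expected payoff of expressing offense under the current norm.
        payoff = accommodation_rate * concession_value - complaint_cost
        # Propensity drifts toward whatever pays (a crude adaptation rule).
        propensity = min(1.0, max(0.0, propensity + 0.05 * payoff))
    return propensity

print(simulate(accommodation_rate=1.0))  # always give in -> offense-taking saturates at 1.0
print(simulate(accommodation_rate=0.2))  # rarely give in -> offense-taking decays to 0.0
```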

Self-modification is always a risk in game-theoretic situations. I think it’s interesting that people have, in a sense, already self-modified in this way: we are willing to incur costs to ourselves in order to promote fairness, via the emotion of “spite” that motivates altruistic punishment.

Kevin Simler has written about crying as a particular human behavior that acts as a costly signal of pain, and invites offers of friendship “at a discount.” Evolution has given us ways to signal our pain in a costly manner.

But self-modifying to feel pain more easily would usually incur reputational costs. We have terms like “crybaby,” “whiner,” and “drama queen” precisely because we recognize that some people may be incentivized to express an excessive amount of pain. However, those with high social status or social value may be particularly likely to over-express pain, as they are less likely to be mocked or ostracized for doing it. This is the “cry-bully” phenomenon: those with little fear of social ostracism or judgment may express excessive pain or offense against those less powerful (e.g. the outgroup), and get away with it without reputational costs.

Two observations, in sum:

  1. A social norm of giving in to any expression of offense or pain will ultimately result in more offense and pain.
  2. Those with high social status will express offense and pain at a lower threshold, often causing harm to those of lower social status.

5 thoughts on “Self-Modification in Game Theory”

  1. “Humans have a taste for altruistic punishment”

    Define “humans”. The actual studies show a lot of variation in the inclination toward altruistic punishment, not just between individual organisms but also between populations of Homo sapiens.

    • Yes! I assume most of my weird readers have read at least a few papers on the subject, and I wanted to call to mind the concept rather than analyze it in excruciating detail.

  2. I often wonder if it’s self-modification, or simply a presumption thereof. When I see rhetoric like “drama queen” used in online spaces, I often find it stems from a misunderstanding on the accuser’s part – either due to a culture clash or not knowing the full gravity of the situation, or, worse, a dogmatic assumption about the other party’s position.

    I am reminded of the cooperation strategy known as “tit-for-tat with forgiveness”. It accounts for ‘noisy’ and missing information transfer between cooperating agents. I would suggest that our current hardware and cultural norms have adapted to communication channels that operate at a much higher fidelity, and they get confused by the mixed and varied nature of online communication. We start seeing bad actors where none are actually operating.
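
    A quick sketch of how the forgiving variant behaves when moves are transmitted noisily. The payoff matrix is the standard Prisoner’s Dilemma; the noise rate and forgiveness probability are illustrative assumptions:

```python
import random

# Standard Prisoner's Dilemma payoffs: (my payoff, their payoff).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(believed_last_move: str, forgiveness: float) -> str:
    """Copy what the opponent appears to have done last round, but
    occasionally forgive an apparent defection."""
    if believed_last_move == "D" and random.random() < forgiveness:
        return "C"
    return believed_last_move

def play(forgiveness: float, rounds: int = 1000, noise: float = 0.05) -> float:
    """Two copies of the strategy play each other. With probability `noise`,
    a move is misread by the other player, so spurious 'defections' appear.
    Returns player 1's average payoff per round."""
    belief_1 = belief_2 = "C"  # each player's view of the other's last move
    total = 0
    for _ in range(rounds):
        move_1 = tit_for_tat(belief_1, forgiveness)
        move_2 = tit_for_tat(belief_2, forgiveness)
        total += PAYOFF[(move_1, move_2)][0]
        belief_1 = move_2 if random.random() > noise else ("C" if move_2 == "D" else "D")
        belief_2 = move_1 if random.random() > noise else ("C" if move_1 == "D" else "D")
    return total / rounds

random.seed(0)
print(play(forgiveness=0.0))  # plain tit-for-tat: misread moves trigger long retaliation spirals
print(play(forgiveness=0.2))  # forgiving variant: cooperation mostly recovers, higher average payoff
```

    Plain tit-for-tat punishes every misread move and spirals; the forgiving variant breaks those spirals at the cost of occasionally being exploited, which is the trade-off the strategy is designed around.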

    It often feels like our evolutionary hardware for dealing with information disparity is ill-equipped to operate outside of tribalism. I feel this is amplified in online spaces because of the limited nature of many digital mediums. The emotional reciprocity that most people experience face-to-face is largely absent online, and civility goes out the window as a result. Lacking the coping skills to deal with this, people’s assumptions trend toward the negative, because most assume the worst when interacting with those outside their tribe.

    Then again, a rational observer may notice this disparity of information and take advantage of the grey area. I know it took a lot of mental training to default to assuming ignorance over malice.
