27 Feb 2009

I'll do a three-part blog series about cooperation - well, at least about some basic terms I've been thinking about, in order to clarify them for myself. This first piece deals with cooperation as a research objective. Later, I will argue why I think cooperation is a formidable objective for Artificial Intelligence research and why it's a good idea to study network structures along with it.

Basic Definition
To cooperate basically means to do something together. The research I deal with is especially interested in situations where altruism is needed for cooperation, i.e. where cooperating comes with some initial cost. The question a cooperator then has to ask himself is: "Will I be repaid?"
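A crude way to put this in symbols (my own shorthand, not a formal model from the post): say cooperating costs you c up front and is repaid with a benefit b only with some probability p. A purely self-interested cooperator then needs

\[ p \cdot b > c \]

for cooperation to be worth it in expectation - and the trouble, as the next section argues, is that p is usually uncertain.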

The Dilemma
In many situations in life, you will be repaid. But if you want to be rational about it, there is always some uncertainty. Moreover, cooperation may or may not have some overall benefit for you after you've been repaid, but if you cooperate and get screwed over (not cooperating is called "defecting"), you may lose your whole investment. So there is an unbalanced risk, favoring defection. The less you know about the situation you're in, the higher this uncertainty.
The dilemma is as follows: if you don't plan ahead (maybe due to uncertainty), defecting is always the best option. Cooperation, if it pays off at all, only does so in the long run, and only if you don't get defected on too often.

We all know that cooperation happens every day, even under great uncertainty, in humans as well as in almost every other life form imaginable. There is strong evidence that humans are even hardwired to cooperate, for instance via the hormone oxytocin (see for instance this book by Joachim Bauer) *. The question is how it came to be.

The Prisoner's Dilemma
A really nice way of putting this dilemma into numbers is the Prisoner's Dilemma. I'm not going to reiterate the basic "prisoner" setup (which was just a story to make the dilemma clear). If you look at the payoff matrix below for a while, you'll notice how the numbers express the dilemma: defecting will not hurt, but might pay off big time - cooperating will not yield that much, but may hurt dramatically.
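(The original matrix image is not reproduced here; the standard textbook numbers express the same dilemma. Payoffs are given as my payoff / opponent's payoff.)

                     opponent cooperates    opponent defects
  I cooperate              3 / 3                  0 / 5
  I defect                 5 / 0                  1 / 1

What matters is the ordering: temptation (5) > reward (3) > punishment (1) > sucker's payoff (0). Against any given move of the opponent, defecting never pays less than cooperating, yet mutual cooperation beats mutual defection.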


Why is it nice to have this dilemma formalized so simply? Now you can set up different worlds, simulate them, or prove mathematically whether a certain setup is good for cooperation or not. It's a great tool. I'll talk more about this in the next post on Cooperation in AI research.
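As a minimal sketch of what such a simulation can look like (my own illustration in Python, using the standard payoffs from the matrix above - not code from any particular paper), here is an iterated Prisoner's Dilemma between "always defect" and the well-known tit-for-tat strategy:

PAYOFF = {  # (my move, opponent's move) -> my payoff
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return "C" if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds=10):
    seen_by_a, seen_by_b = [], []  # each player's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(always_defect, tit_for_tat))  # (14, 9): the defector comes out ahead in this pairing
print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation scores far better over ten rounds

Head to head, the defector beats the cooperative strategy, but two cooperators each end up with more than either player got in the mixed pairing - exactly the long-run trade-off described above.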

To finish, I want to highlight some approaches that try to explain why systems exhibit cooperation. It's amazing how long it has taken to arrive at reasonable theories for this:


Survival of the fittest (Spencer, 1864)
Survival of the fittest is a dangerously extreme notion of individual success. It only captures the short-term side of the cooperation dilemma (saying that defection is always your best option), and is therefore stupid. It has been attached to Darwin's ideas of evolution for a long time, and to explain cooperation it is often refined into somewhat more meaningful concepts:
 

  • Kinship selection - A very popular concept among scientists is that you're more likely to cooperate the more closely related you are to your opponent genetically. This notion rests on the belief that evolution has favoured genes that make their carrier do beneficial things for similar copies of themselves (i.e. for related carriers), where your relatedness is 1/2 to a parent, 1/2 to a full sibling, 1/4 to a grandparent, and so on. Some researchers even try to get these numbers to show up in psychological results (it gets problematic right there).
  • Group selection - For cooperation outside of kinship, people simply extend the survival-of-the-fittest metaphor to higher levels and say that there is selection pressure on whole groups: if its members cooperate, a group is stronger than other groups. You can find proponents of this from Kropotkin to Hitler.

Both of these subtheories make some sense in specific situations. If I ever happen to die in a fight to the death, then certainly the survival-of-the-fittest approach would be best suited to explain why I didn't survive. But neither approach is in any way general. There is certainly cooperation without kinship (called "reciprocal altruism"), and when everyone is connected to everyone, how do you define a group?

Systemic approach
We need approaches that treat the conditions for cooperation more systematically. For instance, one very important property of a cooperative system is that altruistic acts are superadditive, meaning that such an act generates more utility than if the two participants had each acted alone (the utility can still be shared unequally). When you look at cooperation in systems, you are more interested in the behaviours of agents than in their interiors. You care more about the dynamics of all interaction patterns than about whether A would beat B in a duel. This is the kind of approach we need as we realize how complex all our networks are.
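In symbols (my notation, not the post's): if u_A and u_B are the utilities A and B would each generate acting alone, and u_AB is what they generate together through the altruistic act, superadditivity means

\[ u_{AB} > u_A + u_B , \]

no matter how unequally that surplus is then split between the two.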

I found a promising model by Fletcher and Doebeli (2006) [1], who connect Hamilton's rule with results from D.C. Queller (which date from 1985!). While Hamilton's rule explains in simple terms that cooperation works well if the cooperators are related at the genotype level, the rule can be generalized to relatedness among phenotypes. In other words, systems that have certain properties supporting cooperation will have cooperation. What matters is that altruists benefit each other. I quote from Fletcher and Doebeli's conclusion:
 
"What this rule requires is that those carrying the altruistic genotype receive direct benefits from the phenotype (behaviors) of others (adjusted by any nonadditive effects) that on average exceed the direct costs of their own behaviors. Kinship interactions or conditional iterated behaviors are merely two of many possible ways of satisfying this fundamental condition for altruism to evolve."

I hope I have conveyed how research generally deals with cooperation, or at least which tools I think are appropriate. The next post in this short series places cooperation in the context of AI research, which should mostly answer the question of why anyone would want to build cooperative systems (so far, I have only talked about explaining them).


[1] J.A. Fletcher & M. Doebeli, Unifying the theories of inclusive fitness and reciprocal altruism, The American Naturalist, 2006

* Of course, too little cooperation also happens. The Tragedy of the Commons models situations in which agents use up a depletable common good (for instance a water well) while thinking only in the short term.

# lastedited 28 Feb 2009
  on 28 Feb 2009 - 18:06 from Jan
Nice, more of that please.
  on 28 Feb 2009 - 20:26 from sadi
It's the common blindness, and you just stumble along the surface, too. Altruism is the mere result, not the reason, dear!
  on 04 Mar 2009 - 12:45 from Nic
Yes, it's important to point out that everyone is scratching the surface. Even more, everyone is still arguing about which surface to scratch, and it's hard to understand why that takes so long. By the tone of your comment I can only assume that you already have the solution, good for you. Btw, I believe I didn't claim that altruism is the reason for anything in any way, or did I? At the end of the text, I favour the notion that the emergence of cooperation in any situation depends less on kinship or groups and more on the context you're in - it's the actions of others that matter.