Game theory is a little-known field of applied mathematics with a doubly unfortunate name: "game" suggests it is about playthings, and "theory" suggests ivory-tower aloofness. This is too bad, because the field has much to offer those of us interested in collaboration in a business environment. Here we will discuss the theory only briefly and then jump into applications to business environments, including Web 2.0 applications; see Wikipedia for more background on the theory if you are interested.
The classic "game" of game theory is known as the prisoner's dilemma. Again, Wikipedia provides a fine, concise description:
Two suspects, A and B, are arrested by the police. The police have insufficient evidence for a conviction, and, having separated both prisoners, visit each of them to offer the same deal: if one testifies for the prosecution against the other and the other remains silent, the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must make the choice of whether to betray the other or to remain silent. However, neither prisoner knows for sure what choice the other prisoner will make. So this dilemma poses the question: How should the prisoners act?
The dilemma is that, viewed from the standpoint of an individual prisoner, the only rational choice is to betray the other. Why? Because if the other person betrays me, I am better off also betraying (I would reduce my sentence from 10 years to 5 years). And if the other person doesn't betray me, I am still better off betraying, because then I would reduce my sentence from six months to ZERO months! Unfortunately, both prisoners think this way (it is only "rational"), so instead of both staying silent and taking a six-month sentence, they end up betraying each other and receive 5-year sentences. Rationality has led them astray; that is the dilemma.
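To make the arithmetic concrete, here is a minimal Python sketch of the payoff table above. The numbers are the sentences from the Wikipedia description; the variable and function names are just illustrative:

```python
# Sentence (in years) for prisoner A, indexed by (A's choice, B's choice).
# Lower is better, since these are jail terms.
SENTENCE = {
    ("silent", "silent"): 0.5,   # both stay silent: six months each
    ("silent", "betray"): 10.0,  # A silent, B betrays: A serves the full 10 years
    ("betray", "silent"): 0.0,   # A betrays, B silent: A goes free
    ("betray", "betray"): 5.0,   # mutual betrayal: five years each
}

# Whichever choice B makes, A's sentence is lower if A betrays,
# so betrayal is the "rational" (dominant) choice for each prisoner.
for b_choice in ("silent", "betray"):
    best = min(("silent", "betray"), key=lambda a: SENTENCE[(a, b_choice)])
    print(f"If B plays {b_choice!r}, A does best by playing {best!r}")
```

Running this prints "betray" as A's best response in both cases, which is exactly the trap described above.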
Robert Axelrod took game theory in a new direction when he asked what would happen if various players in a prisoner's dilemma played their "strategy" many times in repeated games -- would the outcomes be different? The prisoner's dilemma outlined above is a one-shot deal, but forms of it exist in which repeated games, or iterations, occur. This is more like real life, where we interact with people now and in the future, and our current behavior is affected by what we think people will do later as well as right now. For a full overview, see his book The Evolution of Cooperation. Axelrod held a computer competition in which people entered their strategies, and these strategies were pitted against each other in tournaments. (An example strategy: cooperate -- don't betray -- until the other person betrays you, and then never cooperate again. Another example: alternate cooperation and non-cooperation with no regard for what the other person does.) The winning strategy turned out to be extremely simple and was called TIT-for-TAT: cooperate on the first move, and from then on do whatever the other person did on the previous move. So if the other person always cooperates, so will TIT-for-TAT. If the other person cooperates most of the time, TIT-for-TAT is non-cooperative only after the other person is, and returns to cooperation as soon as the other person does.
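As a rough illustration of how one tournament pairing works, here is a minimal Python sketch of TIT-for-TAT playing an iterated dilemma against an always-betray strategy. The point payoffs (3 each for mutual cooperation, 1 each for mutual betrayal, 5 and 0 for a lone betrayer and lone cooperator) are the standard values Axelrod used; the function names and round count are my own illustrative choices, not his actual tournament code:

```python
# Points awarded to (player A, player B) for each pair of moves;
# "C" = cooperate, "D" = defect (betray). Higher is better.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, their_history):
    """Cooperate first; then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return each player's total score."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then never again
```

Note how TIT-for-TAT can be exploited at most once per opponent, yet reaps the full benefit of cooperation with every cooperative partner -- a flavor of why such a simple rule did so well across a whole tournament.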
OK -- enough about theory and artificial "games". Suffice it to say that many smart people have modeled many systems using game theory (of which the prisoner's dilemma is only one of the possible types of games), including the nuclear arms race, biological systems, and economics. Axelrod was able to experiment with many different factors in his computer simulations, such as the relative importance of the future and the amount of interaction, and from these he drew several interesting conclusions on how to foster environments of cooperation or collaboration. I will end this post with a list of his conclusions, and then discuss some applications in my next post.
1) Enlarge the shadow of the future - make future interactions more frequent, more likely, and more important
2) Change the payoffs - give incentives to value cooperation more than non-cooperation (see the sketch after this list)
3) Teach people to care about each other
4) Teach reciprocity
5) Improve recognition abilities - make it easier to recognize past interaction partners and recall how they behaved
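To make conclusion 2 concrete, here is a small Python sketch; the point values and the flat cooperation bonus are my own hypothetical numbers, not Axelrod's. Adding an incentive for cooperating flips the one-shot game so that cooperation, not betrayal, becomes each player's best response:

```python
# Hypothetical points to "me" in a one-shot game, before any incentives.
BASE = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(payoff, their_move):
    """My move that maximizes my payoff, given the other player's move."""
    return max("CD", key=lambda mine: payoff[(mine, their_move)])

# Change the payoffs: a flat bonus of 3 points whenever I cooperate.
BONUS = {moves: pts + (3 if moves[0] == "C" else 0)
         for moves, pts in BASE.items()}

for them in "CD":
    print(f"If they play {them}: best response is {best_response(BASE, them)} "
          f"before the bonus, {best_response(BONUS, them)} after")
```

With the base payoffs the best response is always "D"; with the bonus it is always "C" -- the incentive has made cooperation the rational choice.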