- [[David Rand|Dave]], [[collective action problem]]
# Idea
[[Martin Nowak]] discussed such strategies in his book *SuperCooperators*.
In the [[prisoner's dilemma]] game, the benefit to the other(s), $b$, is greater than the cost of cooperation, $c$:
$b > c$
## Repetition and direct reciprocity
After a few rounds, people learn to cooperate because they expect to be reciprocated in the future.
People in smaller cities/towns are nicer and more cooperative because they know they are likely to meet others again, so they expect to be reciprocated.
- defect: 0
- cooperate: $-c + pb$, where $p$ is the probability we meet again
- $pb$: expected future benefit from the other person reciprocating
- $-c + pb > 0$
- thus, if $p > c/b$, we should cooperate
I hope to benefit **directly** in the future from a person whom I've helped now.
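The payoff comparison above can be sketched as a tiny check (the values of $b$, $c$, and $p$ below are illustrative, not from the source):

```python
# Direct reciprocity: cooperating pays -c + p*b, defecting pays 0,
# so cooperation is worthwhile exactly when p > c/b.
def should_cooperate(b: float, c: float, p: float) -> bool:
    return -c + p * b > 0

# Example: b = 3, c = 1, so the threshold is p > 1/3.
assert should_cooperate(b=3, c=1, p=0.5)      # 0.5 > 1/3: cooperate
assert not should_cooperate(b=3, c=1, p=0.2)  # 0.2 < 1/3: defect
```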
## Reputation and indirect reciprocity
By cooperating, our reputation becomes known.
We simply replace the probability of meeting again with $q$, the probability that our reputation is known: cooperate if $q > c/b$.
I hope that, by helping person A, this person will tell others I'm great, and then other people will do stuff that benefits me in the future. **Denser connections are better**.
## Network reciprocity
Let $k$ be the number of neighbors in a [[regular graph]]. If $k < b/c$, we cooperate.
**Denser connections are worse.** The denser the network, the larger $k$ is, the harder $k < b/c$ is to satisfy, and the more easily defection spreads.
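The network-reciprocity rule can be checked the same way (values of $b$, $c$, and $k$ are illustrative):

```python
# Network reciprocity on a regular graph with k neighbors:
# cooperation is favored when k < b/c.
def cooperation_favored(b: float, c: float, k: int) -> bool:
    return k < b / c

# Example: b = 5, c = 1, so cooperation survives with up to 4 neighbors.
assert cooperation_favored(b=5, c=1, k=4)
assert not cooperation_favored(b=5, c=1, k=6)
```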
## Group selection
Within a group, defectors do better than cooperators.
Between groups, groups that have more cooperators are better off than groups with fewer cooperators.
## Kin selection
People are related, and you can weight your concern for others by their relatedness, $r$ ([[kin selection]]); helping pays when (Hamilton's rule):
$rb > c$
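Hamilton's rule can be sketched with illustrative relatedness values (siblings $r = 0.5$, first cousins $r = 0.125$):

```python
# Kin selection: helping pays when relatedness-weighted benefit
# exceeds the cost, i.e. r*b > c (Hamilton's rule).
def help_kin(r: float, b: float, c: float) -> bool:
    return r * b > c

# Example: b = 3, c = 1.
assert help_kin(r=0.5, b=3, c=1)        # sibling: 1.5 > 1, help
assert not help_kin(r=0.125, b=3, c=1)  # cousin: 0.375 < 1, don't
```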
## Laws and prohibitions
Regulate/punish behavior.
## Incentives
Regulate/reinforce/reward behavior.
# References
- https://www.coursera.org/learn/model-thinking/lecture/Oj51H/seven-ways-to-cooperation
- [[Nowak 2011 super cooperators]]