Some time ago, on one of the forums, I got into a discussion about the merits and motivations of releasing projects and code as open source. It turns out that many people cannot quite wrap their heads around the concept of giving away your code for free. Even leaving the exact meaning of ‘free’ aside (both senses of the word apply here), I believe we can observe a kind of cultural gap. Strangely enough, it’s not even a case of Nerds vs. the Rest of the World: the geek community is itself somewhat divided on this issue.
And that’s OK, in a way. I know it may not be obvious how value can be preserved if we just volunteer our time and skills for open source projects and don’t receive any direct compensation in return. Honestly, I’m still kinda amazed that it all works out, but I have a small theory. It’s somewhat tangential to the typical gift culture explanation, but it could also shed some light on companies’ motivations for contributing to OSS.
Long story short, I think the dynamics of the relation between open source contributors and beneficiaries can be described in terms of the prisoner’s dilemma: a rather classic example from game theory. Before pursuing the analogy further, let’s have a brief look at this curious puzzle.
Like the name suggests, the prisoner’s dilemma can be formulated in terms of jail, prisoners, cooperation and defection. I prefer an alternate setting, though, as it seems to illustrate the concept better and may be easier to understand. You can check the original formulation on Wikipedia, among other sources.
There is a small game-theoretic difference between those two scenarios, but it’s largely irrelevant to our discussion.
Consider a two-player game with a potential reward of $100. The money can be taken by one of the players or split evenly between them. There is also a possibility of both players getting nothing. It all depends on what the players themselves decide to do with the money.
Each of them makes an independent decision, choosing one of two options: they can either split the money evenly between the two of them, or steal (figuratively) the whole sum for themselves.
What happens next is the result of both decisions, revealed and applied at once: if both players choose to split, each walks away with $50; if one steals while the other splits, the stealer takes the entire $100 and the splitter gets nothing; and if both try to steal, the money is forfeited and neither player gets anything.
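For concreteness, the rules above fit into a few lines of Python. This is just my own illustrative sketch – the `SPLIT`/`STEAL` names and the `payoffs` helper aren’t part of any formal treatment:

```python
# Payoff matrix for the split-or-steal game described above.
# Amounts are in dollars; tuples are (player 1's cut, player 2's cut).
SPLIT, STEAL = "split", "steal"

PAYOFFS = {
    (SPLIT, SPLIT): (50, 50),   # both split: the $100 is divided evenly
    (SPLIT, STEAL): (0, 100),   # player 2 steals the whole sum
    (STEAL, SPLIT): (100, 0),   # player 1 steals the whole sum
    (STEAL, STEAL): (0, 0),     # both steal: nobody gets anything
}

def payoffs(decision_1, decision_2):
    """Return the payouts resulting from both players' decisions."""
    return PAYOFFS[(decision_1, decision_2)]
```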
Sounds contrived?… It’s actually been tested in real life (as much as television can be called that), sometimes with dramatic results.
What is the optimal strategy in a setting like this? If both players are known to be rational agents, they should arrive at the same conclusions. Because they cannot know each other’s thoughts, though, each of them may only speculate what $p_s$ – the probability that the opponent steals – may be.
We know their decisions are independent, so $p_s$ shall remain the same regardless of what the player themselves chooses in the end. Hence the expected values of both decisions seem to clearly indicate which one is better:

$$E[\text{split}] = p_s \cdot \$0 + (1 - p_s) \cdot \$50 = (1 - p_s) \cdot \$50$$

$$E[\text{steal}] = p_s \cdot \$0 + (1 - p_s) \cdot \$100 = (1 - p_s) \cdot \$100$$
Looks like we should simply steal and be done with it.
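The same arithmetic can be spelled out in code. Again, a minimal sketch of my own – the `expected_value` helper and its hard-coded payouts are purely illustrative:

```python
# Expected payout of a decision, given an assumed probability p_s
# that the opponent steals. Payouts to us are (vs. steal, vs. split).
def expected_value(my_decision, p_s):
    vs_steal, vs_split = {"split": (0, 50), "steal": (0, 100)}[my_decision]
    return p_s * vs_steal + (1 - p_s) * vs_split

# Whatever p_s < 1 we assume, stealing is worth exactly twice as much:
for p_s in (0.0, 0.25, 0.5, 0.75):
    assert expected_value("steal", p_s) == 2 * expected_value("split", p_s)
```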
But wait! Since both players are rational agents, they will both arrive at the same conclusion. Moreover, each will know the other thought the same. So they both know that $p_s$ is actually 1. Unfortunately, in this case

$$E[\text{split}] = E[\text{steal}] = 0$$
and they will both get nothing if they follow this logic.
Pity. Or maybe it’s better to split, then? If they both do just that, each will at least get $50 instead of nothing… Yes, this looks like a much better alternative, especially since we can count on both players to apply this reasoning; they are rational agents, after all. So by this logic, they will both split, and everyone will get something in the end…
Well, except that now $p_s$ is 0, so it’s actually better to steal instead. Oh, and both players know that, obviously, and are not happy about it… again. Hence they will rather split, which makes stealing a more attractive option, which in turn compels splitting – and so on.
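That endless loop is easy to spell out as naive best-response dynamics. The `reconsider` function below is my own sketch of the reasoning, not a standard game-theory routine:

```python
# Circular meta-reasoning as a loop: assume both (symmetric, rational)
# players currently lean towards `decision`, then reconsider it.
def reconsider(decision):
    if decision == "split":
        # If the opponent splits, stealing nets $100 instead of $50.
        return "steal"
    else:
        # If the opponent steals, we get $0 either way -- but jointly
        # switching back to splitting would net $50 each.
        return "split"

decision = "split"
for _ in range(6):
    print(decision)             # split, steal, split, steal, ...
    decision = reconsider(decision)
# No fixed point: the "optimal" decision keeps flip-flopping forever.
```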
This example of circular meta-reasoning is unresolvable in the two-player case, because a single choice will make or break the system. Fortunately, reality is much more complex, with multiple agents making countless decisions all the time.
Decisions such as whether to open source this new project, or maybe contribute to some existing one.
Just like in the situation described above, it looks like the optimal choice is to “steal”: to draw liberally from the vast expanse of existing open source software while contributing nothing in return. No single such behavior would cause the whole ecosystem to crumble, so the incentive for exploitation is very tangible. Heck, it’s not even obvious what the “steal” here really is, because the value loss incurred by the “splitters” is not easily recognizable.
Yet it’s obvious that not every party can apply this strategy, lest the result be very suboptimal for everyone. Somewhere between those two extremes (everyone “splits” vs. everyone “steals”) there might be a point of equilibrium: one where “stealers” can derive maximum utility without irrevocably harming the game’s dynamics. We don’t really know where that point lies, though, so we just choose to play along and split.