Category: Authorship credit allocation game

From Big Physics


Background and Motivation

In collaborations such as doing research and writing a paper together, the relative contribution of each team member often determines the order of authors on the paper. We call this the authorship/credit allocation problem. Ideally, one would have a system that records the contributions of all members at each step, together with a metric to evaluate those contributions. In reality, however, no such explicit system exists, and the allocation often relies heavily on each member's memory of who contributed what.

Previous research has found that people tend to exaggerate/overestimate their own contribution when the group effort leads to something good (and likely to underestimate it when it leads to something bad)[1]. In particular, [2] examined this phenomenon in authorship allocation and found that overall the total estimated contribution, which is the sum of the relative contributions (in percentage) that each author estimates for him/herself, is larger than [math]\displaystyle{ 100\% }[/math].

This is a rather serious issue. First of all, scientometric studies benefit from a truthful author list, one that more or less describes the true contributions of the authors. More importantly, if someone believes that his/her contribution is not properly recognized in the author list, he/she might withdraw from further, potentially even more fruitful or needed, collaboration.

So, can we design a tool that helps all collaborators reach a more satisfying authorship allocation (for each member, or at least for the majority), without requiring the above system of recording and evaluating contributions?

By the way, such a tool for recording and evaluating contributions is, of course, also a project that should be taken up by our scientometric colleagues.

Another motivation comes from game theory, the Ultimatum game and the public goods game in particular. In the Ultimatum game, two players need to decide how to allocate a given amount of money. The first player proposes a plan, say how much will be given to the second player, and the second player decides either to accept or to reject the offer. If accepted, the allocation is carried out as proposed. If rejected, neither player gets any money.
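The rules above can be sketched as a payoff function, a minimal illustration; the function name, the concrete split, and the accept/reject choices below are hypothetical example values, not from the literature described here.

```python
# A minimal sketch of the Ultimatum game described above.

def ultimatum_game(total, offer, accept):
    """First player offers `offer` out of `total` to the second player,
    who either accepts or rejects. Rejection leaves both with nothing."""
    if not 0 <= offer <= total:
        raise ValueError("offer must be between 0 and the total amount")
    if accept:
        return total - offer, offer   # (player 1 payoff, player 2 payoff)
    return 0, 0

# A typical observed outcome: an offer of 40% of the pie, accepted.
print(ultimatum_game(100, 40, True))    # -> (60, 40)
print(ultimatum_game(100, 40, False))   # -> (0, 0)
```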

In the Ultimatum game, experiments usually find that the first player offers [math]\displaystyle{ 30-50\% }[/math] to the second player, even though both players understand that if both of them are rational the second player will accept a very small offer. In a sense, the summation of both players' expected relative payoffs is less than [math]\displaystyle{ 100\% }[/math]: let us rephrase the game in this way, each player sets a lower bound [math]\displaystyle{ p_{1}, p_{2} }[/math] on his/her own payoff, and as long as the summation is less than [math]\displaystyle{ 100\% }[/math], the allocation succeeds. We see that here often [math]\displaystyle{ p_{1}+p_{2}\lt 100\% }[/math]. This is, in a sense, very different from the authorship allocation issue, where [math]\displaystyle{ p_{1}+p_{2}+\cdots \gt 100\% }[/math]. The difference between the two situations is that in the Ultimatum game there is no previous effort from either player, while in the authorship allocation issue the members have to work together to produce the publication.

So, is it possible that the previous effort is the key factor determining whether [math]\displaystyle{ p_{1}+p_{2}+\cdots \gt 100\% }[/math] or [math]\displaystyle{ p_{1}+p_{2}+\cdots \lt 100\% }[/math]?

In this project, we want to look into the above two issues: whether or not we can design an authorship/credit allocation game that helps collaborators reach higher satisfaction with the allocation, and whether or not the existence of previous effort makes the difference between [math]\displaystyle{ p_{1}+p_{2}+\cdots \gt 100\% }[/math] and [math]\displaystyle{ p_{1}+p_{2}+\cdots \lt 100\% }[/math].

The Authorship/Credit Allocation Game

Our authorship/credit allocation game (AAG) is very simple: in each round, every author [math]\displaystyle{ i }[/math], referred to as a player from now on, reports his/her estimated relative contribution [math]\displaystyle{ p_{i} }[/math] (if necessary, we can limit the options to, say, [math]\displaystyle{ 0\%, 5\%, 10\%, \cdots }[/math]). If [math]\displaystyle{ \sum_{i}p_{i}\lt 100\% }[/math], the author list is created accordingly; otherwise, another round is played, until an author list is created.
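A minimal simulation of this round structure is sketched below. The game itself does not prescribe how players revise their claims between failed rounds, so the concession rule here (every player lowers their claim by a fixed fraction after each failed round) is purely a hypothetical assumption for illustration, as are the starting claims.

```python
# Sketch of an iterated AAG, under the (hypothetical) assumption that each
# player concedes a fixed fraction of their claim after every failed round.

def run_aag(initial_claims, concession=0.05, max_rounds=1000):
    """Repeat rounds of simultaneous claims p_i (in percent) until
    sum(p_i) < 100; return (final claims, number of rounds played)."""
    claims = list(initial_claims)
    for round_no in range(1, max_rounds + 1):
        if sum(claims) < 100:
            return claims, round_no
        # Failed round: every player lowers their claim slightly.
        claims = [c * (1 - concession) for c in claims]
    return None, max_rounds  # no agreement within the round limit

# Three authors whose self-estimates sum to 130% (cf. the overestimation
# bias reported in [2]); claims shrink round by round until they fit.
claims, rounds = run_aag([60, 40, 30])
print(rounds, [round(c, 1) for c in claims])
```

Any other concession behaviour (or human play) can be plugged in by replacing the update rule inside the loop; the termination condition is the part fixed by the game.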

AAG can be done before or after the publication of the paper. Let us call them pre-publication AAG (PAAG) and after-publication AAG (AAAG).

We can also run an artificial AAG with monetary payoffs: given an amount of money [math]\displaystyle{ M }[/math], each player claims a certain share [math]\displaystyle{ p_{i} }[/math] for him/herself; if [math]\displaystyle{ \sum_{i}p_{i}\lt 100\% }[/math], [math]\displaystyle{ M }[/math] is allocated accordingly, and otherwise, no player gets any money at all.
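The monetary allocation rule can be written out directly; the pot size and the claimed shares below are hypothetical example numbers.

```python
# Monetary AAG payoff rule: each player claims a percentage share of the
# pot M; if the claims sum to less than 100%, M is split accordingly,
# otherwise nobody gets anything.

def allocate_money(M, shares_percent):
    if sum(shares_percent) < 100:
        return [M * s / 100 for s in shares_percent]
    return [0.0] * len(shares_percent)

print(allocate_money(300, [50, 30, 15]))  # -> [150.0, 90.0, 45.0]
print(allocate_money(300, [60, 40, 30]))  # -> [0.0, 0.0, 0.0]
```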

Experimental Design

  1. Field experiment: recruit authors of the same publication to perform PAAG or AAAG; their satisfaction with the author list before and after the game will be measured, via questionnaire or via EEG and other brain imaging techniques[3][4].
  2. Lab experiment: run a collaborative task first in the lab and then run the AAG with monetary payoffs instead of authorship; again, the players' satisfaction with the allocation will be measured.

One example of such a collaborative task is the public goods game. In the public goods game, each player initially receives the same amount of money ([math]\displaystyle{ d }[/math]) and then decides how much ([math]\displaystyle{ v_{i}\leq d }[/math]) of this initial amount to invest in the public goods. The invested money is rewarded at a constant rate [math]\displaystyle{ R }[/math], and each player receives an equal part of the reward, so that in the end [math]\displaystyle{ E_{i}=d-v_{i}+\frac{R}{N}\left(\sum_{j=1}^{N}v_{j}\right) }[/math].
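The payoff formula above can be checked with a small worked instance; the endowment, reward rate, and investment profile below are hypothetical example values.

```python
# Public goods game payoffs: E_i = d - v_i + (R/N) * sum_j v_j.

def public_goods_payoffs(d, R, investments):
    n = len(investments)
    shared = (R / n) * sum(investments)   # each player's equal share of the reward
    return [d - v + shared for v in investments]

# Four players, endowment d = 10, reward rate R = 1.6, mixed investments:
# the free rider (v = 0) ends up with the highest payoff.
print(public_goods_payoffs(10, 1.6, [10, 5, 5, 0]))
```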

AAG with monetary payoffs

References

  1. D. R. Forsyth, B. R. Schlenker, Attributing the causes of group performance: Effects of performance quality, task importance, and future testing. J. Pers. 45, 220-236 (1977).
  2. N. Herz, O. Dan, N. Censor, Y. Bar-Haim, Opinion: Authors overestimate their contribution to scientific work, demonstrating a strong bias. Proc. Natl. Acad. Sci. U.S.A. 117 (12), 6282-6285 (2020). DOI: 10.1073/pnas.2003500117
  3. E. T. Esfahani, V. Sundararajan, Using brain-computer interfaces to detect human satisfaction in human–robot interaction. Int. J. Humanoid Robot. 8 (1), 87-101 (2011).
  4. A. da Rocha, F. Rocha, L. Arruda, A neuromarketing study of consumer satisfaction. Available at SSRN 2321787 (2013).
