Cooperation in the republic of science

By Duncan Law • Written in NetLogo 6.0.4


WHAT IS IT?

A model of intermediate input sharing between working scientists, based on O'Riordan and Sorensen's 2008 paper 'Stable cooperation in the N-player prisoner's dilemma: The importance of community structure.' Following Dasgupta and David (1994), the model treats the problem of sharing scientific inputs as an n-player prisoner's dilemma: players in any given n-player game all benefit from input sharing, yet input sharing is a strictly dominated strategy in a one-round game. In this model, agents inhabit a structured graph. Multiple prisoner's dilemmas are played simultaneously in each round, and agents update their strategies after observing the payoffs achieved by their neighbours. The user can manipulate both the weights of the graph edges and the costs and benefits associated with the different strategies, in order to observe the conditions under which different forms of cooperation are selected for and disseminate across the community.

HOW IT WORKS

Agents inhabit a structured graph composed of subgroups of five agents linked to adjacent groups through their outermost members. The links are weighted, with the weights determined by the user.

In each round of play, each agent assembles a group of neighbours to play an n-player prisoner's dilemma. Neighbours are selected stochastically, with the probability of selection given by the weight of the link to the initiating agent. Three strategies are available to agents: noncooperation, closed cooperation (sharing with other group members only) or open cooperation (sharing with the scientific community as a whole). Each agent plays in every game it has been drawn into, including the game it initiates, and receives as its overall payoff the average of its payoffs across those games.
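
In symbols (writing c for the contribution, n for the number of players in a game, k for the number of an agent's co-players who cooperate, and c_closed, c_open for the two sharing costs), the per-game payoffs implemented in the play-games procedure below are:

$$\pi_{\text{defect}} = \frac{c\,k}{n}, \qquad \pi_{\text{closed}} = \frac{c\,(k+1)}{n} - c_{\text{closed}}, \qquad \pi_{\text{open}} = \frac{c\,(k+1)}{n} - c_{\text{open}}$$

Cooperating raises an agent's own gross payoff by only c/n, so input sharing is strictly dominated within a single game whenever the sharing cost exceeds c/n.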

There are two learning mechanisms. In most rounds of play, agents observe their immediate neighbours and select a neighbouring strategy to adopt: the greater the weight of the link to a neighbour, and the higher that neighbour's payoff in the previous round of play, the more likely an agent is to adopt its strategy. After every four rounds of local learning, agents expand their field of observation for one round and can also adopt their neighbours' neighbours' strategies. In these extended rounds, the likelihood of adopting another agent's strategy is weighted by that agent's payoff in the previous round and by the product of the link weights along the shortest path to that agent.
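
In symbols (writing N_i for the set of neighbours sampled by agent i, w_ij for the link weight, and pi_j for neighbour j's average payoff in the previous round), the adoption probabilities computed in the adapt-strategies procedure below are, for each cooperative strategy s:

$$P_i(s) = \frac{\sum_{j \in N_i,\; s_j = s} w_{ij}\,\pi_j}{\sum_{j \in N_i} w_{ij}\,\pi_j}$$

with noncooperation receiving the residual probability. In extended-learning rounds the same ratio is computed over all agents within two links, with w_ij replaced by the product of the link weights along the shortest path from i to j.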

A more complete description of the model can be found in Law (2019).

HOW TO USE IT

The 'xgroups' and 'ygroups' sliders determine the size of the graph: the number of five-agent groups along the x and y axes respectively (groups are created in pairs, so even values are expected).

The 'withingroup' and 'betweengroups' sliders determine the weights of the links between agents. 'Withingroup' sets the weight of the links between agents within each five-agent cluster; 'betweengroups' sets the weight of the links between those clusters. Weights act as percentages: a link of weight w gives a w% chance of selection in any given round.

The 'proportionnoncooperate' slider determines the probability that any given agent will be assigned the noncooperative strategy on setup. The 'proportionclosedcooperate' slider determines the probability that an agent assigned a cooperative strategy will be assigned closed cooperation (the remaining cooperative agents engage in open cooperation).

The 'contribution' slider determines the benefit, divided evenly among all players in a game, created by any one agent sharing their inputs. The 'closedcost' slider determines the cost to an agent of sharing inputs within the group only; the 'opencost' slider determines the cost of sharing inputs both within the group and more broadly.
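
For instance (illustrative values, not model defaults): with contribution = 5 in a four-player game in which two of an agent's three co-players cooperate, a defector earns 5 × 2 / 4 = 2.5, while a closed cooperator earns 5 × 3 / 4 − closedcost = 3.75 − closedcost. In that game, closed cooperation outperforms defection only when closedcost < 1.25 (that is, when the cost is below contribution / n).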

THINGS TO NOTICE

The plot displays the percentage of agents adopting each of the three available strategies:

Red = noncooperation
Green = closed cooperation
Blue = open cooperation

THINGS TO TRY

There are two main features of the model worth exploring.

First, the links between agents: shifting the strength of the ties between agents within and, especially, between groups changes the dynamics of the adoption and dissemination of strategies.

Second, the costs of different strategies: shifting the relative costs of open and closed cooperation will shift the relative success of those strategies.

RELATED MODELS

This model is an adaptation and extension of the model described in:

O'Riordan, Colm and Humphrey Sorensen (2008). "Stable cooperation in the N-player prisoner's dilemma: The importance of community structure." In: Adaptive Agents and Multi-Agent Systems III: Adaptation and Multi-Agent Learning. Springer, pp. 157–168.

CREDITS AND REFERENCES

Model created by Duncan Law as part of the materials accompanying his 2019 Ph.D. thesis, 'The Reputational Economics of Open Inputs Science'.

The model is an adaptation of the model described in:

O'Riordan, Colm and Humphrey Sorensen (2008). "Stable cooperation in the N-player prisoner's dilemma: The importance of community structure." In: Adaptive Agents and Multi-Agent Systems III: Adaptation and Multi-Agent Learning. Springer, pp. 157–168.

The understanding of science as an n-player prisoner's dilemma builds on the analysis presented in:

Dasgupta, P., & David, P. A. (1994). Toward a new economics of science. Research Policy, 23(5), 487–521.

COPYRIGHT AND LICENSE

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.


CODE

extensions [ nw ]
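;; the nw extension supplies nw:path-to, used to weight extended learning by link-path products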
globals [base-turtle tracking dummy bestpayoff xdummy ydummy thiscolor gamelist gamegroupobserver gamegroupturtles otherplayers player opencooperators closedcooperators fourticks pickanumber]
turtles-own [strategy newstrategy localpayoff totalpayoffs averagepayoff gamesplayed small-group large-group large-group-agentset small-group-denominator large-group-denominator
  thispath thispathweight small-group-numerator-closed small-group-numerator-open large-group-numerator-closed large-group-numerator-open probability-closed-cooperate probability-open-cooperate]
links-own [weight]

to setup

  set-default-shape turtles "dot"

  clear-all

  ;; each five-agent group occupies a 3-by-3 block of patches,
  ;; so the world spans ( 3 * xgroups ) by ( 3 * ygroups ) patches
  resize-world 0 ( ( xgroups * 3 ) - 1 ) 0 ( ( ygroups * 3 ) - 1 )

  reset-ticks


  ;; The following creates the turtles
  set ydummy 0
  repeat ( ygroups / 2 ) [
    set xdummy 0
    repeat ( xgroups / 2 ) [

      ask patch ( xdummy + 0 ) ( ydummy + 0 ) [sprout 1 ]
      ask patch ( xdummy + 2 ) ( ydummy + 0 ) [sprout 1 ]
      ask patch ( xdummy + 1 ) ( ydummy + 1 ) [sprout 1 ]
      ask patch ( xdummy + 0 ) ( ydummy + 2 ) [sprout 1 ]
      ask patch ( xdummy + 2 ) ( ydummy + 2 ) [sprout 1 ]

      ask patch ( xdummy + 3 ) ( ydummy + 3 ) [sprout 1 ]
      ask patch ( xdummy + 5 ) ( ydummy + 3 ) [sprout 1 ]
      ask patch ( xdummy + 4 ) ( ydummy + 4 ) [sprout 1 ]
      ask patch ( xdummy + 3 ) ( ydummy + 5 ) [sprout 1 ]
      ask patch ( xdummy + 5 ) ( ydummy + 5 ) [sprout 1 ]

      set xdummy ( xdummy + 6 )
    ]
    set ydummy ( ydummy + 6 )
  ]

  ask turtles [ set label who ]


  ;; The following creates the links
  set dummy 0
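  ;; each pass through this loop wires up two adjacent five-agent groups (ten turtles)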
  repeat ( ( count turtles ) / 10 ) [

    ask turtle dummy [ create-links-with turtles-at -1 -1 [ set weight betweengroups ] ]
    ask turtle dummy [ create-links-with turtles-at  2  0 [ set weight withingroup ]   ]
    ask turtle dummy [ create-links-with turtles-at  1  1 [ set weight withingroup ]   ]
    ask turtle dummy [ create-links-with turtles-at  0  2 [ set weight withingroup ]   ]

    addone

    ask turtle dummy [ create-links-with turtles-at  1 -1 [ set weight betweengroups ] ]
    ask turtle dummy [ create-links-with turtles-at  0  2 [ set weight withingroup ]   ]
    ask turtle dummy [ create-links-with turtles-at -1  1 [ set weight withingroup ]   ]

    addone
    ask turtle dummy [ create-links-with turtles-at -1  1 [ set weight withingroup ]   ]
    ask turtle dummy [ create-links-with turtles-at  1  1 [ set weight withingroup ]   ]

    addone
    ask turtle dummy [ create-links-with turtles-at  2  0 [ set weight withingroup ]   ]

    addone

    addone

    ask turtle dummy [ create-links-with turtles-at -1 -1 [ set weight betweengroups ] ]
    ask turtle dummy [ create-links-with turtles-at  2  0 [ set weight withingroup ]   ]
    ask turtle dummy [ create-links-with turtles-at  1  1 [ set weight withingroup ]   ]
    ask turtle dummy [ create-links-with turtles-at  0  2 [ set weight withingroup ]   ]

    addone

    ask turtle dummy [ create-links-with turtles-at  1 -1 [ set weight betweengroups ] ]
    ask turtle dummy [ create-links-with turtles-at  0  2 [ set weight withingroup ]   ]
    ask turtle dummy [ create-links-with turtles-at -1  1 [ set weight withingroup ]   ]

    addone
    ask turtle dummy [ create-links-with turtles-at -1  1 [ set weight withingroup ]   ]
    ask turtle dummy [ create-links-with turtles-at  1  1 [ set weight withingroup ]   ]

    addone
    ask turtle dummy [ create-links-with turtles-at  2  0 [ set weight withingroup ]   ]

    addone

    addone

  ]


  ;; The following sets up the agents' strategies
  ask turtles [
    ifelse random-float 100 < proportionnoncooperate
      [ set strategy 0
        set color red ]
      [ ifelse random-float 100 < proportionclosedcooperate
        [ set strategy 1
          set color green ]
        [ set strategy 2
          set color blue ]
      ]
  ]
end 

to go

  set gamelist []
  set gamegroupobserver []

  create-game-list

  play-games

  adapt-strategies

  update-visuals

  tick
end 



;; The following sets up a gamelist observer variable: a list of lists, each of which lists is the participants in a game

to create-game-list

  set dummy 0
  repeat count turtles
  [
    ask turtle dummy [
      set tracking [ ]
      set tracking lput turtle dummy tracking
      ask my-links [
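        ;; the link weight acts as a percentage chance that this neighbour joins the game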
        if random-float 100 < weight [
          set tracking lput other-end tracking
        ]
      ]
      set gamegroupturtles tracking
      set gamelist lput gamegroupturtles gamelist
    ]
  addone
  ]
end 




;; The following plays each of the games and assigns payoffs to the turtles

to play-games

  ask turtles [
    set gamesplayed 0
    set localpayoff 0
    set totalpayoffs 0
    set averagepayoff 0
    ]

  set dummy 0
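  ;; ?1 and ??1 are anonymous-procedure parameters generated by the NetLogo 6
  ;; syntax converter: ?1 is one game's list of players, ??1 a single player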
  foreach gamelist [ ?1 ->
   set dummy ?1
   set gamegroupturtles turtle-set dummy
     foreach dummy [ ??1 ->
       set player ??1
       ask player [ set otherplayers other gamegroupturtles ]

       set opencooperators 0
       set closedcooperators 0
       ask otherplayers [

         if strategy = 1 [
           set closedcooperators ( closedcooperators + 1 )
         ]
         if strategy = 2 [
           set opencooperators ( opencooperators + 1 )
         ]
       ]


       ask player [
         if strategy = 0 [
           ;; payoff when playing defect
           set localpayoff ( ( contribution * ( opencooperators + closedcooperators ) ) / ( count gamegroupturtles ) )
         ]
         if strategy = 1 [
           ;; payoff when playing closed cooperate
           set localpayoff ( ( ( contribution * ( opencooperators + closedcooperators + 1 ) ) / ( count gamegroupturtles ) ) - closedcost )
         ]
         if strategy = 2 [
           ;; payoff when playing open cooperate
           set localpayoff ( ( ( contribution * ( opencooperators + closedcooperators + 1 ) ) / ( count gamegroupturtles ) ) - opencost )
         ]


        set gamesplayed ( gamesplayed + 1 )

        set totalpayoffs ( totalpayoffs + localpayoff )

         ]


       ]

    ]


  ;; average each turtle's payoff over the games it played, flooring negative
  ;; payoffs at zero so the learning weights in adapt-strategies stay nonnegative
  ask turtles [
    set averagepayoff ( totalpayoffs / gamesplayed )
    if averagepayoff < 0 [
      set averagepayoff 0 ]
  ]
end 



;; The following runs the subprocedures required to 'learn' from other agents

to adapt-strategies

  create-small-groups

  create-large-groups

  create-denominators

  create-numerators

  action-formula
end 



;; The following creates a list for each turtle of adjacent turtles selected stochastically by link weight

to create-small-groups

  ask turtles [
    set dummy [ ]
    ask my-links [
      if random-float 100 < weight [
      set dummy lput [who] of other-end dummy ]
    ]
    set small-group dummy

  ]
end 



;; The following creates a list for each turtle of all the turtles within two links distance

to create-large-groups

  ask turtles [
    set tracking who
    set dummy [ ]
      ask my-links [
        set dummy lput [who] of other-end dummy
        ask other-end [
          ask my-links [
            if ( [who] of other-end ) != tracking
              [ set dummy lput [who] of other-end dummy ]
            ]
        ]
      ]
    set large-group-agentset turtle-set map turtle dummy
    set large-group [ who ] of large-group-agentset

  ]
end 

to create-denominators

;; The following creates the denominator for the local learning equation
  ask turtles [
    set small-group-denominator 0
    foreach small-group [ ?1 ->
      set small-group-denominator ( small-group-denominator + ( ( [ averagepayoff ] of turtle ?1 ) * [weight] of link-with turtle ?1 ) )
    ]
  ]


;; The following creates the denominator for the extended learning equation
  ask turtles [
    set large-group-denominator 0
    foreach large-group [ ?1 ->
      set thispath nw:path-to turtle ?1
      set thispathweight 1
      foreach thispath [ ??1 ->
        set thispathweight (thispathweight * [weight] of ??1)
      ]
      set large-group-denominator ( large-group-denominator + ( thispathweight * ( [ averagepayoff ] of turtle ?1 ) ) )
    ]
  ]
end 

to create-numerators

  ask turtles [
    set small-group-numerator-closed 0
    set small-group-numerator-open 0
    foreach small-group [ ?1 ->
      if [ strategy ] of turtle ?1 = 1 [
        set small-group-numerator-closed ( small-group-numerator-closed + ( ( [ averagepayoff ] of turtle ?1 ) * ( [ weight ] of link-with turtle ?1 ) ) )
      ]
      if [ strategy ] of turtle ?1 = 2 [
        set small-group-numerator-open ( small-group-numerator-open + ( ( [ averagepayoff ] of turtle ?1 ) * ( [ weight ] of link-with turtle ?1 ) ) )
      ]
    ]
  ]



  ask turtles [
    set large-group-numerator-closed 0
    set large-group-numerator-open 0
    foreach large-group [ ?1 ->
     set thispath nw:path-to turtle ?1
     set thispathweight 1
     foreach thispath [ ??1 ->
       set thispathweight (thispathweight * [weight] of ??1)
     ]

      if [ strategy ] of turtle ?1 = 1 [
       set large-group-numerator-closed ( large-group-numerator-closed + ( thispathweight * ( [ averagepayoff ] of turtle ?1 ) ) )
     ]
     if [ strategy ] of turtle ?1 = 2 [
       set large-group-numerator-open ( large-group-numerator-open + ( thispathweight * ( [ averagepayoff ] of turtle ?1 ) ) )
     ]
    ]

  ]
end 

to action-formula


  ;; extended learning runs every fifth tick, after four rounds of local learning
  ifelse fourticks = 4
  [
    ;; extended learning
    ask turtles [
      ifelse large-group-denominator != 0
        [ set probability-closed-cooperate ( large-group-numerator-closed / large-group-denominator )
          set probability-open-cooperate ( large-group-numerator-open / large-group-denominator ) ]
        [ set probability-closed-cooperate 0
          set probability-open-cooperate 0 ]
;       show probability-closed-cooperate
    ]
    set fourticks 0
  ]
  [
    ;; local learning
    ask turtles [
      ifelse small-group-denominator != 0
        [ set probability-closed-cooperate ( small-group-numerator-closed / small-group-denominator )
          set probability-open-cooperate ( small-group-numerator-open / small-group-denominator ) ]
        [ set probability-closed-cooperate 0
          set probability-open-cooperate 0 ]
    ]
    set fourticks ( fourticks + 1 )
  ]


  ;; draw a random number and assign a strategy: open cooperation with
  ;; probability probability-open-cooperate, closed cooperation with
  ;; probability probability-closed-cooperate, noncooperation otherwise
  ask turtles [
    set pickanumber ( random-float 100 )
    if pickanumber < ( probability-open-cooperate * 100 )
      [ set strategy 2 ]
    if ( probability-open-cooperate * 100 ) <= pickanumber AND pickanumber < ( ( probability-closed-cooperate + probability-open-cooperate ) * 100 )
      [ set strategy 1 ]
    if pickanumber >= ( ( probability-closed-cooperate + probability-open-cooperate ) * 100 )
      [ set strategy 0 ]
  ]
end 

to update-visuals

  ask turtles [
    if strategy = 0
      [ set color red ]
    if strategy = 1
      [ set color green ]
    if strategy = 2
      [ set color blue ]
  ]
end 

to addone

  set dummy ( dummy + 1 )
end 
