The Weakness of Strong Ties


Siebren Kooistra (Author)
Andreas Flache (Teacher)

Model was written in NetLogo 6.1.1.


WHAT IS IT?

This model is a NetLogo implementation of the model used in Flache and Macy's 1996 paper "The Weakness of Strong Ties". It elaborates on Homans' (1974) "cohesion-compliance hypothesis", with which Homans tried to explain why people in groups with more social cohesion are more ready to comply with group obligations. The hypothesis states that, for example, members of work teams where workers are close friends should all work harder (if working harder is the group norm) as compared to teams where they are not friends. Homans explains this by assuming that in a cohesive team, members who "work hard" are rewarded by approval. The more cohesive the team, the more valuable is this reward and thus the more people do their best to get it. Flache and Macy (1996) ask what happens if group members can also reward each other just for the sake of getting rewarded in return.

In a more technical sense, this relaxes the assumption that only contributing to the collective good (compliance, c) can cause others to reward an agent (approval, a). So whereas the cohesion-compliance hypothesis assumes only c-a-exchanges (being rewarded for working hard), this model allows for studying the effect of a-a-exchanges (getting rewarded for giving reward).

If it is then assumed, as this model does, that agents are backward-looking (decide what to do in the future exclusively using past results) and self-interested (only consider costs and benefits to themselves to decide whether an action was worthwhile), the possibility of a-a-exchange undermines the ability of informal social control to make agents work together on the level of the team as a whole (enforce contribution to the collective good), because a-a-exchange provides a less coordination-intensive way to achieve satisfaction.

HOW IT WORKS

Each iteration, every agent stochastically decides whether to contribute to the collective good and, for each of the N - 1 other agents, whether to approve of that agent. For each decision, satisfaction is calculated by subtracting the costs of the decision from its payoff and comparing the result to a preset aspiration level. If the decision is satisfactory (payoff - costs - aspiration > 0), the propensity of repeating the decision is increased, and in case of a dissatisfactory decision (payoff - costs - aspiration < 0) the propensity of repeating the decision is decreased.
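The evaluation and learning step can be sketched in Python; this is a hedged illustration, with function names of our own choosing, mirroring the reinforcement scheme in the learn_cooperation and learn_approval procedures of the NetLogo code below:

```python
# A hedged sketch of one decision's evaluation and learning step.
# Function names are illustrative; the update mirrors the reinforcement
# scheme in the learn_cooperation and learn_approval procedures.

def satisfaction(payoff, cost, aspiration):
    """Net outcome of a decision relative to the aspiration level."""
    return payoff - cost - aspiration

def update_propensity(p, acted, reinforcer):
    """Shift the propensity p of repeating the decision just taken.

    The reinforcer is positive for a satisfactory outcome and negative
    for a dissatisfactory one; it is assumed to lie in [-1, 1].
    """
    if acted:
        if reinforcer >= 0:
            return p + reinforcer * (1 - p)  # satisfied after acting: raise p
        return p + reinforcer * p            # dissatisfied after acting: lower p
    if reinforcer >= 0:
        return p - reinforcer * p            # satisfied after not acting: lower p
    return p - reinforcer * (1 - p)          # dissatisfied after not acting: raise p

# An agent cooperated (p = 0.4) and was satisfied (reinforcer 0.5):
print(update_propensity(0.4, True, 0.5))  # ~0.7
```

Note that satisfactory outcomes push the propensity towards the choice just made, while dissatisfactory outcomes push it towards the opposite choice, which is what makes a run of mutually satisfactory choices self-reinforcing.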

HOW TO USE IT

The model setup can be determined by Model controls, Time limit settings, Parameter settings and Random seed settings, while model results are shown under General output and Turtle viewer.

Model controls

These buttons give the basic controls for the model. Run model lets the model procedures run indefinitely or until the set time limit is reached, while Run single iteration goes through all the procedures once. Setup initialises the model, and must be used before running it.

Time limit settings

These controls determine whether the model stops after a preset number of iterations, and how many iterations should be run if so. The time limit can be switched On or Off using the Time_Limit? switch, while Number_of_Iterations determines how many iterations the model runs in case the time limit is On. If the time limit is switched Off, the value of Number_of_Iterations is irrelevant.

Parameter settings

These settings determine model dynamics and agent behaviour. initial_cooperation_propensity determines how likely an agent is to choose to contribute to the public good in the first iteration, before any learning has taken place. initial_approval_propensity does the same for the decisions to approve of other agents.

The distinguishing parameters in the model are cooperation_approval_value and approvalofothers_value. cooperation_approval_value determines the value of approval from another agent in evaluating the decision to work, while approvalofothers_value determines the value of approval from another agent in deciding whether to approve of that specific agent or not.

approval_cost determines the individual cost of the decision to approve of an agent, while cooperation_cost determines the individual cost of contributing to the collective good.

learning_parameter governs the speed and strength of learning effects. workteam_size sets the number of agents N. As the value obtained if all agents contribute to the collective good is standardised at one, this affects the value of any individual agent's contribution to the collective good (which is 1/N). This is relevant for setting approvalofothers_value and cooperation_approval_value.
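The aspiration levels against which payoffs are evaluated follow directly from these parameters: in the model's setup procedure they are set to the midpoint of each decision's payoff range. A small Python sketch (function names are ours) of those two formulas:

```python
# Aspiration levels as set in the model's setup procedure: the midpoint
# of the payoff range of each decision (function names are ours).

def collectivegood_aspiration(N, cooperation_approval_value, cooperation_cost):
    # Midpoint between the worst and best possible cooperation payoff.
    return 0.5 * (1 + cooperation_approval_value * (N - 1) - cooperation_cost)

def approval_aspiration(N, approvalofothers_value, approval_cost):
    # Midpoint for the decision to approve of one other agent.
    return 0.5 * (1 / N + approvalofothers_value - approval_cost)

# With the THINGS TO TRY settings (N = 10, both approval values 0,
# cooperation_cost = 1, approval_cost = 0.1), both aspirations are 0:
print(collectivegood_aspiration(10, 0, 1))  # 0.0
print(approval_aspiration(10, 0, 0.1))      # 0.0
```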

Random seed settings

For reproducing the results or a specific model run, the random seed of that model run must be known and used.

If the Use_user_seed? switch is set to Off, the random seed used for the model run will be printed in the Seed_input input screen. If a model run is to be saved, the random seed can be copied from the Seed_input input screen for later usage.

If the Use_user_seed? switch is set to On, the random seed to be used during the run is the one in the Seed_input input screen, and the input screen will not be overwritten by the setup procedure. This means that a chosen or retained random seed can be entered, and that the random seed in the input box will continue to be used until either the Use_user_seed? switch is switched Off or another seed is manually entered.

General output

The two plots show the main model results using summary measures. Plots are updated at the end of an iteration.

The Propensities to approve and cooperate plot shows the agents' mean propensities to contribute to the public good in purple. The mean propensity to approve of other agents is shown in green.

The Compliant control and relational control plot shows the measure of compliant control in purple and relational control in green. Compliant control is the correlation between the propensity of agents to contribute to the public good and the propensity of other agents to approve of the contributing agent (which is used to detect c-a-exchange). Relational control is the correlation between the propensity of one agent to approve of another agent and the propensity of this other agent to approve of the first agent (which is used to detect a-a-exchange).
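Both control measures are plain Pearson correlations over per-agent (or per-link) quantities. A minimal Python sketch with illustrative data (the numbers are made up for the example):

```python
# Both control measures are plain Pearson correlations; a minimal sketch.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

# Compliant control correlates each agent's cooperation propensity with
# the summed propensity of the others to approve of that agent:
cooperation = [0.9, 0.7, 0.2]        # illustrative propensities
incoming_approval = [2.4, 1.0, 0.5]  # illustrative sums over in-links
print(round(pearson(cooperation, incoming_approval), 3))
```

Relational control uses the same formula, but over pairs of opposite-direction approval propensities.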

Turtle viewer

The turtle viewer shows per-iteration behaviour of the agents. Each agent has been assigned a random color, and all links of the same color are links from this agent to other agents. If an agent contributes to the public good, it grows in size to indicate this. If an agent approves of another agent, the link from the approving agent to the agent receiving approval widens.

THINGS TO NOTICE

The original cohesion-compliance hypothesis assumes approvalofothers_value to be nil, so variation of this parameter is instrumental in examining the generalisability of the hypothesis.

An important dynamic in this model is stochastic collusion, a self-enforcing cooperative equilibrium resulting from stochastic decision procedures with reinforcement learning, as explored by Michael W. Macy (1991). This means that the agents will shift to a cooperative equilibrium in the long run, even if the value of approval is zero. However, this result becomes harder to obtain as the number of agents involved in cooperation grows, because the probability decreases that a sufficient number of agents choose to contribute at the same time for their cooperation to be satisfactory and be reinforced. If the learning parameter is set to a lower value, stochastic collusion also becomes less likely to occur within an unchanged and limited timeframe, because the propensity for agents to deviate from satisfactory cooperation remains influential for longer.

The theoretical contribution of this model is that, while a situation of exclusive c-a-exchange leads to increasing cooperation over time, the possibility of a-a-exchange tends to lead to low cooperation rates while keeping satisfaction high. This results from the increasing difficulty of obtaining stochastic collusion on collective good contributions as the number of agents rises. Satisfaction from mutual approval requires only two agents to coordinate (technically also through stochastic collusion), while satisfactory levels of contribution to the collective good require a substantial proportion of the turtles to coordinate, which becomes more difficult the further the number of turtles exceeds two.
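The coordination gap can be illustrated with a bit of arithmetic that is not part of the model itself: if each agent independently cooperates with propensity p, the chance that all N cooperate in the same iteration is p to the power N, which collapses quickly as N grows.

```python
# Illustrative arithmetic, not part of the model: if each agent cooperates
# independently with propensity p, the chance that all N cooperate in the
# same iteration is p ** N, which collapses quickly as N grows.

def prob_all_cooperate(N, p):
    return p ** N

print(prob_all_cooperate(2, 0.5))   # 0.25: two agents coincide easily
print(prob_all_cooperate(10, 0.5))  # under 0.1%: ten agents rarely do
```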

THINGS TO TRY

What happens if approval does play a role in evaluating the decision to approve, but does not affect the decision to contribute to the public good (cooperation_approval_value = 0, approvalofothers_value > 0)? Why is this? This is easiest to see if the Time_Limit? switch is set to Off, as it can take some time to become noticeable.

If the settings are workteam_size = 10, cooperation_approval_value = approvalofothers_value = 0, cooperation_cost = 1 and approval_cost = 0.1, both approval and collective good contributions quickly drop to 0 and remain there. However, if approval_cost is then lowered to 0.09, the propensity to approve suddenly begins hovering around 1.5. Why does this change occur?

EXTENDING THE MODEL

A logical first step in extending the model would be to make the agents heterogeneous in the value they attach to approval for contributing to the collective good or approving.

Another interesting way to modify the model would be to change the network structure. The model assumes a completely connected network, but practical barriers in communication might mean that some agents wouldn't even have the option of approving of each other (for example, pairs of work teams collaborating on a project while only communicating through their team leaders). How might this change model dynamics?

At the moment, visualisation of agents contributing to the public good is functional, but not particularly interesting. Could you write a system that is more pleasing to the eye?

NETLOGO FEATURES

Because the variance of the propensities to approve and to cooperate falls to zero in the self-enforcing equilibria, an artificial limit had to be set on the calculation of compliant control and relational control. As soon as the denominator of either calculation becomes smaller than 1.0E-10, the affected correlation is taken to be equal to the last value calculated beforehand for as long as the denominator remains smaller than 1.0E-10.
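The guard can be sketched as a denominator check around the correlation (in Python rather than NetLogo; names are ours):

```python
# A sketch of the guard: when the denominator of the correlation falls
# below 1.0E-10, the previously computed value is kept instead of
# dividing by (almost) zero.
from math import sqrt

EPS = 1.0e-10

def guarded_correlation(xs, ys, previous):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    if den < EPS:
        return previous  # variance has collapsed: retain last iteration's value
    return num / den

# All cooperation propensities identical, so the denominator is zero:
print(guarded_correlation([1.0, 1.0, 1.0], [0.2, 0.9, 0.4], 0.8))  # 0.8
```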

In this implementation, the decision to approve of a turtle j is not taken by turtle i (which is what happens on a conceptual level) but by the link from i to j. This was done because assigning the decision to approve to the turtle would have required every turtle to have an approval propensity for every other turtle, which was deemed prohibitively more complex without substantially improving the model.
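For comparison, the rejected design would give each turtle a per-partner propensity table; in Python terms, something like a nested mapping (a hypothetical alternative, not what the model does):

```python
# Hypothetical alternative design (not what the model does): each agent
# keeps a table of approval propensities keyed by partner, instead of
# one propensity stored on each directed link.
agents = [0, 1, 2]
approval_propensity = {i: {j: 0.5 for j in agents if j != i} for i in agents}

approval_propensity[0][2] = 0.7   # agent 0's propensity towards agent 2
print(approval_propensity[0][2])  # 0.7
```

Storing the propensity on the link keeps the satisfaction calculation and the learning update local to each dyad, which is why this implementation delegates the decision to the links.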

RELATED MODELS

As of yet, we do not know of other NetLogo models to refer to, but feel free to contact Siebren Kooistra (sillykoalas@tutanota.com) with any suggestions.

CREDITS AND REFERENCES

Flache, A., & Macy, M. W. (1996). The Weakness of strong ties: Collective action failure in a highly cohesive group. Journal of Mathematical Sociology, 21, 3-28. http://doi.org/10.1080/0022250X.1996.9990172

Homans, G. C. (1974). Social behavior: Its elementary forms. New York: Harcourt Brace Jovanovich.

Macy, M. W. (1991). Learning to cooperate: Stochastic and tacit collusion in social exchange. American Journal of Sociology, 97, 808–843. http://doi.org/10.1086/229821

HOW TO CITE

To correctly reference this model in a text, please refer both to this model and to the NetLogo software in general. In APA style, the references would be:

Kooistra, S., & Flache, A. (2020). The weakness of strong ties [NetLogo program]. Retrieved from http://modelingcommons.org/browse/one_model/6569

Wilensky, U. (1999). NetLogo. Evanston, IL: Center for Connected Learning and Computer-Based Modeling, Northwestern University. Retrieved from http://ccl.northwestern.edu/netlogo/

COPYRIGHT AND LICENSE

Copyright 2020 Siebren Kooistra

Credit to: Prof. dr. Andreas Flache, for extensive advice, co-authorship of the model description, and the "weakness of strong ties" concept with accompanying papers on which this model is based; and Prof. dr. Michael W. Macy, for co-authoring the 1996 paper on which this model is based.

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.

Concerning the program, the copyright holder can be contacted via: sillykoalas@tutanota.com


globals [collectivegood_contributions compliant_control relational_control collectivegood_aspirations approval_aspirations seed]
turtles-own [cooperation_propensity satisfaction_cooperation workstate approval_recieved cooperation_reinforcer incoming_approval_propensities compliancy_numerator squared_deviation_approval squared_deviation_cooperation]
links-own [approval_propensity satisfaction_approval approvalstate approval_reinforcer in_deviation_squared out_deviation_squared deviations_product in_approval out_approval in_deviation out_deviation]

to setup
  ca
  ifelse Use_user_seed? = TRUE [set seed Seed_input] [set seed new-seed set Seed_input seed]
  random-seed seed
  ask patches [set pcolor white]                                                                                         ;For as much visual clarity as is possible given the number of links, the background is set to white.
  create-turtles workteam_size                                                                                           ;workteam_size is variable, so as to allow experimentation with the effect of group size.
  layout-circle turtles 15                                                                                               ;The turtles are arranged in a circle as fits a completely connected network.
  ask turtles [
    facexy 0 0
    set shape "square"                                                                                                   ;Because the facing of the turtle does not matter, the turtle is made to face the center of the world and given a square shape.
    set color ((who * 10) + 5)                                                                                           ;The turtles are given a colour which contrasts with the white background, and each turtle is assigned a colour arbitrarily related to its who number.
    create-links-to other turtles [                                                                                      ;All turtles are connected to each other...
      set color [color] of myself                                                                                        ; ...the links are given the colour of their turtle of origin...
      set approval_propensity initial_approval_propensity                                                                ; ...and because approval is controlled by the links, approval propensity is determined for them instead of the turtles.
    ]
    set cooperation_propensity initial_cooperation_propensity                                                            ;The turtles, meanwhile, are granted a certain propensity to contribute to the collective good (cooperate).
  ]
  set collectivegood_aspirations (0.5 * (1 + cooperation_approval_value * (workteam_size - 1) - cooperation_cost))       ;The aspirations by which turtles evaluate the costs and benefits of cooperation are the midpoint of the reward distribution.
  set approval_aspirations (0.5 * (1 / workteam_size + approvalofothers_value - approval_cost))                          ;The same applies to the aspirations by which the costs and benefits of approval of alter are evaluated.
  reset-ticks
end 

to go
  if Time_Limit? = TRUE and ticks >= Number_of_Iterations [stop]
  cooperate
  approve
  reward_cooperation
  reward_approval
  learn_cooperation
  learn_approval
  measure-compliant-control
  measure-relational-control
  tick
end 

to cooperate
  ask turtles [                                                                                         ;The turtle decides whether to cooperate or not based on the propensity for cooperation it adopted in the last round.
    ifelse random 1000001 > 1000000 * cooperation_propensity [set workstate 0 set size 1] [set workstate 1 set size 2]        ;The turtle does not cooperate if the pseudo-randomly generated number between 0 and 1,000,000 is greater than 1,000,000 times its cooperation_propensity value,
  ]                                                                                                     ;because this setup means that a higher cooperation propensity is less likely to be surpassed and so more likely to lead to the actor cooperating.
  set collectivegood_contributions ((count turtles with [workstate = 1]) / workteam_size)               ;This calculates the total result produced by all cooperating turtles.
end 

to approve
  ask turtles [
    ask my-out-links [                                                                                                                    ;To approve of other turtles, each turtle has a link to every other turtle and this link approves or does not approve of the turtle at the other end of the link.
      ifelse random 1000001 > 1000000 * approval_propensity [set thickness 0 set approvalstate 0] [set thickness 0.3 set approvalstate 1] ;The decision to approve is taken in a similar manner to the decision to cooperate, except that the link takes the decision to allow unique approval propensities within each dyad. If approval is given, the width of the link increases to provide visual feedback.
    ]
    set approval_recieved (count my-in-links with [approvalstate = 1]) * cooperation_approval_value                                       ;The total approval received is calculated by the turtles, not by the links (which are confined to a single dyad).
  ]
end 

to reward_cooperation
  ask turtles [
    set satisfaction_cooperation (collectivegood_contributions + approval_recieved - cooperation_cost * workstate - collectivegood_aspirations)                                                              ;Each turtle decides whether it is satisfied with the cooperation decision it made this iteration by comparing its cost-benefit balance with its aspiration level. The costs are the cost of cooperation (the cost of approval is irrelevant for cooperating, as it does not incur those costs). The benefits are the actor's share in the collective good and the approval it has received (regardless of whether this results from its work decision).
    set cooperation_reinforcer (learning_parameter * satisfaction_cooperation) / (((workteam_size - 2) / workteam_size + cooperation_approval_value * (workteam_size - 1) + cooperation_cost) / 2)           ;The turtle then uses its satisfaction as input to its learning reinforcer (whose denominator is the maximum possible satisfaction from cooperation). This makes it so that a more satisfactory or dissatisfactory result has a larger effect on decisions in future iterations.
    if cooperation_reinforcer > 1 [set cooperation_reinforcer 1]
    if cooperation_reinforcer < -1 [set cooperation_reinforcer -1]
  ]
end 

to reward_approval
  ask turtles [
    ask my-out-links [
      set satisfaction_approval ([workstate] of end2 / workteam_size + approvalofothers_value * [approvalstate] of link ([who] of end2) ([who] of myself) - approval_cost * approvalstate - [approval_aspirations] of myself)   ;Again, approval is processed by links instead of turtles, in order to allow for dyad-specific satisfaction calculations. In this case, what is weighted against the aspiration level is the other turtle's contribution to the collective good, whether alter (the other turtle) approved of ego (the turtle whose out-link is doing the calculation) and the cost of approval.
      set approval_reinforcer (2 * learning_parameter * satisfaction_approval) / (1 / workteam_size + approvalofothers_value + approval_cost)                                                                                   ;Just as with the satisfaction with cooperation, the satisfaction with approving is used in a reinforcer function. The formula differs from the formula used for cooperation insofar as the formula for the maximum satisfaction is simplified into it.
      if approval_reinforcer > 1 [set approval_reinforcer 1]
      if approval_reinforcer < -1 [set approval_reinforcer -1]
    ]
  ]
end 

to learn_cooperation
  ask turtles [
    ifelse workstate = 1
    [                                                                                                                                                                                                          ;If the turtle cooperated this iteration...
      ifelse satisfaction_cooperation >= 0
      [set cooperation_propensity (cooperation_propensity + cooperation_reinforcer * (1 - cooperation_propensity) * workstate - cooperation_reinforcer * (1 - cooperation_propensity) * (1 - workstate))]      ; ...and was satisfied with its payoff, the cooperation reinforcer is positive, the first part of this formula is activated, and the propensity to cooperate is updated upward.
      [set cooperation_propensity (cooperation_propensity + cooperation_reinforcer * cooperation_propensity * workstate - cooperation_reinforcer * cooperation_propensity * (1 - workstate))]                  ; ...and was not satisfied with its payoff, the cooperation reinforcer is negative, the first part of the formula is activated but has a negative outcome (+ - = -), and the propensity to cooperate is updated downward.
    ]
    [                                                                                                                                                                                                          ;If the turtle did not cooperate this iteration...
      ifelse satisfaction_cooperation >= 0
      [set cooperation_propensity (cooperation_propensity + cooperation_reinforcer * cooperation_propensity * workstate - cooperation_reinforcer * cooperation_propensity * (1 - workstate))]                  ; ...and was satisfied with its payoff, the cooperation reinforcer is positive, the second part of the formula is activated, and the propensity to cooperate is updated downward.
      [set cooperation_propensity (cooperation_propensity + cooperation_reinforcer * (1 - cooperation_propensity) * workstate - cooperation_reinforcer * (1 - cooperation_propensity) * (1 - workstate))]      ; ...and was not satisfied with its payoff, the cooperation reinforcer is negative, the second part of the formula is activated and becomes positive through a double negative (- - = +), and the propensity to cooperate is updated upward.
    ]
  ]
end 

to learn_approval
  ask links [
    ifelse approvalstate = 1
    [                                                                                                                                                                                                     ;If ego did approve of alter this iteration...
      ifelse satisfaction_approval >= 0
      [set approval_propensity (approval_propensity + approval_reinforcer * (1 - approval_propensity) * approvalstate - approval_reinforcer * (1 - approval_propensity) * (1 - approvalstate))]           ; ...and was satisfied with its payoff, the approval reinforcer is positive, the first part of this formula is activated, and the propensity to approve of alter is updated upward.
      [set approval_propensity (approval_propensity + approval_reinforcer * approval_propensity * approvalstate - approval_reinforcer * approval_propensity * (1 - approvalstate))]                       ; ...and was not satisfied with its payoff, the approval reinforcer is negative, the first part of this formula is activated but has a negative outcome (+ - = -), and the propensity to approve of alter is updated downward.
    ]
    [                                                                                                                                                                                                     ;If ego did not approve of alter...
      ifelse satisfaction_approval >= 0
      [set approval_propensity (approval_propensity + approval_reinforcer * approval_propensity * approvalstate - approval_reinforcer * approval_propensity * (1 - approvalstate))]                       ; ...and was satisfied with its payoff, the approval reinforcer is positive, the second part of this formula is activated, and the propensity to approve of alter is updated downward.
      [set approval_propensity (approval_propensity + approval_reinforcer * (1 - approval_propensity) * approvalstate - approval_reinforcer * (1 - approval_propensity) * (1 - approvalstate))]           ; ...and was not satisfied with its payoff, the approval reinforcer is negative, the second part of this formula is activated and the propensity to approve of alter is updated upward.
    ]
  ]
end 

to measure-compliant-control
 ask turtles [
    set incoming_approval_propensities sum([approval_propensity] of my-in-links)                                                                                                        ;To calculate compliant control as the correlation between the propensity of others to approve of an actor and the actor's propensity to cooperate, the first step is calculating the sum of others' approval propensities for each turtle.
  ]
  ask turtles [
    set compliancy_numerator (incoming_approval_propensities - mean [incoming_approval_propensities] of turtles) * (cooperation_propensity - mean [cooperation_propensity] of turtles)  ;This approval mass is then used to calculate the numerator of the correlation between approval and cooperation, which consist of the sum of all turtles' products of deviations for approval mass and propensity to cooperate.
    set squared_deviation_approval (incoming_approval_propensities - mean [incoming_approval_propensities] of turtles) ^ 2                                                              ;In order to calculate the first part of the denominator, each turtle calculates the squared deviation of its approval mass from the mean approval mass. Problem: if every actor approves of every other actor with complete certainty (the self-reinforcing equilibrium shown in figure 4 of Flache & Macy, 1996), each actor has N - 1 incoming approvals and the deviation becomes zero (whether squared or not). The sum of squares will also be zero. Under this condition, the sum of squares for the approval is multiplied by zero and the denominator of the correlation becomes zero. This becomes a problem when calculating the correlation.
    set squared_deviation_cooperation (cooperation_propensity - mean [cooperation_propensity] of turtles) ^ 2                                                                           ;In order to calculate the second part of the denominator, each turtle calculates the squared deviation of its propensity to cooperate from the mean propensity to cooperate. Under conditions of complete, self-reinforcing cooperation, the problem described in the line above could also occur with the propensity to cooperate, because if p = 1 for all turtles and mean(p) = 1 the deviation is zero.
  ]
  if sqrt(sum([squared_deviation_approval] of turtles) * sum([squared_deviation_cooperation] of turtles)) > 1.0E-10 [set compliant_control (sum([compliancy_numerator] of turtles) / sqrt(sum([squared_deviation_approval] of turtles) * sum([squared_deviation_cooperation] of turtles)))] ;Using the squared deviations of expected approval, the squared deviations of cooperation propensities, and the numerator terms calculated by compliancy_numerator, the formula for a correlation is filled in and the compliant control measure is calculated. If the denominator becomes smaller than 1.0E-10, the value for compliant control from last iteration is retained to avoid miscalculations due to finite precision (or the aforementioned division by zero).
end 

to measure-relational-control
  ask links [                                                                                   ;Because relational control is measured using the correlation between the approval propensities of a pair of links (link x y and link y x), the calculation is rather long-winded.
    set in_approval [approval_propensity] of link [who] of end2 [who] of end1                   ;Each link searches the approval propensity of the link that goes the other way.
  ]
  ask links [
    set in_deviation (in_approval - mean([approval_propensity] of links))                       ;The deviation of incoming approval is calculated by comparing a link's incoming approval to the mean propensity to approve. The general mean propensity to approve is used because every outgoing approval propensity is also an incoming approval propensity, which means that the mean incoming propensity to approve is identical to the mean propensity to approve.
    set out_deviation (approval_propensity - mean([approval_propensity] of links))              ;The deviation of outgoing approval is calculated in a similar vein, again using the general mean propensity to approve.
    set deviations_product (in_deviation * out_deviation)                                       ;To calculate the contribution of a link to the numerator term of the correlation, the product of the two deviations is calculated.
    set in_deviation_squared (in_deviation ^ 2)                                                 ;The in_deviation is squared for usage in one of the sums of squares in the denominator...
    set out_deviation_squared (out_deviation ^ 2)                                               ; ...and the same is done for the out_deviation.
  ]
  let relational_numerator sum([deviations_product] of links)                                   ;The numerator term is calculated by summing the turtles' contributions to this value.
  let sum_of_squares_in sum([in_deviation_squared] of links)                                    ;The sum of squares of in_deviations is the first term in the denominator...
  let sum_of_squares_out sum([out_deviation_squared] of links)                                  ; ...and the sum of squares of out_deviations is the second term.
  if sqrt(sum_of_squares_in * sum_of_squares_out) > 1.0E-10 [set relational_control (relational_numerator / sqrt(sum_of_squares_in * sum_of_squares_out))] ;All terms necessary for the relational control correlation are calculated in advance, in order to make the calculation more transparent. If the denominator is smaller than 1.0E-10, last iteration's calculation is used (again, to avoid miscalculation).
end 

;; Copyright (c) 2020 Siebren Kooistra
;; See Info tab for full copyright and license.
