Leonardi Model 1

Author: Eleanor Anderson
Part of project 'Technology Use'
Model group MAM-2013
Written in NetLogo 5.0.5

WHAT IS IT?

This model is based on Leonardi, P. M. (2012). Car Crashes Without Cars: Lessons About Simulation Technology and Organizational Change from Automotive Design. MIT Press (MA).

It is intended to represent what happens when a new technology is introduced into an organization and people are trying to figure out what it is good for. It models a cyclical dynamic between people's interactions with one another ("social interactions") and their interactions with the technology itself ("material interactions"), which together determine whether the tool is actually taken up and used.

The cycle works as follows: Social interactions (i.e. people talking to one another about the new tool) shape people's expectations for the technology. These expectations in turn shape their material interactions with the tool, as people tend to find only the affordances they know to look for. Information from these material interactions (i.e. whether the technology worked as a person expected or not) is then incorporated into future social interactions.

In Leonardi's ethnographic work, he found that these dynamics led to two different patterns:

1. People begin with a widespread expectation that the technology affords something it does not. In the end, most people abandon the technology.

2. People begin without clear expectations of what the technology affords. In the end most people use the technology.

This model is the first and simplest in a series designed to model these dynamics.

HOW IT WORKS

The agents in this model are people and technologies. People have three important attributes:

1. A set of expectations about what the technology affords, including both a set of features (“a”, “b”, both, or neither) and a valence for each feature present (1 or -1). A valence of 1 represents a belief that the technology does afford that feature. A valence of -1 represents a belief that the technology does not afford that feature. People can also lack any expectation for a particular feature.

2. A usage, which refers to the features of the technology that they use. Usage can also take values of "a," "b," both, or neither; however, it does not carry a valence.

3. A level of persistence. This refers to how many times people will interact with the technology before their expectations and usage become permanently fixed.

The technology (there is never more than one technology in a given simulation) holds one attribute: its affordances. The technology’s set of affordances can include “a”, “b”, both, or neither. If a particular feature is present its valence is always positive.
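These attributes map onto simple data structures. As a rough, non-NetLogo illustration, here is a Python sketch; the field names are invented for this example and do not come from the model code:

```python
# Illustrative Python analogue of the agent attributes described above
# (not part of the NetLogo model; names are invented for this sketch).

# A person's expectations map a feature to a valence:
#    1 -> "I believe the technology affords this feature"
#   -1 -> "I believe the technology does NOT afford this feature"
# A missing key means the person has no expectation for that feature.
person = {
    "expectations": {"a": 1},   # expects feature "a" to work
    "usage": set(),             # features actually used (no valence)
    "persistence": 3,           # interactions left before views become fixed
}

# The technology holds only its affordances; a feature that is present
# always has positive valence, so a plain set suffices.
technology = {"affordances": {"a", "b"}}
```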

When the model begins, each person follows these rules at each tick:

1. Turn a little and take a step forward.

2. If I am close to any other agents (people or technology) and my persistence has not run out, pick a nearby agent to interact with.

   1. If I am interacting with another person who has an expectation about a feature I didn't know about, take on that person's expectation.

   2. If I am interacting with the technology, try to use it for one thing I expect it will be able to do.

      1. If that feature is among the technology's affordances, add it to my usage. If not, change my expectation to not believing the technology affords that feature.

   3. If I have no expectations, learn one feature from the technology itself (the same way I would from a person).

   4. Increment my persistence down.

   5. (If I have only negative expectations, do nothing.)
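The rules above can be sketched as a single interaction step. This is a simplified Python analogue, not the NetLogo code: movement and the spatial neighborhood test are omitted, plain dictionaries stand in for the table extension, and all names are invented for the sketch:

```python
import random

def interact(person, partner):
    """One interaction step, following the numbered rules above.
    `partner` is either another person-dict or the technology-dict
    (recognizable here by its "affordances" key)."""
    if person["persistence"] <= 0:
        return
    if "affordances" in partner:
        # Material interaction: try to use the technology for one
        # feature I positively expect it to afford.
        positive = [f for f, v in person["expectations"].items() if v == 1]
        if positive:
            feature = random.choice(positive)
            if feature in partner["affordances"]:
                person["usage"].add(feature)          # it worked
            else:
                person["expectations"][feature] = -1  # it failed: flip valence
                person["usage"].discard(feature)
        elif not person["expectations"]:
            # No expectations at all: learn one feature from the technology.
            if partner["affordances"]:
                f = random.choice(sorted(partner["affordances"]))
                person["expectations"][f] = 1
        # Only negative expectations: do nothing further.
        person["persistence"] -= 1
    else:
        # Social interaction: adopt one expectation I didn't already hold.
        novel = [f for f in partner["expectations"]
                 if f not in person["expectations"]]
        if novel:
            f = random.choice(novel)
            person["expectations"][f] = partner["expectations"][f]
```

Note that, as in the model code, persistence is only spent on material interactions with the technology, not on conversations with other people.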

HOW TO USE IT

Select a set of affordances for the technology.

Select the number of people who will start with expectations of “a,” “b,” both, or neither. All starting expectations are positive.

If auto-total-people is set to a number, the model will automatically add people with no expectations until the total number of people equals auto-total-people. If the user has already selected more than that number of people with starting expectations of "a", "b", or both, an error message will pop up.
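The bookkeeping here is simple arithmetic. A minimal sketch (the function and variable names are invented for illustration):

```python
def no_expectation_count(auto_total, expecting_a, expecting_b, expecting_both):
    """Number of blank-expectation people needed to reach auto_total.
    Returns None to signal the error condition described above."""
    seeded = expecting_a + expecting_b + expecting_both
    if seeded > auto_total:
        return None  # too many seeded people: the model shows an error message
    return auto_total - seeded
```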

Optionally, "limited-persistence" can be turned on, which sets starting persistence to 3 for all people. When "limited-persistence" is off, people start with a persistence of 1000, an arbitrarily high number.


People's expectations are represented by small symbols next to their heads. Blue represents "a" and yellow represents "b". Dots indicate a positive expectation; Xs indicate a negative expectation.

People's usage is represented by the color of their body. Gray represents no usage, blue represents "a," yellow represents "b" and green represents both "a" and "b."

THINGS TO TRY

Try to match the reference patterns Leonardi found in his ethnographic work!

EXTENDING THE MODEL

Try making "persistence" a slider instead of a switch.

NETLOGO FEATURES

Note that both expectations and affordances are coded here using the table extension.

RELATED MODELS

Leonardi Model 2: http://modelingcommons.org/browse/one_model/4074

Leonardi Model 3: http://modelingcommons.org/browse/one_model/4075

CREDITS AND REFERENCES

Leonardi, P. M. (2012). Car Crashes Without Cars: Lessons About Simulation Technology and Organizational Change from Automotive Design. MIT Press (MA).


extensions [table]

breed [people person]
breed [expectations expectation]
breed [technologies technology]

people-own [usage understanding influencer persistence]
expectations-own [feature strength]
technologies-own [affordances]

directed-link-breed [holds hold]

to setup

  ;; If auto-total-people is on, make sure the number of people expecting a, b and both doesn't exceed
  ;; the total number set. 
  ;; Populate the rest of the world (up to the total number set) with people with no expectations.
  ifelse auto-total-people != "off" and #-expecting-a + #-expecting-b + #-expecting-both > auto-total-people
  [ user-message (word "There are too many people. Turn auto-total off, or reduce the number of people expecting a, b or both.")]
  [ if auto-total-people != "off"
    [ set #-no-expectations auto-total-people - (#-expecting-a + #-expecting-b + #-expecting-both) ]
  ;; create the total number of people who will be in the model
  ;; distribute them randomly around the world
  ;; format their size and shape
  ;; set their usage blank
  ;; format their color to reflect their usage
  ;; give each of them a blank table that will hold their expectations about the technology
  create-people total-# 
  [ setxy random-pxcor random-pycor
    set shape "person"
    set size 1.6
    set usage []
    color-code usage .8 
    set understanding table:make ]  
  ;; for the number of people expecting only a, pick people with blank tables, and set their understanding of feature a to 1 
  ask n-of #-expecting-a people with [ table:length understanding < 1 ]
  [ table:put understanding "a" 1 ]
  ;; for the number of people expecting only b, pick people with blank tables, and set their understanding of feature b to 1 
  ask n-of #-expecting-b people with [ table:length understanding < 1 ]
  [ table:put understanding "b" 1 ]
  ;; for the number of people expecting a and b, pick people with blank tables, and set their understanding of 
  ;; features a and b to 1   
  ask n-of #-expecting-both people with [ table:length understanding < 1 ]
  [ table:put understanding "a" 1
    table:put understanding "b" 1 ] 
  ;;  build a network of expectations to provide a visual representation of each person's table of understanding
  ask people
  [ update-expectations 
    ;; if limited persistence is on, give people a persistence of 3
    ;; if it's off, give people a persistence that is extremely high   
    ifelse limited-persistence
    [ set persistence 3 ]
    [ set persistence 1000 ] ]

  ;; format the network of expectations
  ask expectations
  [ format-expectations ]

  ;; set up the technology
  adopt-technology

  reset-ticks ]
end

to update-expectations
  ;; ask people whose table of understandings doesn't match the visual representation of their expectations
  ask people with [table:length understanding != count out-hold-neighbors]
  ;; create a blank slate
  [ ask out-hold-neighbors
    [ die ]
    ;; build the network of expectations up from the understanding table
    let instructions table:keys understanding
    while [instructions != []]
    [ hatch-expectations 1
      [ create-hold-from myself
        ask my-in-holds [hide-link]
        set feature first instructions
        set strength [table:get understanding first instructions] of myself
        set instructions but-first instructions ] ] ]

  ask expectations
  [ format-expectations ]
end

to format-expectations
  ;; make expectations' color, shape and placement reflect what they indicate

  if strength > 0
  [ show-turtle
    set shape "circle"
    set size .5 ]

  if strength < 0
  [ show-turtle
    set shape "x"
    set size .6 ]

  ;; no meaningful expectation: hide the marker
  if strength = 0
  [ hide-turtle ]

  ;; place "a" markers up and to the left of the holder, "b" markers up and to the right
  if feature = "a"
  [ setxy ( [ xcor ] of one-of in-hold-neighbors - .5) ( [ ycor ] of one-of in-hold-neighbors + .5) ]
  if feature = "b"
  [ setxy ( [ xcor ] of one-of in-hold-neighbors + .5) ( [ ycor ] of one-of in-hold-neighbors + .5) ]
  color-code feature 1.5
end

to adopt-technology
  create-technologies 1
  [ set shape "box"
    set size 3
    ;; create a blank table that will hold affordances
    set affordances table:make
    ;; fill in the table from the features selected by the user
    let instructions (sentence technology-affordances)
    while [instructions != [] ]
    [ table:put affordances first instructions 1
      set instructions but-first instructions ]
    color-code (table:keys affordances) -2.5 ]
end

;; color code expectations, usage and affordances to provide a visual indication of whether they match.

to color-code [thing number]

;; I tried to let the number (i.e. the amount of color change) be set within this procedure, but couldn't get it to work.
;; When the procedure was called by something that didn't hold one of the possible "things" it would
;; stop during the if statements and return an error message. Not sure why it wouldn't just return false and keep going...
;; But anyway, I relented and made the number I want each element's color adjusted by its own input, that I just list
;; when I call the procedure.
;; It's still much more parsimonious this way than having entirely separate procedures to color code expectations,
;; usage, and affordances which is what I used to have...

;let number 1
;if thing = usage 
;[set number .8]
;if thing = (table:keys affordances)
;[set number -2.5]
;if thing = feature
;[set number 1.5] 

  if member? "a" thing and not member? "b" thing
    [ set color blue + number ] 
  if member? "b" thing and not member? "a" thing
    [ set color yellow + number ] 
  if member? "a" thing and member? "b" thing
    [ set color green + number] 
  if not member? "a" thing and not member? "b" thing
    [ set color gray + number ]
end

to go
  ;; stop conditions
  if all? people [persistence = 0]
    [ stop ]
  if ticks > 99 and (all? people [usage = []]
    or all? people [usage = ["a"]]
    or all? people [usage = ["b"]]
    or all? people [usage = ["a" "b"] or usage = ["b" "a"]])
    [ stop ]
  ;; running the model
  ask people
  [ move
    interact
    update-expectations ]
  tick
end

to move
  let path random-normal 0 30
  rt path
  fd 1
  ;; move my expectation markers along with me
  ask out-hold-neighbors
  [ rt path
    fd 1 ]
end

to interact
  set influencer nobody
  if persistence > 0
  ;; if I am near the technology or a person, pick one of those entities to influence me  
  [ let potential-influencers (turtle-set technologies in-radius 3 other people in-radius 1)
    ;show potential-influencers
    set influencer one-of potential-influencers
    ;show influencer
    ;; if I am being influenced by a person (social interaction), then learn from their expectations
    if is-person? influencer 
    [ learn-from ([understanding] of influencer) ]
    ;; if I am being influenced by the technology (material interaction), then if I have some understanding of
    ;; what the technology is for I will try to use it for that.
    ;; otherwise I will learn from the technology
    if is-technology? influencer 
    [ ifelse table:length understanding > 0
      [ use-technology ] 
      [ learn-from ([affordances] of influencer) ]  
      ;; increment my persistence down 
      set persistence persistence - 1 ] ]
end

to learn-from [source]
  ;; if the influencer has any features to learn from,
  ;; pick one to be the insight I learn
  if table:length source > 0
  [ let insight one-of table:keys source
    ;; if that feature is new to me, put the insight into my table of understandings with the same value
    ;; that my influencer has for that feature
    if not table:has-key? understanding insight
    [ table:put understanding insight table:get source insight ] ]
end

to use-technology

;; if I have any positive expectations, pick one of those to be the way I try to use the technology 
if any? out-hold-neighbors with [strength > 0] 
[ let use [feature] of one-of out-hold-neighbors with [ strength > 0 ]
;; Note: this is the only time I use the expectations breed for more than just visualization. I tried 
;; several ways of doing this without appealing to the expectations, but none of them worked. Can't ask 
;; about values in a table directly (only keys), can't flatten a table to a non-nested list using sentence 
;; (not sure why)... I couldn't think of any other reasonably parsimonious way, using just the tables, 
;; that I could pick a feature with a positive value. I wanted to be consistent about either using the 
;; expectations turtles for functionality or not. I found them awkward to use for some of the more central 
;; interactional procedures--in particular I wanted to create learning procedures that would be "symmetric"
;; (ie work the same) with both people and technology, so I wanted a way to store expectations and 
;; affordances the same way for both people and technology...plus it felt strange for expectations to be 
;; "held" by people, but not be a directly accessible property of those people, so on balance I decided to 
;; switch to using tables. But in this case, I just couldn't figure out a way to do what I needed
;; with tables. I think the functionality should be the same, just a bit less elegant to switch.

;; if the feature I'm trying to use is one of the technology's affordances, then put that feature into my usage
;; if not, set my expectation for that feature negative (and ask my expectation network to adjust accordingly) 
;; and remove it from my usage 
  ifelse table:has-key? [affordances] of influencer use
[ set usage lput use usage
  set usage remove-duplicates usage
  color-code usage .8 ]
[ table:put understanding use -1 
  ask out-hold-neighbors with [feature = use ]
  [ set strength [table:get understanding use] of myself ] 
  set usage remove use usage ] ]
end

to-report total-#
  report #-expecting-a + #-expecting-b + #-expecting-both + #-no-expectations
end
