Language Change


This model was written in NetLogo 5.0.4.


WHAT IS IT?

This model explores how the properties of language users and the structure of their social networks can affect the course of language change.

HOW IT WORKS

In this model, there are two linguistic variants in competition within the social network -- one variant generated by grammar 0 and the other generated by grammar 1. Language users interact with each other based on whom they are connected to in the network. At each iteration, each individual speaks by passing an utterance, generated by either grammar 0 or grammar 1, to its neighbors in the network. Individuals then listen to their neighbors and change their grammars based on what they heard.


The networks in this model are constructed through the process of "preferential attachment" in which individuals enter the network one by one, and prefer to connect to those language users who already have many connections. This leads to the emergence of a few "hubs", or language users who are very well connected; most other language users have very few connections.
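The growth process described above can be sketched outside NetLogo as follows. This is an illustrative Python sketch, not the model's own code (the model is written in NetLogo); the function name and data layout are ours.

```python
import random

def preferential_attachment(num_nodes):
    """Grow a network by preferential attachment: each newcomer links to one
    existing node chosen with probability proportional to its degree."""
    neighbors = {0: [1], 1: [0]}  # seed network: two nodes and one edge
    for new in range(2, num_nodes):
        # "lottery": each edge endpoint is one ticket, so well-connected
        # hubs are more likely to win the new connection
        tickets = [n for n, nbrs in neighbors.items() for _ in nbrs]
        partner = random.choice(tickets)
        neighbors[new] = [partner]
        neighbors[partner].append(new)
    return neighbors
```

Because every new node brings exactly one edge, a network of N nodes ends up with N - 1 edges, and early arrivals accumulate high degree, producing the hubs described above.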

There are three different options to control how language users listen and learn from their neighbors, listed in the UPDATE-ALGORITHM chooser. For two of these options, INDIVIDUAL and THRESHOLD, language users can access only one grammar at a time. Those that can access only grammar 1 are colored white, and those that can access only grammar 0 are black. For the third option, REWARD, each grammar is associated with a weight, which determines the language user's probability of accessing that grammar. Because only two grammars are in competition here, the weights are represented with a single value: the weight of grammar 1. The color of each node reflects this probability; the larger the weight of grammar 1, the lighter the node.

  • INDIVIDUAL: Language users choose one of their neighbors randomly, and adopt that neighbor's grammar.

  • THRESHOLD: Language users adopt grammar 1 if some proportion of their neighbors is already using grammar 1. This proportion is set with the THRESHOLD-VAL slider. For example, if THRESHOLD-VAL is 0.30, then an individual will adopt grammar 1 if at least 30% of its neighbors have grammar 1.

  • REWARD: Language users update their probability of using one grammar or the other. In this algorithm, if an individual hears an utterance from grammar 1, the individual's weight of grammar 1 is increased, and they will be more likely to use that grammar in the next iteration. Similarly, hearing an utterance from grammar 0 increases the likelihood of using grammar 0 in the next iteration.
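The three options above can be sketched as simple per-node update functions. This is an illustrative Python sketch, not the model's code; the function names are ours, and `gamma` is the small reward/punish step size used by the model's listen procedure.

```python
import random

def update_individual(my_state, neighbor_states):
    """INDIVIDUAL: adopt the grammar of one randomly chosen neighbor."""
    return random.choice(neighbor_states)

def update_threshold(my_state, neighbor_states, threshold_val, sink_state_1=False):
    """THRESHOLD: adopt grammar 1 if enough neighbors already use it."""
    if sum(neighbor_states) >= threshold_val * len(neighbor_states):
        return 1
    # below threshold: revert to grammar 0, unless grammar 1 is a sink state
    return my_state if sink_state_1 else 0

def update_reward(weight, heard, gamma=0.01):
    """REWARD: linear reward/punish update of the weight of grammar 1."""
    if heard == 1:
        return weight + gamma * (1 - weight)  # nudge toward grammar 1
    return weight * (1 - gamma)               # nudge toward grammar 0
```

For example, with `threshold_val` 0.30 and four neighbors, a node needs at least 4 x 0.30 = 1.2, i.e. two, grammar-1 neighbors before it switches.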


HOW TO USE IT

The NUM-NODES slider determines the number of nodes (or individuals) to be included in the network population. PERCENT-GRAMMAR-1 determines the proportion of these language learners who will be initialized to use grammar 1. The remaining nodes will be initialized to use grammar 0.

Press SETUP-EVERYTHING to generate a new network based on NUM-NODES and PERCENT-GRAMMAR-1.

Press GO ONCE to allow all language users to "speak" and "listen" only once, according to the algorithm in the UPDATE-ALGORITHM dropdown menu (see the above section for more about these options). Press GO for the simulation to run continuously; pressing GO again will halt the simulation.

Press LAYOUT to move the nodes around so that the structure of the network is easier to see.

When the HIGHLIGHT button is pressed, rolling over a node in the network will highlight the nodes to which it is connected. Additionally, the node's initial and current grammar state will be displayed in the output area.

Press REDISTRIBUTE-GRAMMARS to reassign grammars to all language users, under the same initial conditions. For example, if 20% of the nodes were initialized with grammar 1, pressing REDISTRIBUTE-GRAMMARS will assign grammar 1 to a new sample of 20% of the population.

Press RESET-STATES to reinitialize all language users to their original grammars. This allows you to run the model multiple times without generating a new network structure.

The SINK-STATE-1? switch applies only for the INDIVIDUAL and THRESHOLD updating algorithms. If on, then once an individual adopts grammar 1, it can never go back to grammar 0.

The LOGISTIC? switch applies only for the REWARD updating algorithm. If on, an individual's probability of using one of the grammars is pushed to the extremes (closer to 0% or 100%), based on the output of the logistic function. For more details, see

The ALPHA slider also applies only for the REWARD updating algorithm, and only when LOGISTIC? is turned on. ALPHA represents a bias in favor of grammar 1. Probabilities are pushed to the extremes, and shifted toward selecting grammar 1. The larger the value of ALPHA, the more likely a language user will speak using grammar 1.
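The logistic filter can be written out explicitly. The sketch below is a Python transcription of the formula used in the model's speak procedure (gain = (ALPHA + 0.1) * 20, applied to the node's grammar-1 weight); the function name is ours.

```python
import math

def speak_probability(state, alpha):
    """Probability of uttering grammar 1 when LOGISTIC? is on.
    The gain rescales ALPHA; the logistic curve pushes mid-range weights
    toward the extremes and shifts them in favor of grammar 1."""
    gain = (alpha + 0.1) * 20
    return 1 / (1 + math.exp(-(gain * state - 1) * 5))
```

Note that a node with weight 0.5 already speaks grammar 1 more than half the time, and raising ALPHA raises that probability further.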

The plot "Mean state of language users in the network" calculates the average weight of grammar 1 for all nodes in the network, at each iteration.


THINGS TO NOTICE

Over time, language users tend to arrive at using just one grammar all of the time. However, they may not all converge to the same grammar. It is possible for sub-groups to emerge, which may be seen as the formation of different dialects.


THINGS TO TRY

Under what conditions is it possible to get one grammar to spread through the entire network? Try manipulating PERCENT-GRAMMAR-1, the updating algorithm, and the various other parameters. Does the number of nodes matter?


EXTENDING THE MODEL

Whether or not two language users interact with each other is determined by the network structure. How would the model behave if language users were connected by a small-world network rather than a preferential attachment network?

In this model, only two grammars are in competition in the network. Try extending the model to allow competition between three grammars.

The updating algorithm currently has agents updating asynchronously, so a grammar may spread one step or several within a single tick, depending on the links. Try implementing synchronous updating.
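Synchronous updating can be sketched as follows (illustrative Python, not part of the model): every node's next state is computed from the current snapshot of states, and only then are all changes committed, so no update within a tick can see another update from the same tick.

```python
def synchronous_step(states, neighbors, update):
    """One synchronous tick: compute each node's next state from the old
    states, then commit all of them at once. `update` is any per-node rule
    taking (my_state, neighbor_states) and returning the new state."""
    return {
        node: update(states[node], [states[n] for n in neighbors[node]])
        for node in states
    }
```

Because the dictionary comprehension reads only the old `states` mapping, the order in which nodes are visited no longer matters.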

Regardless of the updating algorithm, language users always start out using one grammar categorically (that is, with a weight of 0 or 1). Edit the model to allow some language users to be initialized to an intermediate weight (e.g., 0.5).


NETLOGO FEATURES

Networks are represented using turtles (nodes) and links. In NetLogo, both turtles and links are agents.


RELATED MODELS

Preferential Attachment


CREDITS AND REFERENCES

This model was also described in Troutman, Celina; Clark, Brady; and Goldrick, Matthew (2008) "Social networks and intraspeaker variation during periods of language change," University of Pennsylvania Working Papers in Linguistics: Vol. 14: Iss. 1, Article 25.


HOW TO CITE

If you mention this model in a publication, we ask that you include these citations for the model itself and for the NetLogo software:

  • Troutman, C. and Wilensky, U. (2007). NetLogo Language Change model. Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL.
  • Wilensky, U. (1999). NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL.


COPYRIGHT AND LICENSE

Copyright 2007 Uri Wilensky.


This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. To view a copy of this license, visit or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

Commercial licenses are also available. To inquire about commercial licenses, please contact Uri Wilensky at


breed [nodes node]

nodes-own [
  state            ;; current grammar state (ranges from 0 to 1)
  orig-state       ;; each person's initially assigned grammar state 
  spoken-state     ;; output of person's speech (0 or 1)
]


to setup
  clear-all
  set-default-shape nodes "circle"
  ask patches [ set pcolor gray ]
  repeat num-nodes [ make-node ]
  create-network
  repeat 100 [ layout ]
  distribute-grammars
  ask nodes [ update-color ]
  reset-ticks
end

;; Create a new node, initialize its state

to make-node
  create-nodes 1 [
    ;; start in random position near edge of world
    rt random-float 360
    fd max-pxcor
    set size 2
    set state 0.0
  ]
end

;; Initialize a select proportion of individuals to start with grammar 1

to distribute-grammars
  ask nodes [ set state 0 ]
  ;; ask a select proportion of people to switch to 1
  ask n-of ((percent-grammar-1 / 100) * num-nodes) nodes
    [ set state 1.0 ]
  ask nodes [
    set orig-state state     ;; used in reset-states
    set spoken-state state   ;; initial spoken state, for first timestep
  ]
end

to create-network
  ;; make the initial network of two nodes and an edge
  let partner nobody
  let first-node one-of nodes
  let second-node one-of nodes with [self != first-node]
  ;; make the first edge
  ask first-node [ create-link-with second-node [ set color white ] ]
  ;; randomly select unattached node to add to network
  let new-node one-of nodes with [not any? link-neighbors]
  ;; and connect it to a partner already in the network
  while [new-node != nobody] [
    set partner find-partner
    ask new-node [ create-link-with partner [ set color white ] ]
    set new-node one-of nodes with [not any? link-neighbors]
  ]
end

to update-color
  set color scale-color red state 0 1
end

to reset-nodes
  ask nodes [
    set state orig-state
  ]
end

to redistribute-grammars
  distribute-grammars
  ask nodes [ update-color ]
end

;; reports a string of the agent's initial grammar

to-report orig-grammar-string
  report ifelse-value (orig-state = 1.0) ["1"] ["0"]
end


to go
  ask nodes [ communicate-via update-algorithm ]
  ask nodes [ update-color ]
  tick
end

to communicate-via [ algorithm ] ;; node procedure
  ;; Discrete Grammar ;;
  ifelse (algorithm = "threshold") 
  [ listen-threshold ] 
  [ ifelse (algorithm = "individual") 
    [ listen-individual ] 
    [ ;; Probabilistic Grammar ;;
      ;; speak and ask all neighbors to listen
      if (algorithm = "reward") 
      [ speak
        ask link-neighbors 
        [ listen [spoken-state] of myself ]
      ]
    ]
  ]
end

;; Speaking & Listening

to listen-threshold ;; node procedure
  let grammar-one-sum sum [state] of link-neighbors
  ifelse grammar-one-sum >= (count link-neighbors * threshold-val) 
  [ set state 1 ]
  [ ;; if there are not enough neighbors with grammar 1, 
    ;; and 1 is not a sink state, then change to 0
    if not sink-state-1? [ set state 0 ]
  ]
end

to listen-individual 
  set state [state] of one-of link-neighbors
end

to speak ;; node procedure
  ;; alpha is the level of bias in favor of grammar 1
  ;; alpha is constant for all nodes. 
  ;; the alpha value of 0.025 works best with the logistic function
  ;; adjusted so that it takes input values [0,1] and output to [0,1]
  if logistic? 
  [ let gain (alpha + 0.1) * 20
    let filter-val 1 / (1 + exp (- (gain * state - 1) * 5))
    ifelse random-float 1.0 <= filter-val 
    [ set spoken-state 1 ]
    [ set spoken-state 0 ]
  ]
  ;; for probabilistic learners who only have bias for grammar 1
  ;; no preference for discrete grammars (i.e., no logistic)
  if not logistic? 
  [ ;; the slope needs to be greater than 1, so we arbitrarily set to 1.5
    ;; when state is >= 2/3, the biased-val would be greater than or equal to 1
    let biased-val 1.5 * state
    if biased-val >= 1 [ set biased-val 1 ]
    ifelse random-float 1.0 <= biased-val 
    [ set spoken-state 1 ]
    [ set spoken-state 0 ]
  ]
end

;; Listening uses a linear reward/punish algorithm

to listen [heard-state] ;; node procedure
  let gamma 0.01 ;; for now, gamma is the same for all nodes
  ;; choose a grammar state to be in
  ifelse random-float 1.0 <= state
  [ ;; if grammar 1 was heard
    ifelse heard-state = 1
    [ set state state + (gamma * (1 - state)) ]
    [ set state (1 - gamma) * state ]
  ]
  [ ;; if grammar 0 was heard
    ifelse heard-state = 0
    [ set state state * (1 - gamma) ]
    [ set state gamma + state * (1 - gamma) ]
  ]
end

;; Making the network
;; This code is borrowed from Lottery Example, from the Code Examples section of the Models Library.
;; The idea behind this procedure is as the following.
;; The sum of the sizes of the turtles is set as the number of "tickets" we have in our lottery.
;; Then we pick a random "ticket" (a random number), and we step through the
;; turtles to find which turtle holds that ticket.

to-report find-partner
  let pick random-float sum [count link-neighbors] of (nodes with [any? link-neighbors])
  let partner nobody
  ask nodes
  [ ;; if there's no winner yet
    if partner = nobody
    [ ifelse count link-neighbors > pick
      [ set partner self ]
      [ set pick pick - (count link-neighbors) ]
    ]
  ]
  report partner
end

to layout
  layout-spring (turtles with [any? link-neighbors]) links 0.4 6 1
end

to highlight
  ifelse mouse-inside?
    [ do-highlight ]
    [ undo-highlight ]
end

;; remove any previous highlights

to undo-highlight
  ask nodes [ update-color ]
  ask links [ set color white ]
end

to do-highlight
  let highlight-color blue
  let min-d min [distancexy mouse-xcor mouse-ycor] of nodes
  ;; get the node closest to the mouse
  let the-node one-of nodes with 
  [any? link-neighbors and distancexy mouse-xcor mouse-ycor = min-d]
  ;; get the node that was previously the highlight-color
  let highlighted-node one-of nodes with [color = highlight-color]
  if the-node != nobody and the-node != highlighted-node 
  [ ;; highlight the chosen node
    ask the-node 
    [ undo-highlight
      output-print word "original grammar state: "  orig-grammar-string
      output-print word "current grammar state: " precision state 5
      set color highlight-color
      ;; highlight edges connecting the chosen node to its neighbors
      ask my-links [ set color cyan - 1 ]
      ;; highlight neighbors
      ask link-neighbors [ set color blue + 1 ]
    ]
  ]
end

; Copyright 2007 Uri Wilensky.
; See Info tab for full copyright and license.

