# Matching Law Simulation

## WHAT IS IT?

This model simulates choice allocation between two behaviors using an early formulation of the Matching Law ([Herrnstein, 1961](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1404074/pdf/jeabehav00196-0083.pdf)). According to this formulation, the proportion of responding allocated to one behavior, out of all behavior, is equal to the proportion of reinforcement earned by that behavior, out of all reinforcement.

This formula is expressed for two behaviors as:

>B1/(B1 + B2) = R1/(R1 + R2)

>where

>B1 = behavior 1

>B2 = behavior 2

>R1 = reinforcement received for behavior 1

>R2 = reinforcement received for behavior 2
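
For example, with hypothetical numbers: if a pigeon earns R1 = 45 reinforcers on key 1 and R2 = 15 reinforcers on key 2, the equation predicts B1/(B1 + B2) = 45/(45 + 15) = 0.75, so about 75% of its pecks should go to key 1.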

This model uses the equation above to simulate behavior allocation across different schedules of reinforcement when two behaviors are available. The model can be used to explore choice behavior in a limited context, but is designed to simulate early experiments on the Matching Law.

## HERRNSTEIN (1961) AND THE VARIABLE INTERVAL SCHEDULE OF REINFORCEMENT

In his study, Herrnstein recorded the pecking behavior of three pigeons across different schedules of reinforcement. Each pigeon was trained to peck two keys. During sessions, pecking a key after a variable amount of time passed resulted in reinforcement (access to food). **Pecking on a key before reinforcement was available resulted in no programmed changes. When the time interval passed and reinforcement was available, then the first peck to that key resulted in access to food.**

**Each key was independent from the other.** That is, each key had its own time interval in effect, and pecks to the other key or reinforcement delivered due to a peck to the other key had no effect on the first key. **When a time interval was over, the "clock" essentially stopped for that key until the key was pecked, at which time reinforcement was delivered and a new interval started.**

**Within a session, the intervals were variable, but were programmed to average out to a specific interval (e.g., 3 minutes). The keys could be set to the same average interval or to different average intervals.** That is, both keys could be set to 3-minute intervals (on average), or one key could be set to a 3-minute interval and the other to a 1.5-minute interval (again, on average), and so on.

This arrangement is known as a **Variable-interval schedule.** It has several advantages for studying choice behavior - specifically, it results in steady, relatively rapid responding and is sensitive to programmed schedule changes.

>In summary, a variable-interval schedule is a schedule in which a reinforcer is delivered for the **FIRST RESPONSE AFTER** a time interval has passed. This time interval is programmed to **vary around an average length**. In Herrnstein's experiment, each key the pigeons pecked had its own variable-interval schedule. In our model, each behavior has its own variable-interval schedule*.

>*Technically, our model uses a random-interval schedule, because rather than programming each interval in advance, we are using a random number generator. Given the nature of random number generators, this means our schedule may not precisely match the average, but with enough opportunities it should be very close.
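
Concretely, the model draws each new interval with NetLogo's `random` reporter; this is the same expression used in the model code at the end of this page (`interval1` is the INTERVAL1 slider value):

```
;; "random interval1" reports an integer from 0 to (interval1 - 1),
;; so interval1-check ranges from 1 to (2 * interval1 - 1) ticks
;; and averages interval1 ticks
set interval1-check (2 * random interval1) + 1
```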

After training, Herrnstein ran each pigeon through a series of sessions with a specified schedule of reinforcement for each key. Each session ended when a total of 60 reinforcements had been delivered (about an hour and a half, on average). Sessions of the same schedules were conducted until the pigeon responded consistently. At that point, Herrnstein took the mean of the last five sessions and plotted the percentage of reinforcements for responding to the first key on the x-axis and the percentage of responses to the first key along the y-axis.

Based upon his findings, Herrnstein developed the formula above to describe the behavior of the pigeons in his experiment. Subsequent experiments have extended these findings across different animals (including humans; see Borrero & Vollmer, 2002; Reed & Martens, 2008; Romanowich, Bourret, & Vollmer, 2007 for examples) and have refined this initial equation (see Baum, 1974, 1979 for examples). This original equation still describes behavior allocation well enough to serve as the starting point for our model.

## HOW IT WORKS

**Agent Properties.** An agent is randomly seeded on the screen and given the following properties:

* BEHAVIOR1 - a running tally of the number of times the agent has drawn a square

* BEHAVIOR2 - a running tally of the number of times the agent has drawn a circle

* REINFORCEMENT1 - a running tally of the number of reinforcements that have followed BEHAVIOR1

* REINFORCEMENT2 - a running tally of the number of reinforcements that have followed BEHAVIOR2

* REINFORCEMENT-RATIO - REINFORCEMENT1 / (REINFORCEMENT1 + REINFORCEMENT2).

* BEHAVIOR-RATIO - BEHAVIOR1 / (BEHAVIOR1 + BEHAVIOR2)

* INTERVAL1-CHECK and INTERVAL2-CHECK - these function as the interval lengths for reinforcement delivery for the respective behaviors. They are set at (2 * Random Interval) + 1, where "Interval" is the slider setting for Interval1 or Interval2.

* INT-COUNTER1 and INT-COUNTER2 - these variables track the number of ticks that pass without reinforcement being delivered for a specific behavior.

* SET-COUNTER - tracks number of reinforcements delivered within a session

* SESSION-COUNTER - tracks number of sessions completed

**Initial Settings.** At the beginning of the simulation, behavior and reinforcement properties are all set to 1. If the simulation started with both reinforcement properties set to 0, then the first reinforcement delivery would force the agent to select only the behavior that had been reinforced first for the entire session. Conceptually, these settings are equivalent to one "forced exposure" to each behavior, each followed by reinforcement. These same settings are used to initiate each session.
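
To see the problem with starting at 0: with REINFORCEMENT1 = 0 and REINFORCEMENT2 = 0, the first reinforcement delivered for, say, BEHAVIOR1 would make the ratio 1 / (1 + 0) = 1, and since the random number drawn each tick is always less than 1, the agent would select BEHAVIOR1 on every subsequent tick of the session.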

**Sessions.** Each session lasts until the number of reinforcements set by the SESSION-LENGTH slider (plus 2 to account for the initial settings) is reached. When a session ends, the simulation continues into the next session until the number of sessions indicated on the SESSIONS slider is reached. At this point, new interval values may be assigned and the simulation can continue, or the simulation may be reset using the SETUP button.

**Behaving.** Each tick, the agent must engage in one of the behaviors. Which of the two behaviors is selected is determined by comparing a random number from 0 to just less than 1 to the ratio REINFORCEMENT1 / (REINFORCEMENT1 + REINFORCEMENT2). If the random number is less than the ratio, then the agent engages in BEHAVIOR1 and an orange square is drawn on the screen. Otherwise, it engages in BEHAVIOR2 and a violet circle is drawn instead.
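
In the model's GO procedure, that choice is made essentially like this (simplified from the full code at the end of this page):

```
;; compare a random number in [0, 1) to reinforcement1 / (reinforcement1 + reinforcement2)
ifelse random-float 1 < reinforcement-ratio
  [ behave1 ]   ;; BEHAVIOR1: draw an orange square
  [ behave2 ]   ;; BEHAVIOR2: draw a violet circle
```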

**Reinforcement.** At the beginning of the simulation and each session, a random interval length is determined for each behavior based on the INTERVAL1 and INTERVAL2 sliders. This interval length may be as short as 1 tick or as long as nearly twice as many ticks as indicated by the slider. When an agent engages in a behavior, the interval length is checked against how many ticks have passed since that behavior was last reinforced. If at least as many ticks have passed as the interval length, reinforcement is delivered and the interval is reset. Otherwise, a counter keeps track of the number of ticks without reinforcement and no reinforcement is delivered. Reinforcement consists of increasing the reinforcement value by 1, and is represented visually by "filling in" the shape that was reinforced. If an **EXTINCTION?** switch has been turned on, no reinforcement is delivered for the corresponding behavior regardless of the schedule.
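
For BEHAVIOR1, that check looks essentially like this in the code (BEHAVIOR2 has its own interval and counter, handled the same way):

```
;; int-counter1 goes up by 1 each tick and is reset to 1 inside reinforce1
if interval1-check <= int-counter1 [
  reinforce1                                       ;; add 1 to reinforcement1 and fill in the shape
  set interval1-check (2 * random interval1) + 1   ;; start a new random interval
]
```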

## HOW TO USE IT

**Sessions.** This slider determines how many sessions are run at a time. If you select 3, for example, the simulation will keep running until 3 sessions have been plotted. Useful if you want to see how much variation there is from one session to another under the same schedule of reinforcement.

**Session-length.** This slider determines the length of the session, _based on the total number of reinforcements delivered_. In Herrnstein (1961), each session lasted until a total of 60 reinforcers had been delivered.

**Schedules of Reinforcement.**

* Adjust the INTERVAL1 slider to set the _average_ number of ticks that must pass before reinforcement becomes available for engaging in BEHAVIOR1.

* Adjust the INTERVAL2 slider to set the _average_ number of ticks that must pass before reinforcement becomes available for engaging in BEHAVIOR2.

* Set EXTINCTION1? to ON if you want to make sure reinforcement is **never** delivered within session for BEHAVIOR1. Otherwise, leave this switch **OFF**.

* Set EXTINCTION2? to ON if you want to make sure reinforcement is **never** delivered within session for BEHAVIOR2. Otherwise, leave this switch **OFF**.

**Setup.** Click SETUP to reset the simulation. If desired, also select **Clear** in the Command Center to erase the Command Center output.

**Go.** Press GO to start the simulation after setup, and to continue running more sessions when the simulation stops.

## THINGS TO NOTICE

**Actual and Predicted Behavior.** As your simulation runs, this plot shows you the frequency of BEHAVIOR1 and BEHAVIOR2 (solid lines). It also shows you the predicted frequency (red and blue dots) based upon the equation:

>B1 = (B1 + B2) * (R1 / (R1 + R2))

For a description of the terms, see the original equation above. Note that this equation is simply the original equation with both sides multiplied by (B1 + B2).
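
As a quick check with made-up numbers: if the agent has made 200 total responses so far and R1 / (R1 + R2) = 0.75, the predicted value plotted for BEHAVIOR1 is 200 * 0.75 = 150 responses.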

**Proportion Selection.** This plot updates after each session (if it is set to produce lines rather than points, at least 2 sessions need to be run before you see anything). If you keep the interval settings equal, you should expect to see a cluster of dots or lines in the middle of the graph. As you change the interval schedules so that INTERVAL1 is lower than INTERVAL2 (richer reinforcement for BEHAVIOR1), the dots or lines will move towards the upper right of the graph. As INTERVAL1 becomes greater than INTERVAL2, the dots/lines will move towards the origin.

To the extent these dots or lines fall along an imaginary line from (0,0) to (1,1), Herrnstein's formula holds true.

**The Monitors.** If you have the simulation running at normal speed, the monitors will change very quickly and reset to the start values before you know it. No worries, you can see the results of each session in the Command Center. If the schedule of reinforcement is rather lean or you slow the simulation down, then the monitors can help you track what is going on.

**The Command Center.** The command center gives you a print out of the variables after each session.

## THINGS TO TRY

**Running the same schedule over multiple sessions.** The results should be fairly similar, but not identical. For example, if INTERVAL1 and INTERVAL2 are set to the same values, you may find slightly more responses to BEHAVIOR1 in one session, and the opposite pattern in the next. If the interval settings are very different, though, you should expect to see the behavior with the lower interval setting reliably generate the most responding.

**Keep INTERVAL1 and INTERVAL2 equal, and start with low settings and gradually move to higher and higher settings across sessions.** Observe what this does to the frequency and patterns of responding across sessions.

**Experiment with Extinction.** Try turning extinction on for one of the interval settings. What happens? Note that this simulation is not designed to simulate actual extinction - the agent will continue drawing shapes until the simulation is shut down or the world ends, whichever comes first, with or without reinforcement.

**Set different values for INTERVAL1 and INTERVAL2 across sessions.** Watch the Proportion Selection plot and see if a line forms at a 45 degree angle (more or less) out from the origin.

**Set SESSION-LENGTH high and adjust the schedules within a session.** This model isn't designed to evaluate schedule changes within a session, but that doesn't mean you can't experiment with it.

## SIMULATING FIGURE 1 OF HERRNSTEIN (1961)

In order to simulate Herrnstein (1961), follow the instructions below.

1. Set the SESSIONS slider to 1.

2. Set the SESSION-LENGTH slider to 60.

3. Set the INTERVAL1 slider to 405.

4. Set the INTERVAL2 slider to 81.

5. Make sure the EXTINCTION? switches are off.

6. Click SETUP.

7. Click GO.

8. Wait until the simulation stops.

9. Click GO again. Wait until the simulation stops.

10. Set both INTERVAL sliders to 135.

11. Click GO. Wait until the simulation stops.

12. Set INTERVAL1 to 101 and INTERVAL2 to 202.

13. Click GO. Wait until the simulation stops.

14. Set INTERVAL1 to 81 and INTERVAL2 to 405.

15. Click GO. Wait until the simulation stops.

16. Set INTERVAL1 to 67.

17. Set EXTINCTION2? to ON.

18. Click GO. Wait until the simulation stops.

At this point, you have simulated all of the data points for pigeon #055. You should see a series of dots in the PROPORTION SELECTIONS plot that cluster around an imaginary line from (0,0) to (1,1).

You may continue without hitting SETUP to add the remaining pigeons' data to the plot, or hit SETUP to clear the plot and continue with the next pigeon.

If you keep going without hitting SETUP, change the color in the PROPORTION SELECTIONS plot to orange - this will help you differentiate the data points simulating pigeon #055 from those of the next pigeon.

1. Keep all of the sliders as they were at the end of the last step.

2. Click GO. Wait for the simulation to run.

3. Set INTERVAL1 to 81 and INTERVAL2 to 405.

4. Set EXTINCTION2? to OFF.

5. Click GO. Wait.

6. Set INTERVAL1 to 135 and INTERVAL2 to 135.

7. Click GO. Wait.

8. Set INTERVAL1 to 202 and INTERVAL2 to 101.

9. Click GO. Wait.

10. Set INTERVAL1 to 405 and INTERVAL2 to 81.

11. Click GO. Wait.

12. Click GO again. Wait.

You have now simulated pigeon #231's data. Again, you may click on SETUP to clear the plots and start fresh for the next pigeon, or simply move on to the last pigeon. If you move on without hitting SETUP, change the color of the PROPORTION SELECTIONS PLOT again to something like blue or green.

1. Keep everything as it was at the end of the last step.

2. Set INTERVAL1 to 135 and INTERVAL2 to 135.

3. Click GO. Wait.

4. Set INTERVAL1 to 101 and INTERVAL2 to 202.

5. Click GO. Wait.

6. Compare your results to Figure 1.

If the plot is hard to read, change the setting from "POINT" to "LINE" in the PROPORTION SELECTIONS plot. If you would like, copy the information in the Command Center and save it into a Word document. It contains all the information you need to make a figure of your own using your favorite figure-generating software. Exporting the plot to Excel actually exports ALL of the data points in a session, so this method is not recommended.

## EXTENDING THE MODEL

This model lends itself to a number of extensions. A few are listed below.

**Pre-Session History.** In the current model, pre-session history is reset to 1 for each behavior and reinforcement value. This essentially gives the agent a minimal behavior history. Research has shown that the history of reinforcement for a behavior does affect response allocation (Karsina, Thompson, & Rodriguez, 2011). Future models could allow pre-session behavior and reinforcement settings to be manipulated by the user and allow behavior histories to carry over from one session to the next.

**Different Schedules of Reinforcement.** This model evaluates a simulation of the random-interval schedule. Extensions could investigate how response allocation varies across different schedules of reinforcement (e.g., ratio schedules, fixed schedules, etc.).

**Strategies.** The current model relies upon the ratios of reinforcement to determine behavior. For humans especially, strategies or rules may affect responding as well. Future models could examine how rules might work against, or with, the Matching Law.

**More Current Versions of the Matching Law.** This model examines the first formulation of what is now known as the Matching Law. As of this writing, this equation is well over 50 years old. As noted earlier, researchers have studied and refined this equation over the years. Extensions of this model could evaluate these equations.

## NETLOGO FEATURES

This model uses the LOOP command to "draw in" the individual shapes. Because each side of a drawn square is 4 units long, the following loop is used to fill in the square:

```
let n 4
loop [
  set n n - 0.25         ;; shrink the side length a little each pass
  repeat 4 [             ;; trace a square with the current side length
    fd n
    rt 90
  ]
  if n = 0.25 [stop]     ;; exit once the square has been filled in
]
```
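
The circle is filled the same way in the `reinforce2` procedure, except that the inner `repeat` traces the nine-sided path used to draw the circles: `repeat 9 [ fd n left 40 ]`.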

## RELATED MODELS

This is the first in what is intended to be a series of models investigating behavioral principles. This section will be updated when subsequent models are published.

## CREDITS AND REFERENCES

This model can be found on the web at https://app.box.com/s/b2awrje7tly279axn0zstjf0srw67of7

Baum, W. M. (1974). On two types of deviation from the matching law: bias and undermatching. _Journal of the Experimental Analysis of Behavior, 22,_ 231-242.

Baum, W. M. (1979). Matching, undermatching, and overmatching in studies of choice. _Journal of the Experimental Analysis of Behavior, 32,_ 269-281.

Borrero, J. C., & Vollmer, T. R. (2002). An application of the matching law to severe problem behavior. _Journal of Applied Behavior Analysis, 35,_ 13-27.

Herrnstein, R. J. (1961). Relative and absolute strength of response as a function of frequency of reinforcement. _Journal of the Experimental Analysis of Behavior, 4,_ 267-272. Article available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1404074/pdf/jeabehav00196-0083.pdf

Karsina, A., Thompson, R. H., & Rodriguez, N. M. (2011). Effects of a history of differential reinforcement on preference for choice. _Journal of the Experimental Analysis of Behavior, 95(2),_ 189-202.

Reed, D. D. & Martens, B. K. (2008). Sensitivity and bias under conditions of equal and unequal academic task difficulty. _Journal of Applied Behavior Analysis, 41,_ 39-52.

Romanowich, P., Bourret, J., & Vollmer, T. R. (2007). Further analysis of the matching law to describe two- and three-point shot allocation by professional basketball players. _Journal of Applied Behavior Analysis, 40,_ 311-315.

## COPYRIGHT AND LICENSE

Copyright 2018 Allen Karsina

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. To view a copy of the license, visit https://creativecommons.org/licenses/by-nc-sa/3.0.


## MODEL CODE

```

turtles-own [

  ;; these variables reflect selection and reinforcement
  behavior1               ;; counts all moves to black patches from any patch
  behavior2               ;; counts all moves to white patches from any patch
  reinforcement1          ;; tracks all reinforcement for moves to black patches
  reinforcement2          ;; tracks all reinforcement for moves to white patches


  reinforcement-ratio     ;; this represents reinforcement1/(reinforcement1 + reinforcement2)
  behavior-ratio          ;; this represents behavior1/(behavior1 + behavior2)

  interval1-check         ;; temporary value for use with the random interval schedule
  interval2-check
  int-counter1             ;; counts intervals in a session for behavior1
  int-counter2             ;; counts intervals in a session for behavior2
  set-counter             ;; counts reinforcements delivered in the current session (identifies end of session)
  session                 ;; keeps track of the number of completed sessions
]

to setup
  clear-all
  setup-turtles
  reset-ticks
end 

to setup-turtles
  ;; creates an adjustable number of turtles to engage in behavior1 or behavior2

  create-turtles 1

  ask turtles [
    setxy random-xcor random-ycor   ;;each turtle starts on a random patch
    set shape "bird"
    set size 4

;; set initial parameters
    set behavior1 1
    set reinforcement1 1
    set behavior2 1
    set reinforcement2 1
    set reinforcement-ratio reinforcement1 / (reinforcement1 + reinforcement2)
    set behavior-ratio behavior1 / (behavior1 + behavior2)
    set interval1-check (2 * random interval1) + 1
    set interval2-check (2 * random interval2) + 1
    set int-counter1 1
    set int-counter2 1
]
end 

to go

  clear-drawing

  ask turtles [
    if session >= sessions [set session 0]
    if set-counter = 0 [
      set interval1-check (2 * random interval1) + 1
      set interval2-check (2 * random interval2) + 1
    ]

    ifelse random-float 1 < reinforcement-ratio
        [
          behave1
          if interval1-check <= int-counter1
          [
            reinforce1
            set interval1-check (2 * random interval1) + 1
           ]
        ]

        [
          behave2
          if interval2-check <= int-counter2
          [reinforce2
          set interval2-check (2 * random interval2) + 1
        ]
        ]
    ]

  ask turtles [
    set behavior-ratio behavior1 / (behavior1 + behavior2)
    set reinforcement-ratio (reinforcement1) / (reinforcement1 + reinforcement2)
    set int-counter1 int-counter1 + 1
    set int-counter2 int-counter2 + 1
  ]

    tick

        ;; criteria for ending session in Herrnstein (1961)

  ask turtles [
    if set-counter = session-length [

      ;; Command Center Display
      type "end of session " print session + 1
      type "interval B1 " type interval1 type " interval B2 " print interval2
      type "session R1/(R1+R2) " print (reinforcement1 - 1) / (reinforcement1 - 1 + reinforcement2 - 1)
      type "session B1/(B1+B2) " print (behavior1 - 1) / (behavior1 - 1 + behavior2 - 1)
      type "Total Behavior1 " type behavior1 - 1 type " Total Behavior2 "
      type behavior2 - 1 type " Total Reinforcement1 "
      type reinforcement1 - 1 type " Total Reinforcement2 "
      print reinforcement2 - 1
      type "Ticks at end of session " print ticks
      print " "

      ;; reset settings
      set set-counter 0
      set session session + 1
      set behavior1 1
      set behavior2 1
      set reinforcement1 1
      set reinforcement2 1
      set reinforcement-ratio 0.5
      set int-counter1 1
      set int-counter2 1
      ]
    ]

  if [session] of turtle 0 = sessions [stop]
end 

to behave1
;; executes behavior1 and updates the behavior1 tally
  pu
  set color orange
  rt random 360
  fd random 20
  pd
  repeat 4 [
    fd 4
    rt 90
   ]
  set behavior1 behavior1 + 1
end 

to behave2
;; executes behavior2 and updates the behavior2 tally
  pu
  set color violet
  rt random 360
  fd random 20
  pd
   repeat 9 [
      fd 4
      left 40
      ]
    set behavior2 behavior2 + 1
end 

to reinforce1

    if not extinction1? [
      set reinforcement1 reinforcement1 + 1
      set set-counter set-counter + 1
      set int-counter1 1
      let n 4
      loop [
        set n n - 0.25
        repeat 4 [
          fd n
          rt 90
        ]
        if n = 0.25 [stop]
      ]
    ]
end 

to reinforce2

    if not extinction2? [
      set reinforcement2 reinforcement2 + 1
      set set-counter set-counter + 1
      set int-counter2 1
      pd
      let n 4
      loop [
        set n n - 0.25
        repeat 9 [
          fd n
          left 40
        ]
        if n = 0.25 [stop]
      ]
    ]
end
```
